Chapter 8. Virtual machines
8.1. Creating virtual machines
Use one of these procedures to create a virtual machine:
- Quick Start guided tour
- Running the wizard
- Pasting a pre-configured YAML file with the virtual machine wizard
- Using the CLI
- Importing a VMware virtual machine or template with the virtual machine wizard
Do not create virtual machines in openshift-* namespaces. Instead, create a new namespace or use an existing namespace without the openshift prefix.
When you create virtual machines from the web console, select a virtual machine template that is configured with a boot source. Virtual machine templates with a boot source are labeled as Available boot source or they display a customized label text. Using templates with an available boot source expedites the process of creating virtual machines.
Templates without a boot source are labeled as Boot source required. You can use these templates if you complete the steps for adding a boot source to the virtual machine.
8.1.1. Using a Quick Start to create a virtual machine
The web console provides Quick Starts with instructional guided tours for creating virtual machines. You can access the Quick Starts catalog by selecting the Help menu in the Administrator perspective. When you click a Quick Start tile and begin the tour, the system guides you through the process.
Tasks in a Quick Start begin with selecting a Red Hat template. Then, you can add a boot source and import the operating system image. Finally, you can save the custom template and use it to create a virtual machine.
Prerequisites
- Access to the website where you can download the URL link for the operating system image.
Procedure
- In the web console, select Quick Starts from the Help menu.
- Click on a tile in the Quick Starts catalog. For example: Creating a Red Hat Enterprise Linux virtual machine.
- Follow the instructions in the guided tour and complete the tasks for importing an operating system image and creating a virtual machine. The Virtual Machines tab displays the virtual machine.
8.1.2. Running the virtual machine wizard to create a virtual machine
The web console features a wizard that guides you through the process of selecting a virtual machine template and creating a virtual machine. Red Hat virtual machine templates are preconfigured with an operating system image, default settings for the operating system, flavor (CPU and memory), and workload type (server). When templates are configured with a boot source, they are labeled with a customized label text or the default label text Available boot source. These templates are then ready to be used for creating virtual machines.
You can select a template from the list of preconfigured templates, review the settings, and create a virtual machine in the Create virtual machine from template wizard. If you choose to customize your virtual machine, the wizard guides you through the General, Networking, Storage, Advanced, and Review steps. All required fields displayed by the wizard are marked by a *.
You can create network interface controllers (NICs) and storage disks later and attach them to virtual machines.
Procedure
- Click Workloads → Virtualization from the side menu.
- From the Virtual Machines tab or the Templates tab, click Create and select Virtual Machine with Wizard.
- Select a template that is configured with a boot source.
- Click Next to go to the Review and create step.
- Clear the Start this virtual machine after creation checkbox if you do not want to start the virtual machine now.
- Click Create virtual machine and exit the wizard or continue with the wizard to customize the virtual machine.
- Click Customize virtual machine to go to the General step.
- Optional: Edit the Name field to specify a custom name for the virtual machine.
- Optional: In the Description field, add a description.
- Click Next to go to the Networking step. A nic0 NIC is attached by default.
- Optional: Click Add Network Interface to create additional NICs.
- Optional: You can remove any or all NICs by clicking the Options menu and selecting Delete. A virtual machine does not need a NIC attached to be created. You can create NICs after the virtual machine has been created.
- Click Next to go to the Storage step.
- Optional: Click Add Disk to create additional disks. These disks can be removed by clicking the Options menu and selecting Delete.
- Optional: Click the Options menu to edit the disk and save your changes.
- Click Next to go to the Advanced step to review the details for Cloud-init and configure SSH access.
Note: Statically inject an SSH key by using the custom script in cloud-init or in the wizard. This allows you to securely and remotely manage virtual machines and to manage and transfer information. This step is strongly recommended to secure your VM.
- Click Next to go to the Review step and review the settings for the virtual machine.
- Click Create Virtual Machine.
- Click See virtual machine details to view the Overview for this virtual machine.
The virtual machine is listed in the Virtual Machines tab.
Refer to the virtual machine wizard fields section when running the web console wizard.
8.1.2.1. Virtual machine wizard fields
Name | Parameter | Description |
---|---|---|
Name |  | The name can contain lowercase letters (a-z), numbers (0-9), and hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters. |
Description |  | Optional description field. |
Operating System |  | The operating system that is selected for the virtual machine in the template. You cannot edit this field when creating a virtual machine from a template. |
Boot Source | Import via URL (creates PVC) | Import content from an image available from an HTTP or HTTPS endpoint. Example: Obtaining a URL link from the web page with the operating system image. |
 | Clone existing PVC (creates PVC) | Select an existing persistent volume claim available on the cluster and clone it. |
 | Import via Registry (creates PVC) | Provision a virtual machine from a bootable operating system container located in a registry accessible from the cluster. Example: |
 | PXE (network boot - adds network interface) | Boot an operating system from a server on the network. Requires a PXE bootable network attachment definition. |
Persistent Volume Claim project |  | Project name that you want to use for cloning the PVC. |
Persistent Volume Claim name |  | PVC name that should apply to this virtual machine template if you are cloning an existing PVC. |
Mount this as a CD-ROM boot source |  | A CD-ROM requires an additional disk for installing the operating system. Select the checkbox to add a disk and customize it later. |
Flavor | Tiny, Small, Medium, Large, Custom | Presets the amount of CPU and memory in a virtual machine template with predefined values that are allocated to the virtual machine, depending on the operating system associated with that template. If you choose a default template, you can override the cpus and memory values in the template with custom values to create a custom template. |
Workload Type (Note: If you choose the incorrect Workload Type, there could be performance or resource utilization issues, such as a slow UI.) | Desktop | A virtual machine configuration for use on a desktop. Ideal for consumption on a small scale. Recommended for use with the web console. |
 | Server | Balances performance and is compatible with a wide range of server workloads. |
 | High-Performance | A virtual machine configuration that is optimized for high-performance workloads. |
Start this virtual machine after creation. |  | This checkbox is selected by default and the virtual machine starts running after creation. Clear the checkbox if you do not want the virtual machine to start when it is created. |
8.1.2.2. Networking fields
Name | Description |
---|---|
Name | Name for the network interface controller. |
Model | Indicates the model of the network interface controller. Supported values are e1000e and virtio. |
Network | List of available network attachment definitions. |
Type | List of available binding methods. For the default pod network, masquerade is the only recommended binding method. For secondary networks, use the bridge binding method. |
MAC Address | MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. |
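For orientation, the wizard's NIC fields map onto the interfaces and networks lists under spec.template.spec in the VirtualMachine manifest. The following sketch is illustrative only; the NIC names, MAC address, and the my-vlan-network network attachment definition are hypothetical:
interfaces:
- name: nic-0
  model: virtio
  masquerade: {}                   # Type: masquerade binding on the default pod network
- name: nic-1
  model: virtio
  bridge: {}                       # Type: bridge binding on a secondary network
  macAddress: '02:01:02:03:04:05'  # optional; assigned automatically if omitted
networks:
- name: nic-0
  pod: {}                          # the default pod network
- name: nic-1
  multus:
    networkName: my-vlan-network   # a network attachment definition in the same namespace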
8.1.2.3. Storage fields
Name | Selection | Description |
---|---|---|
Source | Blank (creates PVC) | Create an empty disk. |
 | Import via URL (creates PVC) | Import content via URL (HTTP or HTTPS endpoint). |
 | Use an existing PVC | Use a PVC that is already available in the cluster. |
 | Clone existing PVC (creates PVC) | Select an existing PVC available in the cluster and clone it. |
 | Import via Registry (creates PVC) | Import content via container registry. |
 | Container (ephemeral) | Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. |
Name |  | Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), and hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters. |
Size |  | Size of the disk in GiB. |
Type |  | Type of disk. Example: Disk or CD-ROM |
Interface |  | Type of disk device. Supported interfaces are virtIO, SATA, and SCSI. |
Storage Class |  | The storage class that is used to create the disk. |
Advanced | Volume Mode | Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem. |
Advanced | Access Mode | Access mode of the persistent volume. Supported access modes are Single User (RWO), Shared Access (RWX), and Read Only (ROX). |
Advanced storage settings
The following advanced storage settings are available for Blank, Import via URL, and Clone existing PVC disks. These parameters are optional. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map.
Name | Parameter | Description |
---|---|---|
Volume Mode | Filesystem | Stores the virtual disk on a file system-based volume. |
 | Block | Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. |
Access Mode | Single User (RWO) | The disk can be mounted as read/write by a single node. |
 | Shared Access (RWX) | The disk can be mounted as read/write by many nodes. Note: This is required for some features, such as live migration of virtual machines between nodes. |
 | Read Only (ROX) | The disk can be mounted as read-only by many nodes. |
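For illustration, the volume and access modes correspond to fields on the PVC that backs the disk. The following data volume sketch sets them explicitly; the name, size, and storage class are hypothetical:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-blank-disk
spec:
  source:
    blank: {}                          # equivalent to the Blank (creates PVC) source
  pvc:
    storageClassName: example-storage-class
    volumeMode: Block                  # Volume Mode: Filesystem or Block
    accessModes:
    - ReadWriteMany                    # Access Mode: Shared Access (RWX)
    resources:
      requests:
        storage: 20Gi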
8.1.2.4. Cloud-init fields
Name | Description |
---|---|
Hostname | Sets a specific hostname for the virtual machine. |
Authorized SSH Keys | The user’s public key that is copied to ~/.ssh/authorized_keys on the virtual machine. |
Custom script | Replaces other options with a field in which you paste a custom cloud-init script. |
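A minimal custom script that covers the same fields might look like the following cloud-config; the hostname and public key are placeholders:
#cloud-config
hostname: example-vm
ssh_authorized_keys:
  - ssh-rsa AAAAB3Nza...placeholder user@example.com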
To configure storage class defaults, use storage profiles. For more information, see Customizing the storage profile.
8.1.2.5. Pasting in a pre-configured YAML file to create a virtual machine
Create a virtual machine by writing or pasting a YAML configuration file. A valid example virtual machine configuration is provided by default whenever you open the YAML edit screen.
If your YAML configuration is invalid when you click Create, an error message indicates the parameter in which the error occurs. Only one error is shown at a time.
Navigating away from the YAML screen while editing cancels any changes to the configuration you have made.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Click Create and select Virtual Machine With YAML.
- Write or paste your virtual machine configuration in the editable window. Alternatively, use the example virtual machine provided by default in the YAML screen.
- Optional: Click Download to download the YAML configuration file in its present state.
- Click Create to create the virtual machine.
The virtual machine is listed in the Virtual Machines tab.
8.1.3. Using the CLI to create a virtual machine
You can create a virtual machine from a virtualMachine
manifest.
Procedure
Edit the VirtualMachine manifest for your VM. For example, the following manifest configures a Red Hat Enterprise Linux (RHEL) VM:
Example 8.1. Example manifest for a RHEL VM
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    app: <vm_name> 1
  name: <vm_name>
spec:
  dataVolumeTemplates:
  - apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: <vm_name>
    spec:
      sourceRef:
        kind: DataSource
        name: rhel9
        namespace: openshift-virtualization-os-images
      storage:
        resources:
          requests:
            storage: 30Gi
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/domain: <vm_name>
    spec:
      domain:
        cpu:
          cores: 1
          sockets: 2
          threads: 1
        devices:
          disks:
          - disk:
              bus: virtio
            name: rootdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - masquerade: {}
            name: default
          rng: {}
        features:
          smm:
            enabled: true
        firmware:
          bootloader:
            efi: {}
        resources:
          requests:
            memory: 8Gi
      evictionStrategy: LiveMigrate
      networks:
      - name: default
        pod: {}
      volumes:
      - dataVolume:
          name: <vm_name>
        name: rootdisk
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            user: cloud-user
            password: '<password>' 2
            chpasswd: { expire: False }
        name: cloudinitdisk
Create a virtual machine by using the manifest file:
$ oc create -f <vm_manifest_file>.yaml
Optional: Start the virtual machine:
$ virtctl start <vm_name>
8.1.4. Virtual machine storage volume types
Storage volume type | Description |
---|---|
ephemeral | A local copy-on-write (COW) image that uses a network volume as a read-only backing store. The backing volume must be a PersistentVolumeClaim. The ephemeral image is created when the virtual machine starts and stores all writes locally. The ephemeral image is discarded when the virtual machine is stopped, restarted, or deleted. The backing volume (PVC) is not mutated in any way. |
persistentVolumeClaim | Attaches an available PV to a virtual machine. Attaching a PV allows for the virtual machine data to persist between sessions. Importing an existing virtual machine disk into a PVC by using CDI and attaching the PVC to a virtual machine instance is the recommended method for importing existing virtual machines into OpenShift Container Platform. There are some requirements for the disk to be used within a PVC. |
dataVolume | Data volumes build on the persistentVolumeClaim disk type by managing the process of preparing the virtual machine disk via an import, clone, or upload operation. VMs that use this volume type are guaranteed not to start until the volume is ready. Specify type: dataVolume or type: "". If you specify any other value for type, such as persistentVolumeClaim, a warning is displayed, and the virtual machine does not start. |
cloudInitNoCloud | Attaches a disk that contains the referenced cloud-init NoCloud data source, providing user data and metadata to the virtual machine. A cloud-init installation is required inside the virtual machine disk. |
containerDisk | References an image, such as a virtual machine disk, that is stored in the container image registry. The image is pulled from the registry and attached to the virtual machine as a disk when the virtual machine is launched. A containerDisk volume is not limited to a single virtual machine and is useful for creating large numbers of virtual machine clones that do not require persistent storage. Only RAW and QCOW2 formats are supported disk types for the container image registry. QCOW2 is recommended for reduced image size. Note: A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. |
emptyDisk | Creates an additional sparse QCOW2 disk that is tied to the life-cycle of the virtual machine interface. The data survives guest-initiated reboots in the virtual machine but is discarded when the virtual machine stops or is restarted from the web console. The empty disk is used to store application dependencies and data that otherwise exceeds the limited temporary file system of an ephemeral disk. The disk capacity size must also be provided. |
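For illustration, a single VM can combine several of these volume types. The following volumes stanza is a sketch with hypothetical names: a data volume for the root disk, a cloud-init NoCloud volume, and an empty disk for scratch space. Each entry must have a matching disk under spec.template.spec.domain.devices.disks:
volumes:
- name: rootdisk
  dataVolume:
    name: example-dv          # hypothetical data volume name
- name: cloudinitdisk
  cloudInitNoCloud:
    userData: |
      #cloud-config
      password: <password>
      chpasswd: { expire: False }
- name: scratchdisk
  emptyDisk:
    capacity: 2Gi             # sparse QCOW2 disk created at VM start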
8.1.5. About RunStrategies for virtual machines
A RunStrategy for virtual machines determines a virtual machine instance’s (VMI) behavior, depending on a series of conditions. The spec.runStrategy setting exists in the virtual machine configuration process as an alternative to the spec.running setting. The spec.runStrategy setting allows greater flexibility for how VMIs are created and managed, in contrast to the spec.running setting with only true or false responses. However, the two settings are mutually exclusive. Only either spec.running or spec.runStrategy can be used. An error occurs if both are used.
There are four defined RunStrategies.
Always
- A VMI is always present when a virtual machine is created. A new VMI is created if the original stops for any reason, which is the same behavior as spec.running: true.
RerunOnFailure
- A VMI is re-created if the previous instance fails due to an error. The instance is not re-created if the virtual machine stops successfully, such as when it shuts down.
Manual
- The start, stop, and restart virtctl client commands can be used to control the VMI’s state and existence.
Halted
- No VMI is present when a virtual machine is created, which is the same behavior as spec.running: false.
Different combinations of the start, stop, and restart virtctl commands affect which RunStrategy is used.
The following table shows a VM’s transition between states. The first column shows the VM’s initial RunStrategy. Each additional column shows a virtctl command and the new RunStrategy after that command is run.
Initial RunStrategy | start | stop | restart |
---|---|---|---|
Always | - | Halted | Always |
RerunOnFailure | - | Halted | RerunOnFailure |
Manual | Manual | Manual | Manual |
Halted | Always | - | - |
In OpenShift Virtualization clusters installed using installer-provisioned infrastructure, when a node fails the MachineHealthCheck and becomes unavailable to the cluster, VMs with a RunStrategy of Always or RerunOnFailure are rescheduled on a new node.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  runStrategy: Always 1
  template:
    ...
1 - The current runStrategy setting for the virtual machine.
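For example, assuming a VM named example-vm whose RunStrategy is Always, stopping it with virtctl changes the stored strategy to Halted, as described in the table above. You can confirm the new value with oc (the VM name is hypothetical):
$ virtctl stop example-vm
$ oc get vm example-vm -o jsonpath='{.spec.runStrategy}'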
8.1.6. Additional resources
- The VirtualMachineSpec definition in the KubeVirt v0.41.0 API Reference provides broader context for the parameters and hierarchy of the virtual machine specification.
Note: The KubeVirt API Reference is the upstream project reference and might contain parameters that are not supported in OpenShift Virtualization.
- Prepare a container disk before adding it to a virtual machine as a containerDisk volume.
volume. - See Deploying machine health checks for further details on deploying and enabling machine health checks.
- See Installer-provisioned infrastructure overview for further details on installer-provisioned infrastructure.
- See Configuring the SR-IOV Network Operator for further details on the SR-IOV Network Operator.
8.2. Editing virtual machines
You can update a virtual machine configuration using either the YAML editor in the web console or the OpenShift CLI on the command line. You can also update a subset of the parameters in the Virtual Machine Details screen.
8.2.1. Editing a virtual machine in the web console
Edit select values of a virtual machine in the web console by clicking the pencil icon next to the relevant field. Other values can be edited using the CLI.
Labels and annotations are editable for both preconfigured Red Hat templates and your custom virtual machine templates. All other values are editable only for custom virtual machine templates that users have created using the Red Hat templates or the Create Virtual Machine Template wizard.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine.
- Click the Details tab.
- Click the pencil icon to make a field editable.
- Make the relevant changes and click Save.
If the virtual machine is running, changes to Boot Order or Flavor will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the relevant field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
8.2.2. Editing a virtual machine YAML configuration using the web console
You can edit the YAML configuration of a virtual machine in the web console. Some parameters cannot be modified. If you click Save with an invalid configuration, an error message indicates the parameter that cannot be changed.
If you edit the YAML configuration while the virtual machine is running, changes will not take effect until you restart the virtual machine.
Navigating away from the YAML screen while editing cancels any changes to the configuration you have made.
Procedure
- Click Workloads → Virtualization from the side menu.
- Select a virtual machine.
- Click the YAML tab to display the editable configuration.
- Optional: You can click Download to download the YAML file locally in its current state.
- Edit the file and click Save.
A confirmation message shows that the modification has been successful and includes the updated version number for the object.
8.2.3. Editing a virtual machine YAML configuration using the CLI
Use this procedure to edit a virtual machine YAML configuration using the CLI.
Prerequisites
- You configured a virtual machine with a YAML object configuration file.
-
You installed the
oc
CLI.
Procedure
- Run the following command to open the virtual machine configuration in your editor:
$ oc edit <object_type> <object_ID>
- Edit the YAML configuration and save your changes.
- If you edit a running virtual machine, you need to do one of the following for the new configuration to take effect:
  - Restart the virtual machine.
  - Apply the updated configuration file:
$ oc apply -f <vm_manifest_file>.yaml
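For example, assuming a VM named example-vm, a typical edit-and-restart flow looks like this (the VM name is hypothetical):
$ oc edit vm example-vm      # opens the VirtualMachine manifest in your default editor
$ virtctl restart example-vm # restart so a running VM picks up the changes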
8.2.4. Adding a virtual disk to a virtual machine
Use this procedure to add a virtual disk to a virtual machine.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the Disks tab.
- Click Add Disk.
- In the Add Disk window, specify the Source, Name, Size, Type, Interface, and Storage Class.
- Optional: In the Advanced list, specify the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map.
- Click Add.
If the virtual machine is running, the new disk is in the pending restart state and will not be attached until you restart the virtual machine.
The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
To configure storage class defaults, use storage profiles. For more information, see Customizing the storage profile.
8.2.4.1. Storage fields
Name | Selection | Description |
---|---|---|
Source | Blank (creates PVC) | Create an empty disk. |
 | Import via URL (creates PVC) | Import content via URL (HTTP or HTTPS endpoint). |
 | Use an existing PVC | Use a PVC that is already available in the cluster. |
 | Clone existing PVC (creates PVC) | Select an existing PVC available in the cluster and clone it. |
 | Import via Registry (creates PVC) | Import content via container registry. |
 | Container (ephemeral) | Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. |
Name |  | Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), and hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters. |
Size |  | Size of the disk in GiB. |
Type |  | Type of disk. Example: Disk or CD-ROM |
Interface |  | Type of disk device. Supported interfaces are virtIO, SATA, and SCSI. |
Storage Class |  | The storage class that is used to create the disk. |
Advanced | Volume Mode | Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem. |
Advanced | Access Mode | Access mode of the persistent volume. Supported access modes are Single User (RWO), Shared Access (RWX), and Read Only (ROX). |
Advanced storage settings
The following advanced storage settings are available for Blank, Import via URL, and Clone existing PVC disks. These parameters are optional. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map.
Name | Parameter | Description |
---|---|---|
Volume Mode | Filesystem | Stores the virtual disk on a file system-based volume. |
 | Block | Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. |
Access Mode | Single User (RWO) | The disk can be mounted as read/write by a single node. |
 | Shared Access (RWX) | The disk can be mounted as read/write by many nodes. Note: This is required for some features, such as live migration of virtual machines between nodes. |
 | Read Only (ROX) | The disk can be mounted as read-only by many nodes. |
8.2.5. Adding a network interface to a virtual machine
Use this procedure to add a network interface to a virtual machine.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the Network Interfaces tab.
- Click Add Network Interface.
- In the Add Network Interface window, specify the Name, Model, Network, Type, and MAC Address of the network interface.
- Click Add.
If the virtual machine is running, the new network interface is in the pending restart state and changes will not take effect until you restart the virtual machine.
The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
8.2.5.1. Networking fields
Name | Description |
---|---|
Name | Name for the network interface controller. |
Model | Indicates the model of the network interface controller. Supported values are e1000e and virtio. |
Network | List of available network attachment definitions. |
Type | List of available binding methods. For the default pod network, masquerade is the only recommended binding method. For secondary networks, use the bridge binding method. |
MAC Address | MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. |
8.2.6. Editing CD-ROMs for Virtual Machines
Use the following procedure to edit CD-ROMs for virtual machines.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the Disks tab.
- Click the Options menu for the CD-ROM that you want to edit and select Edit.
- In the Edit CD-ROM window, edit the fields: Source, Persistent Volume Claim, Name, Type, and Interface.
- Click Save.
8.3. Editing boot order
You can update the values for a boot order list by using the web console or the CLI.
With Boot Order in the Virtual Machine Overview page, you can:
- Select a disk or network interface controller (NIC) and add it to the boot order list.
- Edit the order of the disks or NICs in the boot order list.
- Remove a disk or NIC from the boot order list, and return it to the inventory of bootable sources.
8.3.1. Adding items to a boot order list in the web console
Add items to a boot order list by using the web console.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Boot Order. If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
- Click Add Source and select a bootable disk or network interface controller (NIC) for the virtual machine.
- Add any additional disks or NICs to the boot order list.
- Click Save.
If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
8.3.2. Editing a boot order list in the web console
Edit the boot order list in the web console.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Boot Order.
Choose the appropriate method to move the item in the boot order list:
- If you do not use a screen reader, hover over the arrow icon next to the item that you want to move, drag the item up or down, and drop it in a location of your choice.
- If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice.
- Click Save.
If the virtual machine is running, changes to the boot order list will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
8.3.3. Editing a boot order list in the YAML configuration file
Edit the boot order list in a YAML configuration file by using the CLI.
Procedure
Open the YAML configuration file for the virtual machine by running the following command:
$ oc edit vm example
Edit the YAML file and modify the values for the boot order associated with a disk or network interface controller (NIC). For example:
disks:
- bootOrder: 1 1
  disk:
    bus: virtio
  name: containerdisk
- disk:
    bus: virtio
  name: cloudinitdisk
- cdrom:
    bus: virtio
  name: cd-drive-1
interfaces:
- bootOrder: 2 2
  macAddress: '02:96:c4:00:00'
  masquerade: {}
  name: default
- Save the YAML file.
- Reload the content in the web console to apply the updated boot order values from the YAML file to the boot order list.
8.3.4. Removing items from a boot order list in the web console
Remove items from a boot order list by using the web console.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Boot Order.
- Click the Remove icon next to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.
If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine.
You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts.
8.4. Deleting virtual machines
You can delete a virtual machine from the web console or by using the oc command line interface.
8.4.1. Deleting a virtual machine using the web console
Deleting a virtual machine permanently removes it from the cluster.
When you delete a virtual machine, the data volume it uses is automatically deleted.
Procedure
- In the OpenShift Virtualization console, click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Click the Options menu of the virtual machine that you want to delete and select Delete Virtual Machine. Alternatively, click the virtual machine name to open the Virtual Machine Overview screen and click Actions → Delete Virtual Machine.
- In the confirmation pop-up window, click Delete to permanently delete the virtual machine.
8.4.2. Deleting a virtual machine by using the CLI
You can delete a virtual machine by using the oc command line interface (CLI). The oc client enables you to perform actions on multiple virtual machines.
When you delete a virtual machine, the data volume it uses is automatically deleted.
Prerequisites
- Identify the name of the virtual machine that you want to delete.
Procedure
Delete the virtual machine by running the following command:
$ oc delete vm <vm_name>
Note: This command only deletes objects that exist in the current project. Specify the -n <project_name> option if the object you want to delete is in a different project or namespace.
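For example, oc can delete several virtual machines in one invocation; the VM names and project below are hypothetical:
$ oc delete vm rhel8-vm-1 rhel8-vm-2 -n example-project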
8.5. Managing virtual machine instances
If you have standalone virtual machine instances (VMIs) that were created independently outside of the OpenShift Virtualization environment, you can manage them by using the web console or the command-line interface (CLI).
8.5.1. About virtual machine instances
A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the oc command-line interface (CLI).
A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the OpenShift Virtualization environment. You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs:
- List standalone VMIs and their details.
- Edit labels and annotations for a standalone VMI.
- Delete a standalone VMI.
When you delete a VM, the associated VMI is automatically deleted. You delete a standalone VMI directly because it is not owned by VMs or other objects.
Before you uninstall OpenShift Virtualization, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs.
8.5.2. Listing all virtual machine instances using the CLI
You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI).
Procedure
List all VMIs by running the following command:
$ oc get vmis
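For example, to list VMIs across all namespaces that you are allowed to view, add the -A flag (requires the corresponding cluster permissions):
$ oc get vmis -A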
8.5.3. Listing standalone virtual machine instances using the web console
Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs).
VMIs that are owned by VMs or other objects are not displayed in the web console. The web console displays only standalone VMIs. If you want to list all VMIs in your cluster, you must use the CLI.
Procedure
- Click Workloads → Virtualization from the side menu. A list of VMs and standalone VMIs displays. You can identify standalone VMIs by the dark colored badges that display next to the virtual machine instance names.
8.5.4. Editing a standalone virtual machine instance using the web console
You can edit annotations and labels for a standalone virtual machine instance (VMI) using the web console. Other items displayed in the Details page for a standalone VMI are not editable.
Procedure
- Click Workloads → Virtualization from the side menu. A list of virtual machines (VMs) and standalone VMIs displays.
- Click the name of a standalone VMI to open the Virtual Machine Instance Overview screen.
- Click the Details tab.
- Click the pencil icon that is located on the right side of Annotations.
- Make the relevant changes and click Save.
To edit labels for a standalone VMI, click Actions and select Edit Labels. Make the relevant changes and click Save.
8.5.5. Deleting a standalone virtual machine instance using the CLI
You can delete a standalone virtual machine instance (VMI) by using the oc command-line interface (CLI).
Prerequisites
- Identify the name of the VMI that you want to delete.
Procedure
Delete the VMI by running the following command:
$ oc delete vmi <vmi_name>
8.5.6. Deleting a standalone virtual machine instance using the web console
Delete a standalone virtual machine instance (VMI) from the web console.
Procedure
- In the OpenShift Container Platform web console, click Workloads → Virtualization from the side menu.
- Click the ⋮ button of the standalone virtual machine instance (VMI) that you want to delete and select Delete Virtual Machine Instance. Alternatively, click the name of the standalone VMI. The Virtual Machine Instance Overview page displays.
- Select Actions → Delete Virtual Machine Instance.
- In the confirmation pop-up window, click Delete to permanently delete the standalone VMI.
8.6. Controlling virtual machine states
You can stop, start, restart, and unpause virtual machines from the web console.
To control virtual machines from the command-line interface (CLI), use the virtctl client.
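For example, the following virtctl commands control the state of a VM named example-vm (the name is hypothetical):
$ virtctl start example-vm
$ virtctl restart example-vm
$ virtctl stop example-vm
$ virtctl pause vm example-vm
$ virtctl unpause vm example-vm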
8.6.1. Starting a virtual machine
You can start a virtual machine from the web console.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Find the row that contains the virtual machine that you want to start.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple virtual machines:
- Click the Options menu located at the far right end of the row.
To view comprehensive information about the selected virtual machine before you start it:
- Access the Virtual Machine Overview screen by clicking the name of the virtual machine.
- Click Actions.
- Select Start Virtual Machine.
- In the confirmation window, click Start to start the virtual machine.
When you start a virtual machine that is provisioned from a URL source for the first time, the virtual machine has a status of Importing while OpenShift Virtualization imports the container from the URL endpoint. Depending on the size of the image, this process might take several minutes.
8.6.2. Restarting a virtual machine
You can restart a running virtual machine from the web console.
To avoid errors, do not restart a virtual machine while it has a status of Importing.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Find the row that contains the virtual machine that you want to restart.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple virtual machines:
- Click the Options menu located at the far right end of the row.
To view comprehensive information about the selected virtual machine before you restart it:
- Access the Virtual Machine Overview screen by clicking the name of the virtual machine.
- Click Actions.
- Select Restart Virtual Machine.
- In the confirmation window, click Restart to restart the virtual machine.
8.6.3. Stopping a virtual machine
You can stop a virtual machine from the web console.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Find the row that contains the virtual machine that you want to stop.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple virtual machines:
- Click the Options menu located at the far right end of the row.
To view comprehensive information about the selected virtual machine before you stop it:
- Access the Virtual Machine Overview screen by clicking the name of the virtual machine.
- Click Actions.
- Select Stop Virtual Machine.
- In the confirmation window, click Stop to stop the virtual machine.
8.6.4. Unpausing a virtual machine
You can unpause a paused virtual machine from the web console.
Prerequisites
- At least one of your virtual machines must have a status of Paused.
Note: You can pause virtual machines by using the virtctl client.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Find the row that contains the virtual machine that you want to unpause.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple virtual machines:
- In the Status column, click Paused.
To view comprehensive information about the selected virtual machine before you unpause it:
- Access the Virtual Machine Overview screen by clicking the name of the virtual machine.
- Click the pencil icon that is located on the right side of Status.
- In the confirmation window, click Unpause to unpause the virtual machine.
8.7. Accessing virtual machine consoles
OpenShift Virtualization provides different virtual machine consoles that you can use to accomplish different product tasks. You can access these consoles through the OpenShift Container Platform web console and by using CLI commands.
8.7.1. Accessing virtual machine consoles in the OpenShift Container Platform web console
You can connect to virtual machines by using the serial console or the VNC console in the OpenShift Container Platform web console.
You can connect to Windows virtual machines by using the desktop viewer console, which uses RDP (remote desktop protocol), in the OpenShift Container Platform web console.
8.7.1.1. Connecting to the serial console
Connect to the serial console of a running virtual machine from the Console tab in the Virtual Machine Overview screen of the web console.
Procedure
- In the OpenShift Virtualization console, click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview page.
- Click Console. The VNC console opens by default.
- Select Disconnect before switching to ensure that only one console session is open at a time. Otherwise, the VNC console session remains active in the background.
- Click the VNC Console drop-down list and select Serial Console.
- Click Disconnect to end the console session.
- Optional: Open the serial console in a separate window by clicking Open Console in New Window.
8.7.1.2. Connecting to the VNC console
Connect to the VNC console of a running virtual machine from the Console tab in the Virtual Machine Overview screen of the web console.
Procedure
- In the OpenShift Virtualization console, click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview page.
- Click the Console tab. The VNC console opens by default.
- Optional: Open the VNC console in a separate window by clicking Open Console in New Window.
- Optional: Send key combinations to the virtual machine by clicking Send Key.
- Click outside the console window and then click Disconnect to end the session.
8.7.1.3. Connecting to a Windows virtual machine with RDP
The desktop viewer console, which utilizes the Remote Desktop Protocol (RDP), provides a better console experience for connecting to Windows virtual machines.
To connect to a Windows virtual machine with RDP, download the console.rdp file for the virtual machine from the Consoles tab in the Virtual Machine Details screen of the web console and supply it to your preferred RDP client.
Prerequisites
- A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent is included in the VirtIO drivers.
- A layer-2 NIC attached to the virtual machine.
- An RDP client installed on a machine on the same network as the Windows virtual machine.
Procedure
- In the OpenShift Virtualization console, click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a Windows virtual machine to open the Virtual Machine Overview screen.
- Click the Console tab.
- In the Console list, select Desktop Viewer.
- In the Network Interface list, select the layer-2 NIC.
- Click Launch Remote Desktop to download the console.rdp file.
- Open an RDP client and reference the console.rdp file. For example, using remmina:
$ remmina --connect /path/to/console.rdp
- Enter the Administrator user name and password to connect to the Windows virtual machine.
8.7.1.4. Copying the SSH command from the web console
Copy the command to access a running virtual machine (VM) via SSH from the Actions list in the web console.
Procedure
- In the OpenShift Container Platform console, click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview page.
- From the Actions list, select Copy SSH Command. You can now paste this command into the OpenShift CLI (oc).
8.7.2. Accessing virtual machine consoles by using CLI commands
8.7.2.1. Accessing a virtual machine instance via SSH
You can use SSH to access a virtual machine (VM) after you expose port 22 on it.
The virtctl expose command forwards a virtual machine instance (VMI) port to a node port and creates a service for enabled access. The following example creates the fedora-vm-ssh service that forwards traffic from a specific port of cluster nodes to port 22 of the <fedora-vm> virtual machine.
Prerequisites
- You must be in the same project as the VMI.
- The VMI you want to access must be connected to the default pod network by using the masquerade binding method.
- The VMI you want to access must be running.
- Install the OpenShift CLI (oc).
Procedure
Run the following command to create the fedora-vm-ssh service:
$ virtctl expose vm <fedora-vm> --port=22 --name=fedora-vm-ssh --type=NodePort 1
1 - <fedora-vm> is the name of the VM that you run the fedora-vm-ssh service on.
Check the service to find out which port the service acquired:
$ oc get svc
Example output
NAME            TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
fedora-vm-ssh   NodePort   127.0.0.1    <none>        22:32551/TCP   6s
In this example, the service acquired port 32551.
Log in to the VMI via SSH. Use the ipAddress of any of the cluster nodes and the port that you found in the previous step:
$ ssh username@<node_IP_address> -p 32551
8.7.2.2. Accessing a virtual machine via SSH with YAML configurations
You can enable an SSH connection to a virtual machine (VM) without the need to run the virtctl expose command. When the YAML file for the VM and the YAML file for the service are configured and applied, the service forwards the SSH traffic to the VM.
The following examples show the configurations for the VM’s YAML file and the service YAML file.
Prerequisites
- Install the OpenShift CLI (oc).
- Create a namespace for the VM’s YAML file by using the oc create namespace command and specifying a name for the namespace.
Procedure
In the YAML file for the VM, add the label and a value for exposing the service for SSH connections. Enable the masquerade feature for the interface:
Example VirtualMachine definition
...
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  namespace: ssh-ns 1
  name: vm-ssh
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-ssh
        special: vm-ssh 2
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - masquerade: {} 3
            name: testmasquerade 4
          rng: {}
        machine:
          type: ""
        resources:
          requests:
            memory: 1024M
      networks:
      - name: testmasquerade
        pod: {}
      volumes:
      - name: containerdisk
        containerDisk:
          image: kubevirt/fedora-cloud-container-disk-demo
      - name: cloudinitdisk
        cloudInitNoCloud:
          userData: |
            #!/bin/bash
            echo "fedora" | passwd fedora --stdin
...
1 - Name of the namespace created by the oc create namespace command.
2 - Label used by the service to identify the virtual machine instances that are enabled for SSH traffic connections. The label can be any key:value pair that is added as a label to this YAML file and as a selector in the service YAML file.
3 - The interface type is masquerade.
4 - The name of this interface is testmasquerade.
Create the VM:
$ oc create -f <path_for_the_VM_YAML_file>
Start the VM:
$ virtctl start vm-ssh
In the YAML file for the service, specify the service name, port number, and the target port.
Example Service definition
...
apiVersion: v1
kind: Service
metadata:
  name: svc-ssh 1
  namespace: ssh-ns 2
spec:
  ports:
  - targetPort: 22 3
    protocol: TCP
    port: 27017
  selector:
    special: vm-ssh 4
  type: NodePort
...
Create the service:
$ oc create -f <path_for_the_service_YAML_file>
Verify that the VM is running:
$ oc get vmi
Example output
NAME     AGE   PHASE     IP               NODENAME
vm-ssh   6s    Running   10.244.196.152   node01
Check the service to find out which port the service acquired:
$ oc get svc
Example output
NAME      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
svc-ssh   NodePort   10.106.236.208   <none>        27017:30093/TCP   22s
In this example, the service acquired the port number 30093.
Run the following command to obtain the IP address for the node:
$ oc get node <node_name> -o wide
Example output
NAME     STATUS   ROLES    AGE     VERSION           INTERNAL-IP      EXTERNAL-IP
node01   Ready    worker   6d22h   v1.20.0+5f82cdb   192.168.55.101   <none>
Log in to the VM via SSH by specifying the IP address of the node where the VM is running and the port number. Use the port number displayed by the oc get svc command and the IP address of the node displayed by the oc get node command. The following example shows the ssh command with the username, node’s IP address, and the port number:
$ ssh fedora@192.168.55.101 -p 30093
8.7.2.3. Accessing the serial console of a virtual machine instance
The virtctl console command opens a serial console to the specified virtual machine instance.
Prerequisites
- The virt-viewer package must be installed.
- The virtual machine instance you want to access must be running.
Procedure
Connect to the serial console with virtctl:
$ virtctl console <VMI>
8.7.2.4. Accessing the graphical console of a virtual machine instance with VNC
The virtctl client utility can use the remote-viewer function to open a graphical console to a running virtual machine instance. This capability is included in the virt-viewer package.
Prerequisites
- The virt-viewer package must be installed.
- The virtual machine instance you want to access must be running.
If you use virtctl via SSH on a remote machine, you must forward the X session to your machine.
Procedure
Connect to the graphical interface with the virtctl utility:
$ virtctl vnc <VMI>
If the command failed, try using the -v flag to collect troubleshooting information:
$ virtctl vnc <VMI> -v 4
8.7.2.5. Connecting to a Windows virtual machine with an RDP console
The Remote Desktop Protocol (RDP) provides a better console experience for connecting to Windows virtual machines.
To connect to a Windows virtual machine with RDP, specify the IP address of the attached L2 NIC to your RDP client.
Prerequisites
- A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent is included in the VirtIO drivers.
- A layer 2 NIC attached to the virtual machine.
- An RDP client installed on a machine on the same network as the Windows virtual machine.
Procedure
Log in to the OpenShift Virtualization cluster through the oc CLI tool as a user with an access token:
$ oc login -u <user> https://<cluster.example.com>:8443
Use oc describe vmi to display the configuration of the running Windows virtual machine:
$ oc describe vmi <windows-vmi-name>
Example output
...
spec:
  networks:
  - name: default
    pod: {}
  - multus:
      networkName: cnv-bridge
    name: bridge-net
...
status:
  interfaces:
  - interfaceName: eth0
    ipAddress: 198.51.100.0/24
    ipAddresses:
      198.51.100.0/24
    mac: a0:36:9f:0f:b1:70
    name: default
  - interfaceName: eth1
    ipAddress: 192.0.2.0/24
    ipAddresses:
      192.0.2.0/24
      2001:db8::/32
    mac: 00:17:a4:77:77:25
    name: bridge-net
...
- Identify and copy the IP address of the layer 2 network interface. This is 192.0.2.0 in the above example, or 2001:db8:: if you prefer IPv6.
- Open an RDP client and use the IP address copied in the previous step for the connection.
- Enter the Administrator user name and password to connect to the Windows virtual machine.
8.8. Triggering virtual machine failover by resolving a failed node
If a node fails and machine health checks are not deployed on your cluster, virtual machines (VMs) with RunStrategy: Always configured are not automatically relocated to healthy nodes. To trigger VM failover, you must manually delete the Node object.
If you installed your cluster by using installer-provisioned infrastructure and you properly configured machine health checks:
- Failed nodes are automatically recycled.
- Virtual machines with RunStrategy set to Always or RerunOnFailure are automatically scheduled on healthy nodes.
8.8.1. Prerequisites
- A node where a virtual machine was running has the NotReady condition.
- The virtual machine that was running on the failed node has RunStrategy set to Always.
- You have installed the OpenShift CLI (oc).
8.8.2. Deleting nodes from a bare metal cluster
When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods.
Procedure
Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps:
Mark the node as unschedulable:
$ oc adm cordon <node_name>
Drain all pods on the node:
$ oc adm drain <node_name> --force=true
This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed.
Delete the node from the cluster:
$ oc delete node <node_name>
Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node.
- If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster.
8.8.3. Verifying virtual machine failover
After all resources are terminated on the unhealthy node, a new virtual machine instance (VMI) is automatically created on a healthy node for each relocated VM. To confirm that the VMI was created, view all VMIs by using the oc CLI.
8.8.3.1. Listing all virtual machine instances using the CLI
You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI).
Procedure
List all VMIs by running the following command:
$ oc get vmis
8.9. Installing the QEMU guest agent on virtual machines
The QEMU guest agent is a daemon that runs on the virtual machine and passes information to the host about the virtual machine, users, file systems, and secondary networks.
8.9.1. Installing QEMU guest agent on a Linux virtual machine
The qemu-guest-agent is widely available and is included by default in Red Hat virtual machines. Install the agent and start the service.
Procedure
- Access the virtual machine command line through one of the consoles or by SSH.
Install the QEMU guest agent on the virtual machine:
$ yum install -y qemu-guest-agent
Ensure the service is persistent and start it:
$ systemctl enable --now qemu-guest-agent
You can also install and start the QEMU guest agent by using the custom script field in the cloud-init section of the wizard when creating either virtual machines or virtual machine templates in the web console.
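For example, a cloud-init custom script along the following lines installs and enables the agent at first boot; this is a sketch that assumes the guest image can reach a package repository:
#cloud-config
packages:
  - qemu-guest-agent
runcmd:
  - [systemctl, enable, --now, qemu-guest-agent]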
8.9.2. Installing QEMU guest agent on a Windows virtual machine
For Windows virtual machines, the QEMU guest agent is included in the VirtIO drivers, which can be installed using one of the following procedures:
8.9.2.1. Installing VirtIO drivers on an existing Windows virtual machine
Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine.
This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps.
Procedure
- Start the virtual machine and connect to a graphical console.
- Log in to a Windows user session.
- Open Device Manager and expand Other devices to list any Unknown device.
- Open the Device Properties to identify the unknown device. Right-click the device and select Properties.
- Click the Details tab and select Hardware Ids in the Property list.
- Compare the Value for the Hardware Ids with the supported VirtIO drivers.
- Right-click the device and select Update Driver Software.
- Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
- Click Next to install the driver.
- Repeat this process for all the necessary VirtIO drivers.
- After the driver installs, click Close to close the window.
- Reboot the virtual machine to complete the driver installation.
8.9.2.2. Installing VirtIO drivers during Windows installation
Install the VirtIO drivers from the attached SATA CD drive during Windows installation.
This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing.
Procedure
- Start the virtual machine and connect to a graphical console.
- Begin the Windows installation process.
- Select the Advanced installation.
- The storage destination will not be recognized until the driver is loaded. Click Load driver.
- The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
- Repeat the previous two steps for all required drivers.
- Complete the Windows installation.
8.10. Viewing the QEMU guest agent information for virtual machines
When the QEMU guest agent runs on the virtual machine, you can use the web console to view information about the virtual machine, users, file systems, and secondary networks.
8.10.1. Prerequisites
- Install the QEMU guest agent on the virtual machine.
8.10.2. About the QEMU guest agent information in the web console
When the QEMU guest agent is installed, the Details pane within the Virtual Machine Overview tab and the Details tab display information about the hostname, operating system, time zone, and logged in users.
The Virtual Machine Overview shows information about the guest operating system installed on the virtual machine. The Details tab displays a table with information for logged in users. The Disks tab displays a table with information for file systems.
If the QEMU guest agent is not installed, the Virtual Machine Overview tab and the Details tab display information about the operating system that was specified when the virtual machine was created.
8.10.3. Viewing the QEMU guest agent information in the web console
You can use the web console to view information for virtual machines that is passed by the QEMU guest agent to the host.
Procedure
- Click Workloads → Virtual Machines from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine name to open the Virtual Machine Overview screen and view the Details pane.
- Click Logged in users to view the Details tab that shows information for users.
- Click the Disks tab to view information about the file systems.
8.11. Managing config maps, secrets, and service accounts in virtual machines
You can use secrets, config maps, and service accounts to pass configuration data to virtual machines. For example, you can:
- Give a virtual machine access to a service that requires credentials by adding a secret to the virtual machine.
- Store non-confidential configuration data in a config map so that a pod or another object can consume the data.
- Allow a component to access the API server by associating a service account with that component.
OpenShift Virtualization exposes secrets, config maps, and service accounts as virtual machine disks so that you can use them across platforms without additional overhead.
8.11.1. Adding a secret, config map, or service account to a virtual machine
Add a secret, config map, or service account to a virtual machine by using the OpenShift Container Platform web console.
Prerequisites
- The secret, config map, or service account that you want to add must exist in the same namespace as the target virtual machine.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the Environment tab.
- Click Select a resource and select a secret, config map, or service account from the list. A six character serial number is automatically generated for the selected resource.
- Click Save.
- Optional. Add another object by clicking Add Config Map, Secret or Service Account.
- You can reset the form to the last saved state by clicking Reload.
- The Environment resources are added to the virtual machine as disks. You can mount the secret, config map, or service account as you would mount any other disk.
- If the virtual machine is running, changes do not take effect until you restart the virtual machine. The newly added resources are marked as pending changes for both the Environment and Disks tabs in the Pending Changes banner at the top of the page.
Verification
- From the Virtual Machine Overview page, click the Disks tab.
- Check to ensure that the secret, config map, or service account is included in the list of disks.
Optional. Choose the appropriate method to apply your changes:
- If the virtual machine is running, restart the virtual machine by clicking Actions → Restart Virtual Machine.
- If the virtual machine is stopped, start the virtual machine by clicking Actions → Start Virtual Machine.
You can now mount the secret, config map, or service account as you would mount any other disk.
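For reference, a VM manifest that exposes a secret as a disk contains entries similar to the following sketch. The disk and volume name and the my-app-secret secret are hypothetical, and the automatically generated serial number will differ in your environment:
spec:
  template:
    spec:
      domain:
        devices:
          disks:
            - name: app-secret-disk
              serial: D23YZ9          # six-character serial; the generated value will differ
              disk:
                bus: virtio
      volumes:
        - name: app-secret-disk
          secret:
            secretName: my-app-secret  # hypothetical secret in the same namespace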
8.11.2. Removing a secret, config map, or service account from a virtual machine
Remove a secret, config map, or service account from a virtual machine by using the OpenShift Container Platform web console.
Prerequisites
- You must have at least one secret, config map, or service account that is attached to a virtual machine.
Procedure
- Click Workloads → Virtualization from the side menu.
- Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the Environment tab.
- Find the item that you want to delete in the list, and click Remove on the right side of the item.
- Click Save.
You can reset the form to the last saved state by clicking Reload.
Verification
- From the Virtual Machine Overview page, click the Disks tab.
- Check to ensure that the secret, config map, or service account that you removed is no longer included in the list of disks.
8.11.3. Additional resources
8.12. Installing VirtIO driver on an existing Windows virtual machine
8.12.1. Understanding VirtIO drivers
VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in OpenShift Virtualization. The supported drivers are available in the container-native-virtualization/virtio-win
container disk of the Red Hat Ecosystem Catalog.
The container-native-virtualization/virtio-win
container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation on the virtual machine or add them to an existing Windows installation.
After the drivers are installed, the container-native-virtualization/virtio-win
container disk can be removed from the virtual machine.
See also: Installing VirtIO drivers on a new Windows virtual machine.
8.12.2. Supported VirtIO drivers for Microsoft Windows virtual machines
| Driver name | Hardware ID | Description |
|---|---|---|
| viostor | VEN_1AF4&DEV_1001 | The block driver. Sometimes displays as an SCSI Controller in the Other devices group. |
| viorng | VEN_1AF4&DEV_1005 | The entropy source driver. Sometimes displays as a PCI Device in the Other devices group. |
| NetKVM | VEN_1AF4&DEV_1000 | The network driver. Sometimes displays as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. |
8.12.3. Adding VirtIO drivers container disk to a virtual machine
OpenShift Virtualization distributes VirtIO drivers for Microsoft Windows as a container disk, which is available from the Red Hat Ecosystem Catalog. To install these drivers to a Windows virtual machine, attach the container-native-virtualization/virtio-win
container disk to the virtual machine as a SATA CD drive in the virtual machine configuration file.
Prerequisites
- Download the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog. This is not mandatory, because the container disk will be downloaded from the Red Hat registry if it is not already present in the cluster, but it can reduce installation time.
Procedure
Add the
container-native-virtualization/virtio-win
container disk as a cdrom
disk in the Windows virtual machine configuration file. The container disk will be downloaded from the registry if it is not already present in the cluster.spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk
- 1
- OpenShift Virtualization boots virtual machine disks in the order defined in the
VirtualMachine
configuration file. You can either define other disks for the virtual machine before thecontainer-native-virtualization/virtio-win
container disk or use the optionalbootOrder
parameter to ensure the virtual machine boots from the correct disk. If you specify thebootOrder
for a disk, it must be specified for all disks in the configuration.
The disk is available once the virtual machine has started:
- If you add the container disk to a running virtual machine, use oc apply -f <vm.yaml> in the CLI or reboot the virtual machine for the changes to take effect.
- If the virtual machine is not running, use virtctl start <vm>.
After the virtual machine has started, the VirtIO drivers can be installed from the attached SATA CD drive.
8.12.4. Installing VirtIO drivers on an existing Windows virtual machine
Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine.
This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. See the installation documentation for your version of Windows for specific installation steps.
Procedure
- Start the virtual machine and connect to a graphical console.
- Log in to a Windows user session.
- Open Device Manager and expand Other devices to list any Unknown device.
- Open the Device Properties to identify the unknown device. Right-click the device and select Properties.
- Click the Details tab and select Hardware Ids in the Property list.
- Compare the Value for the Hardware Ids with the supported VirtIO drivers.
- Right-click the device and select Update Driver Software.
- Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
- Click Next to install the driver.
- Repeat this process for all the necessary VirtIO drivers.
- After the driver installs, click Close to close the window.
- Reboot the virtual machine to complete the driver installation.
8.12.5. Removing the VirtIO container disk from a virtual machine
After installing all required VirtIO drivers to the virtual machine, the container-native-virtualization/virtio-win
container disk no longer needs to be attached to the virtual machine. Remove the container-native-virtualization/virtio-win
container disk from the virtual machine configuration file.
Procedure
Edit the configuration file and remove the disk and the volume:
$ oc edit vm <vm-name>
spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk
- Reboot the virtual machine for the changes to take effect.
8.13. Installing VirtIO driver on a new Windows virtual machine
8.13.1. Prerequisites
- Windows installation media accessible by the virtual machine, such as an ISO imported into a data volume and attached to the virtual machine.
8.13.2. Understanding VirtIO drivers
VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in OpenShift Virtualization. The supported drivers are available in the container-native-virtualization/virtio-win
container disk of the Red Hat Ecosystem Catalog.
The container-native-virtualization/virtio-win
container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation on the virtual machine or add them to an existing Windows installation.
After the drivers are installed, the container-native-virtualization/virtio-win
container disk can be removed from the virtual machine.
See also: Installing VirtIO driver on an existing Windows virtual machine.
8.13.3. Supported VirtIO drivers for Microsoft Windows virtual machines
| Driver name | Hardware ID | Description |
|---|---|---|
| viostor | VEN_1AF4&DEV_1001 | The block driver. Sometimes displays as an SCSI Controller in the Other devices group. |
| viorng | VEN_1AF4&DEV_1005 | The entropy source driver. Sometimes displays as a PCI Device in the Other devices group. |
| NetKVM | VEN_1AF4&DEV_1000 | The network driver. Sometimes displays as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. |
8.13.4. Adding VirtIO drivers container disk to a virtual machine
OpenShift Virtualization distributes VirtIO drivers for Microsoft Windows as a container disk, which is available from the Red Hat Ecosystem Catalog. To install these drivers to a Windows virtual machine, attach the container-native-virtualization/virtio-win
container disk to the virtual machine as a SATA CD drive in the virtual machine configuration file.
Prerequisites
- Download the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog. This is not mandatory, because the container disk will be downloaded from the Red Hat registry if it is not already present in the cluster, but it can reduce installation time.
Procedure
Add the
container-native-virtualization/virtio-win
container disk as a cdrom
disk in the Windows virtual machine configuration file. The container disk will be downloaded from the registry if it is not already present in the cluster.spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk
- 1
- OpenShift Virtualization boots virtual machine disks in the order defined in the
VirtualMachine
configuration file. You can either define other disks for the virtual machine before thecontainer-native-virtualization/virtio-win
container disk or use the optionalbootOrder
parameter to ensure the virtual machine boots from the correct disk. If you specify thebootOrder
for a disk, it must be specified for all disks in the configuration.
The disk is available once the virtual machine has started:
- If you add the container disk to a running virtual machine, use oc apply -f <vm.yaml> in the CLI or reboot the virtual machine for the changes to take effect.
- If the virtual machine is not running, use virtctl start <vm>.
After the virtual machine has started, the VirtIO drivers can be installed from the attached SATA CD drive.
8.13.5. Installing VirtIO drivers during Windows installation
Install the VirtIO drivers from the attached SATA CD drive during Windows installation.
This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing.
Procedure
- Start the virtual machine and connect to a graphical console.
- Begin the Windows installation process.
- Select the Advanced installation.
- The storage destination will not be recognized until the driver is loaded. Click Load driver.
- The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
- Repeat the previous two steps for all required drivers.
- Complete the Windows installation.
8.13.6. Removing the VirtIO container disk from a virtual machine
After installing all required VirtIO drivers to the virtual machine, the container-native-virtualization/virtio-win
container disk no longer needs to be attached to the virtual machine. Remove the container-native-virtualization/virtio-win
container disk from the virtual machine configuration file.
Procedure
Edit the configuration file and remove the disk and the volume:
$ oc edit vm <vm-name>
spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk
- Reboot the virtual machine for the changes to take effect.
8.14. Advanced virtual machine management
8.14.1. Working with resource quotas for virtual machines
Create and manage resource quotas for virtual machines.
8.14.1.1. Setting resource quota limits for virtual machines
Resource quotas that only use requests automatically work with virtual machines (VMs). If your resource quota uses limits, you must manually set resource limits on VMs. Resource limits must be at least 100 MiB larger than resource requests.
Procedure
Set limits for a VM by editing the
VirtualMachine
manifest. For example:apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: running: false template: spec: domain: # ... resources: requests: memory: 128Mi limits: memory: 256Mi 1
- 1
- This configuration is supported because the
limits.memory
value is at least100Mi
larger than therequests.memory
value.
- Save the VirtualMachine manifest.
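For context, the kind of resource quota that requires explicit VM limits is one that constrains limits, as in the following sketch of a standard Kubernetes ResourceQuota. The quota name, namespace, and values are hypothetical:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: vm-quota                 # hypothetical name
  namespace: example-namespace   # hypothetical namespace
spec:
  hard:
    limits.memory: 16Gi    # because limits are constrained, VMs must set explicit limits
    requests.memory: 8Gi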
8.14.1.2. Additional resources
8.14.2. Specifying nodes for virtual machines
You can place virtual machines (VMs) on specific nodes by using node placement rules.
8.14.2.1. About node placement for virtual machines
To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. You might want to do this if:
- You have several VMs. To ensure fault tolerance, you want them to run on different nodes.
- You have two chatty VMs. To avoid redundant inter-node routing, you want the VMs to run on the same node.
- Your VMs require specific hardware features that are not present on all available nodes.
- You have a pod that adds capabilities to a node, and you want to place a VM on that node so that it can use those capabilities.
Virtual machine placement relies on any existing node placement rules for workloads. If workloads are excluded from specific nodes on the component level, virtual machines cannot be placed on those nodes.
You can use the following rule types in the spec
field of a VirtualMachine
manifest:
nodeSelector
- Allows virtual machines to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs.
affinity
- Enables you to use more expressive syntax to set rules that match nodes with virtual machines. For example, you can specify that a rule is a preference, rather than a hard requirement, so that virtual machines are still scheduled if the rule is not satisfied. Pod affinity, pod anti-affinity, and node affinity are supported for virtual machine placement. Pod affinity works for virtual machines because the VirtualMachine workload type is based on the Pod object.
Note: Affinity rules only apply during scheduling. OpenShift Container Platform does not reschedule running workloads if the constraints are no longer met.
tolerations
- Allows virtual machines to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts virtual machines that tolerate the taint.
8.14.2.2. Node placement examples
The following example YAML file snippets use nodeSelector, affinity, and tolerations fields to customize node placement for virtual machines.
8.14.2.2.1. Example: VM node placement with nodeSelector
In this example, the virtual machine requires a node that has metadata containing both example-key-1 = example-value-1
and example-key-2 = example-value-2
labels.
If there are no nodes that fit this description, the virtual machine is not scheduled.
Example VM manifest
metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2 ...
8.14.2.2.2. Example: VM node placement with pod affinity and pod anti-affinity
In this example, the VM must be scheduled on a node that has a running pod with the label example-key-1 = example-value-1
. If there is no such pod running on any node, the VM is not scheduled.
If possible, the VM is not scheduled on a node that has any pod with the label example-key-2 = example-value-2
. However, if all candidate nodes have a pod with this label, the scheduler ignores this constraint.
Example VM manifest
metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname ...
- 1
- If you use the
requiredDuringSchedulingIgnoredDuringExecution
rule type, the VM is not scheduled if the constraint is not met. - 2
- If you use the
preferredDuringSchedulingIgnoredDuringExecution
rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met.
8.14.2.2.3. Example: VM node placement with node affinity
In this example, the VM must be scheduled on a node that has the label example.io/example-key = example-value-1
or the label example.io/example-key = example-value-2
. The constraint is met if only one of the labels is present on the node. If neither label is present, the VM is not scheduled.
If possible, the scheduler avoids nodes that have the label example-node-label-key = example-node-label-value
. However, if all candidate nodes have this label, the scheduler ignores this constraint.
Example VM manifest
metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value ...
- 1
- If you use the
requiredDuringSchedulingIgnoredDuringExecution
rule type, the VM is not scheduled if the constraint is not met. - 2
- If you use the
preferredDuringSchedulingIgnoredDuringExecution
rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met.
8.14.2.2.4. Example: VM node placement with tolerations
In this example, nodes that are reserved for virtual machines are already tainted with the key=virtualization:NoSchedule taint. Because this virtual machine has matching tolerations, it can be scheduled onto the tainted nodes.
A virtual machine that tolerates a taint is not required to schedule onto a node with that taint.
Example VM manifest
metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: "key" operator: "Equal" value: "virtualization" effect: "NoSchedule" ...
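For reference, a cluster administrator would apply such a taint to a node with a command like the following; the node name is a placeholder:
$ oc adm taint nodes <node_name> key=virtualization:NoSchedule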
8.14.2.3. Additional resources
8.14.3. Configuring certificate rotation
Configure certificate rotation parameters to replace existing certificates.
8.14.3.1. Configuring certificate rotation
You can configure certificate rotation during OpenShift Virtualization installation in the web console or after installation in the HyperConverged
custom resource (CR).
Procedure
Open the
HyperConverged
CR by running the following command:$ oc edit hco -n openshift-cnv kubevirt-hyperconverged
Edit the
spec.certConfig
fields as shown in the following example. To avoid overloading the system, ensure that all values are greater than or equal to 10 minutes. Express all values as strings that comply with the golangParseDuration
format.apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3
- Apply the YAML file to your cluster.
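To review the values that are currently set, you can optionally print the certConfig stanza from the CR with a standard jsonpath query, for example:
$ oc get hco kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.certConfig}'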
8.14.3.2. Troubleshooting certificate rotation parameters
Deleting one or more certConfig
values causes them to revert to the default values, unless the default values conflict with one of the following conditions:
- The value of ca.renewBefore must be less than or equal to the value of ca.duration.
- The value of server.duration must be less than or equal to the value of ca.duration.
- The value of server.renewBefore must be less than or equal to the value of server.duration.
If the default values conflict with these conditions, you will receive an error.
If you remove the server.duration
value in the following example, the default value of 24h0m0s
is greater than the value of ca.duration
, conflicting with the specified conditions.
Example
certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s
This results in the following error message:
error: hyperconvergeds.hco.kubevirt.io "kubevirt-hyperconverged" could not be patched: admission webhook "validate-hco.kubevirt.io" denied the request: spec.certConfig: ca.duration is smaller than server.duration
The error message only mentions the first conflict. Review all certConfig values before you proceed.
8.14.4. Automating management tasks
You can automate OpenShift Virtualization management tasks by using Red Hat Ansible Automation Platform. Learn the basics by using an Ansible Playbook to create a new virtual machine.
8.14.4.1. About Red Hat Ansible Automation
Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. Ansible includes support for OpenShift Virtualization, and Ansible modules enable you to automate cluster management tasks such as template, persistent volume claim, and virtual machine operations.
Ansible provides a way to automate OpenShift Virtualization management, which you can also accomplish by using the oc
CLI tool or APIs. Ansible is unique because it allows you to integrate KubeVirt modules with other Ansible modules.
8.14.4.2. Automating virtual machine creation
You can use the kubevirt_vm
Ansible Playbook to create virtual machines in your OpenShift Container Platform cluster using Red Hat Ansible Automation Platform.
Prerequisites
- Red Hat Ansible Engine version 2.8 or newer
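You can optionally confirm the installed version before you begin:
$ ansible --version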
Procedure
Edit an Ansible Playbook YAML file so that it includes the
kubevirt_vm
task:kubevirt_vm: namespace: name: cpu_cores: memory: disks: - name: volume: containerDisk: image: disk: bus:
Note: This snippet only includes the kubevirt_vm portion of the playbook.
Edit the values to reflect the virtual machine you want to create, including the namespace, the number of cpu_cores, the memory, and the disks. For example:
kubevirt_vm: namespace: default name: vm1 cpu_cores: 1 memory: 64Mi disks: - name: containerdisk volume: containerDisk: image: kubevirt/cirros-container-disk-demo:latest disk: bus: virtio
If you want the virtual machine to boot immediately after creation, add state: running to the YAML file. For example:
kubevirt_vm: namespace: default name: vm1 state: running 1 cpu_cores: 1
- 1
- Changing this value to
state: absent
deletes the virtual machine, if it already exists.
Run the
ansible-playbook
command, using your playbook’s file name as the only argument:$ ansible-playbook create-vm.yaml
Review the output to determine if the play was successful:
Example output
(...) TASK [Create my first VM] ************************************************************************ changed: [localhost] PLAY RECAP ******************************************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
If you did not include state: running in your playbook file and you want to boot the VM now, edit the file so that it includes state: running and run the playbook again:
$ ansible-playbook create-vm.yaml
To verify that the virtual machine was created, try to access the VM console.
8.14.4.3. Example: Ansible Playbook for creating virtual machines
You can use the kubevirt_vm
Ansible Playbook to automate virtual machine creation.
The following YAML file is an example of the kubevirt_vm
playbook. It includes sample values that you must replace with your own information if you run the playbook.
--- - name: Ansible Playbook 1 hosts: localhost connection: local tasks: - name: Create my first VM kubevirt_vm: namespace: default name: vm1 cpu_cores: 1 memory: 64Mi disks: - name: containerdisk volume: containerDisk: image: kubevirt/cirros-container-disk-demo:latest disk: bus: virtio
Additional information
8.14.5. Using EFI mode for virtual machines
You can boot a virtual machine (VM) in Extensible Firmware Interface (EFI) mode.
8.14.5.1. About EFI mode for virtual machines
Extensible Firmware Interface (EFI), like legacy BIOS, initializes hardware components and operating system image files when a computer starts. EFI supports more modern features and customization options than BIOS, enabling faster boot times.
It stores all the information about initialization and startup in a file with a .efi
extension, which is stored on a special partition called EFI System Partition (ESP). The ESP also contains the boot loader programs for the operating system that is installed on the computer.
OpenShift Virtualization supports a virtual machine (VM) in EFI mode only when Secure Boot is enabled. If Secure Boot is not enabled, the VM crashes repeatedly. However, the VM might not support Secure Boot. Before you boot a VM in EFI mode, verify that it supports Secure Boot by checking the VM settings.
8.14.5.2. Booting virtual machines in EFI mode
You can configure a virtual machine to boot in EFI mode by editing the VM or VMI manifest.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Create a YAML file that defines a VM object or a Virtual Machine Instance (VMI) object. Use the firmware stanza of the example YAML file:
Booting in EFI mode with secure boot active
apiVersion: kubevirt.io/v1 kind: VirtualMachineInstance metadata: labels: special: vmi-secureboot name: vmi-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2 ...
- 1
- OpenShift Virtualization requires System Management Mode (
SMM
) to be enabled for Secure Boot in EFI mode to occur. - 2
- OpenShift Virtualization only supports a virtual machine (VM) with Secure Boot when using EFI mode. If Secure Boot is not enabled, the VM crashes repeatedly. However, the VM might not support Secure Boot. Before you boot a VM, verify that it supports Secure Boot by checking the VM settings.
Apply the manifest to your cluster by running the following command:
$ oc create -f <file_name>.yaml
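To confirm that the VMI defined in the example starts successfully, you can optionally check its phase; the name vmi-secureboot comes from the example manifest above:
$ oc get vmi vmi-secureboot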
8.14.6. Configuring PXE booting for virtual machines
PXE booting, or network booting, is available in OpenShift Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host.
8.14.6.1. Prerequisites
- A Linux bridge must be connected.
- The PXE server must be connected to the same VLAN as the bridge.
8.14.6.2. PXE booting with a specified MAC address
As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition
object for your PXE network. Then, reference the network attachment definition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server.
Prerequisites
- A Linux bridge must be connected.
- The PXE server must be connected to the same VLAN as the bridge.
Procedure
Configure a PXE network on the cluster:
Create the network attachment definition file for PXE network
pxe-net-conf
:apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf spec: config: '{ "cniVersion": "0.3.1", "name": "pxe-net-conf", "plugins": [ { "type": "cnv-bridge", "bridge": "br1", "vlan": 1 1 }, { "type": "cnv-tuning" 2 } ] }'
Note: The virtual machine instance will be attached to the bridge br1 through an access port with the requested VLAN.
Create the network attachment definition by using the file you created in the previous step:
$ oc create -f pxe-net-conf.yaml
Edit the virtual machine instance configuration file to include the details of the interface and network.
Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically.
Ensure that
bootOrder
is set to1
so that the interface boots first. In this example, the interface is connected to a network called<pxe-net>
:interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1
Note: Boot order is global for interfaces and disks.
Assign a boot device number to the disk to ensure proper booting after operating system provisioning.
Set the disk
bootOrder
value to2
:devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2
Specify that the network is connected to the previously created network attachment definition. In this scenario,
<pxe-net>
is connected to the network attachment definition called<pxe-net-conf>
:networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf
Create the virtual machine instance:
$ oc create -f vmi-pxe-boot.yaml
Example output
virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created
Wait for the virtual machine instance to run:
$ oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running
View the virtual machine instance using VNC:
$ virtctl vnc vmi-pxe-boot
- Watch the boot screen to verify that the PXE boot is successful.
Log in to the virtual machine instance:
$ virtctl console vmi-pxe-boot
Verify the interfaces and MAC address on the virtual machine and that the interface connected to the bridge has the specified MAC address. In this case, we used eth1 for the PXE boot, without an IP address. The other interface, eth0, got an IP address from OpenShift Container Platform.
$ ip addr
Example output
... 3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff
8.14.6.3. Template: Virtual machine instance configuration file for PXE booting
apiVersion: kubevirt.io/v1 kind: VirtualMachineInstance metadata: creationTimestamp: null labels: special: vmi-pxe-boot name: vmi-pxe-boot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2 - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1 machine: type: "" resources: requests: memory: 1024M networks: - name: default pod: {} - multus: networkName: pxe-net-conf name: pxe-net terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demo - cloudInitNoCloud: userData: | #!/bin/bash echo "fedora" | passwd fedora --stdin name: cloudinitdisk status: {}
8.14.7. Managing guest memory
If you want to adjust guest memory settings to suit a specific use case, you can do so by editing the guest’s YAML configuration file. OpenShift Virtualization allows you to configure guest memory overcommitment and disable guest memory overhead accounting.
The following procedures increase the chance that virtual machine processes will be killed due to memory pressure. Proceed only if you understand the risks.
8.14.7.1. Configuring guest memory overcommitment
If your virtual workload requires more memory than available, you can use memory overcommitment to allocate all or most of the host’s memory to your virtual machine instances (VMIs). Enabling memory overcommitment means that you can maximize resources that are normally reserved for the host.
For example, if the host has 32 GB RAM, you can use memory overcommitment to fit 8 virtual machines (VMs) with 4 GB RAM each. This allocation works under the assumption that the virtual machines will not use all of their memory at the same time.
Memory overcommitment increases the potential for virtual machine processes to be killed due to memory pressure (OOM killed).
The potential for a VM to be OOM killed varies based on your specific configuration, node memory, available swap space, virtual machine memory consumption, the use of kernel same-page merging (KSM), and other factors.
Procedure
To explicitly tell the virtual machine instance that it has more memory available than was requested from the cluster, edit the virtual machine configuration file and set
spec.domain.memory.guest
to a higher value thanspec.domain.resources.requests.memory
. This process is called memory overcommitment.In this example,
1024M
is requested from the cluster, but the virtual machine instance is told that it has2048M
available. As long as there is enough free memory available on the node, the virtual machine instance will consume up to 2048M.kind: VirtualMachine spec: template: domain: resources: requests: memory: 1024M memory: guest: 2048M
Note: The same eviction rules as those for pods apply to the virtual machine instance if the node is under memory pressure.
Create the virtual machine:
$ oc create -f <file_name>.yaml
8.14.7.2. Disabling guest memory overhead accounting
A small amount of memory is requested by each virtual machine instance in addition to the amount that you request. This additional memory is used for the infrastructure that wraps each VirtualMachineInstance
process.
Though it is not usually advisable, it is possible to increase the virtual machine instance density on the node by disabling guest memory overhead accounting.
Disabling guest memory overhead accounting increases the potential for virtual machine processes to be killed due to memory pressure (OOM killed).
The potential for a VM to be OOM killed varies based on your specific configuration, node memory, available swap space, virtual machine memory consumption, the use of kernel same-page merging (KSM), and other factors.
Procedure
To disable guest memory overhead accounting, edit the YAML configuration file and set the
overcommitGuestOverhead
value totrue
. This parameter is disabled by default.kind: VirtualMachine spec: template: domain: resources: overcommitGuestOverhead: true requests: memory: 1024M
Note: If overcommitGuestOverhead is enabled, it adds the guest overhead to memory limits, if present.
Create the virtual machine:
$ oc create -f <file_name>.yaml
8.14.8. Using huge pages with virtual machines
You can use huge pages as backing memory for virtual machines in your cluster.
8.14.8.1. Prerequisites
- Nodes must have pre-allocated huge pages configured.
8.14.8.2. What huge pages do
Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 262,144 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Because the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size.
A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation because of the defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to use (or may recommend) pre-allocated huge pages instead of THP.
In OpenShift Virtualization, virtual machines can be configured to consume pre-allocated huge pages.
8.14.8.3. Configuring huge pages for virtual machines
You can configure virtual machines to use pre-allocated huge pages by including the memory.hugepages.pageSize
and resources.requests.memory
parameters in your virtual machine configuration.
The memory request must be divisible by the page size. For example, you cannot request 500Mi
memory with a page size of 1Gi
.
The memory layouts of the host and the guest OS are unrelated. Huge pages requested in the virtual machine manifest apply to QEMU. Huge pages inside the guest can only be configured based on the amount of available memory of the virtual machine instance.
If you edit a running virtual machine, the virtual machine must be rebooted for the changes to take effect.
Prerequisites
- Nodes must have pre-allocated huge pages configured.
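To confirm that a node advertises pre-allocated huge pages before you proceed, you can optionally inspect its resources, for example:
$ oc describe node <node_name> | grep -i hugepages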
Procedure
In your virtual machine configuration, add the
resources.requests.memory
andmemory.hugepages.pageSize
parameters to thespec.domain
. The following configuration snippet is for a virtual machine that requests a total of4Gi
memory with a page size of1Gi
:kind: VirtualMachine ... spec: domain: resources: requests: memory: "4Gi" 1 memory: hugepages: pageSize: "1Gi" 2 ...
Apply the virtual machine configuration:
$ oc apply -f <virtual_machine>.yaml
8.14.9. Enabling dedicated resources for virtual machines
To improve performance, you can dedicate node resources, such as CPU, to a virtual machine.
8.14.9.1. About dedicated resources
When you enable dedicated resources for your virtual machine, your virtual machine’s workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions.
8.14.9.2. Prerequisites
- The CPU Manager must be configured on the node. Verify that the node has the cpumanager = true label before scheduling virtual machine workloads, as shown in the check after this list.
- The virtual machine must be powered off.
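A quick way to verify the label is to list the nodes that carry it; this optional check is not part of the original prerequisites:
$ oc get nodes -l cpumanager=true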
8.14.9.3. Enabling dedicated resources for a virtual machine
You can enable dedicated resources for a virtual machine on the Details tab. Dedicated resources can be enabled for virtual machines that were created by using a Red Hat template or the wizard.
Procedure
- Click Workloads → Virtual Machines from the side menu.
- Select a virtual machine to open the Virtual Machine tab.
- Click the Details tab.
- Click the pencil icon to the right of the Dedicated Resources field to open the Dedicated Resources window.
- Select Schedule this workload with dedicated resources (guaranteed policy).
- Click Save.
8.14.10. Scheduling virtual machines
You can schedule a virtual machine (VM) on a node by ensuring that the VM’s CPU model and policy attribute are matched for compatibility with the CPU models and policy attributes supported by the node.
8.14.10.1. Understanding policy attributes
You can schedule a virtual machine (VM) by specifying a policy attribute and a CPU feature that is matched for compatibility when the VM is scheduled on a node. A policy attribute specified for a VM determines how that VM is scheduled on a node.
Policy attribute | Description |
---|---|
force | The VM is forced to be scheduled on a node. This is true even if the host CPU does not support the VM’s CPU. |
require | Default policy that applies to a VM if the VM is not configured with a specific CPU model and feature specification. If a node is not configured to support CPU node discovery with this default policy attribute or any one of the other policy attributes, VMs are not scheduled on that node. Either the host CPU must support the VM’s CPU or the hypervisor must be able to emulate the supported CPU model. |
optional | The VM is added to a node if that VM is supported by the host’s physical machine CPU. |
disable | The VM cannot be scheduled with CPU node discovery. |
forbid | The VM is not scheduled even if the feature is supported by the host CPU and CPU node discovery is enabled. |
8.14.10.2. Setting a policy attribute and CPU feature
You can set a policy attribute and CPU feature for each virtual machine (VM) to ensure that it is scheduled on a node according to policy and feature. The CPU feature that you set is verified to ensure that it is supported by the host CPU or emulated by the hypervisor.
Procedure
Edit the
domain
spec of your VM configuration file. The following example sets the CPU feature and therequire
policy for a virtual machine instance (VMI):apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvmi spec: domain: cpu: features: - name: apic 1 policy: require 2
8.14.10.3. Scheduling virtual machines with the supported CPU model
You can configure a CPU model for a virtual machine (VM) or a virtual machine instance (VMI) to schedule it on a node where its CPU model is supported.
Procedure
Edit the
domain
spec of your virtual machine configuration file. The following example shows a specific CPU model defined for a VMI:apiVersion: kubevirt.io/v1 kind: VirtualMachineInstance metadata: name: myvmi spec: domain: cpu: model: Conroe 1
- 1
- CPU model for the VMI.
8.14.10.4. Scheduling virtual machines with the host model
When the CPU model for a virtual machine (VM) is set to host-model
, the VM inherits the CPU model of the node where it is scheduled.
Procedure
Edit the
domain
spec of your VM configuration file. The following example showshost-model
being specified for the virtual machine instance (VMI):apiVersion: kubevirt/v1alpha3 kind: VirtualMachineInstance metadata: name: myvmi spec: domain: cpu: model: host-model 1
- 1
- The VM or VMI that inherits the CPU model of the node where it is scheduled.
8.14.11. Configuring PCI passthrough
The Peripheral Component Interconnect (PCI) passthrough feature enables you to access and manage hardware devices from a virtual machine. When PCI passthrough is configured, the PCI devices function as if they were physically attached to the guest operating system.
Cluster administrators can expose and manage host devices that are permitted to be used in the cluster by using the oc
command-line interface (CLI).
8.14.11.1. About preparing a host device for PCI passthrough
To prepare a host device for PCI passthrough by using the CLI, create a MachineConfig
object and add kernel arguments to enable the Input-Output Memory Management Unit (IOMMU). Bind the PCI device to the Virtual Function I/O (VFIO) driver and then expose it in the cluster by editing the permittedHostDevices
field of the HyperConverged
custom resource (CR). The permittedHostDevices
list is empty when you first install the OpenShift Virtualization Operator.
To remove a PCI host device from the cluster by using the CLI, delete the PCI device information from the HyperConverged
CR.
8.14.11.1.1. Adding kernel arguments to enable the IOMMU driver
To enable the IOMMU (Input-Output Memory Management Unit) driver in the kernel, create the MachineConfig
object and add the kernel arguments.
Prerequisites
- Administrative privilege to a working OpenShift Container Platform cluster.
- Intel or AMD CPU hardware.
- Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS (Basic Input/Output System) is enabled.
Procedure
Create a
MachineConfig
object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU.apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3 ...
Create the new
MachineConfig
object:$ oc create -f 100-worker-kernel-arg-iommu.yaml
Verification
Verify that the new
MachineConfig
object was added.$ oc get MachineConfig
8.14.11.1.2. Binding PCI devices to the VFIO driver
To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values for vendor-ID
and device-ID
from each device and create a list with the values. Add this list to the MachineConfig
object. The MachineConfig
Operator generates the /etc/modprobe.d/vfio.conf
file on the nodes with the PCI devices, and binds the PCI devices to the VFIO driver.
Prerequisites
- You added kernel arguments to enable IOMMU for the CPU.
Procedure
Run the
lspci
command to obtain thevendor-ID
and thedevice-ID
for the PCI device.$ lspci -nnv | grep -i nvidia
Example output
02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)
Create a Butane config file,
100-worker-vfiopci.bu
, binding the PCI device to the VFIO driver.
Note: See "Creating machine configs with Butane" for information about Butane.
Example
variant: openshift version: 4.8.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci
- 1
- Applies the new kernel argument only to worker nodes.
- 2
- Specify the previously determined
vendor-ID
value (10de
) and thedevice-ID
value (1eb8
) to bind a single device to the VFIO driver. You can add a list of multiple devices with their vendor and device information. - 3
- The file that loads the vfio-pci kernel module on the worker nodes.
Use Butane to generate a
MachineConfig
object file,100-worker-vfiopci.yaml
, containing the configuration to be delivered to the worker nodes:$ butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml
Apply the
MachineConfig
object to the worker nodes:$ oc apply -f 100-worker-vfiopci.yaml
Verify that the
MachineConfig
object was added.$ oc get MachineConfig
Example output
NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s
Verification
Verify that the VFIO driver is loaded.
$ lspci -nnk -d 10de:
The output confirms that the VFIO driver is being used.
Example output
04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1) Subsystem: NVIDIA Corporation Device [10de:1eb8] Kernel driver in use: vfio-pci Kernel modules: nouveau
8.14.11.1.3. Exposing PCI host devices in the cluster using the CLI
To expose PCI host devices in the cluster, add details about the PCI devices to the spec.permittedHostDevices.pciHostDevices
array of the HyperConverged
custom resource (CR).
Procedure
Edit the
HyperConverged
CR in your default editor by running the following command:$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add the PCI device information to the
spec.permittedHostDevices.pciHostDevices
array. For example:Example configuration file
apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: "10DE:1DB6" 3 resourceName: "nvidia.com/GV100GL_Tesla_V100" 4 - pciDeviceSelector: "10DE:1EB8" resourceName: "nvidia.com/TU104GL_Tesla_T4" - pciDeviceSelector: "8086:6F54" resourceName: "intel.com/qat" externalResourceProvider: true 5 ...
- 1
- The host devices that are permitted to be used in the cluster.
- 2
- The list of PCI devices available on the node.
- 3
- The
vendor-ID
and thedevice-ID
required to identify the PCI device. - 4
- The name of a PCI host device.
- 5
- Optional: Setting this field to
true
indicates that the resource is provided by an external device plugin. OpenShift Virtualization allows the usage of this device in the cluster but leaves the allocation and monitoring to an external device plugin.
Note: The above example snippet shows two PCI host devices that are named nvidia.com/GV100GL_Tesla_V100 and nvidia.com/TU104GL_Tesla_T4 added to the list of permitted host devices in the HyperConverged CR. These devices have been tested and verified to work with OpenShift Virtualization.
- Save your changes and exit the editor.
Verification
Verify that the PCI host devices were added to the node by running the following command. The example output shows that there is one device each associated with the
nvidia.com/GV100GL_Tesla_V100
,nvidia.com/TU104GL_Tesla_T4
, andintel.com/qat
resource names.$ oc describe node <node_name>
Example output
Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250
8.14.11.1.4. Removing PCI host devices from the cluster using the CLI
To remove a PCI host device from the cluster, delete the information for that device from the HyperConverged
custom resource (CR).
Procedure
Edit the
HyperConverged
CR in your default editor by running the following command:$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Remove the PCI device information from the
spec.permittedHostDevices.pciHostDevices
array by deleting thepciDeviceSelector
,resourceName
andexternalResourceProvider
(if applicable) fields for the appropriate device. In this example, theintel.com/qat
resource has been deleted.Example configuration file
apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: "10DE:1DB6" resourceName: "nvidia.com/GV100GL_Tesla_V100" - pciDeviceSelector: "10DE:1EB8" resourceName: "nvidia.com/TU104GL_Tesla_T4" ...
- Save your changes and exit the editor.
Verification
Verify that the PCI host device was removed from the node by running the following command. The example output shows that there are zero devices associated with the
intel.com/qat
resource name.$ oc describe node <node_name>
Example output
Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250
8.14.11.2. Configuring virtual machines for PCI passthrough
After the PCI devices have been added to the cluster, you can assign them to virtual machines. The PCI devices are now available as if they are physically connected to the virtual machines.
8.14.11.2.1. Assigning a PCI device to a virtual machine
When a PCI device is available in a cluster, you can assign it to a virtual machine and enable PCI passthrough.
Procedure
Assign the PCI device to a virtual machine as a host device.
Example
apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1
- 1
- The name of the PCI device that is permitted on the cluster as a host device. The virtual machine can access this host device.
Verification
Use the following command to verify that the host device is available from the virtual machine.
$ lspci -nnk | grep NVIDIA
Example output
02:01.0 3D controller [0302]: NVIDIA Corporation TU104GL [Tesla T4] [10de:1eb8] (rev a1)
8.14.11.3. Additional resources
8.14.12. Configuring a watchdog
Expose a watchdog by configuring the virtual machine (VM) for a watchdog device, installing the watchdog, and starting the watchdog service.
8.14.12.1. Prerequisites
- The virtual machine must have kernel support for an i6300esb watchdog device. Red Hat Enterprise Linux (RHEL) images support i6300esb.
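One quick way to confirm support from inside a RHEL guest is to check the kernel configuration for the driver. This is an illustrative check only; the exact path can vary by image:
$ grep I6300ESB /boot/config-$(uname -r)
CONFIG_I6300ESB_WDT=m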
8.14.12.2. Defining a watchdog device
Define how the watchdog proceeds when the operating system (OS) no longer responds.
Table 8.3. Available actions
Action | Description
---|---|
poweroff | The virtual machine (VM) powers down immediately. If spec.running is set to true, or spec.runStrategy is not set to manual, the VM reboots.
reset | The VM reboots in place and the guest OS cannot react. Because the length of time required for the guest OS to reboot can cause liveness probes to time out, use of this option is discouraged. This timeout can extend the time it takes the VM to reboot if cluster-level protections notice the liveness probe failed and forcibly reschedule it.
shutdown | The VM gracefully powers down by stopping all services.
Procedure
Create a YAML file with the following contents:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: vm2-rhel84-watchdog
  name: <vm-name>
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm2-rhel84-watchdog
    spec:
      domain:
        devices:
          watchdog:
            name: <watchdog>
            i6300esb:
              action: "poweroff" 1
...
- 1
- Specify the watchdog action (poweroff, reset, or shutdown).
The example above configures the i6300esb watchdog device on a RHEL 8 VM with the poweroff action and exposes the device as /dev/watchdog. This device can now be used by the watchdog binary.
Apply the YAML file to your cluster by running the following command:
$ oc apply -f <file_name>.yaml
This procedure is provided for testing watchdog functionality only and must not be run on production machines.
Run the following command to verify that the VM is connected to the watchdog device:
$ lspci | grep watchdog -i
Run one of the following commands to confirm the watchdog is active:
Trigger a kernel panic:
# echo c > /proc/sysrq-trigger
Terminate the watchdog service:
# pkill -9 watchdog
8.14.12.3. Installing a watchdog device
Install the watchdog package on your virtual machine and start the watchdog service.
Procedure
As a root user, install the watchdog package and dependencies:
# yum install watchdog
Uncomment the following line in the /etc/watchdog.conf file, and save the changes:
#watchdog-device = /dev/watchdog
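If you prefer to script this change, a sed command such as the following is one possible way to uncomment that line; it assumes the default configuration file shipped with the watchdog package:
# sed -i 's|^#watchdog-device|watchdog-device|' /etc/watchdog.conf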
Enable the watchdog service to start on boot:
# systemctl enable --now watchdog.service
8.14.12.4. Additional resources
8.15. Importing virtual machines
8.15.1. TLS certificates for data volume imports
8.15.1.1. Adding TLS certificates for authenticating data volume imports
TLS certificates for registry or HTTPS endpoints must be added to a config map to import data from these sources. This config map must be present in the namespace of the destination data volume.
Create the config map by referencing the relative file path for the TLS certificate.
Procedure
Ensure you are in the correct namespace. The config map can only be referenced by data volumes if it is in the same namespace.
$ oc get ns
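If the destination data volume will be created in a different project, you can switch to it first; <namespace> is a placeholder for your project name:
$ oc project <namespace>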
Create the config map:
$ oc create configmap <configmap-name> --from-file=</path/to/file/ca.pem>
8.15.1.2. Example: Config map created from a TLS certificate
The following example shows a config map created from the ca.pem TLS certificate.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tls-certs
data:
  ca.pem: |
    -----BEGIN CERTIFICATE-----
    ... <base64 encoded cert> ...
    -----END CERTIFICATE-----
8.15.2. Importing virtual machine images with data volumes
Use the Containerized Data Importer (CDI) to import a virtual machine image into a persistent volume claim (PVC) by using a data volume. You can attach a data volume to a virtual machine for persistent storage.
The virtual machine image can be hosted at an HTTP or HTTPS endpoint, or built into a container disk and stored in a container registry.
When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system(s) in the virtual machine might need to be expanded.
The resizing procedure varies based on the operating system installed on the virtual machine. See the operating system documentation for details.
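As an illustration only, the following commands show one common way to grow the root partition and file system on a Linux guest; the package name, the /dev/vda device, partition number 1, and the XFS file system are assumptions that differ between images:
# yum install cloud-utils-growpart
# growpart /dev/vda 1
# xfs_growfs /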
8.15.2.1. Prerequisites
- If the endpoint requires a TLS certificate, the certificate must be included in a config map in the same namespace as the data volume and referenced in the data volume configuration.
To import a container disk:
- You might need to prepare a container disk from a virtual machine image and store it in your container registry before importing it.
- If the container registry does not have TLS, you must add the registry to the insecureRegistries field of the HyperConverged custom resource before you can import a container disk from it.
- You might need to define a storage class or prepare CDI scratch space for this operation to complete successfully.
8.15.2.2. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2*
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW*
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
CDI now uses the OpenShift Container Platform cluster-wide proxy configuration.
8.15.2.3. About data volumes
DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
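For orientation, a minimal standalone DataVolume manifest that imports an image from an HTTP endpoint might look like the following sketch; the name, URL, and storage size are placeholder values:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-dv
spec:
  source:
    http:
      url: "https://<server>/<image>.qcow2"
  storage:
    resources:
      requests:
        storage: 10Gi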
8.15.2.4. Importing a virtual machine image into storage by using a data volume
You can import a virtual machine image into storage by using a data volume.
The virtual machine image can be hosted at an HTTP or HTTPS endpoint or the image can be built into a container disk and stored in a container registry.
You specify the data source for the image in a VirtualMachine configuration file. When the virtual machine is created, the data volume with the virtual machine image is imported into storage.
Prerequisites
To import a virtual machine image you must have the following:
- A virtual machine disk image in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
- An HTTP or HTTPS endpoint where the image is hosted, along with any authentication credentials needed to access the data source.
- To import a container disk, you must have a virtual machine image built into a container disk and stored in a container registry, along with any authentication credentials needed to access the data source.
- If the virtual machine must communicate with servers that use self-signed certificates or certificates not signed by the system CA bundle, you must create a config map in the same namespace as the data volume.
Procedure
If your data source requires authentication, create a Secret manifest, specifying the data source credentials, and save it as endpoint-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: endpoint-secret 1
  labels:
    app: containerized-data-importer
type: Opaque
data:
  accessKeyId: "" 2
  secretKey: "" 3
Apply the Secret manifest:
$ oc apply -f endpoint-secret.yaml
Edit the VirtualMachine manifest, specifying the data source for the virtual machine image you want to import, and save it as vm-fedora-datavolume.yaml:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  creationTimestamp: null
  labels:
    kubevirt.io/vm: vm-fedora-datavolume
  name: vm-fedora-datavolume 1
spec:
  dataVolumeTemplates:
  - metadata:
      creationTimestamp: null
      name: fedora-dv 2
    spec:
      storage:
        resources:
          requests:
            storage: 10Gi
        storageClassName: local
      source:
        http: 3
          url: "https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2" 4
          secretRef: endpoint-secret 5
          certConfigMap: "" 6
    status: {}
  running: true
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vm: vm-fedora-datavolume
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: datavolumedisk1
        machine:
          type: ""
        resources:
          requests:
            memory: 1.5Gi
      terminationGracePeriodSeconds: 60
      volumes:
      - dataVolume:
          name: fedora-dv
        name: datavolumedisk1
status: {}
- 1
- Specify the name of the virtual machine.
- 2
- Specify the name of the data volume.
- 3
- Specify http for an HTTP or HTTPS endpoint. Specify registry for a container disk image imported from a registry.
- 4
- The source of the virtual machine image you want to import. This example references a virtual machine image at an HTTPS endpoint. An example of a container registry endpoint is url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest".
- 5
- Required if you created a Secret for the data source.
- 6
- Optional: Specify a CA certificate config map.
Create the virtual machine:
$ oc create -f vm-fedora-datavolume.yaml
Note: The oc create command creates the data volume and the virtual machine. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded. You can start the virtual machine.
Data volume provisioning happens in the background, so there is no need to monitor the process.
Verification
The importer pod downloads the virtual machine image or container disk from the specified URL and stores it on the provisioned PV. View the status of the importer pod by running the following command:
$ oc get pods
Monitor the data volume until its status is Succeeded by running the following command:
$ oc describe dv fedora-dv 1
- 1
- Specify the data volume name that you defined in the VirtualMachine manifest.
Verify that provisioning is complete and that the virtual machine has started by accessing its serial console:
$ virtctl console vm-fedora-datavolume
8.15.2.5. Additional resources
- Configure preallocation mode to improve write performance for data volume operations.
8.15.3. Importing virtual machine images into block storage with data volumes
You can import an existing virtual machine image into your OpenShift Container Platform cluster. OpenShift Virtualization uses data volumes to automate the import of data and the creation of an underlying persistent volume claim (PVC).
When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system(s) in the virtual machine might need to be expanded.
The resizing procedure varies based on the operating system that is installed on the virtual machine. See the operating system documentation for details.
8.15.3.1. Prerequisites
- If you require scratch space according to the CDI supported operations matrix, you must first define a storage class or prepare CDI scratch space for this operation to complete successfully.
8.15.3.2. About data volumes
DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
8.15.3.3. About block persistent volumes
A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead.
Raw block volumes are provisioned by specifying volumeMode: Block in the PV and persistent volume claim (PVC) specification.
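For example, a PVC that requests a raw block volume might look like the following sketch; the name, storage class, and size are placeholder values:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <block-pvc>
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  storageClassName: local
  resources:
    requests:
      storage: 2Gi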
8.15.3.4. Creating a local block persistent volume
Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block volume and use it as a block device for a virtual machine image.
Procedure
- Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples.
Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file loop10 with a size of 2 GB (20 blocks of 100 MB):
$ dd if=/dev/zero of=<loop10> bs=100M count=20
Mount the loop10 file as a loop device.
$ losetup </dev/loop10> <loop10>
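Optionally, confirm that the loop device is attached to the backing file by listing the active loop devices:
$ losetup -l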
Create a PersistentVolume manifest that references the mounted loop device.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: <local-block-pv10>
  annotations:
spec:
  local:
    path: </dev/loop10> 1
  capacity:
    storage: <2Gi>
  volumeMode: Block 2
  storageClassName: local 3
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <node01> 4
Create the block PV.
# oc create -f <local-block-pv10.yaml> 1
- 1
- The file name of the persistent volume created in the previous step.
8.15.3.5. Importing a virtual machine image into block storage by using a data volume
You can import a virtual machine image into block storage by using a data volume. You reference the data volume in a VirtualMachine manifest before you create a virtual machine.
Prerequisites
- A virtual machine disk image in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
- An HTTP or HTTPS endpoint where the image is hosted, along with any authentication credentials needed to access the data source.
Procedure
If your data source requires authentication, create a Secret manifest, specifying the data source credentials, and save it as endpoint-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: endpoint-secret 1
  labels:
    app: containerized-data-importer
type: Opaque
data:
  accessKeyId: "" 2
  secretKey: "" 3
Apply the Secret manifest:
$ oc apply -f endpoint-secret.yaml
Create a DataVolume manifest, specifying the data source for the virtual machine image and Block for storage.volumeMode.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: import-pv-datavolume 1
spec:
  storageClassName: local 2
  source:
    http:
      url: "https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2" 3
      secretRef: endpoint-secret 4
  storage:
    volumeMode: Block 5
    resources:
      requests:
        storage: 10Gi
- 1
- Specify the name of the data volume.
- 2
- Optional: Set the storage class or omit it to accept the cluster default.
- 3
- Specify the HTTP or HTTPS URL of the image to import.
- 4
- Required if you created a Secret for the data source.
- 5
- The volume mode and access mode are detected automatically for known storage provisioners. Otherwise, specify Block.
Create the data volume to import the virtual machine image:
$ oc create -f import-pv-datavolume.yaml
You can reference this data volume in a VirtualMachine manifest before you create a virtual machine.
8.15.3.6. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2*
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW*
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
CDI now uses the OpenShift Container Platform cluster-wide proxy configuration.
8.15.3.7. Additional resources
- Configure preallocation mode to improve write performance for data volume operations.
8.15.4. Importing a single Red Hat Virtualization virtual machine
You can import a single Red Hat Virtualization (RHV) virtual machine into OpenShift Virtualization by using the VM Import wizard or the CLI.
Importing a RHV VM is a deprecated feature. Deprecated functionality is still included in OpenShift Virtualization and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Virtualization, refer to the Deprecated and removed features section of the OpenShift Virtualization release notes.
This feature will be replaced by the Migration Toolkit for Virtualization.
8.15.4.1. OpenShift Virtualization storage feature matrix
The following table describes the OpenShift Virtualization storage types that support VM import.
Storage type | RHV VM import
---|---|
OpenShift Container Storage: RBD block-mode volumes | Yes
OpenShift Virtualization hostpath provisioner | No
Other multi-node writable storage | Yes [1]
Other single-node writable storage | Yes [2]
1. PVCs must request a ReadWriteMany access mode.
2. PVCs must request a ReadWriteOnce access mode.
8.15.4.2. Prerequisites for importing a virtual machine
Importing a virtual machine from Red Hat Virtualization (RHV) into OpenShift Virtualization has the following prerequisites:
- You must have admin user privileges.
Storage:
- The OpenShift Virtualization local and shared persistent storage classes must support VM import.
- If you are using Ceph RBD block-mode volumes and the available storage space is too small for the virtual disk, the import process bar stops at 75% for more than 20 minutes and the migration does not succeed. No error message is displayed in the web console. BZ#1910019
Networks:
- The RHV and OpenShift Virtualization networks must either have the same name or be mapped to each other.
- The RHV VM network interface must be e1000, rtl8139, or virtio.
VM disks:
- The disk interface must be sata, virtio_scsi, or virtio.
- The disk must not be configured as a direct LUN.
- The disk status must not be illegal or locked.
- The storage type must be image.
- SCSI reservation must be disabled.
- ScsiGenericIO must be disabled.
VM configuration:
- If the VM uses GPU resources, the nodes providing the GPUs must be configured.
- The VM must not be configured for vGPU resources.
- The VM must not have snapshots with disks in an illegal state.
- The VM must not have been created with OpenShift Container Platform and subsequently added to RHV.
- The VM must not be configured for USB devices.
- The watchdog model must not be diag288.
8.15.4.3. Importing a virtual machine with the VM Import wizard
You can import a single virtual machine with the VM Import wizard.
Procedure
- In the web console, click Workloads Virtual Machines.
- Click Create Virtual Machine and select Import with Wizard.
- Select Red Hat Virtualization (RHV) from the Provider list.
Select Connect to New Instance or a saved RHV instance.
If you select Connect to New Instance, fill in the following fields:
- API URL: For example, https://<RHV_Manager_FQDN>/ovirt-engine/api
- CA certificate: Click Browse to upload the RHV Manager CA certificate or paste the CA certificate into the field.
View the CA certificate by running the following command:
$ openssl s_client -connect <RHV_Manager_FQDN>:443 -showcerts < /dev/null
The CA certificate is the second certificate in the output.
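If you prefer to capture that certificate from the command line, one possible approach is to extract the second certificate block with awk; treat this as an illustrative sketch, and note that the output file name rhv-ca.pem is arbitrary:
$ openssl s_client -connect <RHV_Manager_FQDN>:443 -showcerts < /dev/null 2>/dev/null \
  | awk '/BEGIN CERTIFICATE/{n++} n==2 {print} n==2 && /END CERTIFICATE/{exit}' > rhv-ca.pem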
- Username: RHV Manager user name, for example, ocpadmin@internal
- Password: RHV Manager password
- If you select a saved RHV instance, the wizard connects to the RHV instance using the saved credentials.
Click Check and Save and wait for the connection to complete.
Note: The connection details are stored in a secret. If you add a provider with an incorrect URL, user name, or password, click Workloads Secrets and delete the provider secret.
- Select a cluster and a virtual machine.
- Click Next.
- In the Review screen, review your settings.
- Optional: You can select Start virtual machine on creation.
Click Edit to update the following settings:
- General Name: The VM name is limited to 63 characters.
- General Description: Optional description of the VM.
- Storage Class: Select NFS or ocs-storagecluster-ceph-rbd.
  If you select ocs-storagecluster-ceph-rbd, you must set the Volume Mode of the disk to Block.
- Advanced Volume Mode: Select Block.
- Networking Network: You can select a network from a list of available network attachment definition objects.
Click Import or Review and Import, if you have edited the import settings.
A Successfully created virtual machine message and a list of resources created for the virtual machine are displayed. The virtual machine appears in Workloads
Virtual Machines.
Virtual machine wizard fields
Name | Parameter | Description |
---|---|---|
Name |
The name can contain lowercase letters ( | |
Description | Optional description field. | |
Operating System | The operating system that is selected for the virtual machine in the template. You cannot edit this field when creating a virtual machine from a template. | |
Boot Source | Import via URL (creates PVC) | Import content from an image available from an HTTP or HTTPS endpoint. Example: Obtaining a URL link from the web page with the operating system image. |
Clone existing PVC (creates PVC) | Select an existent persistent volume claim available on the cluster and clone it. | |
Import via Registry (creates PVC) |
Provision virtual machine from a bootable operating system container located in a registry accessible from the cluster. Example: | |
PXE (network boot - adds network interface) | Boot an operating system from a server on the network. Requires a PXE bootable network attachment definition. | |
Persistent Volume Claim project | Project name that you want to use for cloning the PVC. | |
Persistent Volume Claim name | PVC name that should apply to this virtual machine template if you are cloning an existing PVC. | |
Mount this as a CD-ROM boot source | A CD-ROM requires an additional disk for installing the operating system. Select the checkbox to add a disk and customize it later. | |
Flavor | Tiny, Small, Medium, Large, Custom | Presets the amount of CPU and memory in a virtual machine template with predefined values that are allocated to the virtual machine, depending on the operating system associated with that template.
If you choose a default template, you can override the |
Workload Type Note If you choose the incorrect Workload Type, there could be performance or resource utilization issues (such as a slow UI). | Desktop | A virtual machine configuration for use on a desktop. Ideal for consumption on a small scale. Recommended for use with the web console. |
Server | Balances performance and it is compatible with a wide range of server workloads. | |
High-Performance | A virtual machine configuration that is optimized for high-performance workloads. | |
Start this virtual machine after creation. | This checkbox is selected by default and the virtual machine starts running after creation. Clear the checkbox if you do not want the virtual machine to start when it is created. |
Networking fields
Name | Description |
---|---|
Name | Name for the network interface controller. |
Model | Indicates the model of the network interface controller. Supported values are e1000e and virtio. |
Network | List of available network attachment definitions. |
Type |
List of available binding methods. For the default pod network, |
MAC Address | MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. |
Storage fields
Name | Selection | Description |
---|---|---|
Source | Blank (creates PVC) | Create an empty disk. |
Import via URL (creates PVC) | Import content via URL (HTTP or HTTPS endpoint). | |
Use an existing PVC | Use a PVC that is already available in the cluster. | |
Clone existing PVC (creates PVC) | Select an existing PVC available in the cluster and clone it. | |
Import via Registry (creates PVC) | Import content via container registry. | |
Container (ephemeral) | Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. | |
Name |
Name of the disk. The name can contain lowercase letters ( | |
Size | Size of the disk in GiB. | |
Type | Type of disk. Example: Disk or CD-ROM | |
Interface | Type of disk device. Supported interfaces are virtIO, SATA, and SCSI. | |
Storage Class | The storage class that is used to create the disk. | |
Advanced | Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem. |
Advanced storage settings
Name | Parameter | Description |
---|---|---|
Volume Mode | Filesystem | Stores the virtual disk on a file system-based volume. |
Block |
Stores the virtual disk directly on the block volume. Only use | |
Access Mode [1] | Single User (RWO) | The disk can be mounted as read/write by a single node. |
Shared Access (RWX) | The disk can be mounted as read/write by many nodes. | |
Read Only (ROX) | The disk can be mounted as read-only by many nodes. |
- You can change the access mode by using the command line interface.
8.15.4.4. Importing a virtual machine with the CLI
You can import a virtual machine with the CLI by creating the Secret and VirtualMachineImport custom resources (CRs). The Secret CR stores the RHV Manager credentials and CA certificate. The VirtualMachineImport CR defines the parameters of the VM import process.
Optional: You can create a ResourceMapping CR that is separate from the VirtualMachineImport CR. A ResourceMapping CR provides greater flexibility, for example, if you import additional RHV VMs.
The default target storage class must be NFS. Cinder does not support RHV VM import.
Procedure
Create the Secret CR by running the following command:
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: Secret
metadata:
  name: rhv-credentials
  namespace: default 1
type: Opaque
stringData:
  ovirt: |
    apiUrl: <api_endpoint> 2
    username: ocpadmin@internal
    password: 3
    caCert: |
      -----BEGIN CERTIFICATE----- 4
      -----END CERTIFICATE-----
EOF
- 1
- Optional. You can specify a different namespace in all the CRs.
- 2
- Specify the API endpoint of the RHV Manager, for example, "https://www.example.com:8443/ovirt-engine/api".
- 3
- Specify the password for ocpadmin@internal.
- 4
- Specify the RHV Manager CA certificate. You can obtain the CA certificate by running the following command:
  $ openssl s_client -connect <RHV_Manager_FQDN>:443 -showcerts < /dev/null
Optional: Create a ResourceMapping CR if you want to separate the resource mapping from the VirtualMachineImport CR by running the following command:
$ cat <<EOF | kubectl create -f -
apiVersion: v2v.kubevirt.io/v1alpha1
kind: ResourceMapping
metadata:
  name: resourcemapping-example
  namespace: default
spec:
  ovirt:
    networkMappings:
    - source:
        name: <rhv_logical_network>/<vnic_profile> 1
      target:
        name: <target_network> 2
      type: pod
    storageMappings: 3
    - source:
        name: <rhv_storage_domain> 4
      target:
        name: <target_storage_class> 5
      volumeMode: <volume_mode> 6
EOF
- 1
- Specify the RHV logical network and vNIC profile.
- 2
- Specify the OpenShift Virtualization network.
- 3
- If storage mappings are specified in both the ResourceMapping and the VirtualMachineImport CRs, the VirtualMachineImport CR takes precedence.
- 4
- Specify the RHV storage domain.
- 5
- Specify nfs or ocs-storagecluster-ceph-rbd.
- 6
- If you specified the ocs-storagecluster-ceph-rbd storage class, you must specify Block as the volume mode.
Create the VirtualMachineImport CR by running the following command:
$ cat <<EOF | oc create -f -
apiVersion: v2v.kubevirt.io/v1beta1
kind: VirtualMachineImport
metadata:
  name: vm-import
  namespace: default
spec:
  providerCredentialsSecret:
    name: rhv-credentials
    namespace: default
#  resourceMapping: 1
#    name: resourcemapping-example
#    namespace: default
  targetVmName: vm-example 2
  startVm: true
  source:
    ovirt:
      vm:
        id: <source_vm_id> 3
        name: <source_vm_name> 4
        cluster:
          name: <source_cluster_name> 5
      mappings: 6
        networkMappings:
        - source:
            name: <source_logical_network>/<vnic_profile> 7
          target:
            name: <target_network> 8
          type: pod
        storageMappings: 9
        - source:
            name: <source_storage_domain> 10
          target:
            name: <target_storage_class> 11
          accessMode: <volume_access_mode> 12
        diskMappings:
        - source:
            id: <source_vm_disk_id> 13
          target:
            name: <target_storage_class> 14
EOF
- 1
- If you create a ResourceMapping CR, uncomment the resourceMapping section.
- 2
- Specify the target VM name.
- 3
- Specify the source VM ID, for example, 80554327-0569-496b-bdeb-fcbbf52b827b. You can obtain the VM ID by entering https://www.example.com/ovirt-engine/api/vms/ in a web browser on the Manager machine to list all VMs. Locate the VM you want to import and its corresponding VM ID. You do not need to specify a VM name or cluster name.
- 4
- If you specify the source VM name, you must also specify the source cluster. Do not specify the source VM ID.
- 5
- If you specify the source cluster, you must also specify the source VM name. Do not specify the source VM ID.
- 6
- If you create a ResourceMapping CR, comment out the mappings section.
- 7
- Specify the logical network and vNIC profile of the source VM.
- 8
- Specify the OpenShift Virtualization network.
- 9
- If storage mappings are specified in both the ResourceMapping and the VirtualMachineImport CRs, the VirtualMachineImport CR takes precedence.
- 10
- Specify the source storage domain.
- 11
- Specify the target storage class.
- 12
- Specify ReadWriteOnce, ReadWriteMany, or ReadOnlyMany. If no access mode is specified, OpenShift Virtualization determines the correct volume access mode based on the HostMigration mode setting of the RHV VM or on the virtual disk access mode:
  - If the RHV VM migration mode is Allow manual and automatic migration, the default access mode is ReadWriteMany.
  - If the RHV virtual disk access mode is ReadOnly, the default access mode is ReadOnlyMany.
  - For all other settings, the default access mode is ReadWriteOnce.
- 13
- Specify the source VM disk ID, for example, 8181ecc1-5db8-4193-9c92-3ddab3be7b05. You can obtain the disk ID by entering https://www.example.com/ovirt-engine/api/vms/vm23 in a web browser on the Manager machine and reviewing the VM details.
- 14
- Specify the target storage class.
Follow the progress of the virtual machine import to verify that the import was successful:
$ oc get vmimports vm-import -n default
The output indicating a successful import resembles the following example:
Example output
...
status:
  conditions:
  - lastHeartbeatTime: "2020-07-22T08:58:52Z"
    lastTransitionTime: "2020-07-22T08:58:52Z"
    message: Validation completed successfully
    reason: ValidationCompleted
    status: "True"
    type: Valid
  - lastHeartbeatTime: "2020-07-22T08:58:52Z"
    lastTransitionTime: "2020-07-22T08:58:52Z"
    message: 'VM specifies IO Threads: 1, VM has NUMA tune mode specified: interleave'
    reason: MappingRulesVerificationReportedWarnings
    status: "True"
    type: MappingRulesVerified
  - lastHeartbeatTime: "2020-07-22T08:58:56Z"
    lastTransitionTime: "2020-07-22T08:58:52Z"
    message: Copying virtual machine disks
    reason: CopyingDisks
    status: "True"
    type: Processing
  dataVolumes:
  - name: fedora32-b870c429-11e0-4630-b3df-21da551a48c0
  targetVmName: fedora32
8.15.4.4.1. Creating a config map for importing a VM
You can create a config map to map the Red Hat Virtualization (RHV) virtual machine operating system to an OpenShift Virtualization template if you want to override the default vm-import-controller mapping or to add additional mappings.
The default vm-import-controller config map contains the following RHV operating systems and their corresponding common OpenShift Virtualization templates.
RHV VM operating system | OpenShift Virtualization template |
---|---|
Procedure
In a web browser, identify the REST API name of the RHV VM operating system by navigating to http://<RHV_Manager_FQDN>/ovirt-engine/api/vms/<VM_ID>. The operating system name appears in the <os> section of the XML output, as shown in the following example:
...
<os>
...
  <type>rhel_8x64</type>
</os>
View a list of the available OpenShift Virtualization templates:
$ oc get templates -n openshift --show-labels | tr ',' '\n' | grep os.template.kubevirt.io | sed -r 's#os.template.kubevirt.io/(.*)=.*#\1#g' | sort -u
Example output
fedora31
fedora32
...
rhel8.1
rhel8.2
...
- If an OpenShift Virtualization template that matches the RHV VM operating system does not appear in the list of available templates, create a template with the OpenShift Virtualization web console.
Create a config map to map the RHV VM operating system to the OpenShift Virtualization template:
$ cat <<EOF | oc create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: os-configmap
  namespace: default 1
data:
  guestos2common: |
    "Red Hat Enterprise Linux Server": "rhel"
    "CentOS Linux": "centos"
    "Fedora": "fedora"
    "Ubuntu": "ubuntu"
    "openSUSE": "opensuse"
  osinfo2common: |
    "<rhv-operating-system>": "<vm-template>" 2
EOF
Config map example
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: os-configmap
  namespace: default
data:
  osinfo2common: |
    "other_linux": "fedora31"
EOF
Verify that the custom config map was created:
$ oc get cm -n default os-configmap -o yaml
Patch the vm-import-controller-config config map to apply the new config map:
$ oc patch configmap vm-import-controller-config -n openshift-cnv --patch '{
    "data": {
        "osConfigMap.name": "os-configmap",
        "osConfigMap.namespace": "default" 1
    }
}'
- 1
- Update the namespace if you changed it in the config map.
Verify that the template appears in the OpenShift Virtualization web console:
- Click Workloads Virtualization from the side menu.
- Click the Virtual Machine Templates tab and find the template in the list.
8.15.4.5. Troubleshooting a virtual machine import
8.15.4.5.1. Logs
You can check the VM Import Controller pod log for errors.
Procedure
View the VM Import Controller pod name by running the following command:
$ oc get pods -n <namespace> | grep import 1
- 1
- Specify the namespace of your imported virtual machine.
Example output
vm-import-controller-f66f7d-zqkz7 1/1 Running 0 4h49m
View the VM Import Controller pod log by running the following command:
$ oc logs <vm-import-controller-f66f7d-zqkz7> -f -n <namespace> 1
- 1
- Specify the VM Import Controller pod name and the namespace.
8.15.4.5.2. Error messages
The following error message might appear:
The following error message is displayed in the VM Import Controller pod log and the progress bar stops at 10% if the OpenShift Virtualization storage PV is not suitable:
Failed to bind volumes: provisioning failed for PVC
You must use a compatible storage class. The Cinder storage class is not supported.
8.15.4.5.3. Known issues
- If you are using Ceph RBD block-mode volumes and the available storage space is too small for the virtual disk, the import process bar stops at 75% for more than 20 minutes and the migration does not succeed. No error message is displayed in the web console. BZ#1910019
8.15.5. Importing a single VMware virtual machine or template
You can import a VMware vSphere 6.5, 6.7, or 7.0 VM or VM template into OpenShift Virtualization by using the VM Import wizard. If you import a VM template, OpenShift Virtualization creates a virtual machine based on the template.
Importing a VMware VM is a deprecated feature. Deprecated functionality is still included in OpenShift Virtualization and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.
For the most recent list of major functionality that has been deprecated or removed within OpenShift Virtualization, refer to the Deprecated and removed features section of the OpenShift Virtualization release notes.
This feature will be replaced by the Migration Toolkit for Virtualization.
8.15.5.1. OpenShift Virtualization storage feature matrix
The following table describes the OpenShift Virtualization storage types that support VM import.
Storage type | VMware VM import
---|---|
OpenShift Container Storage: RBD block-mode volumes | Yes
OpenShift Virtualization hostpath provisioner | Yes
Other multi-node writable storage | Yes [1]
Other single-node writable storage | Yes [2]
1. PVCs must request a ReadWriteMany access mode.
2. PVCs must request a ReadWriteOnce access mode.
8.15.5.2. Preparing a VDDK image
The import process uses the VMware Virtual Disk Development Kit (VDDK) to copy the VMware virtual disk.
You can download the VDDK SDK, create a VDDK image, upload the image to an image registry, and add it to the spec.vddkInitImage field of the HyperConverged custom resource (CR).
You can configure either an internal OpenShift Container Platform image registry or a secure external image registry for the VDDK image. The registry must be accessible to your OpenShift Virtualization environment.
Storing the VDDK image in a public registry might violate the terms of the VMware license.
8.15.5.2.1. Configuring an internal image registry
You can configure the internal OpenShift Container Platform image registry on bare metal by updating the Image Registry Operator configuration.
You can access the registry directly, from within the OpenShift Container Platform cluster, or externally, by exposing the registry with a route.
Changing the image registry’s management state
To start the image registry, you must change the Image Registry Operator configuration's managementState from Removed to Managed.
Procedure
Change the managementState of the Image Registry Operator configuration from Removed to Managed. For example:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'
Configuring registry storage for bare metal and other manual installations
As a cluster administrator, following installation you must configure your registry to use storage.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS) nodes, such as bare metal.
- You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Container Storage.
  Important: OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required.
  - Must have 100Gi capacity.
Procedure
To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource.
Note: When using shared storage, review your security settings to prevent outside access.
Verify that you do not have a registry pod:
$ oc get pod -n openshift-image-registry -l docker-registry=default
Example output
No resources found in openshift-image-registry namespace
Note: If you do have a registry pod in your output, you do not need to continue with this procedure.
Check the registry configuration:
$ oc edit configs.imageregistry.operator.openshift.io
Example output
storage:
  pvc:
    claim:
Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC.
status:$ oc get clusteroperator image-registry
Example output
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m
Ensure that your registry is set to managed to enable building and pushing of images.
Run:
$ oc edit configs.imageregistry/cluster
Then, change the line
managementState: Removed
to
managementState: Managed
Accessing registry directly from the cluster
You can access the registry from inside the cluster.
Procedure
Access the registry from the cluster by using internal routes:
Access the node by getting the node’s name:
$ oc get nodes
$ oc debug nodes/<node_name>
To enable access to tools such as oc and podman on the node, change your root directory to /host:
sh-4.2# chroot /host
Log in to the container image registry by using your access token:
sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443
sh-4.2# podman login -u kubeadmin -p $(oc whoami -t) image-registry.openshift-image-registry.svc:5000
You should see a message confirming login, such as:
Login Succeeded!
NoteYou can pass any value for the user name; the token contains all necessary information. Passing a user name that contains colons will result in a login failure.
Since the Image Registry Operator creates the route, it will likely be similar to default-route-openshift-image-registry.<cluster_name>.
Perform podman pull and podman push operations against your registry:
Important: You can pull arbitrary images, but if you have the system:registry role added, you can only push images to the registry in your project.
In the following examples, use:
Component | Value
---|---|
<registry_ip> | 172.30.124.220
<port> | 5000
<project> | openshift
<image> | image
<tag> | omitted (defaults to latest)
Pull an arbitrary image:
sh-4.2# podman pull name.io/image
Tag the new image with the form <registry_ip>:<port>/<project>/<image>. The project name must appear in this pull specification for OpenShift Container Platform to correctly place and later access the image in the registry:
sh-4.2# podman tag name.io/image image-registry.openshift-image-registry.svc:5000/openshift/image
Note: You must have the system:image-builder role for the specified project, which allows the user to write or push an image. Otherwise, the podman push in the next step will fail. To test, you can create a new project to push the image.
Push the newly tagged image to your registry:
sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/image
Exposing a secure registry manually
Instead of logging in to the OpenShift Container Platform registry from within the cluster, you can gain external access to it by exposing it with a route. This allows you to log in to the registry from outside the cluster using the route address, and to tag and push images to an existing project by using the route host.
Prerequisites:
The following prerequisites are automatically performed:
- Deploy the Registry Operator.
- Deploy the Ingress Operator.
Procedure
You can expose the route by using DefaultRoute
parameter in the configs.imageregistry.operator.openshift.io
resource or by using custom routes.
To expose the registry using DefaultRoute
:
Set
DefaultRoute
toTrue
:$ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
Log in with
podman
:$ HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
$ podman login -u kubeadmin -p $(oc whoami -t) --tls-verify=false $HOST 1
- 1
- --tls-verify=false is needed if the cluster's default certificate for routes is untrusted. You can set a custom, trusted certificate as the default certificate with the Ingress Operator.
To expose the registry using custom routes:
Create a secret with your route’s TLS keys:
$ oc create secret tls public-route-tls \
  -n openshift-image-registry \
  --cert=</path/to/tls.crt> \
  --key=</path/to/tls.key>
This step is optional. If you do not create a secret, the route uses the default TLS configuration from the Ingress Operator.
On the Registry Operator:
spec:
  routes:
    - name: public-routes
      hostname: myregistry.mycorp.organization
      secretName: public-route-tls
...
Note: Only set secretName if you are providing a custom TLS configuration for the registry's route.
8.15.5.2.2. Configuring an external image registry
If you use an external image registry for the VDDK image, you can add the external image registry’s certificate authorities to the OpenShift Container Platform cluster.
Optionally, you can create a pull secret from your Docker credentials and add it to your service account.
Adding certificate authorities to the cluster
You can add certificate authorities (CA) to the cluster for use when pushing and pulling images with the following procedure.
Prerequisites
- You must have cluster administrator privileges.
- You must have access to the public certificates of the registry, usually a hostname/ca.crt file located in the /etc/docker/certs.d/ directory.
Procedure
Create a ConfigMap in the openshift-config namespace containing the trusted certificates for the registries that use self-signed certificates. For each CA file, ensure the key in the ConfigMap is the hostname of the registry in the hostname[..port] format:
$ oc create configmap registry-cas -n openshift-config \
  --from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt \
  --from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt
Update the cluster image configuration:
$ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}' --type=merge
Allowing pods to reference images from other secured registries
The .dockercfg or $HOME/.docker/config.json file for Docker clients is a Docker credentials file that stores your authentication information if you have previously logged into a secured or insecure registry.
To pull a secured container image that is not from OpenShift Container Platform’s internal registry, you must create a pull secret from your Docker credentials and add it to your service account.
Procedure
If you already have a .dockercfg file for the secured registry, you can create a secret from that file by running:
$ oc create secret generic <pull_secret_name> \
  --from-file=.dockercfg=<path/to/.dockercfg> \
  --type=kubernetes.io/dockercfg
Or if you have a $HOME/.docker/config.json file:
$ oc create secret generic <pull_secret_name> \
  --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
  --type=kubernetes.io/dockerconfigjson
If you do not already have a Docker credentials file for the secured registry, you can create a secret by running:
$ oc create secret docker-registry <pull_secret_name> \
  --docker-server=<registry_server> \
  --docker-username=<user_name> \
  --docker-password=<password> \
  --docker-email=<email>
To use a secret for pulling images for pods, you must add the secret to your service account. The name of the service account in this example should match the name of the service account the pod uses. The default service account is default:
$ oc secrets link default <pull_secret_name> --for=pull
8.15.5.2.3. Creating and using a VDDK image
You can download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry. You then add the VDDK image to the spec.vddkInitImage
field of the HyperConverged
custom resource (CR).
Prerequisites
- You must have access to an OpenShift Container Platform internal image registry or a secure external registry.
Procedure
Create and navigate to a temporary directory:
$ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
- In a browser, navigate to VMware code and click SDKs.
- Under Compute Virtualization, click Virtual Disk Development Kit (VDDK).
- Select the VDDK version that corresponds to your VMware vSphere version, for example, VDDK 7.0 for vSphere 7.0, click Download, and then save the VDDK archive in the temporary directory.
Extract the VDDK archive:
$ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
Create a
Dockerfile
:
$ cat > Dockerfile <<EOF
FROM busybox:latest
COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
RUN mkdir -p /opt
ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
EOF
Build the image:
$ podman build . -t <registry_route_or_server_path>/vddk:<tag> 1
- 1
- Specify your image registry:
-
For an internal OpenShift Container Platform registry, use the internal registry route, for example,
image-registry.openshift-image-registry.svc:5000/openshift/vddk:<tag>
. -
For an external registry, specify the server name, path, and tag, for example,
server.example.com:5000/vddk:<tag>
.
-
For an internal OpenShift Container Platform registry, use the internal registry route, for example,
Push the image to the registry:
$ podman push <registry_route_or_server_path>/vddk:<tag>
- Ensure that the image is accessible to your OpenShift Virtualization environment.
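One simple way to confirm that the image is reachable is to pull it back from the registry on a host with access to the cluster network; registry credentials and TLS settings are environment-specific:
$ podman pull <registry_route_or_server_path>/vddk:<tag>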
Edit the HyperConverged CR in the openshift-cnv project:
$ oc edit hco -n openshift-cnv kubevirt-hyperconverged
Add the
vddkInitImage
parameter to thespec
stanza:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  vddkInitImage: <registry_route_or_server_path>/vddk:<tag>
8.15.5.3. Importing a virtual machine with the VM Import wizard
You can import a single virtual machine with the VM Import wizard.
You can also import a VM template. If you import a VM template, OpenShift Virtualization creates a virtual machine based on the template.
Prerequisites
- You must have admin user privileges.
- The VMware Virtual Disk Development Kit (VDDK) image must be in an image registry that is accessible to your OpenShift Virtualization environment.
-
The VDDK image must be added to the
spec.vddkInitImage
field of theHyperConverged
custom resource (CR). - The VM must be powered off.
- Virtual disks must be connected to IDE or SCSI controllers. If virtual disks are connected to a SATA controller, you can change them to IDE controllers and then migrate the VM.
- The OpenShift Virtualization local and shared persistent storage classes must support VM import.
The OpenShift Virtualization storage must be large enough to accommodate the virtual disk.
Warning: If you are using Ceph RBD block-mode volumes, the storage must be large enough to accommodate the virtual disk. If the disk is too large for the available storage, the import process fails and the PV that is used to copy the virtual disk is not released. You will not be able to import another virtual machine or to clean up the storage because there are insufficient resources to support object deletion. To resolve this situation, you must add more object storage devices to the storage back end.
The OpenShift Virtualization egress network policy must allow the following traffic:
Destination | Protocol | Port
---|---|---|
VMware ESXi hosts | TCP | 443
VMware ESXi hosts | TCP | 902
VMware vCenter | TCP | 5840
Procedure
-
In the web console, click Workloads
Virtual Machines. - Click Create Virtual Machine and select Import with Wizard.
- Select VMware from the Provider list.
Select Connect to New Instance or a saved vCenter instance.
- If you select Connect to New Instance, enter the vCenter hostname, Username, and Password.
- If you select a saved vCenter instance, the wizard connects to the vCenter instance using the saved credentials.
Click Check and Save and wait for the connection to complete.
Note: The connection details are stored in a secret. If you add a provider with an incorrect hostname, user name, or password, click Workloads Secrets and delete the provider secret.
- Select a virtual machine or a template.
- Click Next.
- In the Review screen, review your settings.
Click Edit to update the following settings:
General:
- Description
- Operating System
- Flavor
- Memory
- CPUs
- Workload Profile
Networking:
- Name
- Model
- Network
- Type
- MAC Address
Storage: Click the Options menu of the VM disk and select Edit to update the following fields:
- Name
- Source: For example, Import Disk.
- Size
- Interface
Storage Class: Select NFS or ocs-storagecluster-ceph-rbd (ceph-rbd).
If you select ocs-storagecluster-ceph-rbd, you must set the Volume Mode of the disk to Block.
Other storage classes might work, but they are not officially supported.
-
Advanced
Volume Mode: Select Block. -
Advanced
Access Mode
Advanced
Cloud-init: - Form: Enter the Hostname and Authenticated SSH Keys.
-
Custom script: Enter the
cloud-init
script in the text field.
-
Advanced
Virtual Hardware: You can attach a virtual CD-ROM to the imported virtual machine.
Click Import or Review and Import, if you have edited the import settings.
A Successfully created virtual machine message and a list of resources created for the virtual machine are displayed. The virtual machine appears in Workloads
Virtual Machines.
Virtual machine wizard fields
Name | Parameter | Description |
---|---|---|
Name |
The name can contain lowercase letters ( | |
Description | Optional description field. | |
Operating System | The operating system that is selected for the virtual machine in the template. You cannot edit this field when creating a virtual machine from a template. | |
Boot Source | Import via URL (creates PVC) | Import content from an image available from an HTTP or HTTPS endpoint. Example: Obtaining a URL link from the web page with the operating system image. |
Clone existing PVC (creates PVC) | Select an existent persistent volume claim available on the cluster and clone it. | |
Import via Registry (creates PVC) |
Provision virtual machine from a bootable operating system container located in a registry accessible from the cluster. Example: | |
PXE (network boot - adds network interface) | Boot an operating system from a server on the network. Requires a PXE bootable network attachment definition. | |
Persistent Volume Claim project | Project name that you want to use for cloning the PVC. | |
Persistent Volume Claim name | PVC name that should apply to this virtual machine template if you are cloning an existing PVC. | |
Mount this as a CD-ROM boot source | A CD-ROM requires an additional disk for installing the operating system. Select the checkbox to add a disk and customize it later. | |
Flavor | Tiny, Small, Medium, Large, Custom | Presets the amount of CPU and memory in a virtual machine template with predefined values that are allocated to the virtual machine, depending on the operating system associated with that template.
If you choose a default template, you can override the |
Workload Type Note If you choose the incorrect Workload Type, there could be performance or resource utilization issues (such as a slow UI). | Desktop | A virtual machine configuration for use on a desktop. Ideal for consumption on a small scale. Recommended for use with the web console. |
Server | Balances performance and it is compatible with a wide range of server workloads. | |
High-Performance | A virtual machine configuration that is optimized for high-performance workloads. | |
Start this virtual machine after creation. | This checkbox is selected by default and the virtual machine starts running after creation. Clear the checkbox if you do not want the virtual machine to start when it is created. |
Cloud-init fields
Name | Description |
---|---|
Hostname | Sets a specific hostname for the virtual machine. |
Authorized SSH Keys | The user’s public key that is copied to ~/.ssh/authorized_keys on the virtual machine. |
Custom script | Replaces other options with a field in which you paste a custom cloud-init script. |
Networking fields
Name | Description |
---|---|
Name | Name for the network interface controller. |
Model | Indicates the model of the network interface controller. Supported values are e1000e and virtio. |
Network | List of available network attachment definitions. |
Type |
List of available binding methods. For the default pod network, |
MAC Address | MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. |
Storage fields
Name | Selection | Description |
---|---|---|
Source | Blank (creates PVC) | Create an empty disk. |
 | Import via URL (creates PVC) | Import content via URL (HTTP or HTTPS endpoint). |
 | Use an existing PVC | Use a PVC that is already available in the cluster. |
 | Clone existing PVC (creates PVC) | Select an existing PVC available in the cluster and clone it. |
 | Import via Registry (creates PVC) | Import content via container registry. |
 | Container (ephemeral) | Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. |
Name | | Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), and hyphens (-). |
Size | | Size of the disk in GiB. |
Type | | Type of disk. Example: Disk or CD-ROM |
Interface | | Type of disk device. Supported interfaces are virtIO, SATA, and SCSI. |
Storage Class | | The storage class that is used to create the disk. |
Advanced: Volume Mode | | Defines whether the persistent volume uses a formatted file system or raw block state. Default is Filesystem. |
Advanced: Access Mode | | Access mode of the persistent volume. Supported access modes are Single User (RWO), Shared Access (RWX), and Read Only (ROX). |
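In the virtual machine manifest, the Type and Interface selections correspond to the disk device and bus settings. The following is a minimal sketch, not taken from this document, showing one disk on the `virtio` bus and one CD-ROM on the `sata` bus; the disk names are illustrative:

domain:
  devices:
    disks:
      - name: rootdisk
        disk:
          bus: virtio
      - name: installation-cdrom
        cdrom:
          bus: sata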
Advanced storage settings
The following advanced storage settings are available for Blank, Import via URL, and Clone existing PVC disks. These parameters are optional. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults
config map.
Name | Parameter | Description |
---|---|---|
Volume Mode | Filesystem | Stores the virtual disk on a file system-based volume. |
 | Block | Stores the virtual disk directly on the block volume. Only use `Block` if the underlying storage supports it. |
Access Mode | Single User (RWO) | The disk can be mounted as read/write by a single node. |
 | Shared Access (RWX) | The disk can be mounted as read/write by many nodes. Note: This is required for some features, such as live migration of virtual machines between nodes. |
 | Read Only (ROX) | The disk can be mounted as read-only by many nodes. |
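To see which defaults apply when these advanced settings are omitted, you can inspect the `kubevirt-storage-class-defaults` config map. The `openshift-cnv` namespace shown below is the usual OpenShift Virtualization installation namespace and is an assumption, not stated in this section:

$ oc get configmap kubevirt-storage-class-defaults -n openshift-cnv -o yaml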
8.15.5.3.1. Updating the NIC name of an imported virtual machine
You must update the NIC name of a virtual machine that you imported from VMware to conform to OpenShift Virtualization naming conventions.
Procedure
- Log in to the virtual machine.
-
Navigate to the
/etc/sysconfig/network-scripts
directory. Rename the network configuration file:
$ mv vmnic0 ifcfg-eth0 1
- 1
- The first network configuration file is named
ifcfg-eth0
. Additional network configuration files are numbered sequentially, for example,ifcfg-eth1
,ifcfg-eth2
.
Update the
NAME
andDEVICE
parameters in the network configuration file:NAME=eth0 DEVICE=eth0
Restart the network:
$ systemctl restart network
8.15.5.4. Troubleshooting a virtual machine import
8.15.5.4.1. Logs
You can check the V2V Conversion pod log for errors.
Procedure
View the V2V Conversion pod name by running the following command:
$ oc get pods -n <namespace> | grep v2v 1
- 1
- Specify the namespace of your imported virtual machine.
Example output
kubevirt-v2v-conversion-f66f7d-zqkz7 1/1 Running 0 4h49m
View the V2V Conversion pod log by running the following command:
$ oc logs <kubevirt-v2v-conversion-f66f7d-zqkz7> -f -n <namespace> 1
- 1
- Specify the VM Conversion pod name and the namespace.
8.15.5.4.2. Error messages
The following error messages might appear:
If the VMware VM is not shut down before import, the imported virtual machine displays the error message,
Readiness probe failed
in the OpenShift Container Platform console and the V2V Conversion pod log displays the following error message:
INFO - have error: ('virt-v2v error: internal error: invalid argument: libvirt domain ‘v2v_migration_vm_1’ is running or paused. It must be shut down in order to perform virt-v2v conversion',)"
The following error message is displayed in the OpenShift Container Platform console if a non-admin user tries to import a VM:
Could not load config map vmware-to-kubevirt-os in kube-public namespace Restricted Access: configmaps "vmware-to-kubevirt-os" is forbidden: User cannot get resource "configmaps" in API group "" in the namespace "kube-public"
Only an admin user can import a VM.
8.16. Cloning virtual machines
8.16.1. Enabling user permissions to clone data volumes across namespaces
The isolating nature of namespaces means that users cannot by default clone resources between namespaces.
To enable a user to clone a virtual machine to another namespace, a user with the cluster-admin
role must create a new cluster role. Bind this cluster role to a user to enable them to clone virtual machines to the destination namespace.
8.16.1.1. Prerequisites
-
Only a user with the
cluster-admin
role can create cluster roles.
8.16.1.2. About data volumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
8.16.1.3. Creating RBAC resources for cloning data volumes
Create a new cluster role that enables permissions for all actions for the datavolumes
resource.
Procedure
Create a
ClusterRole
manifest:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <datavolume-cloner> 1
rules:
- apiGroups: ["cdi.kubevirt.io"]
  resources: ["datavolumes/source"]
  verbs: ["*"]
- 1
- Unique name for the cluster role.
Create the cluster role in the cluster:
$ oc create -f <datavolume-cloner.yaml> 1
- 1
- The file name of the
ClusterRole
manifest created in the previous step.
Create a
RoleBinding
manifest that applies to both the source and destination namespaces and references the cluster role created in the previous step.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <allow-clone-to-user> 1
  namespace: <Source namespace> 2
subjects:
- kind: ServiceAccount
  name: default
  namespace: <Destination namespace> 3
roleRef:
  kind: ClusterRole
  name: datavolume-cloner 4
  apiGroup: rbac.authorization.k8s.io
Create the role binding in the cluster:
$ oc create -f <rolebinding.yaml> 1
- 1
- The file name of the
RoleBinding
manifest created in the previous step.
8.16.2. Cloning a virtual machine disk into a new data volume
You can clone the persistent volume claim (PVC) of a virtual machine disk into a new data volume by referencing the source PVC in your data volume configuration file.
Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block
to a PV with volumeMode: Filesystem
.
However, you can only clone between different volume modes if both the source and target volumes have the `contentType: kubevirt` setting.
When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes.
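For a single data volume, preallocation is requested through the `preallocation` field in the data volume spec. The following is a minimal sketch under that assumption; the data volume name and placeholders are illustrative:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: preallocated-clone-dv
spec:
  preallocation: true
  source:
    pvc:
      namespace: "<source-namespace>"
      name: "<source-pvc>"
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 2Gi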
8.16.2.1. Prerequisites
- Users need additional permissions to clone the PVC of a virtual machine disk into another namespace.
8.16.2.2. About data volumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
8.16.2.3. Cloning the persistent volume claim of a virtual machine disk into a new data volume
You can clone a persistent volume claim (PVC) of an existing virtual machine disk into a new data volume. The new data volume can then be used for a new virtual machine.
When a data volume is created independently of a virtual machine, the lifecycle of the data volume is independent of the virtual machine. If the virtual machine is deleted, neither the data volume nor its associated PVC is deleted.
Prerequisites
- Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it.
-
Install the OpenShift CLI (
oc
).
Procedure
- Examine the virtual machine disk you want to clone to identify the name and namespace of the associated PVC.
Create a YAML file for a data volume that specifies the name of the new data volume, the name and namespace of the source PVC, and the size of the new data volume.
For example:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <cloner-datavolume> 1
spec:
  source:
    pvc:
      namespace: "<source-namespace>" 2
      name: "<my-favorite-vm-disk>" 3
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: <2Gi> 4
Start cloning the PVC by creating the data volume:
$ oc create -f <cloner-datavolume>.yaml
Note: Data volumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new data volume while the PVC is being cloned.
8.16.2.4. Template: Data volume clone configuration file
example-clone-dv.yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: "example-clone-dv"
spec:
  source:
    pvc:
      name: source-pvc
      namespace: example-ns
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: "1G"
8.16.2.5. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
8.16.3. Cloning a virtual machine by using a data volume template
You can create a new virtual machine by cloning the persistent volume claim (PVC) of an existing VM. By including a dataVolumeTemplate
in your virtual machine configuration file, you create a new data volume from the original PVC.
Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block
to a PV with volumeMode: Filesystem
.
However, you can only clone between different volume modes if both the source and target volumes have the `contentType: kubevirt` setting.
When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes.
8.16.3.1. Prerequisites
- Users need additional permissions to clone the PVC of a virtual machine disk into another namespace.
8.16.3.2. About data volumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
8.16.3.3. Creating a new virtual machine from a cloned persistent volume claim by using a data volume template
You can create a virtual machine that clones the persistent volume claim (PVC) of an existing virtual machine into a data volume. Reference a dataVolumeTemplate
in the virtual machine manifest and the source
PVC is cloned to a data volume, which is then automatically used for the creation of the virtual machine.
When a data volume is created as part of the data volume template of a virtual machine, the lifecycle of the data volume is then dependent on the virtual machine. If the virtual machine is deleted, the data volume and associated PVC are also deleted.
Prerequisites
- Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it.
-
Install the OpenShift CLI (
oc
).
Procedure
- Examine the virtual machine you want to clone to identify the name and namespace of the associated PVC.
Create a YAML file for a
VirtualMachine
object. The following virtual machine example clonesmy-favorite-vm-disk
, which is located in thesource-namespace
namespace. The2Gi
data volume calledfavorite-clone
is created frommy-favorite-vm-disk
.For example:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: vm-dv-clone
  name: vm-dv-clone 1
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-dv-clone
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: root-disk
        resources:
          requests:
            memory: 64M
      volumes:
      - dataVolume:
          name: favorite-clone
        name: root-disk
  dataVolumeTemplates:
  - metadata:
      name: favorite-clone
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
      source:
        pvc:
          namespace: "source-namespace"
          name: "my-favorite-vm-disk"
- 1
- The virtual machine to create.
Create the virtual machine with the PVC-cloned data volume:
$ oc create -f <vm-clone-datavolumetemplate>.yaml
8.16.3.4. Template: Data volume virtual machine configuration file
example-dv-vm.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
labels:
kubevirt.io/vm: example-vm
name: example-vm
spec:
dataVolumeTemplates:
- metadata:
name: example-dv
spec:
pvc:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1G
source:
http:
url: "" 1
running: false
template:
metadata:
labels:
kubevirt.io/vm: example-vm
spec:
domain:
cpu:
cores: 1
devices:
disks:
- disk:
bus: virtio
name: example-dv-disk
machine:
type: q35
resources:
requests:
memory: 1G
terminationGracePeriodSeconds: 0
volumes:
- dataVolume:
name: example-dv
name: example-dv-disk
- 1
- The
HTTP
source of the image you want to import, if applicable.
8.16.3.5. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
8.16.4. Cloning a virtual machine disk into a new block storage data volume
You can clone the persistent volume claim (PVC) of a virtual machine disk into a new block data volume by referencing the source PVC in your data volume configuration file.
Cloning operations between different volume modes are supported, such as cloning from a persistent volume (PV) with volumeMode: Block
to a PV with volumeMode: Filesystem
.
However, you can only clone between different volume modes if both the source and target volumes have the `contentType: kubevirt` setting.
When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes.
8.16.4.1. Prerequisites
- Users need additional permissions to clone the PVC of a virtual machine disk into another namespace.
8.16.4.2. About data volumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
8.16.4.3. About block persistent volumes
A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead.
Raw block volumes are provisioned by specifying volumeMode: Block
in the PV and persistent volume claim (PVC) specification.
8.16.4.4. Creating a local block persistent volume
Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block
volume and use it as a block device for a virtual machine image.
Procedure
-
Log in as
root
to the node on which to create the local PV. This procedure usesnode01
for its examples. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file
loop10
with a size of 2 GB (20 blocks of 100 MB):
$ dd if=/dev/zero of=<loop10> bs=100M count=20
Mount the
loop10
file as a loop device:
$ losetup </dev/loop10> <loop10>
Create a
PersistentVolume
manifest that references the mounted loop device.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: <local-block-pv10>
  annotations:
spec:
  local:
    path: </dev/loop10> 1
  capacity:
    storage: <2Gi>
  volumeMode: Block 2
  storageClassName: local 3
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <node01> 4
Create the block PV.
# oc create -f <local-block-pv10.yaml> 1
- 1
- The file name of the persistent volume created in the previous step.
8.16.4.5. Cloning the persistent volume claim of a virtual machine disk into a new data volume
You can clone a persistent volume claim (PVC) of an existing virtual machine disk into a new data volume. The new data volume can then be used for a new virtual machine.
When a data volume is created independently of a virtual machine, the lifecycle of the data volume is independent of the virtual machine. If the virtual machine is deleted, neither the data volume nor its associated PVC is deleted.
Prerequisites
- Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it.
-
Install the OpenShift CLI (
oc
). - At least one available block persistent volume (PV) that is the same size as or larger than the source PVC.
Procedure
- Examine the virtual machine disk you want to clone to identify the name and namespace of the associated PVC.
Create a YAML file for a data volume that specifies the name of the new data volume, the name and namespace of the source PVC,
volumeMode: Block
so that an available block PV is used, and the size of the new data volume. For example:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <cloner-datavolume> 1
spec:
  source:
    pvc:
      namespace: "<source-namespace>" 2
      name: "<my-favorite-vm-disk>" 3
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: <2Gi> 4
    volumeMode: Block 5
- 1
- The name of the new data volume.
- 2
- The namespace where the source PVC exists.
- 3
- The name of the source PVC.
- 4
- The size of the new data volume. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC.
- 5
- Specifies that the destination is a block PV.
Start cloning the PVC by creating the data volume:
$ oc create -f <cloner-datavolume>.yaml
Note: Data volumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new data volume while the PVC is being cloned.
8.16.4.6. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
8.17. Virtual machine networking
8.17.1. Configuring the virtual machine for the default pod network
You can connect a virtual machine to the default internal pod network by configuring its network interface to use the masquerade
binding mode.
8.17.1.1. Configuring masquerade mode from the command line
You can use masquerade mode to hide a virtual machine’s outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge.
Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file.
Prerequisites
- The virtual machine must be configured to use DHCP to acquire IPv4 addresses. The examples below are configured to use DHCP.
Procedure
Edit the
interfaces
spec of your virtual machine configuration file:
kind: VirtualMachine
spec:
  domain:
    devices:
      interfaces:
        - name: default
          masquerade: {} 1
          ports: 2
            - port: 80
  networks:
    - name: default
      pod: {}
- 1
- Connect using masquerade mode.
- 2
- Optional: List the ports that you want to expose from the virtual machine, each specified by the
port
field. Theport
value must be a number between 0 and 65536. When theports
array is not used, all ports in the valid range are open to incoming traffic. In this example, incoming traffic is allowed on port80
.
Note: Ports 49152 and 49153 are reserved for use by the libvirt platform and all other incoming traffic to these ports is dropped.
Create the virtual machine:
$ oc create -f <vm-name>.yaml
8.17.1.2. Configuring masquerade mode with dual-stack (IPv4 and IPv6)
You can configure a new virtual machine to use both IPv6 and IPv4 on the default pod network by using cloud-init.
The IPv6 network address must be statically set to fd10:0:2::2/120
with a default gateway of fd10:0:2::1
in the virtual machine configuration. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally.
When the virtual machine is running, incoming and outgoing traffic for the virtual machine is routed to both the IPv4 address and the unique IPv6 address of the virt-launcher pod. The virt-launcher pod then routes the IPv4 traffic to the DHCP address of the virtual machine, and the IPv6 traffic to the statically set IPv6 address of the virtual machine.
Prerequisites
- The OpenShift Container Platform cluster must use the OVN-Kubernetes Container Network Interface (CNI) network provider configured for dual-stack.
Procedure
In a new virtual machine configuration, include an interface with
masquerade
and configure the IPv6 address and default gateway by using cloud-init.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm-ipv6
...
          interfaces:
            - name: default
              masquerade: {} 1
              ports:
                - port: 80 2
      networks:
      - name: default
        pod: {}
      volumes:
      - cloudInitNoCloud:
          networkData: |
            version: 2
            ethernets:
              eth0:
                dhcp4: true
                addresses: [ fd10:0:2::2/120 ] 3
                gateway6: fd10:0:2::1 4
Create the virtual machine in the namespace:
$ oc create -f example-vm-ipv6.yaml
Verification
- To verify that IPv6 has been configured, start the virtual machine and view the interface status of the virtual machine instance to ensure it has an IPv6 address:
$ oc get vmi <vmi-name> -o jsonpath="{.status.interfaces[*].ipAddresses}"
8.17.2. Creating a service to expose a virtual machine
You can expose a virtual machine within the cluster or outside the cluster by using a Service
object.
8.17.2.1. About services
A Kubernetes service is an abstract way to expose an application running on a set of pods as a network service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a spec.type
in the Service
object:
- ClusterIP
-
Exposes the service on an internal IP address within the cluster.
ClusterIP
is the default servicetype
. - NodePort
-
Exposes the service on the same port of each selected node in the cluster.
NodePort
makes a service accessible from outside the cluster. - LoadBalancer
- Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service.
8.17.2.1.1. Dual-stack support
If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by defining the spec.ipFamilyPolicy
and the spec.ipFamilies
fields in the Service
object.
The spec.ipFamilyPolicy
field can be set to one of the following values:
- SingleStack
- The control plane assigns a cluster IP address for the service based on the first configured service cluster IP range.
- PreferDualStack
- The control plane assigns both IPv4 and IPv6 cluster IP addresses for the service on clusters that have dual-stack configured.
- RequireDualStack
-
This option fails for clusters that do not have dual-stack networking enabled. For clusters that have dual-stack configured, the behavior is the same as when the value is set to
PreferDualStack
. The control plane allocates cluster IP addresses from both IPv4 and IPv6 address ranges.
You can define which IP family to use for single-stack or define the order of IP families for dual-stack by setting the spec.ipFamilies
field to one of the following array values:
-
[IPv4]
-
[IPv6]
-
[IPv4, IPv6]
-
[IPv6, IPv4]
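For example, a service that prefers dual-stack and lists IPv4 first might set these fields as follows. This is a sketch only; the service name, selector, and ports are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: vmservice-dualstack
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    special: key
  ports:
    - port: 27017
      protocol: TCP
      targetPort: 22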
8.17.2.2. Exposing a virtual machine as a service
Create a ClusterIP
, NodePort
, or LoadBalancer
service to connect to a running virtual machine (VM) from within or outside the cluster.
Procedure
Edit the
VirtualMachine
manifest to add the label for service creation:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-ephemeral
  namespace: example-namespace
spec:
  running: false
  template:
    metadata:
      labels:
        special: key 1
# ...
- 1
- Add the label
special: key
in thespec.template.metadata.labels
section.
Note: Labels on a virtual machine are passed through to the pod. The
special: key
label must match the label in thespec.selector
attribute of theService
manifest.-
Save the
VirtualMachine
manifest file to apply your changes. Create a
Service
manifest to expose the VM:
apiVersion: v1
kind: Service
metadata:
  name: vmservice 1
  namespace: example-namespace 2
spec:
  externalTrafficPolicy: Cluster 3
  ports:
  - nodePort: 30000 4
    port: 27017
    protocol: TCP
    targetPort: 22 5
  selector:
    special: key 6
  type: NodePort 7
- 1
- The name of the
Service
object. - 2
- The namespace where the
Service
object resides. This must match themetadata.namespace
field of theVirtualMachine
manifest. - 3
- Optional: Specifies how the nodes distribute service traffic that is received on external IP addresses. This only applies to
NodePort
andLoadBalancer
service types. The default value isCluster
which routes traffic evenly to all cluster endpoints. - 4
- Optional: When set, the
nodePort
value must be unique across all services. If not specified, a value in the 30000-32767 range is dynamically allocated.
is dynamically allocated. - 5
- Optional: The VM port to be exposed by the service. It must reference an open port if a port list is defined in the VM manifest. If
targetPort
is not specified, it takes the same value asport
. - 6
- The reference to the label that you added in the
spec.template.metadata.labels
stanza of theVirtualMachine
manifest. - 7
- The type of service. Possible values are
ClusterIP
,NodePort
andLoadBalancer
.
-
Save the
Service
manifest file. Create the service by running the following command:
$ oc create -f <service_name>.yaml
- Start the VM. If the VM is already running, restart it.
Verification
Query the
Service
object to verify that it is available:
$ oc get service -n example-namespace
Example output for
ClusterIP
service
NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
vmservice   ClusterIP   172.30.3.149   <none>        27017/TCP   2m
Example output for
NodePort
service
NAME        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
vmservice   NodePort   172.30.232.73   <none>        27017:30000/TCP   5m
Example output for
LoadBalancer
service
NAME        TYPE           CLUSTER-IP    EXTERNAL-IP                   PORT(S)           AGE
vmservice   LoadBalancer   172.30.27.5   172.29.10.235,172.29.10.235   27017:31829/TCP   5s
Choose the appropriate method to connect to the virtual machine:
For a
ClusterIP
service, connect to the VM from within the cluster by using the service IP address and the service port. For example:
$ ssh fedora@172.30.3.149 -p 27017
For a
NodePort
service, connect to the VM by specifying the node IP address and the node port outside the cluster network. For example:
$ ssh fedora@$NODE_IP -p 30000
-
For a
LoadBalancer
service, use thevinagre
client to connect to your virtual machine by using the public IP address and port. External ports are dynamically allocated.
8.17.2.3. Additional resources
8.17.3. Attaching a virtual machine to a Linux bridge network
By default, OpenShift Virtualization is installed with a single, internal pod network.
You must create a Linux bridge network attachment definition (NAD) in order to connect to additional networks.
To attach a virtual machine to an additional network:
- Create a Linux bridge node network configuration policy.
- Create a Linux bridge network attachment definition.
- Configure the virtual machine, enabling the virtual machine to recognize the network attachment definition.
For more information about scheduling, interface types, and other node networking activities, see the node networking section.
8.17.3.1. Connecting to the network through the network attachment definition
8.17.3.1.1. Creating a Linux bridge node network configuration policy
Use a NodeNetworkConfigurationPolicy
manifest YAML file to create the Linux bridge.
Procedure
Create the
NodeNetworkConfigurationPolicy
manifest. This example includes sample values that you must replace with your own information.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy 1
spec:
  desiredState:
    interfaces:
      - name: br1 2
        description: Linux bridge with eth1 as a port 3
        type: linux-bridge 4
        state: up 5
        ipv4:
          enabled: false 6
        bridge:
          options:
            stp:
              enabled: false 7
          port:
            - name: eth1 8
- 1
- Name of the policy.
- 2
- Name of the interface.
- 3
- Optional: Human-readable description of the interface.
- 4
- The type of interface. This example creates a bridge.
- 5
- The requested state for the interface after creation.
- 6
- Disables IPv4 in this example.
- 7
- Disables STP in this example.
- 8
- The node NIC to which the bridge is attached.
8.17.3.2. Creating a Linux bridge network attachment definition
8.17.3.2.1. Prerequisites
- A Linux bridge must be configured and attached on every node. See the node networking section for more information.
Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported.
8.17.3.2.2. Creating a Linux bridge network attachment definition in the web console
The network attachment definition is a custom resource that exposes layer-2 devices to a specific namespace in your OpenShift Virtualization cluster.
Network administrators can create network attachment definitions to provide existing layer-2 networking to pods and virtual machines.
Procedure
-
In the web console, click Networking
Network Attachment Definitions. Click Create Network Attachment Definition.
Note: The network attachment definition must be in the same namespace as the pod or virtual machine.
- Enter a unique Name and optional Description.
- Click the Network Type list and select CNV Linux bridge.
- Enter the name of the bridge in the Bridge Name field.
- Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field.
- Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod.
Click Create.
Note: A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN.
8.17.3.2.3. Creating a Linux bridge network attachment definition in the CLI
As a network administrator, you can configure a network attachment definition of type cnv-bridge
to provide layer-2 networking to pods and virtual machines.
The network attachment definition must be in the same namespace as the pod or virtual machine.
Procedure
- Create a network attachment definition in the same namespace as the virtual machine.
Add the virtual machine to the network attachment definition, as in the following example:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: <bridge-network> 1
  annotations:
    k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/<bridge-interface> 2
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "<bridge-network>", 3
    "type": "cnv-bridge", 4
    "bridge": "<bridge-interface>", 5
    "macspoofchk": true, 6
    "vlan": 1 7
  }'
- 1
- The name for the
NetworkAttachmentDefinition
object. - 2
- Optional: Annotation key-value pair for node selection, where
bridge-interface
is the name of a bridge configured on some nodes. If you add this annotation to your network attachment definition, your virtual machine instances will only run on the nodes that have thebridge-interface
bridge connected. - 3
- The name for the configuration. It is recommended to match the configuration name to the
name
value of the network attachment definition. - 4
- The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. Do not change this field unless you want to use a different CNI.
- 5
- The name of the Linux bridge configured on the node.
- 6
- Optional: Flag to enable MAC spoof check. When set to
true
, you cannot change the MAC address of the pod or guest interface. This attribute provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod. - 7
- Optional: The VLAN tag. No additional VLAN configuration is required on the node network configuration policy.
Note: A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN.
Create the network attachment definition:
$ oc create -f <network-attachment-definition.yaml> 1
- 1
- Where
<network-attachment-definition.yaml>
is the file name of the network attachment definition manifest.
Verification
Verify that the network attachment definition was created by running the following command:
$ oc get network-attachment-definition <bridge-network>
8.17.3.3. Configuring the virtual machine for a Linux bridge network
8.17.3.3.1. Creating a NIC for a virtual machine in the web console
Create and attach additional NICs to a virtual machine from the web console.
Procedure
-
In the correct project in the OpenShift Virtualization console, click Workloads
Virtualization from the side menu. - Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click Network Interfaces to display the NICs already attached to the virtual machine.
- Click Add Network Interface to create a new slot in the list.
- Use the Network drop-down list to select the network attachment definition for the additional network.
- Fill in the Name, Model, Type, and MAC Address for the new NIC.
- Click Add to save and attach the NIC to the virtual machine.
8.17.3.3.2. Networking fields
Name | Description |
---|---|
Name | Name for the network interface controller. |
Model | Indicates the model of the network interface controller. Supported values are e1000e and virtio. |
Network | List of available network attachment definitions. |
Type | List of available binding methods. For the default pod network, `masquerade` is the only recommended binding method. For secondary networks, use the `bridge` binding method. |
MAC Address | MAC address for the network interface controller. If a MAC address is not specified, one is assigned automatically. |
8.17.3.3.3. Attaching a virtual machine to an additional network in the CLI
Attach a virtual machine to an additional network by adding a bridge interface and specifying a network attachment definition in the virtual machine configuration.
This procedure uses a YAML file to demonstrate editing the configuration and applying the updated file to the cluster. You can alternatively use the oc edit <object> <name>
command to edit an existing virtual machine.
Prerequisites
- Shut down the virtual machine before editing the configuration. If you edit a running virtual machine, you must restart the virtual machine for the changes to take effect.
Procedure
- Create or edit a configuration of a virtual machine that you want to connect to the bridge network.
Add the bridge interface to the
spec.template.spec.domain.devices.interfaces
list and the network attachment definition to thespec.template.spec.networks
list. This example adds a bridge interface calledbridge-net
that connects to thea-bridge-network
network attachment definition:apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <example-vm> spec: template: spec: domain: devices: interfaces: - masquerade: {} name: <default> - bridge: {} name: <bridge-net> 1 ... networks: - name: <default> pod: {} - name: <bridge-net> 2 multus: networkName: <network-namespace>/<a-bridge-network> 3 ...
- 1
- The name of the bridge interface.
- 2
- The name of the network. This value must match the
name
value of the correspondingspec.template.spec.domain.devices.interfaces
entry. - 3
- The name of the network attachment definition, prefixed by the namespace where it exists. The namespace must be either the
default
namespace or the same namespace where the VM is to be created. In this case,multus
is used. Multus is a Container Network Interface (CNI) plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
Apply the configuration:
$ oc apply -f <example-vm.yaml>
- Optional: If you edited a running virtual machine, you must restart it for the changes to take effect.
8.17.4. Configuring IP addresses for virtual machines
You can configure either dynamically or statically provisioned IP addresses for virtual machines.
Prerequisites
- The virtual machine must connect to an external network.
- You must have a DHCP server available on the additional network to configure a dynamic IP for the virtual machine.
8.17.4.1. Configuring an IP address for a new virtual machine using cloud-init
You can use cloud-init to configure an IP address when you create a virtual machine. The IP address can be dynamically or statically provisioned.
Procedure
Create a virtual machine configuration and include the cloud-init network details in the
spec.volumes.cloudInitNoCloud.networkData
field of the virtual machine configuration:To configure a dynamic IP, specify the interface name and the
dhcp4
boolean:
kind: VirtualMachine
spec:
  ...
  volumes:
  - cloudInitNoCloud:
      networkData: |
        version: 2
        ethernets:
          eth1: 1
            dhcp4: true 2
To configure a static IP, specify the interface name and the IP address:
kind: VirtualMachine
spec:
  ...
  volumes:
  - cloudInitNoCloud:
      networkData: |
        version: 2
        ethernets:
          eth1: 1
            addresses:
            - 10.10.10.14/24 2
8.17.5. Configuring an SR-IOV network device for virtual machines
You can configure a Single Root I/O Virtualization (SR-IOV) device for virtual machines in your cluster. This process is similar but not identical to configuring an SR-IOV device for OpenShift Container Platform.
Live migration is supported for virtual machines that are attached to an SR-IOV network interface only if the sriovLiveMigration
feature gate is enabled in the HyperConverged Cluster custom resource (CR). When the spec.featureGates.sriovLiveMigration
field is set to true
, the virt-launcher
pod runs with the SYS_RESOURCE
capability. This might degrade its security.
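A minimal sketch of enabling this feature gate in the HyperConverged CR is shown below. The CR name `kubevirt-hyperconverged` and the `openshift-cnv` namespace are the usual defaults and are assumptions here, not values from this section:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  featureGates:
    sriovLiveMigration: true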
8.17.5.1. Prerequisites
- You must have installed the SR-IOV Operator.
- You must have configured the SR-IOV Operator.
8.17.5.2. Automated discovery of SR-IOV network devices
The SR-IOV Network Operator searches your cluster for SR-IOV capable network devices on worker nodes. The Operator creates and updates a SriovNetworkNodeState custom resource (CR) for each worker node that provides a compatible SR-IOV network device.
The CR is assigned the same name as the worker node. The status.interfaces
list provides information about the network devices on a node.
Do not modify a SriovNetworkNodeState
object. The Operator creates and manages these resources automatically.
8.17.5.2.1. Example SriovNetworkNodeState object
The following YAML is an example of a SriovNetworkNodeState
object created by the SR-IOV Network Operator:
An SriovNetworkNodeState object
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodeState
metadata:
  name: node-25 1
  namespace: openshift-sriov-network-operator
  ownerReferences:
  - apiVersion: sriovnetwork.openshift.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: SriovNetworkNodePolicy
    name: default
spec:
  dpConfigVersion: "39824"
status:
  interfaces: 2
  - deviceID: "1017"
    driver: mlx5_core
    mtu: 1500
    name: ens785f0
    pciAddress: "0000:18:00.0"
    totalvfs: 8
    vendor: 15b3
  - deviceID: "1017"
    driver: mlx5_core
    mtu: 1500
    name: ens785f1
    pciAddress: "0000:18:00.1"
    totalvfs: 8
    vendor: 15b3
  - deviceID: 158b
    driver: i40e
    mtu: 1500
    name: ens817f0
    pciAddress: 0000:81:00.0
    totalvfs: 64
    vendor: "8086"
  - deviceID: 158b
    driver: i40e
    mtu: 1500
    name: ens817f1
    pciAddress: 0000:81:00.1
    totalvfs: 64
    vendor: "8086"
  - deviceID: 158b
    driver: i40e
    mtu: 1500
    name: ens803f0
    pciAddress: 0000:86:00.0
    totalvfs: 64
    vendor: "8086"
  syncStatus: Succeeded
8.17.5.3. Configuring SR-IOV network devices
The SR-IOV Network Operator adds the SriovNetworkNodePolicy.sriovnetwork.openshift.io
CustomResourceDefinition to OpenShift Container Platform. You can configure an SR-IOV network device by creating a SriovNetworkNodePolicy custom resource (CR).
When applying the configuration specified in a SriovNetworkNodePolicy
object, the SR-IOV Operator might drain the nodes, and in some cases, reboot nodes.
It might take several minutes for a configuration change to apply.
Prerequisites
-
You installed the OpenShift CLI (
oc
). -
You have access to the cluster as a user with the
cluster-admin
role. - You have installed the SR-IOV Network Operator.
- You have enough available nodes in your cluster to handle the evicted workload from drained nodes.
- You have not selected any control plane nodes for SR-IOV network device configuration.
Procedure
Create an
SriovNetworkNodePolicy
object, and then save the YAML in the<name>-sriov-node-network.yaml
file. Replace<name>
with the name for this configuration.apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: "<vendor_code>" 9 deviceID: "<device_id>" 10 pfNames: ["<pf_name>", ...] 11 rootDevices: ["<pci_bus_id>", "..."] 12 deviceType: vfio-pci 13 isRdma: false 14
- 1
- Specify a name for the CR object.
- 2
- Specify the namespace where the SR-IOV Operator is installed.
- 3
- Specify the resource name of the SR-IOV device plugin. You can create multiple
SriovNetworkNodePolicy
objects for a resource name. - 4
- Specify the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes.
- 5
- Optional: Specify an integer value between
0
and99
. A smaller number gets higher priority, so a priority of10
is higher than a priority of99
. The default value is99
. - 6
- Optional: Specify a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models.
- 7
- Specify the number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than
128
. - 8
- The
nicSelector
mapping selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters. It is recommended to identify the Ethernet adapter with enough precision to minimize the possibility of selecting an Ethernet device unintentionally. If you specifyrootDevices
, you must also specify a value forvendor
,deviceID
, orpfNames
. If you specify bothpfNames
androotDevices
at the same time, ensure that they point to an identical device. - 9
- Optional: Specify the vendor hex code of the SR-IOV network device. The only allowed values are either
8086
or15b3
. - 10
- Optional: Specify the device hex code of SR-IOV network device. The only allowed values are
158b
,1015
,1017
. - 11
- Optional: The parameter accepts an array of one or more physical function (PF) names for the Ethernet device.
- 12
- The parameter accepts an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format:
0000:02:00.1
. - 13
- The
vfio-pci
driver type is required for virtual functions in OpenShift Virtualization. - 14
- Optional: Specify whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set
isRdma
tofalse
. The default value isfalse
.
NoteIf
isRDMA
flag is set totrue
, you can continue to use the RDMA enabled VF as a normal network device. A device can be used in either mode.-
Optional: Label the SR-IOV capable cluster nodes with
SriovNetworkNodePolicy.Spec.NodeSelector
if they are not already labeled. For more information about labeling nodes, see "Understanding how to update labels on nodes". Create the
SriovNetworkNodePolicy
object:
$ oc create -f <name>-sriov-node-network.yaml
where
<name>
specifies the name for this configuration.After applying the configuration update, all the pods in
sriov-network-operator
namespace transition to theRunning
status.To verify that the SR-IOV network device is configured, enter the following command. Replace
<node_name>
with the name of a node with the SR-IOV network device that you just configured.$ oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'
8.17.5.4. Next steps
8.17.6. Defining an SR-IOV network
You can create a network attachment for a Single Root I/O Virtualization (SR-IOV) device for virtual machines.
After the network is defined, you can attach virtual machines to the SR-IOV network.
8.17.6.1. Prerequisites
- You must have configured an SR-IOV device for virtual machines.
8.17.6.2. Configuring SR-IOV additional network
You can configure an additional network that uses SR-IOV hardware by creating a SriovNetwork
object. When you create a SriovNetwork
object, the SR-IOV Operator automatically creates a NetworkAttachmentDefinition
object.
Users can then attach virtual machines to the SR-IOV network by specifying the network in the virtual machine configurations.
Do not modify or delete a SriovNetwork
object if it is attached to any pods or virtual machines in the running
state.
Prerequisites
-
Install the OpenShift CLI (
oc
). -
Log in as a user with
cluster-admin
privileges.
Procedure
-
Create the following
SriovNetwork
object, and then save the YAML in the<name>-sriov-network.yaml
file. Replace<name>
with a name for this additional network.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: <name> 1
  namespace: openshift-sriov-network-operator 2
spec:
  resourceName: <sriov_resource_name> 3
  networkNamespace: <target_namespace> 4
  vlan: <vlan> 5
  spoofChk: "<spoof_check>" 6
  linkState: <link_state> 7
  maxTxRate: <max_tx_rate> 8
  minTxRate: <min_tx_rate> 9
  vlanQoS: <vlan_qos> 10
  trust: "<trust_vf>" 11
  capabilities: <capabilities> 12
- 1
- Replace
<name>
with a name for the object. The SR-IOV Network Operator creates aNetworkAttachmentDefinition
object with same name. - 2
- Specify the namespace where the SR-IOV Network Operator is installed.
- 3
- Replace
<sriov_resource_name>
with the value for the.spec.resourceName
parameter from theSriovNetworkNodePolicy
object that defines the SR-IOV hardware for this additional network. - 4
- Replace
<target_namespace>
with the target namespace for the SriovNetwork. Only pods or virtual machines in the target namespace can attach to the SriovNetwork. - 5
- Optional: Replace
<vlan>
with a Virtual LAN (VLAN) ID for the additional network. The integer value must be from0
to4095
. The default value is0
. - 6
- Optional: Replace
<spoof_check>
with the spoof check mode of the VF. The allowed values are the strings"on"
and"off"
.ImportantYou must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator.
- 7
- Optional: Replace
<link_state>
with the link state of virtual function (VF). Allowed value areenable
,disable
andauto
. - 8
- Optional: Replace
<max_tx_rate>
with a maximum transmission rate, in Mbps, for the VF. - 9
- Optional: Replace
<min_tx_rate>
with a minimum transmission rate, in Mbps, for the VF. This value should always be less than or equal to the maximum transmission rate. Note: Intel NICs do not support the
minTxRate
parameter. For more information, see BZ#1772847. - 10
- Optional: Replace
<vlan_qos>
with an IEEE 802.1p priority level for the VF. The default value is0
. - 11
- Optional: Replace
<trust_vf>
with the trust mode of the VF. The allowed values are the strings"on"
and"off"
.ImportantYou must enclose the value you specify in quotes or the CR is rejected by the SR-IOV Network Operator.
- 12
- Optional: Replace
<capabilities>
with the capabilities to configure for this network.
To create the object, enter the following command. Replace
<name>
with a name for this additional network.
$ oc create -f <name>-sriov-network.yaml
Optional: To confirm that the
NetworkAttachmentDefinition
object associated with theSriovNetwork
object that you created in the previous step exists, enter the following command. Replace<namespace>
with the namespace you specified in theSriovNetwork
object.
$ oc get net-attach-def -n <namespace>
8.17.6.3. Next steps
8.17.7. Attaching a virtual machine to an SR-IOV network
You can attach a virtual machine to use a Single Root I/O Virtualization (SR-IOV) network as a secondary network.
8.17.7.1. Prerequisites
- You must have configured an SR-IOV device for virtual machines.
- You must have defined an SR-IOV network.
8.17.7.2. Attaching a virtual machine to an SR-IOV network
You can attach the virtual machine to the SR-IOV network by including the network details in the virtual machine configuration.
Procedure
Include the SR-IOV network details in the
spec.domain.devices.interfaces
andspec.networks
of the virtual machine configuration:
kind: VirtualMachine
...
spec:
  domain:
    devices:
      interfaces:
      - name: <default> 1
        masquerade: {} 2
      - name: <nic1> 3
        sriov: {}
  networks:
  - name: <default> 4
    pod: {}
  - name: <nic1> 5
    multus:
      networkName: <sriov-network> 6
...
- 1
- A unique name for the interface that is connected to the pod network.
- 2
- The
masquerade
binding to the default pod network. - 3
- A unique name for the SR-IOV interface.
- 4
- The name of the pod network interface. This must be the same as the
interfaces.name
that you defined earlier. - 5
- The name of the SR-IOV interface. This must be the same as the
interfaces.name
that you defined earlier. - 6
- The name of the SR-IOV network attachment definition.
Apply the virtual machine configuration:
$ oc apply -f <vm-sriov.yaml> 1
- 1
- The name of the virtual machine YAML file.
8.17.8. Viewing the IP address of NICs on a virtual machine
You can view the IP address for a network interface controller (NIC) by using the web console or the oc
client. The QEMU guest agent displays additional information about the virtual machine’s secondary networks.
8.17.8.1. Viewing the IP address of a virtual machine interface in the CLI
The network interface configuration is included in the oc describe vmi <vmi_name>
command.
You can also view the IP address information by running ip addr
on the virtual machine, or by running oc get vmi <vmi_name> -o yaml
.
Procedure
Use the
oc describe
command to display the virtual machine interface configuration:
$ oc describe vmi <vmi_name>
Example output
...
Interfaces:
  Interface Name:  eth0
  Ip Address:      10.244.0.37/24
  Ip Addresses:
    10.244.0.37/24
    fe80::858:aff:fef4:25/64
  Mac:             0a:58:0a:f4:00:25
  Name:            default
  Interface Name:  v2
  Ip Address:      1.1.1.7/24
  Ip Addresses:
    1.1.1.7/24
    fe80::f4d9:70ff:fe13:9089/64
  Mac:             f6:d9:70:13:90:89
  Interface Name:  v1
  Ip Address:      1.1.1.1/24
  Ip Addresses:
    1.1.1.1/24
    1.1.1.2/24
    1.1.1.4/24
    2001:de7:0:f101::1/64
    2001:db8:0:f101::1/64
    fe80::1420:84ff:fe10:17aa/64
  Mac:             16:20:84:10:17:aa
8.17.8.2. Viewing the IP address of a virtual machine interface in the web console
The IP information displays in the Virtual Machine Overview screen for the virtual machine.
Procedure
-
In the OpenShift Virtualization console, click Workloads
Virtualization from the side menu. - Click the Virtual Machines tab.
- Select a virtual machine name to open the Virtual Machine Overview screen.
The information for each attached NIC is displayed under IP Address.
8.17.9. Using a MAC address pool for virtual machines
The KubeMacPool component provides a MAC address pool service for virtual machine NICs in a namespace.
8.17.9.1. About KubeMacPool
KubeMacPool provides a MAC address pool per namespace and allocates MAC addresses for virtual machine NICs from the pool. This ensures that the NIC is assigned a unique MAC address that does not conflict with the MAC address of another virtual machine.
Virtual machine instances created from that virtual machine retain the assigned MAC address across reboots.
KubeMacPool does not handle virtual machine instances created independently from a virtual machine.
KubeMacPool is enabled by default when you install OpenShift Virtualization. You can disable a MAC address pool for a namespace by adding the mutatevirtualmachines.kubemacpool.io=ignore
label to the namespace. Re-enable KubeMacPool for the namespace by removing the label.
8.17.9.2. Disabling a MAC address pool for a namespace in the CLI
Disable a MAC address pool for virtual machines in a namespace by adding the mutatevirtualmachines.kubemacpool.io=ignore
label to the namespace.
Procedure
Add the
mutatevirtualmachines.kubemacpool.io=ignore
label to the namespace. The following example disables KubeMacPool for two namespaces,<namespace1>
and<namespace2>
:$ oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore
8.17.9.3. Re-enabling a MAC address pool for a namespace in the CLI
If you disabled KubeMacPool for a namespace and want to re-enable it, remove the mutatevirtualmachines.kubemacpool.io=ignore
label from the namespace.
Earlier versions of OpenShift Virtualization used the label mutatevirtualmachines.kubemacpool.io=allocate
to enable KubeMacPool for a namespace. This is still supported but redundant as KubeMacPool is now enabled by default.
Procedure
Remove the KubeMacPool label from the namespace. The following example re-enables KubeMacPool for two namespaces,
<namespace1>
and<namespace2>
:$ oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-
8.18. Virtual machine disks
8.18.1. Storage features
Use the following table to determine feature availability for local and shared persistent storage in OpenShift Virtualization.
8.18.1.1. OpenShift Virtualization storage feature matrix
 | Virtual machine live migration | Host-assisted virtual machine disk cloning | Storage-assisted virtual machine disk cloning | Virtual machine snapshots |
---|---|---|---|---|
OpenShift Container Storage: RBD block-mode volumes | Yes | Yes | Yes | Yes |
OpenShift Virtualization hostpath provisioner | No | Yes | No | No |
Other multi-node writable storage | Yes [1] | Yes | Yes [2] | Yes [2] |
Other single-node writable storage | No | Yes | Yes [2] | Yes [2] |
1. PVCs must request a ReadWriteMany access mode.
2. Storage provider must support both Kubernetes and CSI snapshot APIs.
You cannot live migrate virtual machines that use:
- A storage class with ReadWriteOnce (RWO) access mode
-
Passthrough features such as GPUs or SR-IOV network interfaces that have the
sriovLiveMigration
feature gate disabled
Do not set the evictionStrategy
field to LiveMigrate
for these virtual machines.
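For reference, `evictionStrategy` is set in the virtual machine template spec. The following sketch shows where the field would appear for a VM that can be live migrated; for the storage configurations listed above, omit the field instead. The VM name is illustrative:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-migratable-vm
spec:
  template:
    spec:
      evictionStrategy: LiveMigrate
      # ...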
8.18.2. Configuring local storage for virtual machines
You can configure local storage for your virtual machines by using the hostpath provisioner feature.
8.18.2.1. About the hostpath provisioner
The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.
When you install the OpenShift Virtualization Operator, the hostpath provisioner Operator is automatically installed. To use it, you must:
Configure SELinux:
-
If you use Red Hat Enterprise Linux CoreOS (RHCOS) 8 workers, you must create a
MachineConfig
object on each node. -
Otherwise, apply the SELinux label
container_file_t
to the persistent volume (PV) backing directory on each node.
-
If you use Red Hat Enterprise Linux CoreOS (RHCOS) 8 workers, you must create a
-
Create a
HostPathProvisioner
custom resource. -
Create a
StorageClass
object for the hostpath provisioner.
The hostpath provisioner Operator deploys the provisioner as a DaemonSet on each node when you create its custom resource. In the custom resource file, you specify the backing directory for the persistent volumes that the hostpath provisioner creates.
8.18.2.2. Configuring SELinux for the hostpath provisioner on Red Hat Enterprise Linux CoreOS (RHCOS) 8
You must configure SELinux before you create the HostPathProvisioner
custom resource. To configure SELinux on Red Hat Enterprise Linux CoreOS (RHCOS) 8 workers, you must create a MachineConfig
object on each node.
Prerequisites
Create a backing directory on each node for the persistent volumes (PVs) that the hostpath provisioner creates.
Important: The backing directory must not be located in the filesystem’s root directory because the / partition is read-only on RHCOS. For example, you can use /var/<directory_name> but not /<directory_name>.
Warning: If you select a directory that shares space with your operating system, you might exhaust the space on that partition and your node might become non-functional. Create a separate partition and point the hostpath provisioner to the separate partition to avoid interference with your operating system.
Procedure
Create the
MachineConfig
file. For example:$ touch machineconfig.yaml
Edit the file, ensuring that you include the directory where you want the hostpath provisioner to create PVs. For example:
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-set-selinux-for-hostpath-provisioner labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux chcon for hostpath provisioner Before=kubelet.service [Service] ExecStart=/usr/bin/chcon -Rt container_file_t <backing_directory_path> 1 [Install] WantedBy=multi-user.target enabled: true name: hostpath-provisioner.service
- 1
- Specify the backing directory where you want the provisioner to create PVs. This directory must not be located in the filesystem’s root directory (
/
).
Create the
MachineConfig
object:$ oc create -f machineconfig.yaml -n <namespace>
8.18.2.3. Using the hostpath provisioner to enable local storage
To deploy the hostpath provisioner and enable your virtual machines to use local storage, first create a HostPathProvisioner
custom resource.
Prerequisites
Create a backing directory on each node for the persistent volumes (PVs) that the hostpath provisioner creates.
Important: The backing directory must not be located in the filesystem’s root directory because the / partition is read-only on Red Hat Enterprise Linux CoreOS (RHCOS). For example, you can use /var/<directory_name> but not /<directory_name>.
Warning: If you select a directory that shares space with your operating system, you might exhaust the space on that partition and your node might become non-functional. Create a separate partition and point the hostpath provisioner to the separate partition to avoid interference with your operating system.
Apply the SELinux context
container_file_t
to the PV backing directory on each node. For example:$ sudo chcon -t container_file_t -R <backing_directory_path>
Note: If you use Red Hat Enterprise Linux CoreOS (RHCOS) 8 workers, you must configure SELinux by using a
MachineConfig
manifest instead.
Procedure
Create the
HostPathProvisioner
custom resource file. For example:$ touch hostpathprovisioner_cr.yaml
Edit the file, ensuring that the
spec.pathConfig.path
value is the directory where you want the hostpath provisioner to create PVs. For example:apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: "<backing_directory_path>" 1 useNamingPrefix: false 2 workload: 3
- 1
- Specify the backing directory where you want the provisioner to create PVs. This directory must not be located in the filesystem’s root directory (
/
). - 2
- Change this value to
true
if you want to use the name of the persistent volume claim (PVC) that is bound to the created PV as the prefix of the directory name. - 3
- Optional: You can use the
spec.workload
field to configure node placement rules for the hostpath provisioner.
Note: If you did not create the backing directory, the provisioner attempts to create it for you. If you did not apply the container_file_t SELinux context, this can cause Permission denied errors.
Create the custom resource in the
openshift-cnv
namespace:$ oc create -f hostpathprovisioner_cr.yaml -n openshift-cnv
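The optional spec.workload stanza shown in the callouts above accepts node placement rules. The following is a minimal sketch, assuming that the workload field follows the standard node placement API (nodeSelector, affinity, tolerations) and that your storage nodes carry a hypothetical label example.io/hostpath-storage: "true":
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "<backing_directory_path>"
    useNamingPrefix: false
  workload:
    nodeSelector:                            # assumption: standard node placement API
      example.io/hostpath-storage: "true"    # hypothetical node label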
Additional resources
8.18.2.4. Creating a storage class
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass
object’s parameters after you create it.
When using OpenShift Virtualization with OpenShift Container Platform Container Storage, specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks. With virtual machine disks, RBD block mode volumes are more efficient and provide better performance than Ceph FS or RBD filesystem-mode PVCs.
To specify RBD block mode PVCs, use the 'ocs-storagecluster-ceph-rbd' storage class and VolumeMode: Block
.
Procedure
Create a YAML file for defining the storage class. For example:
$ touch storageclass.yaml
Edit the file. For example:
apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-provisioner 1 provisioner: kubevirt.io/hostpath-provisioner reclaimPolicy: Delete 2 volumeBindingMode: WaitForFirstConsumer 3
- 1
- You can optionally rename the storage class by changing this value.
- 2
- The two possible
reclaimPolicy
values areDelete
andRetain
. If you do not specify a value, the storage class defaults toDelete
. - 3
- The
volumeBindingMode
value determines when dynamic provisioning and volume binding occur. SpecifyWaitForFirstConsumer
to delay the binding and provisioning of a PV until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod’s scheduling requirements.
Note: Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.
To solve this problem, use the Kubernetes pod scheduler to bind the PVC to a PV on the correct node. By using
StorageClass
withvolumeBindingMode
set toWaitForFirstConsumer
, the binding and provisioning of the PV is delayed until aPod
is created using the PVC.Create the
StorageClass
object:$ oc create -f storageclass.yaml
Additional resources
8.18.3. Creating data volumes using profiles
When you create a data volume, the Containerized Data Importer (CDI) creates a persistent volume claim (PVC) and populates the PVC with your data. You can create a data volume as either a standalone resource or by using a dataVolumeTemplate
resource in a virtual machine specification. You create a data volume by using either the PVC API or the storage API.
When using OpenShift Virtualization with OpenShift Container Platform Container Storage, specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks. With virtual machine disks, RBD block mode volumes are more efficient and provide better performance than Ceph FS or RBD filesystem-mode PVCs.
To specify RBD block mode PVCs, use the 'ocs-storagecluster-ceph-rbd' storage class and VolumeMode: Block
.
Whenever possible, use the storage API to optimize space allocation and maximize performance.
A storage profile is a custom resource that the CDI manages. It provides recommended storage settings based on the associated storage class. A storage profile is allocated for each storage class.
Storage profiles enable you to create data volumes quickly while reducing coding and minimizing potential errors.
For recognized storage types, the CDI provides values that optimize the creation of PVCs. However, you can configure automatic settings for a storage class by customizing its storage profile.
8.18.3.1. Creating data volumes using the storage API
When you create a data volume using the storage API, the Containerized Data Importer (CDI) optimizes your persistent volume claim (PVC) allocation based on the type of storage supported by your selected storage class. You only have to specify the data volume name, namespace, and the amount of storage that you want to allocate.
For example:
-
When using Ceph RBD,
accessModes
is automatically set toReadWriteMany
, which enables live migration.volumeMode
is set toBlock
to maximize performance. -
When you are using
volumeMode: Filesystem
, more space will automatically be requested by the CDI, if required to accommodate file system overhead.
In the following YAML, using the storage API requests a data volume with two gigabytes of usable space. The user does not need to know the volumeMode
in order to correctly estimate the required persistent volume claim (PVC) size. The CDI chooses the optimal combination of accessModes
and volumeMode
attributes automatically. These optimal values are based on the type of storage or the defaults that you define in your storage profile. If you want to provide custom values, they override the system-calculated values.
Example DataVolume definition
apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: 2 namespace: "<source_namespace>" 3 name: "<my_vm_disk>" 4 storage: 5 resources: requests: storage: 2Gi 6 storageClassName: <storage_class> 7
- 1
- The name of the new data volume.
- 2
- Indicates that the source of the import is an existing persistent volume claim (PVC).
- 3
- The namespace where the source PVC exists.
- 4
- The name of the source PVC.
- 5
- Indicates allocation using the storage API.
- 6
- Specifies the amount of available space that you request for the PVC.
- 7
- Optional: The name of the storage class. If the storage class is not specified, the system default storage class is used.
8.18.3.2. Creating data volumes using the PVC API
When you create a data volume using the PVC API, the Containerized Data Importer (CDI) creates the data volume based on what you specify for the following fields:
-
accessModes
(ReadWriteOnce
,ReadWriteMany
, orReadOnlyMany
) -
volumeMode
(Filesystem
orBlock
) -
capacity
ofstorage
(5Gi
, for example)
In the following YAML, using the PVC API allocates a data volume with a storage capacity of two gigabytes. You specify an access mode of ReadWriteMany
to enable live migration. Because you know the values your system can support, you specify Block
storage instead of the default, Filesystem
.
Example DataVolume definition
apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: 2 namespace: "<source_namespace>" 3 name: "<my_vm_disk>" 4 pvc: 5 accessModes: 6 - ReadWriteMany resources: requests: storage: 2Gi 7 volumeMode: Block 8 storageClassName: <storage_class> 9
- 1
- The name of the new data volume.
- 2
- In the
source
section,pvc
indicates that the source of the import is an existing persistent volume claim (PVC). - 3
- The namespace where the source PVC exists.
- 4
- The name of the source PVC.
- 5
- Indicates allocation using the PVC API.
- 6
accessModes
is required when using the PVC API.- 7
- Specifies the amount of space you are requesting for your data volume.
- 8
- Specifies that the destination is a block PVC.
- 9
- Optionally, specify the storage class. If the storage class is not specified, the system default storage class is used.
When you explicitly allocate a data volume by using the PVC API and you are not using volumeMode: Block
, consider file system overhead.
File system overhead is the amount of space required by the file system to maintain its metadata. The amount of space required for file system metadata is file system dependent. Failing to account for file system overhead in your storage capacity request can result in an underlying persistent volume claim (PVC) that is not large enough to accommodate your virtual machine disk.
If you use the storage API, the CDI will factor in file system overhead and request a larger persistent volume claim (PVC) to ensure that your allocation request is successful.
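As a rough illustration, assuming the default 5.5% overhead described later in this chapter: for a 2Gi disk image on a Filesystem-mode PVC requested through the PVC API, you would need to request at least 2Gi / (1 − 0.055) ≈ 2.12Gi so that the image still fits after overhead is reserved, whereas a 2Gi request through the storage API is sufficient because the CDI adds the overhead for you.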
8.18.3.3. Customizing the storage profile
You can specify default parameters by editing the StorageProfile
object for the provisioner’s storage class. These default parameters only apply to the persistent volume claim (PVC) if they are not configured in the DataVolume
object.
Prerequisites
- Ensure that your planned configuration is supported by the storage class and its provider. Specifying an incompatible configuration in a storage profile causes volume provisioning to fail.
An empty status
section in a storage profile indicates that a storage provisioner is not recognized by the Containerized Data Importer (CDI). Customizing a storage profile is necessary if you have a storage provisioner that is not recognized by the CDI. In this case, the administrator sets appropriate values in the storage profile to ensure successful allocations.
If you create a data volume that omits YAML attributes, and these attributes are also not defined in the storage profile, then the requested storage is not allocated and the underlying persistent volume claim (PVC) is not created.
Procedure
Edit the storage profile. In this example, the provisioner is not recognized by CDI:
$ oc edit -n openshift-cnv storageprofile <storage_class>
Example storage profile
apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <some_unknown_provisioner_class> # ... spec: {} status: provisioner: <some_unknown_provisioner> storageClass: <some_unknown_provisioner_class>
Provide the needed attribute values in the storage profile:
Example storage profile
apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <some_unknown_provisioner_class> # ... spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 status: provisioner: <some_unknown_provisioner> storageClass: <some_unknown_provisioner_class>
- 1
- The accessModes that you select for this storage class.
- 2
- The volumeMode that you select for this storage class.
After you save your changes, the selected values appear in the storage profile
status
element.
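One way to confirm the change, using the same storage class name as above; the status stanza should now reflect the claimPropertySets values:
$ oc get -n openshift-cnv storageprofile <storage_class> -o yaml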
8.18.3.4. Additional resources
8.18.4. Reserving PVC space for file system overhead
By default, the Containerized Data Importer (CDI) reserves space for file system overhead data in persistent volume claims (PVCs) that use the Filesystem
volume mode. You can set the percentage that CDI reserves for this purpose globally and for specific storage classes.
8.18.4.1. How file system overhead affects space for virtual machine disks
When you add a virtual machine disk to a persistent volume claim (PVC) that uses the Filesystem
volume mode, you must ensure that there is enough space on the PVC for:
- The virtual machine disk.
- The space that the Containerized Data Importer (CDI) reserves for file system overhead, such as metadata.
By default, CDI reserves 5.5% of the PVC space for overhead, reducing the space available for virtual machine disks by that amount.
If a different value works better for your use case, you can configure the overhead value by editing the CDI
object. You can change the value globally and you can specify values for specific storage classes.
8.18.4.2. Overriding the default file system overhead value
Change the amount of persistent volume claim (PVC) space that the Containerized Data Importer (CDI) reserves for file system overhead by editing the spec.config.filesystemOverhead
attribute of the CDI
object.
Prerequisites
-
Install the OpenShift CLI (
oc
).
Procedure
Open the
CDI
object for editing by running the following command:$ oc edit cdi
Edit the
spec.config.filesystemOverhead
fields, populating them with your chosen values:... spec: config: filesystemOverhead: global: "<new_global_value>" 1 storageClass: <storage_class_name>: "<new_value_for_this_storage_class>" 2
- 1
- The file system overhead percentage that CDI uses across the cluster. For example,
global: "0.07"
reserves 7% of the PVC for file system overhead. - 2
- The file system overhead percentage for the specified storage class. For example,
mystorageclass: "0.04"
changes the default overhead value for PVCs in themystorageclass
storage class to 4%.
-
Save and exit the editor to update the
CDI
object.
Verification
View the
CDI
status and verify your changes by running the following command:$ oc get cdi -o yaml
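To print only the overhead settings rather than the full object, a possible variant of the same check is:
$ oc get cdi -o jsonpath='{.items[*].spec.config.filesystemOverhead}{"\n"}'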
8.18.5. Configuring CDI to work with namespaces that have a compute resource quota
You can use the Containerized Data Importer (CDI) to import, upload, and clone virtual machine disks into namespaces that are subject to CPU and memory resource restrictions.
8.18.5.1. About CPU and memory quotas in a namespace
A resource quota, defined by the ResourceQuota
object, imposes restrictions on a namespace that limit the total amount of compute resources that can be consumed by resources within that namespace.
The HyperConverged
custom resource (CR) defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values are set to a default value of 0
. This ensures that pods created by CDI that do not specify compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota.
8.18.5.2. Overriding CPU and memory defaults
Modify the default settings for CPU and memory requests and limits for your use case by adding the spec.resourceRequirements.storageWorkloads
stanza to the HyperConverged
custom resource (CR).
Prerequisites
-
Install the OpenShift CLI (
oc
).
Procedure
Edit the
HyperConverged
CR by running the following command:$ oc edit hco -n openshift-cnv kubevirt-hyperconverged
Add the
spec.resourceRequirements.storageWorkloads
stanza to the CR, setting the values based on your use case. For example:apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: resourceRequirements: storageWorkloads: limits: cpu: "500m" memory: "2Gi" requests: cpu: "250m" memory: "1Gi"
-
Save and exit the editor to update the
HyperConverged
CR.
8.18.5.3. Additional resources
8.18.6. Managing data volume annotations
Data volume (DV) annotations allow you to manage pod behavior. You can add one or more annotations to a data volume, which then propagates to the created importer pods.
8.18.6.1. Example: Data volume annotations
This example shows how you can configure data volume (DV) annotations to control which network the importer pod uses. The v1.multus-cni.io/default-network: bridge-network
annotation causes the pod to use the multus network named bridge-network
as its default network. If you want the importer pod to use both the default network from the cluster and the secondary multus network, use the k8s.v1.cni.cncf.io/networks: <network_name>
annotation.
Multus network annotation example
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
name: dv-ann
annotations:
v1.multus-cni.io/default-network: bridge-network 1
spec:
source:
http:
url: "example.exampleurl.com"
pvc:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
- 1
- Multus network annotation
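For comparison, the following is a minimal sketch of the alternative annotation mentioned above, which keeps the cluster default network and attaches bridge-network as a secondary network (the data volume name is hypothetical and the network name matches the example):
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: dv-ann-secondary                         # hypothetical name
  annotations:
    k8s.v1.cni.cncf.io/networks: bridge-network  # importer pod keeps the default network and adds this secondary network
spec:
  source:
    http:
      url: "example.exampleurl.com"
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi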
8.18.7. Using preallocation for data volumes
The Containerized Data Importer can preallocate disk space to improve write performance when creating data volumes.
You can enable preallocation for specific data volumes.
8.18.7.1. About preallocation
The Containerized Data Importer (CDI) can use the QEMU preallocate mode for data volumes to improve write performance. You can use preallocation mode for importing and uploading operations and when creating blank data volumes.
If preallocation is enabled, CDI selects the preferred preallocation method based on the underlying file system and device type:
fallocate
-
If the file system supports it, CDI uses the operating system’s
fallocate
call to preallocate space by using theposix_fallocate
function, which allocates blocks and marks them as uninitialized. full
-
If
fallocate
mode cannot be used,full
mode allocates space for the image by writing data to the underlying storage. Depending on the storage location, all the empty allocated space might be zeroed.
8.18.7.2. Enabling preallocation for a data volume
You can enable preallocation for specific data volumes by including the spec.preallocation
field in the data volume manifest. You can enable preallocation mode in either the web console or by using the OpenShift CLI (oc
).
Preallocation mode is supported for all CDI source types.
Procedure
Specify the
spec.preallocation
field in the data volume manifest:apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: preallocated-datavolume spec: source: 1 ... pvc: ... preallocation: true 2
8.18.8. Uploading local disk images by using the web console
You can upload a locally stored disk image file by using the web console.
8.18.8.1. Prerequisites
- You must have a virtual machine image file in IMG, ISO, or QCOW2 format.
- If you require scratch space according to the CDI supported operations matrix, you must first define a storage class or prepare CDI scratch space for this operation to complete successfully.
8.18.8.2. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload
---|---|---|---|---|---
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2*
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW*
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
8.18.8.3. Uploading an image file using the web console
Use the web console to upload an image file to a new persistent volume claim (PVC). You can later use this PVC to attach the image to new virtual machines.
Prerequisites
You must have one of the following:
- A raw virtual machine image file in either ISO or IMG format.
- A virtual machine image file in QCOW2 format.
For best results, compress your image file according to the following guidelines before you upload it:
Compress a raw image file by using xz or gzip.
Note: Using a compressed raw image file results in the most efficient upload.
Compress a QCOW2 image file by using the method that is recommended for your client:
- If you use a Linux client, sparsify the QCOW2 file by using the virt-sparsify tool.
-
If you use a Windows client, compress the QCOW2 file by using
xz
or gzip
.
Procedure
-
From the side menu of the web console, click Storage
Persistent Volume Claims. - Click the Create Persistent Volume Claim drop-down list to expand it.
- Click With Data Upload Form to open the Upload Data to Persistent Volume Claim page.
- Click Browse to open the file manager and select the image that you want to upload, or drag the file into the Drag a file here or browse to upload field.
Optional: Set this image as the default image for a specific operating system.
- Select the Attach this data to a virtual machine operating system check box.
- Select an operating system from the list.
- The Persistent Volume Claim Name field is automatically filled with a unique name and cannot be edited. Take note of the name assigned to the PVC so that you can identify it later, if necessary.
- Select a storage class from the Storage Class list.
In the Size field, enter the size value for the PVC. Select the corresponding unit of measurement from the drop-down list.
Warning: The PVC size must be larger than the size of the uncompressed virtual disk.
- Select an Access Mode that matches the storage class that you selected.
- Click Upload.
8.18.8.4. Additional resources
- Configure preallocation mode to improve write performance for data volume operations.
8.18.9. Uploading local disk images by using the virtctl tool
You can upload a locally stored disk image to a new or existing data volume by using the virtctl
command-line utility.
8.18.9.1. Prerequisites
-
Install the
kubevirt-virtctl
package. - If you require scratch space according to the CDI supported operations matrix, you must first define a storage class or prepare CDI scratch space for this operation to complete successfully.
8.18.9.2. About data volumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
8.18.9.3. Creating an upload data volume
You can manually create a data volume with an upload
data source to use for uploading local disk images.
Procedure
Create a data volume configuration that specifies
spec: source: upload{}
:apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2
Create the data volume by running the following command:
$ oc create -f <upload-datavolume>.yaml
8.18.9.4. Uploading a local disk image to a data volume
You can use the virtctl
CLI utility to upload a local disk image from a client machine to a data volume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure.
After you upload a local disk image, you can add it to a virtual machine.
Prerequisites
You must have one of the following:
- A raw virtual machine image file in either ISO or IMG format.
- A virtual machine image file in QCOW2 format.
For best results, compress your image file according to the following guidelines before you upload it:
Compress a raw image file by using xz or gzip.
Note: Using a compressed raw image file results in the most efficient upload.
Compress a QCOW2 image file by using the method that is recommended for your client:
- If you use a Linux client, sparsify the QCOW2 file by using the virt-sparsify tool.
-
If you use a Windows client, compress the QCOW2 file by using
xz
or gzip
.
-
The
kubevirt-virtctl
package must be installed on the client machine. - The client machine must be configured to trust the OpenShift Container Platform router’s certificate.
Procedure
Identify the following items:
- The name of the upload data volume that you want to use. If this data volume does not exist, it is created automatically.
- The size of the data volume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image.
- The file location of the virtual machine disk image that you want to upload.
Upload the disk image by running the
virtctl image-upload
command. Specify the parameters that you identified in the previous step. For example:$ virtctl image-upload dv <datavolume_name> \ 1 --size=<datavolume_size> \ 2 --image-path=</path/to/image> \ 3
Note:
- If you do not want to create a new data volume, omit the --size parameter and include the --no-create flag.
- When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk.
-
To allow insecure server connections when using HTTPS, use the
--insecure
parameter. Be aware that when you use the--insecure
flag, the authenticity of the upload endpoint is not verified.
-
If you do not want to create a new data volume, omit the
Optional. To verify that a data volume was created, view all data volumes by running the following command:
$ oc get dvs
8.18.9.5. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload
---|---|---|---|---|---
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2*
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW*
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
8.18.9.6. Additional resources
- Configure preallocation mode to improve write performance for data volume operations.
8.18.10. Uploading a local disk image to a block storage data volume
You can upload a local disk image into a block data volume by using the virtctl
command-line utility.
In this workflow, you create a local block device to use as a persistent volume, associate this block volume with an upload
data volume, and use virtctl
to upload the local disk image into the data volume.
8.18.10.1. Prerequisites
-
Install the
kubevirt-virtctl
package. - If you require scratch space according to the CDI supported operations matrix, you must first define a storage class or prepare CDI scratch space for this operation to complete successfully.
8.18.10.2. About data volumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
8.18.10.3. About block persistent volumes
A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead.
Raw block volumes are provisioned by specifying volumeMode: Block
in the PV and persistent volume claim (PVC) specification.
8.18.10.4. Creating a local block persistent volume
Create a local block persistent volume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV manifest as a Block
volume and use it as a block device for a virtual machine image.
Procedure
-
Log in as
root
to the node on which to create the local PV. This procedure usesnode01
for its examples. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file
loop10
with a size of 2Gb (20 100Mb blocks):$ dd if=/dev/zero of=<loop10> bs=100M count=20
Mount the
loop10
file as a loop device.$ losetup </dev/loop10>d3 <loop10> 1 2
Create a
PersistentVolume
manifest that references the mounted loop device.kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4
Create the block PV.
# oc create -f <local-block-pv10.yaml> 1
- 1
- The file name of the persistent volume created in the previous step.
8.18.10.5. Creating an upload data volume
You can manually create a data volume with an upload
data source to use for uploading local disk images.
Procedure
Create a data volume configuration that specifies
spec: source: upload{}
:apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2
Create the data volume by running the following command:
$ oc create -f <upload-datavolume>.yaml
8.18.10.6. Uploading a local disk image to a data volume
You can use the virtctl
CLI utility to upload a local disk image from a client machine to a data volume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure.
After you upload a local disk image, you can add it to a virtual machine.
Prerequisites
You must have one of the following:
- A raw virtual machine image file in either ISO or IMG format.
- A virtual machine image file in QCOW2 format.
For best results, compress your image file according to the following guidelines before you upload it:
Compress a raw image file by using xz or gzip.
Note: Using a compressed raw image file results in the most efficient upload.
Compress a QCOW2 image file by using the method that is recommended for your client:
- If you use a Linux client, sparsify the QCOW2 file by using the virt-sparsify tool.
-
If you use a Windows client, compress the QCOW2 file by using
xz
or gzip
.
-
The
kubevirt-virtctl
package must be installed on the client machine. - The client machine must be configured to trust the OpenShift Container Platform router’s certificate.
Procedure
Identify the following items:
- The name of the upload data volume that you want to use. If this data volume does not exist, it is created automatically.
- The size of the data volume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image.
- The file location of the virtual machine disk image that you want to upload.
Upload the disk image by running the
virtctl image-upload
command. Specify the parameters that you identified in the previous step. For example:$ virtctl image-upload dv <datavolume_name> \ 1 --size=<datavolume_size> \ 2 --image-path=</path/to/image> \ 3
Note:
- If you do not want to create a new data volume, omit the --size parameter and include the --no-create flag.
- When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk.
-
To allow insecure server connections when using HTTPS, use the
--insecure
parameter. Be aware that when you use the--insecure
flag, the authenticity of the upload endpoint is not verified.
-
If you do not want to create a new data volume, omit the
Optional. To verify that a data volume was created, view all data volumes by running the following command:
$ oc get dvs
8.18.10.7. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload
---|---|---|---|---|---
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2*
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW*
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
8.18.10.8. Additional resources
- Configure preallocation mode to improve write performance for data volume operations.
8.18.11. Managing offline virtual machine snapshots
You can create, restore, and delete virtual machine (VM) snapshots for VMs that are powered off (offline). OpenShift Virtualization supports offline VM snapshots on:
- Red Hat OpenShift Container Storage
- Any other storage provider with the Container Storage Interface (CSI) driver that supports the Kubernetes Volume Snapshot API
8.18.11.1. About virtual machine snapshots
A snapshot represents the state and data of a virtual machine (VM) at a specific point in time. You can use a snapshot to restore an existing VM to a previous state (represented by the snapshot) for backup and disaster recovery or to rapidly roll back to a previous development version.
An offline VM snapshot is created from a VM that is powered off (Stopped state). The snapshot stores a copy of each Container Storage Interface (CSI) volume attached to the VM and a copy of the VM specification and metadata. Snapshots cannot be changed after creation.
With the offline VM snapshots feature, cluster administrators and application developers can:
- Create a new snapshot
- List all snapshots attached to a specific VM
- Restore a VM from a snapshot
- Delete an existing VM snapshot
8.18.11.1.1. Virtual machine snapshot controller and custom resource definitions (CRDs)
The VM snapshot feature introduces three new API objects defined as CRDs for managing snapshots:
-
VirtualMachineSnapshot
: Represents a user request to create a snapshot. It contains information about the current state of the VM. -
VirtualMachineSnapshotContent
: Represents a provisioned resource on the cluster (a snapshot). It is created by the VM snapshot controller and contains references to all resources required to restore the VM. -
VirtualMachineRestore
: Represents a user request to restore a VM from a snapshot.
The VM snapshot controller binds a VirtualMachineSnapshotContent
object with the VirtualMachineSnapshot
object for which it was created, with a one-to-one mapping.
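One way to confirm that these CRDs are present in your cluster, assuming the snapshot.kubevirt.io API group used in the examples that follow:
$ oc api-resources --api-group=snapshot.kubevirt.io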
8.18.11.2. Creating an offline virtual machine snapshot in the web console
You can create a virtual machine (VM) snapshot by using the web console.
The VM snapshot only includes disks that meet the following requirements:
- Must be either a data volume or persistent volume claim
- Belong to a storage class that supports Container Storage Interface (CSI) volume snapshots
If your VM storage includes disks that do not support snapshots, you can either edit them or contact your cluster administrator.
Procedure
-
Click Workloads
Virtualization from the side menu. - Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
-
If the virtual machine is running, click Actions
Stop Virtual Machine to power it down. - Click the Snapshots tab and then click Take Snapshot.
- Fill in the Snapshot Name and optional Description fields.
- Expand Disks included in this Snapshot to see the storage volumes to be included in the snapshot.
- If your VM has disks that cannot be included in the snapshot and you still wish to proceed, select the I am aware of this warning and wish to proceed checkbox.
- Click Save.
8.18.11.3. Creating an offline virtual machine snapshot in the CLI
You can create a virtual machine (VM) snapshot for an offline VM by creating a VirtualMachineSnapshot
object.
Prerequisites
- Ensure that the persistent volume claims (PVCs) are in a storage class that supports Container Storage Interface (CSI) volume snapshots.
-
Install the OpenShift CLI (
oc
). - Power down the VM for which you want to create a snapshot.
Procedure
Create a YAML file to define a
VirtualMachineSnapshot
object that specifies the name of the newVirtualMachineSnapshot
and the name of the source VM.For example:
apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: name: my-vmsnapshot 1 spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2
Create the
VirtualMachineSnapshot
resource. The snapshot controller creates aVirtualMachineSnapshotContent
object, binds it to theVirtualMachineSnapshot
and updates thestatus
andreadyToUse
fields of theVirtualMachineSnapshot
object.$ oc create -f <my-vmsnapshot>.yaml
Verification
Verify that the
VirtualMachineSnapshot
object is created and bound withVirtualMachineSnapshotContent
. ThereadyToUse
flag must be set totrue
.$ oc describe vmsnapshot <my-vmsnapshot>
Example output
apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: creationTimestamp: "2020-09-30T14:41:51Z" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: "3897" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: "2020-09-30T14:42:03Z" reason: Operation complete status: "False" 1 type: Progressing - lastProbeTime: null lastTransitionTime: "2020-09-30T14:42:03Z" reason: Operation complete status: "True" 2 type: Ready creationTime: "2020-09-30T14:42:03Z" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4
- 1
- The
status
field of theProgressing
condition specifies if the snapshot is still being created. - 2
- The
status
field of theReady
condition specifies if the snapshot creation process is complete. - 3
- Specifies if the snapshot is ready to be used.
- 4
- Specifies that the snapshot is bound to a
VirtualMachineSnapshotContent
object created by the snapshot controller.
-
Check the
spec:volumeBackups
property of theVirtualMachineSnapshotContent
resource to verify that the expected PVCs are included in the snapshot.
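A possible way to inspect that property, where <content_name> is the virtualMachineSnapshotContentName value reported in the snapshot status:
$ oc get virtualmachinesnapshotcontent <content_name> -o jsonpath='{.spec.volumeBackups}{"\n"}'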
8.18.11.4. Restoring a virtual machine from a snapshot in the web console
You can restore a virtual machine (VM) to a previous configuration represented by a snapshot in the web console.
Procedure
-
Click Workloads
Virtualization from the side menu. - Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
-
If the virtual machine is running, click Actions
Stop Virtual Machine to power it down. - Click the Snapshots tab. The page displays a list of snapshots associated with the virtual machine.
Choose one of the following methods to restore a VM snapshot:
- For the snapshot that you want to use as the source to restore the VM, click Restore.
-
Select a snapshot to open the Snapshot Details screen and click Actions
Restore Virtual Machine Snapshot.
- In the confirmation pop-up window, click Restore to restore the VM to its previous configuration represented by the snapshot.
8.18.11.5. Restoring a virtual machine from a snapshot in the CLI
You can restore an existing virtual machine (VM) to a previous configuration by using a VM snapshot.
Prerequisites
-
Install the OpenShift CLI (
oc
). - Power down the VM you want to restore to a previous state.
Procedure
Create a YAML file to define a
VirtualMachineRestore
object that specifies the name of the VM you want to restore and the name of the snapshot to be used as the source.For example:
apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: name: my-vmrestore 1 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2 virtualMachineSnapshotName: my-vmsnapshot 3
Create the
VirtualMachineRestore
resource. The snapshot controller updates the status fields of theVirtualMachineRestore
object and replaces the existing VM configuration with the snapshot content.$ oc create -f <my-vmrestore>.yaml
Verification
Verify that the VM is restored to the previous state represented by the snapshot. The
complete
flag must be set totrue
.$ oc get vmrestore <my-vmrestore>
Example output
apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: creationTimestamp: "2020-09-30T14:46:27Z" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: "5512" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: "2020-09-30T14:46:28Z" reason: Operation complete status: "False" 2 type: Progressing - lastProbeTime: null lastTransitionTime: "2020-09-30T14:46:28Z" reason: Operation complete status: "True" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: "2020-09-30T14:46:28Z" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1
8.18.11.6. Deleting a virtual machine snapshot in the web console
You can delete an existing virtual machine snapshot by using the web console.
Procedure
-
Click Workloads
Virtualization from the side menu. - Click the Virtual Machines tab.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the Snapshots tab. The page displays a list of snapshots associated with the virtual machine.
Choose one of the following methods to delete a virtual machine snapshot:
- Click the Options menu of the virtual machine snapshot that you want to delete and select Delete Virtual Machine Snapshot.
-
Select a snapshot to open the Snapshot Details screen and click Actions
Delete Virtual Machine Snapshot.
- In the confirmation pop-up window, click Delete to delete the snapshot.
8.18.11.7. Deleting a virtual machine snapshot in the CLI
You can delete an existing virtual machine (VM) snapshot by deleting the appropriate VirtualMachineSnapshot
object.
Prerequisites
-
Install the OpenShift CLI (
oc
).
Procedure
Delete the
VirtualMachineSnapshot
object. The snapshot controller deletes theVirtualMachineSnapshot
along with the associatedVirtualMachineSnapshotContent
object.$ oc delete vmsnapshot <my-vmsnapshot>
Verification
Verify that the snapshot is deleted and no longer attached to this VM:
$ oc get vmsnapshot
8.18.11.8. Additional resources
8.18.12. Moving a local virtual machine disk to a different node
Virtual machines that use local volume storage can be moved so that they run on a specific node.
You might want to move the virtual machine to a specific node for the following reasons:
- The current node has limitations to the local storage configuration.
- The new node is better optimized for the workload of that virtual machine.
To move a virtual machine that uses local storage, you must clone the underlying volume by using a data volume. After the cloning operation is complete, you can edit the virtual machine configuration so that it uses the new data volume, or add the new data volume to another virtual machine.
When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes.
Users without the cluster-admin
role require additional user permissions to clone volumes across namespaces.
8.18.12.1. Cloning a local volume to another node
You can move a virtual machine disk so that it runs on a specific node by cloning the underlying persistent volume claim (PVC).
To ensure the virtual machine disk is cloned to the correct node, you must either create a new persistent volume (PV) or identify one on the correct node. Apply a unique label to the PV so that it can be referenced by the data volume.
The destination PV must be the same size or larger than the source PVC. If the destination PV is smaller than the source PVC, the cloning operation fails.
Prerequisites
- The virtual machine must not be running. Power down the virtual machine before cloning the virtual machine disk.
Procedure
Either create a new local PV on the node, or identify a local PV already on the node:
Create a local PV that includes the
nodeAffinity.nodeSelectorTerms
parameters. The following manifest creates a10Gi
local PV onnode01
.kind: PersistentVolume apiVersion: v1 metadata: name: <destination-pv> 1 annotations: spec: accessModes: - ReadWriteOnce capacity: storage: 10Gi 2 local: path: /mnt/local-storage/local/disk1 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - node01 4 persistentVolumeReclaimPolicy: Delete storageClassName: local volumeMode: Filesystem
Identify a PV that already exists on the target node. You can identify the node where a PV is provisioned by viewing the
nodeAffinity
field in its configuration:$ oc get pv <destination-pv> -o yaml
The following snippet shows that the PV is on
node01
:Example output
... spec: nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname 1 operator: In values: - node01 2 ...
Add a unique label to the PV:
$ oc label pv <destination-pv> node=node01
Create a data volume manifest that references the following:
- The PVC name and namespace of the virtual machine.
- The label you applied to the PV in the previous step.
The size of the destination PV.
apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <clone-datavolume> 1 spec: source: pvc: name: "<source-vm-disk>" 2 namespace: "<source-namespace>" 3 pvc: accessModes: - ReadWriteOnce selector: matchLabels: node: node01 4 resources: requests: storage: <10Gi> 5
- 1
- The name of the new data volume.
- 2
- The name of the source PVC. If you do not know the PVC name, you can find it in the virtual machine configuration:
spec.volumes.persistentVolumeClaim.claimName
. - 3
- The namespace where the source PVC exists.
- 4
- The label that you applied to the PV in the previous step.
- 5
- The size of the destination PV.
Start the cloning operation by applying the data volume manifest to your cluster:
$ oc apply -f <clone-datavolume.yaml>
The data volume clones the PVC of the virtual machine into the PV on the specific node.
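To check that the cloned PVC bound to the labeled PV, one possible verification is to print the bound volume name; the output should be the name of the destination PV:
$ oc get pvc <clone-datavolume> -o jsonpath='{.spec.volumeName}{"\n"}'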
8.18.13. Expanding virtual storage by adding blank disk images
You can increase your storage capacity or create new data partitions by adding blank disk images to OpenShift Virtualization.
8.18.13.1. About data volumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
8.18.13.2. Creating a blank disk image with data volumes
You can create a new blank disk image in a persistent volume claim by customizing and deploying a data volume configuration file.
Prerequisites
- At least one available persistent volume.
-
Install the OpenShift CLI (
oc
).
Procedure
Edit the data volume configuration file:
apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} pvc: # Optional: Set the storage class or omit to accept the default # storageClassName: "hostpath" accessModes: - ReadWriteOnce resources: requests: storage: 500Mi
Create the blank disk image by running the following command:
$ oc create -f <blank-image-datavolume>.yaml
8.18.13.3. Template: Data volume configuration file for blank disk images
blank-image-datavolume.yaml
apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} pvc: # Optional: Set the storage class or omit to accept the default # storageClassName: "hostpath" accessModes: - ReadWriteOnce resources: requests: storage: 500Mi
8.18.13.4. Additional resources
- Configure preallocation mode to improve write performance for data volume operations.
8.18.14. Cloning a data volume using smart-cloning
Smart-cloning is a built-in feature of Red Hat OpenShift Container Storage (OCS), designed to enhance performance of the cloning process. Clones created with smart-cloning are faster and more efficient than clones created with host-assisted cloning.
You do not need to perform any action to enable smart-cloning, but you need to ensure your storage environment is compatible with smart-cloning to use this feature.
When you create a data volume with a persistent volume claim (PVC) source, you automatically initiate the cloning process. You always receive a clone of the data volume, whether or not your environment supports smart-cloning. However, you only receive the performance benefits of smart-cloning if your storage provider supports it.
8.18.14.1. Understanding smart-cloning
When a data volume is smart-cloned, the following occurs:
- A snapshot of the source persistent volume claim (PVC) is created.
- A PVC is created from the snapshot.
- The snapshot is deleted.
8.18.14.2. Cloning a data volume
Prerequisites
For smart-cloning to occur, the following conditions are required:
- Your storage provider must support snapshots.
- The source and target PVCs must be in the same storage class.
- The VolumeSnapshotClass object must reference the storage class that is used by both the source and target PVCs. For reference, see the sketch after this list.
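The following is a minimal VolumeSnapshotClass sketch; the driver value must match the provisioner of the storage class used by the source and target PVCs, and the names shown are hypothetical:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: my-snapclass              # hypothetical name
driver: example.csi.vendor.com    # hypothetical driver; must match the storage class provisioner
deletionPolicy: Delete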
Procedure
To initiate cloning of a data volume:
Create a YAML file for a
DataVolume
object that specifies the name of the new data volume and the name and namespace of the source PVC. In this example, because you specify the storage API, there is no need to specify accessModes or volumeMode. The optimal values will be calculated for you automatically.apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: "<source-namespace>" 2 name: "<my-favorite-vm-disk>" 3 storage: 4 resources: requests: storage: <2Gi> 5
Start cloning the PVC by creating the data volume:
$ oc create -f <cloner-datavolume>.yaml
Note: Data volumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new data volume while the PVC is being cloned.
8.18.14.3. Additional resources
- Cloning the persistent volume claim of a virtual machine disk into a new data volume
- Configure preallocation mode to improve write performance for data volume operations.
8.18.15. Creating and using boot sources
A boot source contains a bootable operating system (OS) and all of the configuration settings for the OS, such as drivers.
You use a boot source to create virtual machine templates with specific configurations. These templates can be used to create any number of available virtual machines.
Quick Start tours are available in the OpenShift Container Platform web console to assist you in creating a custom boot source, uploading a boot source, and other tasks. Select Quick Starts from the Help menu to view the Quick Start tours.
8.18.15.1. About virtual machines and boot sources
Virtual machines consist of a virtual machine definition and one or more disks that are backed by data volumes. Virtual machine templates enable you to create virtual machines using predefined virtual machine specifications.
Every virtual machine template requires a boot source, which is a fully configured virtual machine disk image including configured drivers. Each virtual machine template contains a virtual machine definition with a pointer to the boot source. Each boot source has a predefined name and namespace. For some operating systems, a boot source is automatically provided. If it is not provided, then an administrator must prepare a custom boot source.
Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) are created with the cluster’s default storage class. If you select a different default storage class after configuration, you must delete the existing data volumes in the cluster namespace that are configured with the previous default storage class.
To use the boot sources feature, install the latest release of OpenShift Virtualization. The namespace openshift-virtualization-os-images
enables the feature and is installed with the OpenShift Virtualization Operator. Once the boot source feature is installed, you can create boot sources, attach them to templates, and create virtual machines from the templates.
Define a boot source by using a persistent volume claim (PVC) that is populated by uploading a local file, cloning an existing PVC, importing from a registry, or by URL. Attach a boot source to a virtual machine template by using the web console. After the boot source is attached to a virtual machine template, you create any number of fully configured ready-to-use virtual machines from the template.
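To see which boot sources already exist in your cluster, one possible check is to list the data volumes in the namespace named above:
$ oc get datavolumes -n openshift-virtualization-os-images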
8.18.15.2. Importing a Red Hat Enterprise Linux image as a boot source
You can import a Red Hat Enterprise Linux (RHEL) image as a boot source by specifying the URL address for the image.
Prerequisites
- You must have access to the web server with the operating system image. For example: Red Hat Enterprise Linux web page with images.
Procedure
-
In the OpenShift Virtualization console, click Workloads
Virtualization from the side menu. - Click the Templates tab.
- Identify the RHEL template for which you want to configure a boot source and click Add source.
- In the Add boot source to template window, select Import via URL (creates PVC) from the Boot source type list.
- Click RHEL download page to access the Red Hat Customer Portal. A list of available installers and images is displayed on the Download Red Hat Enterprise Linux page.
- Identify the Red Hat Enterprise Linux KVM guest image that you want to download. Right-click Download Now, and copy the URL for the image.
- In the Add boot source to template window, paste the copied URL of the guest image into the Import URL field, and click Save and import.
Verification
To verify that a boot source was added to the template:
- Click the Templates tab.
- Confirm that the tile for this template displays a green checkmark.
You can now use this template to create RHEL virtual machines.
8.18.15.3. Adding a boot source for a virtual machine template
A boot source can be configured for any virtual machine template that you want to use for creating virtual machines or custom templates. When virtual machine templates are configured with a boot source, they are labeled Available in the Templates tab. After you add a boot source to a template, you can create a new virtual machine from the template.
There are four methods for selecting and adding a boot source in the web console:
- Upload local file (creates PVC)
- Import via URL (creates PVC)
- Clone existing PVC (creates PVC)
- Import via Registry (creates PVC)
Prerequisites
- To add a boot source, you must be logged in as a user with the os-images.kubevirt.io:edit RBAC role or as an administrator. You do not need special privileges to create a virtual machine from a template with a boot source added.
- To upload a local file, the operating system image file must exist on your local machine.
- To import via URL, access to the web server with the operating system image is required. For example: the Red Hat Enterprise Linux web page with images.
- To clone an existing PVC, access to the project with a PVC is required.
- To import via registry, access to the container registry is required.
Procedure
- In the OpenShift Virtualization console, click Workloads > Virtualization from the side menu.
- Click the Templates tab.
- Identify the virtual machine template for which you want to configure a boot source and click Add source.
- In the Add boot source to template window, click Select boot source and select a method for creating a persistent volume claim (PVC): Upload local file, Import via URL, Clone existing PVC, or Import via Registry.
- Optional: Click This is a CD-ROM boot source to mount a CD-ROM and use it to install the operating system onto an empty disk. The additional empty disk is automatically created and mounted by OpenShift Virtualization. If the additional disk is not needed, you can remove it when you create the virtual machine.
- Enter a value for Persistent Volume Claim size to specify the PVC size that is adequate for the uncompressed image and any additional space that is required.
- Optional: Enter a name for Source provider to associate the name with this template.
- Optional: Advanced Storage settings: Click Storage class and select the storage class that is used to create the disk. Typically, this storage class is the default storage class that is created for use by all PVCs.
Note: Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) are created with the cluster’s default storage class. If you select a different default storage class after configuration, you must delete the existing data volumes in the cluster namespace that are configured with the previous default storage class.
- Optional: Advanced Storage settings: Click Access mode and select an access mode for the persistent volume:
- Single User (RWO) mounts the volume as read-write by a single node.
- Shared Access (RWX) mounts the volume as read-write by many nodes.
- Read Only (ROX) mounts the volume as read-only by many nodes.
- Optional: Advanced Storage settings: Click Volume mode if you want to select Block instead of the default value Filesystem. OpenShift Virtualization can statically provision raw block volumes. These volumes do not have a file system, and can provide performance benefits for applications that either write to the disk directly or implement their own storage service.
- Select the appropriate method to save your boot source:
- Click Save and upload if you uploaded a local file.
- Click Save and import if you imported content from a URL or the registry.
- Click Save and clone if you cloned an existing PVC.
Your custom virtual machine template with a boot source is listed in the Templates tab, and you can create virtual machines by using this template.
8.18.15.4. Creating a virtual machine from a template with an attached boot source
After you add a boot source to a template, you can create a new virtual machine from the template.
Procedure
- In the OpenShift Container Platform web console, click Workloads > Virtualization in the side menu.
- From the Virtual Machines tab or the Templates tab, click Create and select Virtual Machine with Wizard.
- In the Select a template step, select an OS from the Operating System list that has the (Source available) label next to the OS and version name. The (Source available) label indicates that a boot source is available for this OS.
- Click Review and Confirm.
- Review your virtual machine settings and edit them, if required.
- Click Create Virtual Machine to create your virtual machine. The Successfully created virtual machine page is displayed.
8.18.15.5. Creating a custom boot source
You can prepare a custom disk image, based on an existing disk image, for use as a boot source.
Use this procedure to complete the following tasks:
- Preparing a custom disk image
- Creating a boot source from the custom disk image
- Attaching the boot source to a custom template
Procedure
- In the OpenShift Virtualization console, click Workloads > Virtualization from the side menu.
- Click the Templates tab.
- Click the link in the Source provider column for the template that you want to customize. A window is displayed, indicating that the template currently has a defined source.
- In the window, click the Customize source link.
- In the About boot source customization window, read the information about the boot source customization process, and then click Continue to proceed with customization.
- On the Prepare boot source customization page, in the Define new template section:
- Select the New template namespace field and then choose a project.
- Enter the name of the custom template in the New template name field.
- Enter the name of the template provider in the New template provider field.
- Select the New template support field and then choose the appropriate value, indicating support contacts for the custom template you create.
- Select the New template flavor field and then choose the appropriate CPU and memory values for the custom image you create.
- In the Prepare boot source for customization section, customize the cloud-init YAML script, if needed, to define login credentials (see the example cloud-init sketch after this procedure). Otherwise, the script generates default credentials for you.
- Click Start Customization. The customization process begins and the Preparing boot source customization page is displayed, followed by the Customize boot source page. The Customize boot source page displays the output of the running script. When the script completes, your custom image is available.
- In the VNC console, click show password in the Guest login credentials section. Your login credentials display.
- When the image is ready for login, sign in with the VNC Console by providing the user name and password displayed in the Guest login credentials section.
- Verify the custom image works as expected. If it does, click Make this boot source available.
- In the Finish customization and make template available window, select I have sealed the boot source so it can be used as a template and then click Apply.
- On the Finishing boot source customization page, wait for the template creation process to complete. Click Navigate to template details or Navigate to template list to view your customized template, created from your custom boot source.
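The cloud-init script mentioned in the procedure typically defines guest login credentials. A minimal sketch, assuming a hypothetical user name and a password of your choice, might look like the following:
#cloud-config
user: cloud-user            # hypothetical user name
password: <password>        # replace with a password of your choice
chpasswd:
  expire: false             # do not force a password change at first login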
8.18.15.6. Additional resources
- Creating virtual machine templates
- Creating a Microsoft Windows boot source from a cloud image
- Customizing existing Microsoft Windows boot sources in OpenShift Container Platform
- Setting a PVC as a boot source for a Microsoft Windows template using the CLI
- Creating boot sources using automated scripting
- Creating a boot source automatically within a pod
8.18.16. Hot-plugging virtual disks
Hot-plugging virtual disks is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
Hot-plug and hot-unplug virtual disks when you want to add or remove them without stopping your virtual machine or virtual machine instance. This capability is helpful when you need to add storage to a running virtual machine without incurring downtime.
When you hot-plug a virtual disk, you attach a virtual disk to a virtual machine instance while the virtual machine is running.
When you hot-unplug a virtual disk, you detach a virtual disk from a virtual machine instance while the virtual machine is running.
Only data volumes and persistent volume claims (PVCs) can be hot-plugged and hot-unplugged. You cannot hot-plug or hot-unplug container disks.
When you attach a hot-plugged virtual disk to a virtual machine, you cannot perform live migration on the virtual machine. If you attempt to perform live migration on a virtual machine with a hot-plugged disk attached, live migration fails.
8.18.16.1. Hot-plugging a virtual disk using the CLI
Hot-plug virtual disks that you want to attach to a virtual machine instance (VMI) while a virtual machine is running.
Prerequisites
- You must have a running virtual machine to hot-plug a virtual disk.
- You must have at least one data volume or persistent volume claim (PVC) available for hot-plugging.
Procedure
Hot-plug a virtual disk by running the following command:
$ virtctl addvolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> \
  [--persist] [--serial=<label-name>]
- Use the optional --persist flag to add the hot-plugged disk to the virtual machine specification as a permanently mounted virtual disk. Stop, restart, or reboot the virtual machine to permanently mount the virtual disk. After specifying the --persist flag, you can no longer hot-plug or hot-unplug the virtual disk. The --persist flag applies to virtual machines, not virtual machine instances.
- The optional --serial flag allows you to add an alphanumeric string label of your choice. This helps you identify the hot-plugged disk in a guest virtual machine. If you do not specify this option, the label defaults to the name of the hot-plugged data volume or PVC.
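For example, assuming a running virtual machine named example-vm and a hot-pluggable data volume named example-dv (both hypothetical names), the following command hot-plugs the disk and labels it so that it is easy to identify inside the guest:
$ virtctl addvolume example-vm --volume-name=example-dv --serial=example-serial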
8.18.16.2. Hot-unplugging a virtual disk using the CLI
Hot-unplug virtual disks that you want to detach from a virtual machine instance (VMI) while a virtual machine is running.
Prerequisites
- Your virtual machine must be running.
- You must have at least one data volume or persistent volume claim (PVC) available and hot-plugged.
Procedure
Hot-unplug a virtual disk by running the following command:
$ virtctl removevolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC>
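For example, assuming the same hypothetical names as in the previous section, the following command detaches the hot-plugged data volume from the running virtual machine:
$ virtctl removevolume example-vm --volume-name=example-dv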
8.18.17. Using container disks with virtual machines
You can build a virtual machine image into a container disk and store it in your container registry. You can then import the container disk into persistent storage for a virtual machine or attach it directly to the virtual machine for ephemeral storage.
If you use large container disks, I/O traffic might increase, impacting worker nodes. This can lead to unavailable nodes. You can resolve this by:
8.18.17.1. About container disks
A container disk is a virtual machine image that is stored as a container image in a container image registry. You can use container disks to deliver the same disk images to multiple virtual machines and to create large numbers of virtual machine clones.
A container disk can either be imported into a persistent volume claim (PVC) by using a data volume that is attached to a virtual machine, or attached directly to a virtual machine as an ephemeral containerDisk volume.
8.18.17.1.1. Importing a container disk into a PVC by using a data volume
Use the Containerized Data Importer (CDI) to import the container disk into a PVC by using a data volume. You can then attach the data volume to a virtual machine for persistent storage.
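For example, a data volume that imports a container disk from a registry into a PVC might look like the following sketch; the data volume name, image reference, and storage size are placeholder values:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-containerdisk-dv                                 # hypothetical name
spec:
  source:
    registry:
      url: "docker://<registry>/<container_disk_name>:latest"    # container disk image in a registry
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi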
8.18.17.1.2. Attaching a container disk to a virtual machine as a containerDisk volume
A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. When a virtual machine with a containerDisk volume starts, the container image is pulled from the registry and hosted on the node that is hosting the virtual machine.
Use containerDisk volumes for read-only file systems such as CD-ROMs or for disposable virtual machines.
Using containerDisk volumes for read-write file systems is not recommended because the data is temporarily written to local storage on the hosting node. This slows live migration of the virtual machine, such as in the case of node maintenance, because the data must be migrated to the destination node. Additionally, all data is lost if the node loses power or otherwise shuts down unexpectedly.
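For example, the disk and volume entries of a virtual machine specification that attaches an ephemeral containerDisk volume look similar to the following sketch; the disk name and image reference are placeholders:
spec:
  domain:
    devices:
      disks:
        - name: containerdisk                           # hypothetical disk name
          disk:
            bus: virtio
  volumes:
    - name: containerdisk
      containerDisk:
        image: <registry>/<container_disk_name>:latest  # container disk image in a registry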
8.18.17.2. Preparing a container disk for virtual machines
You must build a container disk with a virtual machine image and push it to a container registry before it can be used with a virtual machine. You can then either import the container disk into a PVC by using a data volume and attach it to a virtual machine, or you can attach the container disk directly to a virtual machine as an ephemeral containerDisk volume.
The size of a disk image inside a container disk is limited by the maximum layer size of the registry where the container disk is hosted.
For Red Hat Quay, you can change the maximum layer size by editing the YAML configuration file that is created when Red Hat Quay is first deployed.
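For example, assuming a Red Hat Quay deployment, the relevant setting in the config.yaml file is expected to look similar to the following sketch; verify the exact field name and supported values against the documentation for your Red Hat Quay version:
# config.yaml (Red Hat Quay) - field name assumed; confirm against your Quay documentation
MAXIMUM_LAYER_SIZE: 30G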
Prerequisites
- Install podman if it is not already installed.
- The virtual machine image must be in either QCOW2 or RAW format.
Procedure
Create a Dockerfile to build the virtual machine image into a container image. The virtual machine image must be owned by QEMU, which has a UID of 107, and placed in the /disk/ directory inside the container. Permissions for the /disk/ directory must then be set to 0440.
The following example uses the Red Hat Universal Base Image (UBI) to handle these configuration changes in the first stage, and uses the minimal scratch image in the second stage to store the result:
$ cat > Dockerfile << EOF
FROM registry.access.redhat.com/ubi8/ubi:latest AS builder
ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1
RUN chmod 0440 /disk/*
FROM scratch
COPY --from=builder /disk/* /disk/
EOF
- 1
- Where <vm_image> is the virtual machine image in either QCOW2 or RAW format. To use a remote virtual machine image, replace <vm_image>.qcow2 with the complete URL for the remote image.
Build and tag the container:
$ podman build -t <registry>/<container_disk_name>:latest .
Push the container image to the registry:
$ podman push <registry>/<container_disk_name>:latest
If your container registry does not have TLS, you must add it as an insecure registry before you can import container disks into persistent storage.
8.18.17.3. Disabling TLS for a container registry to use as insecure registry
You can disable TLS (Transport Layer Security) for one or more container registries by editing the insecureRegistries field of the HyperConverged custom resource.
Prerequisites
- Log in to the cluster as a user with the cluster-admin role.
Procedure
Edit the HyperConverged custom resource and add a list of insecure registries to the spec.storageImport.insecureRegistries field:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  storageImport:
    insecureRegistries: 1
      - "private-registry-example-1:5000"
      - "private-registry-example-2:5000"
- 1
- Replace the examples in this list with valid registry hostnames.
8.18.17.4. Next steps
- Import the container disk into persistent storage for a virtual machine.
-
Create a virtual machine that uses a
containerDisk
volume for ephemeral storage.
8.18.18. Preparing CDI scratch space
8.18.18.1. About data volumes
DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
8.18.18.2. Understanding scratch space
The Containerized Data Importer (CDI) requires scratch space (temporary storage) to complete some operations, such as importing and uploading virtual machine images. During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV). The scratch space PVC is deleted after the operation completes or aborts.
You can define the storage class that is used to bind the scratch space PVC in the spec.scratchSpaceStorageClass field of the HyperConverged custom resource.
If the defined storage class does not match a storage class in the cluster, then the default storage class defined for the cluster is used. If there is no default storage class defined in the cluster, the storage class used to provision the original DV or PVC is used.
CDI requires scratch space with the file volume mode, regardless of the PVC backing the origin data volume. If the origin PVC is backed by the block volume mode, you must define a storage class capable of provisioning file volume mode PVCs.
Manual provisioning
If there are no storage classes, CDI uses any PVCs in the project that match the size requirements for the image. If there are no PVCs that match these requirements, the CDI import pod remains in a Pending state until an appropriate PVC is made available or until a timeout function kills the pod.
8.18.18.3. CDI operations that require scratch space
Type | Reason |
---|---|
Registry imports | CDI must download the image to a scratch space and extract the layers to find the image file. The image file is then passed to QEMU-IMG for conversion to a raw disk. |
Upload image | QEMU-IMG does not accept input from STDIN. Instead, the image to upload is saved in scratch space before it can be passed to QEMU-IMG for conversion. |
HTTP imports of archived images | QEMU-IMG does not know how to handle the archive formats CDI supports. Instead, the image is unarchived and saved into scratch space before it is passed to QEMU-IMG. |
HTTP imports of authenticated images | QEMU-IMG inadequately handles authentication. Instead, the image is saved to scratch space and authenticated before it is passed to QEMU-IMG. |
HTTP imports of custom certificates | QEMU-IMG inadequately handles custom certificates of HTTPS endpoints. Instead, CDI downloads the image to scratch space before passing the file to QEMU-IMG. |
8.18.18.4. Defining a storage class
You can define the storage class that the Containerized Data Importer (CDI) uses when allocating scratch space by adding the spec.scratchSpaceStorageClass field to the HyperConverged custom resource (CR).
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Edit the HyperConverged CR by running the following command:
$ oc edit hco -n openshift-cnv kubevirt-hyperconverged
Add the spec.scratchSpaceStorageClass field to the CR, setting the value to the name of a storage class that exists in the cluster:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  scratchSpaceStorageClass: "<storage_class>" 1
- 1
- If you do not specify a storage class, CDI uses the storage class of the persistent volume claim that is being populated.
- Save and exit your default editor to update the HyperConverged CR.
8.18.18.5. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
8.18.18.6. Additional resources
8.18.19. Re-using persistent volumes
To re-use a statically provisioned persistent volume (PV), you must first reclaim the volume. This involves deleting the PV so that the storage configuration can be re-used.
8.18.19.1. About reclaiming statically provisioned persistent volumes
When you reclaim a persistent volume (PV), you unbind the PV from a persistent volume claim (PVC) and delete the PV. Depending on the underlying storage, you might need to manually delete the shared storage.
You can then re-use the PV configuration to create a PV with a different name.
Statically provisioned PVs must have a reclaim policy of Retain to be reclaimed. If they do not, the PV enters a failed state when the PVC is unbound from the PV.
The Recycle reclaim policy is deprecated in OpenShift Container Platform 4.
8.18.19.2. Reclaiming statically provisioned persistent volumes
Reclaim a statically provisioned persistent volume (PV) by unbinding the persistent volume claim (PVC) and deleting the PV. You might also need to manually delete the shared storage.
Reclaiming a statically provisioned PV is dependent on the underlying storage. This procedure provides a general approach that might need to be customized depending on your storage.
Procedure
Ensure that the reclaim policy of the PV is set to Retain:
Check the reclaim policy of the PV:
$ oc get pv <pv_name> -o yaml | grep 'persistentVolumeReclaimPolicy'
If the persistentVolumeReclaimPolicy is not set to Retain, edit the reclaim policy with the following command:
$ oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
Ensure that no resources are using the PV:
$ oc describe pvc <pvc_name> | grep 'Mounted By:'
Remove any resources that use the PVC before continuing.
Delete the PVC to release the PV:
$ oc delete pvc <pvc_name>
Optional: Export the PV configuration to a YAML file. If you manually remove the shared storage later in this procedure, you can refer to this configuration. You can also use the spec parameters in this file as the basis to create a new PV with the same storage configuration after you reclaim the PV:
Delete the PV:
$ oc delete pv <pv_name>
Optional: Depending on the storage type, you might need to remove the contents of the shared storage folder:
$ rm -rf <path_to_share_storage>
Optional: Create a PV that uses the same storage configuration as the deleted PV. If you exported the reclaimed PV configuration earlier, you can use the spec parameters of that file as the basis for a new PV manifest:
Note: To avoid possible conflict, it is good practice to give the new PV object a different name than the one that you deleted.
$ oc create -f <new_pv_name>.yaml
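For example, a replacement PV manifest for a hypothetical NFS-backed volume might look like the following sketch; the capacity, server, and path shown here are placeholders, and the spec values should come from the configuration that you exported earlier:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <new_pv_name>                    # use a different name than the deleted PV
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:                                   # hypothetical NFS backing; reuse your own storage configuration
    server: <nfs_server>
    path: <path_to_share_storage>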
Additional resources
- Configuring local storage for virtual machines
- The OpenShift Container Platform Storage documentation has more information on Persistent Storage.
8.18.20. Deleting data volumes
You can manually delete a data volume by using the oc command-line interface.
When you delete a virtual machine, the data volume it uses is automatically deleted.
8.18.20.1. About data volumes
DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OpenShift Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
8.18.20.2. Listing all data volumes
You can list the data volumes in your cluster by using the oc command-line interface.
Procedure
List all data volumes by running the following command:
$ oc get dvs
8.18.20.3. Deleting a data volume
You can delete a data volume by using the oc command-line interface (CLI).
Prerequisites
- Identify the name of the data volume that you want to delete.
Procedure
Delete the data volume by running the following command:
$ oc delete dv <datavolume_name>
Note: This command only deletes objects that exist in the current project. Specify the -n <project_name> option if the object you want to delete is in a different project or namespace.