Chapter 6. Virtual machines
6.1. Creating virtual machines
Use one of these procedures to create a virtual machine:
- Running the virtual machine wizard
- Pasting a pre-configured YAML file with the virtual machine wizard
- Using the CLI
- Importing a VMware virtual machine or template with the virtual machine wizard
Do not create virtual machines in openshift-* namespaces. Instead, create a new namespace or use an existing namespace without the openshift prefix.
6.1.1. Running the virtual machine wizard to create a virtual machine
The web console features an interactive wizard that guides you through General, Networking, Storage, Advanced, and Review steps to simplify the process of creating virtual machines. All required fields are marked by a *. When the required fields are completed, you can review and create your virtual machine.
Network Interface Cards (NICs) and storage disks can be created and attached to virtual machines after they have been created.
Bootable Disk
If either URL or Container is selected as the Source in the General step, a rootdisk disk is created and attached to the virtual machine as the Bootable Disk. You can modify the rootdisk but you cannot remove it.
A Bootable Disk is not required for virtual machines provisioned from a PXE source if there are no disks attached to the virtual machine. If one or more disks are attached to the virtual machine, you must select one as the Bootable Disk.
Prerequisites
- When you create your virtual machine using the wizard, your virtual machine’s storage medium must support ReadWriteMany (RWX) PVCs.
Procedure
- Click Workloads → Virtual Machines from the side menu.
- Click Create Virtual Machine and select New with Wizard.
- Fill in all required fields in the General step. Selecting a Template automatically fills in these fields.
- Click Next to progress to the Networking step. A nic0 NIC is attached by default.
- (Optional) Click Add Network Interface to create additional NICs.
- (Optional) You can remove any or all NICs by clicking the Options menu and selecting Delete. A virtual machine does not need a NIC attached to be created. NICs can be created after the virtual machine has been created.
Click Next to progress to the Storage screen.
- (Optional) Click Add Disk to create additional disks. These disks can be removed by clicking the Options menu and selecting Delete.
- (Optional) Click the Options menu to edit the disk and save your changes.
- Click Review and Create. The Results screen displays the JSON configuration file for the virtual machine.
The virtual machine is listed in Workloads → Virtual Machines.
Refer to the virtual machine wizard fields section when running the web console wizard.
6.1.1.1. Virtual machine wizard fields
Name | Parameter | Description |
---|---|---|
Template | | Template from which to create the virtual machine. Selecting a template automatically completes other fields. |
Source | PXE | Provision virtual machine from PXE menu. Requires a PXE-capable NIC in the cluster. |
| URL | Provision virtual machine from an image available from an HTTP or S3 endpoint. |
| Container | Provision virtual machine from a bootable operating system container located in a registry accessible from the cluster. |
| Disk | Provision virtual machine from a disk. |
Operating System | | The primary operating system that is selected for the virtual machine. |
Flavor | small, medium, large, tiny, Custom | Presets that determine the amount of CPU and memory allocated to the virtual machine. The presets displayed for Flavor are determined by the operating system. |
Memory | | Size in GiB of the memory allocated to the virtual machine. |
CPUs | | The amount of CPU allocated to the virtual machine. |
Workload Profile | High Performance | A virtual machine configuration that is optimized for high-performance workloads. |
| Server | A profile optimized to run server workloads. |
| Desktop | A virtual machine configuration for use on a desktop. |
Name | | The name can contain lowercase letters (a-z), numbers (0-9), and hyphens (-). |
Description | | Optional description field. |
Start virtual machine on creation | | Select to automatically start the virtual machine upon creation. |
6.1.1.2. Cloud-init fields
Name | Description |
---|---|
Hostname | Sets a specific host name for the virtual machine. |
Authenticated SSH Keys | The user’s public key that is copied to ~/.ssh/authorized_keys on the virtual machine. |
Use custom script | Replaces other options with a field in which you paste a custom cloud-init script. |
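The wizard’s cloud-init fields correspond to cloud-init user data delivered through the NoCloud data source described later in this chapter. The following is a minimal sketch of what that user data might look like; the host name and key are hypothetical values:
#cloud-config
hostname: my-vm                       # value from the Hostname field (hypothetical)
ssh_authorized_keys:
  - ssh-rsa AAAAB3... user@example    # value from Authenticated SSH Keys (placeholder key)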
6.1.1.3. CD-ROM fields
Source | Description |
---|---|
Container | Specify the container path. |
URL | Specify the URL path and size in GiB. Then, select the storage class for this URL from the drop-down list. |
Attach Disk | Select the virtual machine disk that you want to attach. |
6.1.1.4. Networking fields
Name | Description |
---|---|
Name | Name for the Network Interface Card. |
Model | Indicates the model of the Network Interface Card. Supported values are e1000, e1000e, ne2k_pci, pcnet, rtl8139, and virtIO. |
Network | List of available NetworkAttachmentDefinition objects. |
Type | List of available binding methods. For the default Pod network, the recommended binding method is masquerade. |
MAC Address | MAC address for the Network Interface Card. If a MAC address is not specified, an ephemeral address is generated for the session. |
6.1.1.5. Storage fields
Name | Description |
---|---|
Source | Select a blank disk for the virtual machine or choose from the options available: URL, Container, Attach Cloned Disk, or Attach Disk. To select an existing disk and attach it to the virtual machine, choose Attach Cloned Disk or Attach Disk from a list of available PersistentVolumeClaims (PVCs). |
Name | Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), and hyphens (-). |
Size (GiB) | Size, in GiB, of the disk. |
Interface | Type of disk device. Supported interfaces are virtIO, SATA, and SCSI. |
Storage class | The storage class that is used to create the disk. |
6.1.2. Pasting in a pre-configured YAML file to create a virtual machine
Create a virtual machine by writing or pasting a YAML configuration file in the web console in the Workloads → Virtual Machines screen. An example virtual machine configuration is provided by default whenever you open the YAML edit screen.
If your YAML configuration is invalid when you click Create, an error message indicates the parameter in which the error occurs. Only one error is shown at a time.
Navigating away from the YAML screen while editing cancels any changes to the configuration you have made.
Procedure
- Click Workloads → Virtual Machines from the side menu.
- Click Create Virtual Machine and select New from YAML.
- Write or paste your virtual machine configuration in the editable window. Alternatively, use the example virtual machine provided by default in the YAML screen.
- (Optional) Click Download to download the YAML configuration file in its present state.
- Click Create to create the virtual machine.
The virtual machine is listed in Workloads → Virtual Machines.
6.1.3. Using the CLI to create a virtual machine
Procedure
- The spec object of the VirtualMachine configuration file references the virtual machine settings, such as the number of cores, the amount of memory, the disk type, and the volumes to use. The tables below summarize these settings, and a minimal example manifest follows them.
- Attach the virtual machine disk to the virtual machine by referencing the relevant PVC claimName as a volume.
- To create a virtual machine with the OpenShift Container Platform client, run this command:
$ oc create -f <vm.yaml>
- Since virtual machines are created in a Stopped state, run a virtual machine instance by starting it.
A ReplicaSet is typically used to guarantee the availability of a specified number of identical pods. ReplicaSets are not currently supported in container-native virtualization.
Setting | Description |
---|---|
Cores | The number of cores inside the virtual machine. Must be a value greater than or equal to 1. |
Memory | The amount of RAM that is allocated to the virtual machine by the node. Specify a value in M for megabytes or Gi for gibibytes. |
Disks | The name of the volume that is referenced. Must match the name of a volume. |
Setting | Description |
---|---|
Name | The name of the volume, which must be a DNS label and unique within the virtual machine. |
PersistentVolumeClaim | The PVC to attach to the virtual machine. The PVC must be in the same project as the virtual machine. |
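The following is a minimal sketch of such a vm.yaml, assuming an existing PVC named my-vm-disk (a hypothetical name) and the kubevirt.io/v1alpha3 API version used elsewhere in this chapter:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: example-vm                 # hypothetical name
spec:
  running: false                   # created in a Stopped state
  template:
    spec:
      domain:
        cpu:
          cores: 1                 # Cores setting
        resources:
          requests:
            memory: 1024M          # Memory setting
        devices:
          disks:
          - name: rootdisk         # must match a volume name below
            disk:
              bus: virtio
      volumes:
      - name: rootdisk
        persistentVolumeClaim:
          claimName: my-vm-disk    # hypothetical PVC claimName
You can then create the virtual machine with oc create -f vm.yaml and start it with virtctl start example-vm.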
6.1.4. Virtual machine storage volume types
The available virtual machine storage volume types are described in the following table. An example follows the table.
Type | Description |
---|---|
ephemeral | A local copy-on-write (COW) image that uses a network volume as a read-only backing store. The backing volume must be a PersistentVolumeClaim. The ephemeral image is created when the virtual machine starts and stores all writes locally. The ephemeral image is discarded when the virtual machine is stopped, restarted, or deleted. The backing volume (PVC) is not mutated in any way. |
persistentVolumeClaim | Attaches an available PV to a virtual machine. Attaching a PV allows for the virtual machine data to persist between sessions. Importing an existing virtual machine disk into a PVC by using CDI and attaching the PVC to a virtual machine instance is the recommended method for importing existing virtual machines into OpenShift Container Platform. There are some requirements for the disk to be used within a PVC. |
dataVolume | DataVolumes build on the persistentVolumeClaim disk type by managing the process of preparing the virtual machine disk via an import, clone, or upload operation. VMs that use this volume type are guaranteed not to start until the volume is ready. |
cloudInitNoCloud | Attaches a disk that contains the referenced cloud-init NoCloud data source, providing user data and metadata to the virtual machine. A cloud-init installation is required inside the virtual machine disk. |
containerDisk | References an image, such as a virtual machine disk, that is stored in the container image registry. The image is pulled from the registry and embedded in a volume when the virtual machine is created. A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. Container disks are not limited to a single virtual machine and are useful for creating large numbers of virtual machine clones that do not require persistent storage. Only RAW and QCOW2 formats are supported disk types for the container image registry. QCOW2 is recommended for reduced image size. |
emptyDisk | Creates an additional sparse QCOW2 disk that is tied to the life-cycle of the virtual machine interface. The data survives guest-initiated reboots in the virtual machine but is discarded when the virtual machine stops or is restarted from the web console. The empty disk is used to store application dependencies and data that otherwise exceeds the limited temporary file system of an ephemeral disk. The disk capacity size must also be provided. |
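To illustrate how these types appear in a configuration, the following is a minimal sketch of a virtual machine that combines a dataVolume and an emptyDisk volume (all names are hypothetical):
spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
          - name: scratchdisk
            disk:
              bus: virtio
      volumes:
      - name: rootdisk
        dataVolume:
          name: imported-rootdisk-dv   # DataVolume prepared by CDI (hypothetical name)
      - name: scratchdisk
        emptyDisk:
          capacity: 2Gi                # sparse QCOW2 disk tied to the virtual machine life-cycle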
6.1.5. Additional resources
The VirtualMachineSpec definition in the KubeVirt v0.26.5 API Reference provides broader context for the parameters and hierarchy of the virtual machine specification.
The KubeVirt API Reference is the upstream project reference and might contain parameters that are not supported in container-native virtualization.
6.2. Editing virtual machines
You can update a virtual machine configuration using either the YAML editor in the web console or the OpenShift client on the command line. You can also update a subset of the parameters in the Virtual Machine Overview of the web console.
6.2.1. Editing a virtual machine in the web console
Edit select values of a virtual machine in the Virtual Machine Overview screen of the web console by clicking on the pencil icon next to the relevant field. Other values can be edited using the CLI.
Procedure
- Click Workloads → Virtual Machines from the side menu.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the pencil icon to make that field editable.
- Make the relevant changes and click Save.
If the virtual machine is running, changes will not take effect until you reboot the virtual machine.
6.2.2. Editing a virtual machine YAML configuration using the web console
Using the web console, edit the YAML configuration of a virtual machine.
Not all parameters can be updated. If you edit values that cannot be changed and click Save, an error message indicates the parameter that was not able to be updated.
The YAML configuration can be edited while the virtual machine is Running, however the changes will only take effect after the virtual machine has been stopped and started again.
Navigating away from the YAML screen while editing cancels any changes to the configuration you have made.
Procedure
- Click Workloads → Virtual Machines from the side menu.
- Select a virtual machine.
- Click the YAML tab to display the editable configuration.
- Optional: You can click Download to download the YAML file locally in its current state.
- Edit the file and click Save.
A confirmation message shows that the modification has been successful and includes the updated version number for the object.
6.2.3. Editing a virtual machine YAML configuration using the CLI
Use this procedure to edit a virtual machine YAML configuration using the CLI.
Prerequisites
- You configured a virtual machine with a YAML object configuration file.
- You installed the oc CLI.
Procedure
- Run the following command to update the virtual machine configuration:
$ oc edit <object_type> <object_ID>
- Open the object configuration.
- Edit the YAML.
- If you edit a running virtual machine, you need to do one of the following:
  - Restart the virtual machine.
  - Run the following command for the new configuration to take effect:
$ oc apply -f <vm.yaml>
6.2.4. Adding a virtual disk to a virtual machine
Use this procedure to add a virtual disk to a virtual machine.
Procedure
- From the Virtual Machines tab, select your virtual machine.
- Select the Disks tab.
- Click Add Disk to open the Add Disk window.
- In the Add Disk window, specify Source, Name, Size, Interface, and Storage Class.
- Use the drop-down lists and check boxes to edit the disk configuration.
- Click OK.
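Behind the console form, this adds a matching disk and volume entry to the virtual machine configuration. A minimal sketch, assuming an existing PersistentVolumeClaim named my-data-pvc (hypothetical):
spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - name: my-data-disk         # Name field (hypothetical)
            disk:
              bus: virtio              # Interface field
      volumes:
      - name: my-data-disk
        persistentVolumeClaim:
          claimName: my-data-pvc       # existing PVC (hypothetical name)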
6.2.5. Adding a network interface to a virtual machine
Use this procedure to add a network interface to a virtual machine.
Procedure
- From the Virtual Machines tab, select the virtual machine.
- Select the Network Interfaces tab.
- Click Add Network Interface.
- In the Add Network Interface window, specify the Name, Model, Network, Type, and MAC Address of the network interface.
- Click Add to add the network interface.
- Restart the virtual machine to enable access.
- Edit the drop-down lists and check boxes to configure the network interface.
- Click Save Changes.
- Click OK.
The new network interface displays at the top of the Create Network Interface list until the user restarts the virtual machine.
The new network interface has a Pending VM restart Link State until you restart the virtual machine. Hover over the Link State to display more detailed information.
The Link State is set to Up by default when the network interface card is defined on the virtual machine and connected to the network.
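In the virtual machine configuration, the new interface corresponds to paired interface and network entries. A minimal sketch, assuming a NetworkAttachmentDefinition named my-bridge-net (hypothetical):
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
          - name: nic1                 # Name field (hypothetical)
            model: virtio              # Model field
            bridge: {}                 # Type field (bridge binding)
      networks:
      - name: nic1
        multus:
          networkName: my-bridge-net   # Network field (hypothetical NetworkAttachmentDefinition)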
6.2.6. Editing CD-ROMs for Virtual Machines
Use the following procedure to configure CD-ROMs for virtual machines.
Procedure
- From the Virtual Machines tab, select your virtual machine.
- Select the Overview tab.
To add or edit a CD-ROM configuration, click the pencil icon to the right of the CD-ROMs label. The Edit CD-ROM window opens.
- If CD-ROMs are unavailable for editing, the following message displays: The virtual machine doesn’t have any CD-ROMs attached.
- If there are CD-ROMs available, you can remove a CD-ROM by clicking -.
In the Edit CD-ROM window, do the following:
- Select the type of CD-ROM configuration from the drop-down list for Media Type. CD-ROM configuration types are Container, URL, and Persistent Volume Claim.
- Complete the required information for each Type.
- When all CD-ROMs are added, click Save.
6.3. Editing boot order
You can update the values for a boot order list by using the web console or the CLI.
With Boot Order in the Virtual Machine Overview page, you can:
- Select a disk or Network Interface Card (NIC) and add it to the boot order list.
- Edit the order of the disks or NICs in the boot order list.
- Remove a disk or NIC from the boot order list and return it to the inventory of bootable sources.
6.3.1. Adding items to a boot order list in the web console
Add items to a boot order list by using the web console.
Procedure
- Click Workloads → Virtual Machines from the side menu.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the pencil icon that is located on the right side of Boot Order. If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot disks from YAML by order of appearance in YAML file. Please select a boot source.
- Click Add Source and select a bootable disk or Network Interface Card (NIC) for the virtual machine.
- Add any additional disks or NICs to the boot order list.
- Click Save.
6.3.2. Editing a boot order list in the web console
Edit the boot order list in the web console.
Procedure
- Click Workloads → Virtual Machines from the side menu.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the pencil icon that is located on the right side of Boot Order.
Choose the appropriate method to move the item in the boot order list:
- If you do not use a screen reader, hover over the arrow icon next to the item that you want to move, drag the item up or down, and drop it in a location of your choice.
- If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice.
- Click Save.
6.3.3. Editing a boot order list in the YAML configuration file
Edit the boot order list in a YAML configuration file by using the CLI.
Procedure
Open the YAML configuration file for the virtual machine by running the following command:
$ oc edit vm example
Edit the YAML file and modify the values for the boot order associated with a disk or Network Interface Card (NIC). For example:
disks:
  - bootOrder: 1
    disk:
      bus: virtio
    name: containerdisk
  - disk:
      bus: virtio
    name: cloudinitdisk
  - cdrom:
      bus: virtio
    name: cd-drive-1
interfaces:
  - bootOrder: 2
    macAddress: '02:96:c4:00:00'
    masquerade: {}
    name: default
- Save the YAML file.
- Click reload the content to apply the updated boot order values from the YAML file to the boot order list in the web console.
6.3.4. Removing items from a boot order list in the web console
Remove items from a boot order list by using the web console.
Procedure
- Click Workloads → Virtual Machines from the side menu.
- Select a virtual machine to open the Virtual Machine Overview screen.
- Click the pencil icon that is located on the right side of Boot Order.
- Click the Remove icon next to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot disks from YAML by order of appearance in YAML file. Please select a boot source.
6.4. Deleting virtual machines
You can delete a virtual machine from the web console or by using the oc command-line interface.
6.4.1. Deleting a virtual machine using the web console
Deleting a virtual machine permanently removes it from the cluster.
When you delete a virtual machine, the DataVolume it uses is automatically deleted.
Procedure
- In the container-native virtualization console, click Workloads → Virtual Machines from the side menu.
- Click the ⋮ button of the virtual machine that you want to delete and select Delete Virtual Machine. Alternatively, click the virtual machine name to open the Virtual Machine Details screen and click Actions → Delete Virtual Machine.
- In the confirmation pop-up window, click Delete to permanently delete the virtual machine.
6.4.2. Deleting a virtual machine by using the CLI
You can delete a virtual machine by using the oc command-line interface (CLI). The oc client enables you to perform actions on multiple virtual machines.
When you delete a virtual machine, the DataVolume it uses is automatically deleted.
Prerequisites
- Identify the name of the virtual machine that you want to delete.
Procedure
Delete the virtual machine by running the following command:
$ oc delete vm <vm_name>
Note: This command only deletes objects that exist in the current project. Specify the -n <project_name> option if the object you want to delete is in a different project or namespace.
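Because the oc client accepts multiple names, you can, for example, delete several virtual machines in one call (a sketch with placeholder names):
$ oc delete vm <vm_name_1> <vm_name_2> -n <project_name>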
6.5. Deleting virtual machine instances
When you delete a virtual machine, the associated VMI is automatically deleted. To manually delete a virtual machine instance (VMI), use the oc command-line interface (CLI).
Use this procedure to check for and delete any outstanding VMIs before you uninstall container-native virtualization.
6.5.1. Listing all virtual machine instances
You can list the virtual machine instances in your cluster by using the oc command-line interface (CLI).
Procedure
List all virtual machine instances by running the following command:
$ oc get vmis
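If you are checking the entire cluster before an uninstall, you can also list virtual machine instances across all namespaces (a sketch using the standard oc flag):
$ oc get vmis --all-namespaces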
6.5.2. Deleting a virtual machine instance
You can delete a virtual machine instance by using the oc command-line interface (CLI).
Prerequisites
- Identify the name of the virtual machine instance that you want to delete.
Procedure
Delete the virtual machine instance by running the following command:
$ oc delete vmi <vmi_name>
Note: This command only deletes objects that exist in the current project. Specify the -n <project_name> option if the object you want to delete is in a different project or namespace.
6.6. Controlling virtual machine states
You can stop, start, restart, and unpause virtual machines from the web console.
To control virtual machines from the command-line interface (CLI), use the virtctl client.
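For reference, the equivalent virtctl commands look like the following sketch; <vm_name> is a placeholder, and the pause and unpause subcommands assume a virtctl version that includes them:
$ virtctl start <vm_name>
$ virtctl restart <vm_name>
$ virtctl stop <vm_name>
$ virtctl pause vm <vm_name>
$ virtctl unpause vm <vm_name>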
6.6.1. Starting a virtual machine
You can start a virtual machine from the web console.
Procedure
- Click Workloads → Virtual Machines.
- Find the row that contains the virtual machine that you want to start.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple virtual machines:
- Click the Options menu located at the far right end of the row.
To view comprehensive information about the selected virtual machine before you start it:
- Access the Virtual Machine Details page by clicking the name of the virtual machine.
- Click Actions.
- Select Start Virtual Machine.
- In the confirmation window, click Start to start the virtual machine.
When you start a virtual machine that is provisioned from a URL source for the first time, the virtual machine has a status of Importing while container-native virtualization imports the container from the URL endpoint. Depending on the size of the image, this process might take several minutes.
6.6.2. Restarting a virtual machine
You can restart a running virtual machine from the web console.
To avoid errors, do not restart a virtual machine while it has a status of Importing.
Procedure
- Click Workloads → Virtual Machines.
- Find the row that contains the virtual machine that you want to restart.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple virtual machines:
- Click the Options menu located at the far right end of the row.
To view comprehensive information about the selected virtual machine before you restart it:
- Access the Virtual Machine Details page by clicking the name of the virtual machine.
- Click Actions.
- Select Restart Virtual Machine.
- In the confirmation window, click Restart to restart the virtual machine.
6.6.3. Stopping a virtual machine
You can stop a virtual machine from the web console.
Procedure
- Click Workloads → Virtual Machines.
- Find the row that contains the virtual machine that you want to stop.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple virtual machines:
- Click the Options menu located at the far right end of the row.
To view comprehensive information about the selected virtual machine before you stop it:
- Access the Virtual Machine Details page by clicking the name of the virtual machine.
- Click Actions.
- Select Stop Virtual Machine.
- In the confirmation window, click Stop to stop the virtual machine.
6.6.4. Unpausing a virtual machine
You can unpause a paused virtual machine from the web console.
Prerequisites
- At least one of your virtual machines must have a status of Paused.
Note: You can pause virtual machines by using the virtctl client.
Procedure
- Click Workloads → Virtual Machines.
- Find the row that contains the virtual machine that you want to unpause.
Navigate to the appropriate menu for your use case:
To stay on this page, where you can perform actions on multiple virtual machines:
- In the Status column, click Paused.
To view comprehensive information about the selected virtual machine before you unpause it:
- Access the Virtual Machine Details page by clicking the name of the virtual machine.
- Click the pencil icon that is located on the right side of Status.
- In the confirmation window, click Unpause to unpause the virtual machine.
6.7. Accessing virtual machine consoles
Container-native virtualization provides different virtual machine consoles that you can use to accomplish different product tasks. You can access these consoles through the web console and by using CLI commands.
6.7.1. Virtual machine console sessions
You can connect to the VNC and serial consoles of a running virtual machine from the Consoles tab in the Virtual Machine Details screen of the web console.
There are two consoles available: the graphical VNC Console and the Serial Console. The VNC Console opens by default whenever you navigate to the Consoles tab. You can switch between the consoles by using the VNC Console drop-down list and selecting Serial Console.
Console sessions remain active in the background unless they are disconnected. When the Disconnect before switching checkbox is active and you switch consoles, the current console session is disconnected and a new session with the selected console connects to the virtual machine. This ensures only one console session is open at a time.
Options for the VNC Console
The Send Key button lists key combinations to send to the virtual machine.
Options for the Serial Console
Use the Disconnect button to manually disconnect the Serial Console session from the virtual machine.
Use the Reconnect button to manually open a Serial Console session to the virtual machine.
6.7.2. Connecting to the virtual machine with the web console
6.7.2.1. Connecting to the terminal
You can connect to a virtual machine by using the web console.
Procedure
- Ensure you are in the correct project. If not, click the Project list and select the appropriate project.
- Click Workloads → Virtual Machines to display the virtual machines in the project.
- Select a virtual machine.
- In the Overview tab, click the virt-launcher-<vm-name> pod.
- Click the Terminal tab. If the terminal is blank, select the terminal and press any key to initiate connection.
6.7.2.2. Connecting to the serial console
Connect to the Serial Console of a running virtual machine from the Consoles tab in the Virtual Machine Details screen of the web console.
Procedure
- In the container-native virtualization console, click Workloads → Virtual Machines.
- Select a virtual machine.
- Click Consoles. The VNC console opens by default.
- Click the VNC Console drop-down list and select Serial Console.
6.7.2.3. Connecting to the VNC console
Connect to the VNC console of a running virtual machine from the Consoles tab in the Virtual Machine Details screen of the web console.
Procedure
- In the container-native virtualization console, click Workloads → Virtual Machines.
- Select a virtual machine.
- Click Consoles. The VNC console opens by default.
6.7.2.4. Connecting to the RDP console
The desktop viewer console, which utilizes the Remote Desktop Protocol (RDP), provides a better console experience for connecting to Windows virtual machines.
To connect to a Windows virtual machine with RDP, download the console.rdp file for the virtual machine from the Consoles tab in the Virtual Machine Details screen of the web console and supply it to your preferred RDP client.
Prerequisites
- A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent is included in the VirtIO drivers.
- A layer-2 NIC attached to the virtual machine.
- An RDP client installed on a machine on the same network as the Windows virtual machine.
Procedure
- In the container-native virtualization console, click Workloads → Virtual Machines.
- Select a Windows virtual machine.
- Click the Consoles tab.
- Click the Consoles list and select Desktop Viewer.
- In the Network Interface list, select the layer-2 NIC.
- Click Launch Remote Desktop to download the console.rdp file.
- Open an RDP client and reference the console.rdp file. For example, using remmina:
$ remmina --connect /path/to/console.rdp
- Enter the Administrator user name and password to connect to the Windows virtual machine.
6.7.3. Accessing virtual machine consoles by using CLI commands
6.7.3.1. Accessing a virtual machine instance via SSH
You can use SSH to access a virtual machine after you expose port 22 on it.
The virtctl expose command forwards a virtual machine instance port to a node port and creates a service for enabled access. The following example creates the fedora-vm-ssh service that forwards port 22 of the <fedora-vm> virtual machine to a port on the node:
Prerequisites
- You must be in the same project as the virtual machine instance.
- The virtual machine instance you want to access must be connected to the default Pod network by using the masquerade binding method.
- The virtual machine instance you want to access must be running.
- Install the OpenShift CLI (oc).
Procedure
Run the following command to create the fedora-vm-ssh service:
$ virtctl expose vm <fedora-vm> --port=20022 --target-port=22 --name=fedora-vm-ssh --type=NodePort 1
- 1 <fedora-vm> is the name of the virtual machine that you run the fedora-vm-ssh service on.
Check the service to find out which port the service acquired:
$ oc get svc
NAME            TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)           AGE
fedora-vm-ssh   NodePort   127.0.0.1    <none>        20022:32551/TCP   6s
In this example, the service acquired the 32551 port.
Log in to the virtual machine instance via SSH. Use the ipAddress of the node and the port that you found in the previous step:
$ ssh username@<node_IP_address> -p 32551
6.7.3.2. Accessing the serial console of a virtual machine instance
The virtctl console command opens a serial console to the specified virtual machine instance.
Prerequisites
- The virt-viewer package must be installed.
- The virtual machine instance you want to access must be running.
Procedure
Connect to the serial console with virtctl:
$ virtctl console <VMI>
6.7.3.3. Accessing the graphical console of a virtual machine instance with VNC
The virtctl client utility can use the remote-viewer function to open a graphical console to a running virtual machine instance. This capability is included in the virt-viewer package.
Prerequisites
- The virt-viewer package must be installed.
- The virtual machine instance you want to access must be running.
If you use virtctl via SSH on a remote machine, you must forward the X session to your machine.
Procedure
Connect to the graphical interface with the virtctl utility:
$ virtctl vnc <VMI>
If the command fails, try using the -v flag to collect troubleshooting information:
$ virtctl vnc <VMI> -v 4
6.7.3.4. Connecting to a Windows virtual machine with an RDP console
The Remote Desktop Protocol (RDP) provides a better console experience for connecting to Windows virtual machines.
To connect to a Windows virtual machine with RDP, specify the IP address of the attached L2 NIC to your RDP client.
Prerequisites
- A running Windows virtual machine with the QEMU guest agent installed. The qemu-guest-agent is included in the VirtIO drivers.
- A layer 2 NIC attached to the virtual machine.
- An RDP client installed on a machine on the same network as the Windows virtual machine.
Procedure
Log in to the container-native virtualization cluster through the oc CLI tool as a user with an access token.
$ oc login -u <user> https://<cluster.example.com>:8443
Use oc describe vmi to display the configuration of the running Windows virtual machine.
$ oc describe vmi <windows-vmi-name>
...
spec:
  networks:
  - name: default
    pod: {}
  - multus:
      networkName: cnv-bridge
    name: bridge-net
...
status:
  interfaces:
  - interfaceName: eth0
    ipAddress: 198.51.100.0/24
    ipAddresses:
    - 198.51.100.0/24
    mac: a0:36:9f:0f:b1:70
    name: default
  - interfaceName: eth1
    ipAddress: 192.0.2.0/24
    ipAddresses:
    - 192.0.2.0/24
    - 2001:db8::/32
    mac: 00:17:a4:77:77:25
    name: bridge-net
...
- Identify and copy the IP address of the layer 2 network interface. This is 192.0.2.0 in the above example, or 2001:db8:: if you prefer IPv6.
- Open an RDP client and use the IP address copied in the previous step for the connection.
- Enter the Administrator user name and password to connect to the Windows virtual machine.
6.8. Installing VirtIO driver on an existing Windows virtual machine
6.8.1. Understanding VirtIO drivers
VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in container-native virtualization. The supported drivers are available in the container-native-virtualization/virtio-win container disk of the Red Hat Container Catalog.
The container-native-virtualization/virtio-win container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation on the virtual machine or add them to an existing Windows installation.
After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the virtual machine.
See also: Installing Virtio drivers on a new Windows virtual machine.
6.8.2. Supported VirtIO drivers for Microsoft Windows virtual machines
Driver name | Hardware ID | Description |
---|---|---|
viostor | VEN_1AF4&DEV_1001 | The block driver. Sometimes displays as an SCSI Controller in the Other devices group. |
viorng | VEN_1AF4&DEV_1005 | The entropy source driver. Sometimes displays as a PCI Device in the Other devices group. |
NetKVM | VEN_1AF4&DEV_1000 | The network driver. Sometimes displays as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. |
6.8.3. Adding VirtIO drivers container disk to a virtual machine
Container-native virtualization distributes VirtIO drivers for Microsoft Windows as a container disk, which is available from the Red Hat Container Catalog. To install these drivers to a Windows virtual machine, attach the container-native-virtualization/virtio-win container disk to the virtual machine as a SATA CD drive in the virtual machine configuration file.
Prerequisites
- Download the container-native-virtualization/virtio-win container disk from the Red Hat Container Catalog. This is not mandatory, because the container disk will be downloaded from the Red Hat registry if it is not already present in the cluster, but it can reduce installation time.
Procedure
Add the container-native-virtualization/virtio-win container disk as a cdrom disk in the Windows virtual machine configuration file. The container disk will be downloaded from the registry if it is not already present in the cluster.
spec:
  domain:
    devices:
      disks:
      - name: virtiocontainerdisk
        bootOrder: 2 1
        cdrom:
          bus: sata
  volumes:
  - containerDisk:
      image: container-native-virtualization/virtio-win
    name: virtiocontainerdisk
- 1 Container-native virtualization boots virtual machine disks in the order defined in the VirtualMachine configuration file. You can either define other disks for the virtual machine before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the virtual machine boots from the correct disk. If you specify the bootOrder for a disk, it must be specified for all disks in the configuration.
The disk is available once the virtual machine has started:
- If you add the container disk to a running virtual machine, use oc apply -f <vm.yaml> in the CLI or reboot the virtual machine for the changes to take effect.
- If the virtual machine is not running, use virtctl start <vm>.
After the virtual machine has started, the VirtIO drivers can be installed from the attached SATA CD drive.
6.8.4. Installing VirtIO drivers on an existing Windows virtual machine
Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine.
This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. Refer to the installation documentation for your version of Windows for specific installation steps.
Procedure
- Start the virtual machine and connect to a graphical console.
- Log in to a Windows user session.
Open Device Manager and expand Other devices to list any Unknown device.
- Open the Device Properties to identify the unknown device. Right-click the device and select Properties.
- Click the Details tab and select Hardware Ids in the Property list.
- Compare the Value for the Hardware Ids with the supported VirtIO drivers.
- Right-click the device and select Update Driver Software.
- Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
- Click Next to install the driver.
- Repeat this process for all the necessary VirtIO drivers.
- After the driver installs, click Close to close the window.
- Reboot the virtual machine to complete the driver installation.
6.8.5. Removing the VirtIO container disk from a virtual machine
After installing all required VirtIO drivers to the virtual machine, the container-native-virtualization/virtio-win container disk no longer needs to be attached to the virtual machine. Remove the container-native-virtualization/virtio-win container disk from the virtual machine configuration file.
Procedure
Edit the configuration file and remove the disk and the volume.
$ oc edit vm <vm-name>
spec:
  domain:
    devices:
      disks:
      - name: virtiocontainerdisk
        bootOrder: 2
        cdrom:
          bus: sata
  volumes:
  - containerDisk:
      image: container-native-virtualization/virtio-win
    name: virtiocontainerdisk
- Reboot the virtual machine for the changes to take effect.
6.9. Installing VirtIO driver on a new Windows virtual machine
6.9.1. Prerequisites
- Windows installation media accessible by the virtual machine, such as importing an ISO into a data volume and attaching it to the virtual machine.
6.9.2. Understanding VirtIO drivers
VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines to run in container-native virtualization. The supported drivers are available in the container-native-virtualization/virtio-win container disk of the Red Hat Container Catalog.
The container-native-virtualization/virtio-win container disk must be attached to the virtual machine as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation on the virtual machine or add them to an existing Windows installation.
After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the virtual machine.
See also: Installing VirtIO driver on an existing Windows virtual machine.
6.9.3. Supported VirtIO drivers for Microsoft Windows virtual machines
Driver name | Hardware ID | Description |
---|---|---|
viostor | VEN_1AF4&DEV_1001 | The block driver. Sometimes displays as an SCSI Controller in the Other devices group. |
viorng | VEN_1AF4&DEV_1005 | The entropy source driver. Sometimes displays as a PCI Device in the Other devices group. |
NetKVM | VEN_1AF4&DEV_1000 | The network driver. Sometimes displays as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. |
6.9.4. Adding VirtIO drivers container disk to a virtual machine
Container-native virtualization distributes VirtIO drivers for Microsoft Windows as a container disk, which is available from the Red Hat Container Catalog. To install these drivers to a Windows virtual machine, attach the container-native-virtualization/virtio-win container disk to the virtual machine as a SATA CD drive in the virtual machine configuration file.
Prerequisites
- Download the container-native-virtualization/virtio-win container disk from the Red Hat Container Catalog. This is not mandatory, because the container disk will be downloaded from the Red Hat registry if it is not already present in the cluster, but it can reduce installation time.
Procedure
Add the container-native-virtualization/virtio-win container disk as a cdrom disk in the Windows virtual machine configuration file. The container disk will be downloaded from the registry if it is not already present in the cluster.
spec:
  domain:
    devices:
      disks:
      - name: virtiocontainerdisk
        bootOrder: 2 1
        cdrom:
          bus: sata
  volumes:
  - containerDisk:
      image: container-native-virtualization/virtio-win
    name: virtiocontainerdisk
- 1 Container-native virtualization boots virtual machine disks in the order defined in the VirtualMachine configuration file. You can either define other disks for the virtual machine before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the virtual machine boots from the correct disk. If you specify the bootOrder for a disk, it must be specified for all disks in the configuration.
The disk is available once the virtual machine has started:
- If you add the container disk to a running virtual machine, use oc apply -f <vm.yaml> in the CLI or reboot the virtual machine for the changes to take effect.
- If the virtual machine is not running, use virtctl start <vm>.
After the virtual machine has started, the VirtIO drivers can be installed from the attached SATA CD drive.
6.9.5. Installing VirtIO drivers during Windows installation
Install the VirtIO drivers from the attached SATA CD drive during Windows installation.
This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. Refer to the documentation for the version of Windows that you are installing.
Procedure
- Start the virtual machine and connect to a graphical console.
- Begin the Windows installation process.
- Select the Advanced installation.
- The storage destination will not be recognized until the driver is loaded. Click Load driver.
- The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
- Repeat the previous two steps for all required drivers.
- Complete the Windows installation.
6.9.6. Removing the VirtIO container disk from a virtual machine
After installing all required VirtIO drivers to the virtual machine, the container-native-virtualization/virtio-win container disk no longer needs to be attached to the virtual machine. Remove the container-native-virtualization/virtio-win container disk from the virtual machine configuration file.
Procedure
Edit the configuration file and remove the disk and the volume.
$ oc edit vm <vm-name>
spec:
  domain:
    devices:
      disks:
      - name: virtiocontainerdisk
        bootOrder: 2
        cdrom:
          bus: sata
  volumes:
  - containerDisk:
      image: container-native-virtualization/virtio-win
    name: virtiocontainerdisk
- Reboot the virtual machine for the changes to take effect.
6.10. Advanced virtual machine management
6.10.1. Automating management tasks
You can automate container-native virtualization management tasks by using Red Hat Ansible Automation Platform. Learn the basics by using an Ansible Playbook to create a new virtual machine.
6.10.1.1. About Red Hat Ansible Automation
Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. Ansible includes support for container-native virtualization, and Ansible modules enable you to automate cluster management tasks such as template, persistent volume claim, and virtual machine operations.
Ansible provides a way to automate container-native virtualization management, which you can also accomplish by using the oc CLI tool or APIs. Ansible is unique because it allows you to integrate KubeVirt modules with other Ansible modules.
6.10.1.2. Automating virtual machine creation
You can use the kubevirt_vm Ansible Playbook to create virtual machines in your OpenShift Container Platform cluster using Red Hat Ansible Automation Platform.
Prerequisites
- Red Hat Ansible Engine version 2.8 or newer
Procedure
Edit an Ansible Playbook YAML file so that it includes the kubevirt_vm task:
kubevirt_vm:
  namespace:
  name:
  cpu_cores:
  memory:
  disks:
    - name:
      volume:
        containerDisk:
          image:
      disk:
        bus:
Note: This snippet only includes the kubevirt_vm portion of the playbook.
Edit the values to reflect the virtual machine you want to create, including the namespace, the number of cpu_cores, the memory, and the disks. For example:
kubevirt_vm:
  namespace: default
  name: vm1
  cpu_cores: 1
  memory: 64Mi
  disks:
    - name: containerdisk
      volume:
        containerDisk:
          image: kubevirt/cirros-container-disk-demo:latest
      disk:
        bus: virtio
If you want the virtual machine to boot immediately after creation, add state: running to the YAML file. For example:
kubevirt_vm:
  namespace: default
  name: vm1
  state: running 1
  cpu_cores: 1
- 1 Changing this value to state: absent deletes the virtual machine, if it already exists.
Run the ansible-playbook command, using your playbook’s file name as the only argument:
$ ansible-playbook create-vm.yaml
Review the output to determine if the play was successful:
(...)
TASK [Create my first VM] ************************************************************************
changed: [localhost]
PLAY RECAP ********************************************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
If you did not include state: running in your playbook file and you want to boot the VM now, edit the file so that it includes state: running and run the playbook again:
$ ansible-playbook create-vm.yaml
To verify that the virtual machine was created, try to access the VM console.
6.10.1.3. Example: Ansible Playbook for creating virtual machines
You can use the kubevirt_vm Ansible Playbook to automate virtual machine creation.
The following YAML file is an example of the kubevirt_vm playbook. It includes sample values that you must replace with your own information if you run the playbook.
---
- name: Ansible Playbook 1
  hosts: localhost
  connection: local
  tasks:
    - name: Create my first VM
      kubevirt_vm:
        namespace: default
        name: vm1
        cpu_cores: 1
        memory: 64Mi
        disks:
          - name: containerdisk
            volume:
              containerDisk:
                image: kubevirt/cirros-container-disk-demo:latest
            disk:
              bus: virtio
6.10.2. Configuring PXE booting for virtual machines
PXE booting, or network booting, is available in container-native virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host.
6.10.2.1. Prerequisites
- A Linux bridge must be connected.
- The PXE server must be connected to the same VLAN as the bridge.
6.10.2.2. Container-native virtualization networking glossary
Container-native virtualization provides advanced networking functionality by using custom resources and plug-ins.
The following terms are used throughout container-native virtualization documentation:
- Container Network Interface (CNI)
- a Cloud Native Computing Foundation project, focused on container network connectivity. Container-native virtualization uses CNI plug-ins to build upon the basic Kubernetes networking functionality.
- Multus
- a "meta" CNI plug-in that allows multiple CNIs to exist so that a Pod or virtual machine can use the interfaces it needs.
- Custom Resource Definition (CRD)
- a Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
- NetworkAttachmentDefinition
- a CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
- Preboot eXecution Environment (PXE)
- an interface that enables an administrator to boot a client machine from a server over the network. Network booting allows you to remotely load operating systems and other software onto the client.
6.10.2.3. PXE booting with a specified MAC address
As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition object for your PXE network. Then, reference the NetworkAttachmentDefinition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server.
Prerequisites
- A Linux bridge must be connected.
- The PXE server must be connected to the same VLAN as the bridge.
Procedure
Configure a PXE network on the cluster:
Create the NetworkAttachmentDefinition file for PXE network pxe-net-conf:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: pxe-net-conf
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "pxe-net-conf",
    "plugins": [
      {
        "type": "cnv-bridge",
        "bridge": "br1"
      },
      {
        "type": "cnv-tuning" 1
      }
    ]
  }'
- 1 The cnv-tuning plug-in provides support for custom MAC addresses.
Note: The virtual machine instance will be attached to the bridge br1 through an access port with the requested VLAN.
Create the NetworkAttachmentDefinition object by using the file you created in the previous step:
$ oc create -f pxe-net-conf.yaml
Edit the virtual machine instance configuration file to include the details of the interface and network.
Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically. However, note that at this time, MAC addresses assigned automatically are not persistent.
Ensure that bootOrder is set to 1 so that the interface boots first. In this example, the interface is connected to a network called <pxe-net>:
interfaces:
  - masquerade: {}
    name: default
  - bridge: {}
    name: pxe-net
    macAddress: de:00:00:00:00:de
    bootOrder: 1
Note: Boot order is global for interfaces and disks.
Assign a boot device number to the disk to ensure proper booting after operating system provisioning.
Set the disk bootOrder value to 2:
devices:
  disks:
    - disk:
        bus: virtio
      name: containerdisk
      bootOrder: 2
Specify that the network is connected to the previously created NetworkAttachmentDefinition. In this scenario, <pxe-net> is connected to the NetworkAttachmentDefinition called <pxe-net-conf>:
networks:
  - name: default
    pod: {}
  - name: pxe-net
    multus:
      networkName: pxe-net-conf
Create the virtual machine instance:
$ oc create -f vmi-pxe-boot.yaml
virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created
Wait for the virtual machine instance to run:
$ oc get vmi vmi-pxe-boot -o yaml | grep -i phase
phase: Running
View the virtual machine instance using VNC:
$ virtctl vnc vmi-pxe-boot
- Watch the boot screen to verify that the PXE boot is successful.
Log in to the virtual machine instance:
$ virtctl console vmi-pxe-boot
Verify the interfaces and MAC address on the virtual machine and that the interface connected to the bridge has the specified MAC address. In this case, we used eth1 for the PXE boot, without an IP address. The other interface, eth0, got an IP address from OpenShift Container Platform.
$ ip addr
...
3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
   link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff
6.10.2.4. Template: virtual machine instance configuration file for PXE booting
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  creationTimestamp: null
  labels:
    special: vmi-pxe-boot
  name: vmi-pxe-boot
spec:
  domain:
    devices:
      disks:
      - disk:
          bus: virtio
        name: containerdisk
        bootOrder: 2
      - disk:
          bus: virtio
        name: cloudinitdisk
      interfaces:
      - masquerade: {}
        name: default
      - bridge: {}
        name: pxe-net
        macAddress: de:00:00:00:00:de
        bootOrder: 1
    machine:
      type: ""
    resources:
      requests:
        memory: 1024M
  networks:
  - name: default
    pod: {}
  - multus:
      networkName: pxe-net-conf
    name: pxe-net
  terminationGracePeriodSeconds: 0
  volumes:
  - name: containerdisk
    containerDisk:
      image: kubevirt/fedora-cloud-container-disk-demo
  - cloudInitNoCloud:
      userData: |
        #!/bin/bash
        echo "fedora" | passwd fedora --stdin
    name: cloudinitdisk
status: {}
6.10.3. Managing guest memory
If you want to adjust guest memory settings to suit a specific use case, you can do so by editing the guest’s YAML configuration file. Container-native virtualization allows you to configure guest memory overcommitment and disable guest memory overhead accounting.
Both of these procedures carry some degree of risk. Proceed only if you are an experienced administrator.
6.10.3.1. Configuring guest memory overcommitment
If your virtual workload requires more memory than available, you can use memory overcommitment to allocate all or most of the host’s memory to your virtual machine instances. Enabling memory overcommitment means you can maximize resources that are normally reserved for the host.
For example, if the host has 32 GB RAM, you can use memory overcommitment to fit 8 virtual machines with 4 GB RAM each. This allocation works under the assumption that the virtual machines will not use all of their memory at the same time.
Procedure
To explicitly tell the virtual machine instance that it has more memory available than was requested from the cluster, edit the virtual machine configuration file and set spec.domain.memory.guest to a higher value than spec.domain.resources.requests.memory. This process is called memory overcommitment.
In this example, 1024M is requested from the cluster, but the virtual machine instance is told that it has 2048M available. As long as there is enough free memory available on the node, the virtual machine instance will consume up to 2048M.
kind: VirtualMachine
spec:
  template:
    domain:
      resources:
        requests:
          memory: 1024M
      memory:
        guest: 2048M
Note: The same eviction rules as those for pods apply to the virtual machine instance if the node is under memory pressure.
Create the virtual machine:
$ oc create -f <file name>.yaml
6.10.3.2. Disabling guest memory overhead accounting
This procedure is only useful in certain use-cases and must only be attempted by advanced users.
A small amount of memory is requested by each virtual machine instance in addition to the amount that you request. This additional memory is used for the infrastructure that wraps each VirtualMachineInstance process.
Though it is not usually advisable, it is possible to increase the virtual machine instance density on the node by disabling guest memory overhead accounting.
Procedure
To disable guest memory overhead accounting, edit the YAML configuration file and set the overcommitGuestOverhead value to true. This parameter is disabled by default.
kind: VirtualMachine
spec:
  template:
    domain:
      resources:
        overcommitGuestOverhead: true
        requests:
          memory: 1024M
Note: If overcommitGuestOverhead is enabled, it adds the guest overhead to memory limits, if present.
Create the virtual machine:
$ oc create -f <file name>.yaml
6.10.4. Enabling dedicated resources for virtual machines
Virtual machines can have resources of a node, such as CPU, dedicated to them in order to improve performance.
6.10.4.1. About dedicated resources
When you enable dedicated resources for your virtual machine, your virtual machine’s workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions.
6.10.4.2. Prerequisites
- The CPU Manager must be configured on the node. Verify that the node has the cpumanager=true label before scheduling virtual machine workloads.
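For example, you can list the nodes that already carry the label before scheduling the workload. This is an optional check using the label from the prerequisite above:

$ oc get nodes -l cpumanager=true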
6.10.4.3. Enabling dedicated resources for a virtual machine
You can enable dedicated resources for a virtual machine in the Virtual Machine Overview page of the web console.
Procedure
-
Click Workloads
Virtual Machines from the side menu. - Select a virtual machine to open the Virtual Machine Overview page.
- Click the Details tab.
- Click the pencil icon to the right of the Dedicated Resources field to open the Dedicated Resources window.
- Select Schedule this workload with dedicated resources (guaranteed policy).
- Click Save.
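If you prefer to enable dedicated resources from the CLI instead of the web console, the same setting can be expressed in the virtual machine manifest. The following is a minimal sketch that assumes the KubeVirt dedicatedCpuPlacement field; verify the field name against your installed version before using it:

kind: VirtualMachine
spec:
  template:
    spec:
      domain:
        cpu:
          cores: 2
          dedicatedCpuPlacement: true   # pins the workload to dedicated CPUs (guaranteed policy)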
6.11. Importing virtual machines
6.11.1. TLS certificates for DataVolume imports
6.11.1.1. Adding TLS certificates for authenticating DataVolume imports
TLS certificates for registry or HTTPS endpoints must be added to a ConfigMap in order to import data from these sources. This ConfigMap must be present in the namespace of the destination DataVolume.
Create the ConfigMap by referencing the relative file path for the TLS certificate.
Procedure
Ensure you are in the correct namespace. The ConfigMap can only be referenced by DataVolumes if it is in the same namespace.
$ oc get ns
Create the ConfigMap:
$ oc create configmap <configmap-name> --from-file=</path/to/file/ca.pem>
6.11.1.2. Example: ConfigMap created from a TLS certificate
The following example shows a ConfigMap created from the ca.pem TLS certificate.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tls-certs
data:
  ca.pem: |
    -----BEGIN CERTIFICATE-----
    ... <base64 encoded cert> ...
    -----END CERTIFICATE-----
6.11.2. Importing virtual machine images with DataVolumes
You can import an existing virtual machine image into your OpenShift Container Platform cluster. Container-native virtualization uses DataVolumes to automate the import of data and the creation of an underlying PersistentVolumeClaim (PVC).
When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system(s) in the virtual machine might need to be expanded.
The resizing procedure varies based on the operating system installed on the VM. Refer to the operating system documentation for details.
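As an illustration only, on many Linux guests with a single root partition you might grow the partition and file system after the import. The device name and file system type below are assumptions and vary by image:

$ growpart /dev/vda 1      # extend the first partition on the root disk
$ xfs_growfs /             # grow an XFS root file system (use resize2fs for ext4)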
6.11.2.1. Prerequisites
- If the endpoint requires a TLS certificate, the certificate must be included in a ConfigMap in the same namespace as the DataVolume and referenced in the DataVolume configuration.
- You may need to define a StorageClass or prepare CDI scratch space for this operation to complete successfully.
6.11.2.2. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload
---|---|---|---|---|---
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2*
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW*
Archive+ | ✓ TAR | ✓ TAR | ✓ TAR | □ TAR | □ TAR
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
+ Archive does not support block mode DVs
6.11.2.3. About DataVolumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
6.11.2.4. Importing a virtual machine image into an object with DataVolumes
To create a virtual machine from an imported image, specify the image location in the VirtualMachine
configuration file before you create the virtual machine.
Prerequisites
- Install the OpenShift CLI (oc).
- A virtual machine disk image, in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
- An HTTP endpoint where the image is hosted, along with any authentication credentials needed to access the data source.
- At least one available PersistentVolume.
Procedure
Identify an HTTP file server that hosts the virtual disk image that you want to import. You need the complete URL to the image in the correct format.

If your data source requires authentication credentials, edit the endpoint-secret.yaml file, and apply the updated configuration to the cluster:

apiVersion: v1
kind: Secret
metadata:
  name: <endpoint-secret>
  labels:
    app: containerized-data-importer
type: Opaque
data:
  accessKeyId: "" 1
  secretKey: "" 2
$ oc apply -f endpoint-secret.yaml
Edit the virtual machine configuration file, specifying the data source for the image you want to import. In this example, a Fedora image is imported:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  creationTimestamp: null
  labels:
    kubevirt.io/vm: vm-fedora-datavolume
  name: vm-fedora-datavolume
spec:
  dataVolumeTemplates:
  - metadata:
      creationTimestamp: null
      name: fedora-dv
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
        storageClassName: local
      source:
        http:
          url: https://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2 1
          secretRef: "" 2
          certConfigMap: "" 3
    status: {}
  running: false
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vm: vm-fedora-datavolume
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: datavolumedisk1
        machine:
          type: ""
        resources:
          requests:
            memory: 64M
      terminationGracePeriodSeconds: 0
      volumes:
      - dataVolume:
          name: fedora-dv
        name: datavolumedisk1
status: {}
- 1
- The
HTTP
source of the image you want to import. - 2
- The
secretRef
parameter is optional. - 3
- The
certConfigMap
is required for communicating with servers that use self-signed certificates or certificates not signed by the system CA bundle. The referenced ConfigMap must be in the same namespace as the DataVolume.
Create the virtual machine:
$ oc create -f vm-<name>-datavolume.yaml
Note: The oc create command creates the DataVolume and the virtual machine. The CDI controller creates an underlying PVC with the correct annotation, and the import process begins. When the import completes, the DataVolume status changes to Succeeded, and the virtual machine is allowed to start.

DataVolume provisioning happens in the background, so there is no need to monitor it. You can start the virtual machine; it will not run until the import is complete.
Optional verification steps
-
Run
oc get pods
and look for the importer Pod. This Pod downloads the image from the specified URL and stores it on the provisioned PV. Monitor the DataVolume status until it shows
Succeeded
.$ oc describe dv <data-label> 1
- 1
- The data label for the DataVolume specified in the virtual machine configuration file.
To verify that provisioning is complete and that the VMI has started, try accessing its serial console:
$ virtctl console <vm-fedora-datavolume>
6.11.2.5. Template: DataVolume virtual machine configuration file
example-dv-vm.yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
labels:
kubevirt.io/vm: example-vm
name: example-vm
spec:
dataVolumeTemplates:
- metadata:
name: example-dv
spec:
pvc:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1G
source:
http:
url: "" 1
running: false
template:
metadata:
labels:
kubevirt.io/vm: example-vm
spec:
domain:
cpu:
cores: 1
devices:
disks:
- disk:
bus: virtio
name: example-dv-disk
machine:
type: q35
resources:
requests:
memory: 1G
terminationGracePeriodSeconds: 0
volumes:
- dataVolume:
name: example-dv
name: example-dv-disk
- 1
- The
HTTP
source of the image you want to import, if applicable.
6.11.2.6. Template: DataVolume import configuration file
example-import-dv.yaml
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: "example-import-dv"
spec:
  source:
    http:
      url: "" 1
      secretRef: "" 2
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: "1G"
6.11.3. Importing virtual machine images to block storage with DataVolumes
You can import an existing virtual machine image into your OpenShift Container Platform cluster. Container-native virtualization uses DataVolumes to automate the import of data and the creation of an underlying PersistentVolumeClaim (PVC).
When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file system(s) in the virtual machine might need to be expanded.
The resizing procedure varies based on the operating system that is installed on the virtual machine. Refer to the operating system documentation for details.
6.11.3.1. Prerequisites
- If you require scratch space according to the CDI supported operations matrix, you must first define a StorageClass or prepare CDI scratch space for this operation to complete successfully.
6.11.3.2. About DataVolumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
6.11.3.3. About block PersistentVolumes
A block PersistentVolume (PV) is a PV that is backed by a raw block device. These volumes do not have a filesystem and can provide performance benefits for virtual machines that either write to the disk directly or implement their own storage service.
Raw block volumes are provisioned by specifying volumeMode: Block
in the PV and PersistentVolumeClaim (PVC) specification.
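For reference, a claim that requests raw block storage sets the same volumeMode. This is a minimal sketch of such a PVC; the name, size, and storage class are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <block-pvc>
spec:
  volumeMode: Block           # request a raw block device instead of a filesystem volume
  storageClassName: local
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi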
6.11.3.4. Creating a local block PersistentVolume
Create a local block PersistentVolume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV configuration as a Block
volume and use it as a block device for a virtual machine image.
Procedure
-
Log in as
root
to the node on which to create the local PV. This procedure usesnode01
for its examples. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file
loop10
with a size of 2Gb (20 100Mb blocks):$ dd if=/dev/zero of=<loop10> bs=100M count=20
Mount the loop10 file as a loop device.

$ losetup </dev/loop10> <loop10> 1 2
Create a PersistentVolume configuration that references the mounted loop device.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: <local-block-pv10>
  annotations:
spec:
  local:
    path: </dev/loop10> 1
  capacity:
    storage: <2Gi>
  volumeMode: Block 2
  storageClassName: local 3
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <node01> 4
Create the block PV.
# oc create -f <local-block-pv10.yaml> 1
- 1
- The filename of the PersistentVolume created in the previous step.
6.11.3.5. Importing a virtual machine image to a block PersistentVolume using DataVolumes
You can import an existing virtual machine image into your OpenShift Container Platform cluster. Container-native virtualization uses DataVolumes to automate the import of data and the creation of an underlying PersistentVolumeClaim (PVC). You can then reference the DataVolume in a virtual machine configuration.
Prerequisites
-
A virtual machine disk image, in RAW, ISO, or QCOW2 format, optionally compressed by using
xz
orgz
. -
An
HTTP
ors3
endpoint where the image is hosted, along with any authentication credentials needed to access the data source - At least one available block PV.
Procedure
If your data source requires authentication credentials, edit the
endpoint-secret.yaml
file, and apply the updated configuration to the cluster.

Edit the endpoint-secret.yaml file with your preferred text editor:

apiVersion: v1
kind: Secret
metadata:
  name: <endpoint-secret>
  labels:
    app: containerized-data-importer
type: Opaque
data:
  accessKeyId: "" 1
  secretKey: "" 2
Update the secret:
$ oc apply -f endpoint-secret.yaml
Create a DataVolume configuration that specifies the data source for the image you want to import and volumeMode: Block so that an available block PV is used.

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: <import-pv-datavolume> 1
spec:
  storageClassName: local 2
  source:
    http:
      url: <http://download.fedoraproject.org/pub/fedora/linux/releases/28/Cloud/x86_64/images/Fedora-Cloud-Base-28-1.1.x86_64.qcow2> 3
      secretRef: <endpoint-secret> 4
  pvc:
    volumeMode: Block 5
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: <2Gi>
Create the DataVolume to import the virtual machine image.
$ oc create -f <import-pv-datavolume.yaml> 1
- 1
- The file name of the DataVolume created in the previous step.
6.11.3.6. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload
---|---|---|---|---|---
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2*
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW*
Archive+ | ✓ TAR | ✓ TAR | ✓ TAR | □ TAR | □ TAR
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
+ Archive does not support block mode DVs
6.11.4. Importing a VMware virtual machine or template
You can import a single VMware virtual machine or template into your OpenShift Container Platform cluster.
If you import a VMware template, the wizard creates a virtual machine based on the template.
Importing a VMware virtual machine or template is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
The import process uses the VMware Virtual Disk Development Kit (VDDK) to copy the VMware virtual disk. You can download the VDDK SDK, build a VDDK image, upload the image to your image registry, and add it to the v2v-vmware
ConfigMap.
You can import the VMware VM with the virtual machine wizard and then update the virtual machine’s network name.
6.11.4.1. Configuring an image registry for the VDDK image
You can configure either an internal OpenShift Container Platform image registry or a secure external image registry for the VDDK image.
Storing the VDDK image in a public registry might violate the terms of the VMware license.
6.11.4.1.1. Configuring an internal image registry
You can configure the internal OpenShift Container Platform image registry on bare metal by updating the Image Registry Operator configuration.
6.11.4.1.1.1. Changing the image registry’s management state
To start the image registry, you must change the Image Registry Operator configuration’s managementState
from Removed
to Managed
.
Procedure
Change the managementState in the Image Registry Operator configuration from Removed to Managed. For example:

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'
6.11.4.1.1.2. Configuring registry storage for bare metal
As a cluster administrator, you must configure your registry to use storage after installation.
Prerequisites
- Cluster administrator permissions.
- A cluster on bare metal.
Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage.
Important: OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required.

- Must have 100Gi capacity.
Procedure
To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource.

Note: When using shared storage, review your security settings to prevent outside access.
Verify that you do not have a registry pod:
$ oc get pod -n openshift-image-registry
Note: If the storage type is emptyDir, the replica number cannot be greater than 1.

Check the registry configuration:
$ oc edit configs.imageregistry.operator.openshift.io
Example output
storage:
  pvc:
    claim:
Leave the
claim
field blank to allow the automatic creation of animage-registry-storage
PVC.Check the
clusteroperator
status:$ oc get clusteroperator image-registry
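Optionally, you can also confirm that the image-registry-storage PVC was created. This check is illustrative and not part of the original procedure:

$ oc get pvc -n openshift-image-registry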
6.11.4.1.2. Configuring access to an internal image registry
You can access the OpenShift Container Platform internal registry directly, from within the cluster, or externally, by exposing the registry with a route.
6.11.4.1.2.1. Accessing registry directly from the cluster
You can access the registry from inside the cluster.
Procedure
Access the registry from the cluster by using internal routes:
Access the node by getting the node’s address:
$ oc get nodes
$ oc debug nodes/<node_address>
In order to have access to tools such as
oc
andpodman
on the node, run the following command:sh-4.2# chroot /host
Log in to the container image registry by using your access token:
sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443
sh-4.2# podman login -u kubeadmin -p $(oc whoami -t) image-registry.openshift-image-registry.svc:5000
You should see a message confirming login, such as:
Login Succeeded!
Note: You can pass any value for the user name; the token contains all necessary information. Passing a user name that contains colons will result in a login failure.
Since the Image Registry Operator creates the route, it will likely be similar to
default-route-openshift-image-registry.<cluster_name>
.Perform
podman pull
andpodman push
operations against your registry:ImportantYou can pull arbitrary images, but if you have the system:registry role added, you can only push images to the registry in your project.
In the following examples, use:
Component | Value
---|---
<registry_ip> | 172.30.124.220
<port> | 5000
<project> | openshift
<image> | image
<tag> | omitted (defaults to latest)

Pull an arbitrary image:
$ podman pull name.io/image
Tag the new image with the form
<registry_ip>:<port>/<project>/<image>
. The project name must appear in this pull specification for OpenShift Container Platform to correctly place and later access the image in the registry:$ podman tag name.io/image image-registry.openshift-image-registry.svc:5000/openshift/image
Note: You must have the system:image-builder role for the specified project, which allows the user to write or push an image. Otherwise, the podman push in the next step will fail. To test, you can create a new project to push the image.

Push the newly-tagged image to your registry:
$ podman push image-registry.openshift-image-registry.svc:5000/openshift/image
6.11.4.1.2.2. Exposing a secure registry manually
Instead of logging in to the OpenShift Container Platform registry from within the cluster, you can gain external access to it by exposing it with a route. This allows you to log in to the registry from outside the cluster using the route address, and to tag and push images using the route host.
Prerequisites:
The following prerequisites are automatically performed:
- Deploy the Registry Operator.
- Deploy the Ingress Operator.
Procedure
You can expose the route by using the DefaultRoute parameter in the configs.imageregistry.operator.openshift.io resource or by using custom routes.
To expose the registry using DefaultRoute
:
Set DefaultRoute to True:

$ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
Log in with
podman
$ HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
$ podman login -u $(oc whoami) -p $(oc whoami -t) --tls-verify=false $HOST 1
- 1
--tls-verify=false
is needed if the cluster’s default certificate for routes is untrusted. You can set a custom, trusted certificate as the default certificate with the Ingress Operator.
To expose the registry using custom routes:
Create a secret with your route’s TLS keys:
$ oc create secret tls public-route-tls \
    -n openshift-image-registry \
    --cert=</path/to/tls.crt> \
    --key=</path/to/tls.key>
This step is optional. If you do not create a secret, the route uses the default TLS configuration from the Ingress Operator.
On the Registry Operator:
spec:
  routes:
  - name: public-routes
    hostname: myregistry.mycorp.organization
    secretName: public-route-tls
...
Note: Only set secretName if you are providing a custom TLS configuration for the registry’s route.
6.11.4.1.3. Configuring access to an external image registry
If you use an external image registry for the VDDK image, you can add the external image registry’s certificate authorities to the OpenShift Container Platform cluster.
Optionally, you can create a pull secret from your Docker credentials and add it to your service account.
6.11.4.1.3.1. Adding certificate authorities to the cluster
You can add certificate authorities (CAs) to the cluster for use when pushing and pulling images via the following procedure.
Prerequisites
- You must have cluster administrator privileges.
-
You must have access to the registry’s public certificates, usually a
hostname/ca.crt
file located in the/etc/docker/certs.d/
directory.
Procedure
Create a ConfigMap in the openshift-config namespace containing the trusted certificates for the registries that use self-signed certificates. For each CA file, ensure the key in the ConfigMap is the registry’s hostname in the hostname[..port] format:

$ oc create configmap registry-cas -n openshift-config \
    --from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt \
    --from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt
Update the cluster image configuration:
$ oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}' --type=merge
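If you want to confirm that the trusted CA ConfigMap is now referenced by the cluster image configuration, you can inspect the resource. This optional check is not part of the original procedure:

$ oc get image.config.openshift.io/cluster -o yaml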
6.11.4.1.3.2. Allowing pods to reference images from other secured registries
The .dockercfg file, or the $HOME/.docker/config.json file for newer Docker clients, is a Docker credentials file that stores your authentication information if you have previously logged in to a secured or insecure registry.
To pull a secured container image that is not from OpenShift Container Platform’s internal registry, you must create a pull secret from your Docker credentials and add it to your service account.
Procedure
If you already have a
.dockercfg
file for the secured registry, you can create a secret from that file by running:$ oc create secret generic <pull_secret_name> \ --from-file=.dockercfg=<path/to/.dockercfg> \ --type=kubernetes.io/dockercfg
Or if you have a
$HOME/.docker/config.json
file:$ oc create secret generic <pull_secret_name> \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson
If you do not already have a Docker credentials file for the secured registry, you can create a secret by running:
$ oc create secret docker-registry <pull_secret_name> \
    --docker-server=<registry_server> \
    --docker-username=<user_name> \
    --docker-password=<password> \
    --docker-email=<email>
To use a secret for pulling images for pods, you must add the secret to your service account. The name of the service account in this example should match the name of the service account the pod uses.
default
is the default service account:

$ oc secrets link default <pull_secret_name> --for=pull
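To confirm that the pull secret is linked, you can describe the service account and look for the secret under the image pull secrets. This check is optional and the exact output format depends on your cluster version:

$ oc describe serviceaccount default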
6.11.4.2. Creating and using a VDDK image
You can download the VMware Virtual Disk Development Kit (VDDK), build a VDDK image, and push the VDDK image to your image registry. You then add the VDDK image to the v2v-vmware
ConfigMap.
Prerequisites
- You must have access to an OpenShift Container Platform internal image registry or a secure external registry.
Procedure
Create and navigate to a temporary directory:
$ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
- In a browser, navigate to VMware code and click SDKs.
- Under Compute Virtualization, click Virtual Disk Development Kit (VDDK).
- Select the latest VDDK release, click Download, and then save the VDDK archive in the temporary directory.
Extract the VDDK archive:
$ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
Create a Dockerfile:

$ cat > Dockerfile <<EOF
FROM busybox:latest
COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
RUN mkdir -p /opt
ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
EOF
Build the image:
$ podman build . -t <registry_route_or_server_path>/vddk:<tag> 1
- 1
- Specify your image registry:
-
For an internal OpenShift Container Platform registry, use the internal registry route, for example,
image-registry.openshift-image-registry.svc:5000/openshift/vddk:<tag>
. -
For an external registry, specify the server name, path, and tag, for example,
server.example.com:5000/vddk:<tag>
.
-
For an internal OpenShift Container Platform registry, use the internal registry route, for example,
Push the image to the registry:
$ podman push <registry_route_or_server_path>/vddk:<tag>
- Ensure that the image is accessible to your OpenShift Container Platform environment.
Edit the
v2v-vmware
ConfigMap in the openshift-cnv project:$ oc edit configmap v2v-vmware -n openshift-cnv
Add the vddk-init-image parameter to the data stanza:

...
data:
  vddk-init-image: <registry_route_or_server_path>/vddk:<tag>
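You can verify that the parameter was saved by reading the ConfigMap back. This is an optional check:

$ oc get configmap v2v-vmware -n openshift-cnv -o yaml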
6.11.4.3. Importing a VMware virtual machine or template with the virtual machine wizard
You can import a VMware virtual machine or template using the virtual machine wizard.
Prerequisites
-
You must create a VDDK image, push it to an image registry, and add it to the
v2v-vmware
ConfigMap.

- There must be sufficient storage space for the imported disk.
Warning: If you try to import a virtual machine whose disk size is larger than the available storage space, the operation cannot complete. You will not be able to import another virtual machine or to clean up the storage because there are insufficient resources to support object deletion. To resolve this situation, you must add more object storage devices to the storage backend.
- The VMware virtual machine must be powered off.
Procedure
-
In the container-native virtualization web console, click Workloads
Virtual Machines. - Click Create Virtual Machine and select Import with Wizard.
In the General screen, perform the following steps:
- Select VMware from the Provider list.
Select Connect to New Instance or a saved vCenter instance from the vCenter instance list.
- If you select Connect to New Instance, fill in the vCenter hostname, Username, and Password.
- If you select a saved vCenter instance, the wizard connects to the vCenter instance using the saved credentials.
- Select a virtual machine or a template to import from the VM or Template to Import list.
- Select an operating system.
Select an existing flavor or Custom from the Flavor list.
If you select Custom, specify the Memory (GB) and the CPUs.
- Select a Workload Profile.
- If the virtual machine name is already being used by another virtual machine in the namespace, update the name.
- Click Next.
In the Networking screen, perform the following steps:
- Click the Options menu of a network interface and select Edit.
Enter a valid network interface name.
The name can contain lowercase letters (
a-z
), numbers (0-9
), and hyphens (-
), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.
), or special characters.- Select the network interface model.
- Select the network definition.
- Select the network interface type.
- Enter the MAC address.
- Click Save and then click Next.
In the Storage screen, perform the following steps:
- Click the Options menu of a disk and select Edit.
Enter a valid name.
The name can contain lowercase letters (
a-z
), numbers (0-9
), and hyphens (-
), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.
), or special characters.- Select the interface type.
Select the storage class.
If you do not select a storage class, container-native virtualization uses the default storage class to create the virtual machine.
- Click Save and then click Next.
-
In the Advanced screen, enter the Hostname and Authorized SSH Keys if you are using
cloud-init
. - Click Next.
Review your settings and click Create Virtual Machine.
A Successfully created virtual machine message and a list of resources created for the virtual machine are displayed. The powered off virtual machine appears in Workloads
Virtual Machines. Click See virtual machine details to view the dashboard of the imported virtual machine.
If an error occurs, perform the following steps:
-
Click Workloads
Pods. -
Click the Conversion Pod, for example,
kubevirt-v2v-conversion-rhel7-mini-1-27b9h
. - Click Logs and check for error messages.
-
Click Workloads
See virtual machine wizard fields for more information on the wizard fields.
6.11.4.4. Updating the imported VMware virtual machine’s NIC name
You must update the NIC name of a virtual machine imported from VMware to conform to container-native virtualization naming conventions.
Procedure
- Log in to the virtual machine.
-
Go to the
/etc/sysconfig/network-scripts
directory. Change the network configuration file name to
ifcfg-eth0
:$ mv vmnic0 ifcfg-eth0 1
- 1
- Additional network configuration files are numbered sequentially, for example,
ifcfg-eth1
,ifcfg-eth2
.
Update the NAME and DEVICE parameters in the network configuration file:

NAME=eth0
DEVICE=eth0
Restart the network:
$ systemctl restart network
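As an optional sanity check that is not part of the original procedure, you can confirm that the renamed interface came up with an address:

$ ip addr show eth0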
6.11.4.5. Troubleshooting a VMware virtual machine import
If an imported virtual machine’s status is Import error: (VMware)
, you can check the Conversion Pod log for errors:
Obtain the Conversion Pod name:
$ oc get pods -n <project> | grep v2v 1
kubevirt-v2v-conversion-f66f7d-zqkz7   1/1   Running   0   4h49m
- 1
- Specify the project of your imported virtual machine.
Obtain the Conversion Pod log:
$ oc logs kubevirt-v2v-conversion-f66f7d-zqkz7 -f -n <project>
6.11.4.5.1. Error messages
If an imported virtual machine event displays the error message,
Readiness probe failed
, the following error message appears in the Conversion Pod log:INFO - have error: ('virt-v2v error: internal error: invalid argument: libvirt domain ‘v2v_migration_vm_1’ is running or paused. It must be shut down in order to perform virt-v2v conversion',)"
You must ensure that the virtual machine is shut down before importing it.
6.11.4.5.2. Known issues
Your OpenShift Container Platform environment must have sufficient storage space for the imported disk.
If you try to import a virtual machine whose disk size is larger than the available storage space, the operation cannot complete. You will not be able to import another virtual machine or to clean up the storage because there are insufficient resources to support object deletion. To resolve this situation, you must add more object storage devices to the storage backend. (BZ#1721504)
- If you use NFS-backed storage for the 2 GB disk that is attached to the Conversion Pod, you must configure a hostPath volume. (BZ#1814611)
6.11.4.6. Virtual machine wizard fields
6.11.4.6.1. Virtual machine wizard fields
Name | Parameter | Description
---|---|---
Template | | Template from which to create the virtual machine. Selecting a template will automatically complete other fields.
Source | PXE | Provision virtual machine from PXE menu. Requires a PXE-capable NIC in the cluster.
 | URL | Provision virtual machine from an image available from an HTTP or S3 endpoint.
 | Container | Provision virtual machine from a bootable operating system container located in a registry accessible from the cluster. Example:
 | Disk | Provision virtual machine from a disk.
Operating System | | The primary operating system that is selected for the virtual machine.
Flavor | small, medium, large, tiny, Custom | Presets that determine the amount of CPU and memory allocated to the virtual machine. The presets displayed for Flavor are determined by the operating system.
Memory | | Size in GiB of the memory allocated to the virtual machine.
CPUs | | The amount of CPU allocated to the virtual machine.
Workload Profile | High Performance | A virtual machine configuration that is optimized for high-performance workloads.
 | Server | A profile optimized to run server workloads.
 | Desktop | A virtual machine configuration for use on a desktop.
Name | | The name can contain lowercase letters (a-z), numbers (0-9), and hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters.
Description | | Optional description field.
Start virtual machine on creation | | Select to automatically start the virtual machine upon creation.
6.11.4.6.2. Cloud-init fields
Name | Description |
---|---|
Hostname | Sets a specific host name for the virtual machine. |
Authenticated SSH Keys | The user’s public key that is copied to ~/.ssh/authorized_keys on the virtual machine. |
Use custom script | Replaces other options with a field in which you paste a custom cloud-init script. |
6.11.4.6.3. Networking fields
Name | Description |
---|---|
Name | Name for the Network Interface Card. |
Model | Indicates the model of the Network Interface Card. Supported values are e1000, e1000e, ne2k_pci, pcnet, rtl8139, and virtIO. |
Network | List of available NetworkAttachmentDefinition objects. |
Type | List of available binding methods. For the default Pod network, masquerade is the only recommended binding method. |
MAC Address | MAC address for the Network Interface Card. If a MAC address is not specified, an ephemeral address is generated for the session. |
6.11.4.6.4. Storage fields
Name | Description |
---|---|
Source | Select a blank disk for the virtual machine or choose from the options available: URL, Container, Attach Cloned Disk, or Attach Disk. To select an existing disk and attach it to the virtual machine, choose Attach Cloned Disk or Attach Disk from a list of available PersistentVolumeClaims (PVCs). |
Name | Name of the disk. The name can contain lowercase letters (a-z), numbers (0-9), and hyphens (-), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, periods (.), or special characters. |
Size (GiB) | Size, in GiB, of the disk. |
Interface | Type of disk device. Supported interfaces are virtIO, SATA, and SCSI. |
Storage class |
The |
6.12. Cloning virtual machines
6.12.1. Enabling user permissions to clone DataVolumes across namespaces
The isolating nature of namespaces means that users cannot by default clone resources between namespaces.
To enable a user to clone a virtual machine to another namespace, a user with the cluster-admin
role must create a new ClusterRole. Bind this ClusterRole to a user to enable them to clone virtual machines to the destination namespace.
6.12.1.1. Prerequisites
-
Only a user with the
cluster-admin
role can create ClusterRoles.
6.12.1.2. About DataVolumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
6.12.1.3. Creating RBAC resources for cloning DataVolumes
Create a new ClusterRole that enables permissions for all actions for the datavolumes
resource.
Procedure
Create a ClusterRole manifest:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <datavolume-cloner> 1
rules:
- apiGroups: ["cdi.kubevirt.io"]
  resources: ["datavolumes/source"]
  verbs: ["*"]
- 1
- Unique name for the ClusterRole.
Create the ClusterRole in the cluster:
$ oc create -f <datavolume-cloner.yaml> 1
- 1
- The file name of the ClusterRole manifest created in the previous step.
Create a RoleBinding manifest that applies to both the source and destination namespaces and references the ClusterRole created in the previous step.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <allow-clone-to-user> 1
  namespace: <Source namespace> 2
subjects:
- kind: ServiceAccount
  name: default
  namespace: <Destination namespace> 3
roleRef:
  kind: ClusterRole
  name: datavolume-cloner 4
  apiGroup: rbac.authorization.k8s.io
Create the RoleBinding in the cluster:
$ oc create -f <datavolume-cloner.yaml> 1
- 1
- The file name of the RoleBinding manifest created in the previous step.
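To confirm that the RoleBinding was created with the expected subjects and role, you can describe it in the source namespace. This optional check uses the placeholder names from the examples above:

$ oc describe rolebinding <allow-clone-to-user> -n <Source namespace>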
6.12.2. Cloning a virtual machine disk into a new DataVolume
You can clone the PersistentVolumeClaim (PVC) of a virtual machine disk into a new DataVolume by referencing the source PVC in your DataVolume configuration file.
6.12.2.1. Prerequisites
- You may need to define a StorageClass or prepare CDI scratch space for this operation to complete successfully. The CDI supported operations matrix shows the conditions that require scratch space.
- Users need additional permissions to clone the PVC of a virtual machine disk into another namespace.
6.12.2.2. About DataVolumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
6.12.2.3. Cloning the PersistentVolumeClaim of a virtual machine disk into a new DataVolume
You can clone a PersistentVolumeClaim (PVC) of an existing virtual machine disk into a new DataVolume. The new DataVolume can then be used for a new virtual machine.
When a DataVolume is created independently of a virtual machine, the lifecycle of the DataVolume is independent of the virtual machine. If the virtual machine is deleted, neither the DataVolume nor its associated PVC is deleted.
Prerequisites
- Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it.
-
Install the OpenShift CLI (
oc
).
Procedure
- Examine the virtual machine disk you want to clone to identify the name and namespace of the associated PVC.
Create a YAML file for a DataVolume object that specifies the name of the new DataVolume, the name and namespace of the source PVC, and the size of the new DataVolume.
For example:
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: <cloner-datavolume> 1
spec:
  source:
    pvc:
      namespace: "<source-namespace>" 2
      name: "<my-favorite-vm-disk>" 3
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: <2Gi> 4
Start cloning the PVC by creating the DataVolume:
$ oc create -f <cloner-datavolume>.yaml
Note: DataVolumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new DataVolume while the PVC is being cloned.
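If you want to watch the clone progress, you can monitor the DataVolume phase. This is an optional check that uses the example name from above:

$ oc get dv <cloner-datavolume> -w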
6.12.2.4. Template: DataVolume clone configuration file
example-clone-dv.yaml
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: "example-clone-dv"
spec:
  source:
    pvc:
      name: source-pvc
      namespace: example-ns
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: "1G"
6.12.2.5. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload
---|---|---|---|---|---
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2*
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW*
Archive+ | ✓ TAR | ✓ TAR | ✓ TAR | □ TAR | □ TAR
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
+ Archive does not support block mode DVs
6.12.3. Cloning a virtual machine by using a DataVolumeTemplate
You can create a new virtual machine by cloning the PersistentVolumeClaim (PVC) of an existing VM. By including a dataVolumeTemplate
in your virtual machine configuration file, you create a new DataVolume from the original PVC.
6.12.3.1. Prerequisites
- You may need to define a StorageClass or prepare CDI scratch space for this operation to complete successfully. The CDI supported operations matrix shows the conditions that require scratch space.
- Users need additional permissions to clone the PVC of a virtual machine disk into another namespace.
6.12.3.2. About DataVolumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
6.12.3.3. Creating a new virtual machine from a cloned PersistentVolumeClaim by using a DataVolumeTemplate
You can create a virtual machine that clones the PersistentVolumeClaim (PVC) of an existing virtual machine into a DataVolume. By referencing a dataVolumeTemplate
in the virtual machine spec
, the source
PVC is cloned to a DataVolume, which is then automatically used for the creation of the virtual machine.
When a DataVolume is created as part of the DataVolumeTemplate of a virtual machine, the lifecycle of the DataVolume is then dependent on the virtual machine. If the virtual machine is deleted, the DataVolume and associated PVC are also deleted.
Prerequisites
- Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it.
-
Install the OpenShift CLI (
oc
).
Procedure
- Examine the virtual machine you want to clone to identify the name and namespace of the associated PVC.
Create a YAML file for a
VirtualMachine
object. The following virtual machine example clonesmy-favorite-vm-disk
, which is located in thesource-namespace
namespace. The2Gi
DataVolume calledfavorite-clone
is created frommy-favorite-vm-disk
.For example:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: vm-dv-clone
  name: vm-dv-clone 1
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-dv-clone
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: root-disk
        resources:
          requests:
            memory: 64M
      volumes:
      - dataVolume:
          name: favorite-clone
        name: root-disk
  dataVolumeTemplates:
  - metadata:
      name: favorite-clone
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
      source:
        pvc:
          namespace: "source-namespace"
          name: "my-favorite-vm-disk"
- 1
- The virtual machine to create.
Create the virtual machine with the PVC-cloned DataVolume:
$ oc create -f <vm-clone-datavolumetemplate>.yaml
6.12.3.4. Template: DataVolume virtual machine configuration file
example-dv-vm.yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
labels:
kubevirt.io/vm: example-vm
name: example-vm
spec:
dataVolumeTemplates:
- metadata:
name: example-dv
spec:
pvc:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1G
source:
http:
url: "" 1
running: false
template:
metadata:
labels:
kubevirt.io/vm: example-vm
spec:
domain:
cpu:
cores: 1
devices:
disks:
- disk:
bus: virtio
name: example-dv-disk
machine:
type: q35
resources:
requests:
memory: 1G
terminationGracePeriodSeconds: 0
volumes:
- dataVolume:
name: example-dv
name: example-dv-disk
- 1
- The
HTTP
source of the image you want to import, if applicable.
6.12.3.5. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload
---|---|---|---|---|---
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2*
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW*
Archive+ | ✓ TAR | ✓ TAR | ✓ TAR | □ TAR | □ TAR
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
+ Archive does not support block mode DVs
6.12.4. Cloning a virtual machine disk into a new block storage DataVolume
You can clone the PersistentVolumeClaim (PVC) of a virtual machine disk into a new block DataVolume by referencing the source PVC in your DataVolume configuration file.
6.12.4.1. Prerequisites
- If you require scratch space according to the CDI supported operations matrix, you must first define a StorageClass or prepare CDI scratch space for this operation to complete successfully.
- Users need additional permissions to clone the PVC of a virtual machine disk into another namespace.
6.12.4.2. About DataVolumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
6.12.4.3. About block PersistentVolumes
A block PersistentVolume (PV) is a PV that is backed by a raw block device. These volumes do not have a filesystem and can provide performance benefits for virtual machines that either write to the disk directly or implement their own storage service.
Raw block volumes are provisioned by specifying volumeMode: Block
in the PV and PersistentVolumeClaim (PVC) specification.
6.12.4.4. Creating a local block PersistentVolume
Create a local block PersistentVolume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV configuration as a Block
volume and use it as a block device for a virtual machine image.
Procedure
-
Log in as
root
to the node on which to create the local PV. This procedure usesnode01
for its examples. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file
loop10
with a size of 2Gb (20 100Mb blocks):$ dd if=/dev/zero of=<loop10> bs=100M count=20
Mount the loop10 file as a loop device.

$ losetup </dev/loop10> <loop10> 1 2
Create a PersistentVolume configuration that references the mounted loop device.

kind: PersistentVolume
apiVersion: v1
metadata:
  name: <local-block-pv10>
  annotations:
spec:
  local:
    path: </dev/loop10> 1
  capacity:
    storage: <2Gi>
  volumeMode: Block 2
  storageClassName: local 3
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <node01> 4
Create the block PV.
# oc create -f <local-block-pv10.yaml> 1
- 1
- The filename of the PersistentVolume created in the previous step.
6.12.4.5. Cloning the PersistentVolumeClaim of a virtual machine disk into a new DataVolume
You can clone a PersistentVolumeClaim (PVC) of an existing virtual machine disk into a new DataVolume. The new DataVolume can then be used for a new virtual machine.
When a DataVolume is created independently of a virtual machine, the lifecycle of the DataVolume is independent of the virtual machine. If the virtual machine is deleted, neither the DataVolume nor its associated PVC is deleted.
Prerequisites
- Determine the PVC of an existing virtual machine disk to use. You must power down the virtual machine that is associated with the PVC before you can clone it.
-
Install the OpenShift CLI (
oc
). - At least one available block PersistentVolume (PV) that is the same size as or larger than the source PVC.
Procedure
- Examine the virtual machine disk you want to clone to identify the name and namespace of the associated PVC.
Create a YAML file for a DataVolume object that specifies the name of the new DataVolume, the name and namespace of the source PVC,
volumeMode: Block
so that an available block PV is used, and the size of the new DataVolume.

For example:

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: <cloner-datavolume> 1
spec:
  source:
    pvc:
      namespace: "<source-namespace>" 2
      name: "<my-favorite-vm-disk>" 3
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: <2Gi> 4
    volumeMode: Block 5
- 1
- The name of the new DataVolume.
- 2
- The namespace where the source PVC exists.
- 3
- The name of the source PVC.
- 4
- The size of the new DataVolume. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC.
- 5
- Specifies that the destination is a block PV
Start cloning the PVC by creating the DataVolume:
$ oc create -f <cloner-datavolume>.yaml
Note: DataVolumes prevent a virtual machine from starting before the PVC is prepared, so you can create a virtual machine that references the new DataVolume while the PVC is being cloned.
6.12.4.6. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload
---|---|---|---|---|---
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2*
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW*
Archive+ | ✓ TAR | ✓ TAR | ✓ TAR | □ TAR | □ TAR
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
+ Archive does not support block mode DVs
6.13. Virtual machine networking
6.13.1. Using the default Pod network for virtual machines
You can use the default Pod network with container-native virtualization. To do so, you must use the masquerade
binding method. It is the only recommended binding method for use with the default Pod network. Do not use masquerade
mode with non-default networks.
For secondary networks, use the bridge
binding method.
6.13.1.1. Configuring masquerade mode from the command line
You can use masquerade mode to hide a virtual machine’s outgoing traffic behind the Pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the Pod network backend through a Linux bridge.
Enable masquerade mode and allow traffic to enter the virtual machine by editing your virtual machine configuration file.
Prerequisites
- The virtual machine must be configured to use DHCP to acquire IPv4 addresses. The examples below are configured to use DHCP.
Procedure
Edit the interfaces spec of your virtual machine configuration file:

kind: VirtualMachine
spec:
  domain:
    devices:
      interfaces:
      - name: red
        masquerade: {} 1
        ports:
        - port: 80 2
  networks:
  - name: red
    pod: {}
Create the virtual machine:
$ oc create -f <vm-name>.yaml
6.13.1.2. Selecting binding method
If you create a virtual machine from the container-native virtualization web console wizard, select the required binding method from the Networking screen.
6.13.1.2.1. Networking fields
Name | Description |
---|---|
Name | Name for the Network Interface Card. |
Model | Indicates the model of the Network Interface Card. Supported values are e1000, e1000e, ne2k_pci, pcnet, rtl8139, and virtIO. |
Network | List of available NetworkAttachmentDefinition objects. |
Type | List of available binding methods. For the default Pod network, masquerade is the only recommended binding method. |
MAC Address | MAC address for the Network Interface Card. If a MAC address is not specified, an ephemeral address is generated for the session. |
6.13.1.3. Virtual machine configuration examples for the default network
6.13.1.3.1. Template: virtual machine configuration file
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: example-vm
  namespace: default
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
          - name: cloudinitdisk
            disk:
              bus: virtio
          interfaces:
          - masquerade: {}
            name: default
        resources:
          requests:
            memory: 1024M
      networks:
      - name: default
        pod: {}
      volumes:
      - name: containerdisk
        containerDisk:
          image: kubevirt/fedora-cloud-container-disk-demo
      - name: cloudinitdisk
        cloudInitNoCloud:
          userData: |
            #!/bin/bash
            echo "fedora" | passwd fedora --stdin
6.13.1.3.2. Template: Windows virtual machine instance configuration file
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  labels:
    special: vmi-windows
  name: vmi-windows
spec:
  domain:
    clock:
      timer:
        hpet:
          present: false
        hyperv: {}
        pit:
          tickPolicy: delay
        rtc:
          tickPolicy: catchup
      utc: {}
    cpu:
      cores: 2
    devices:
      disks:
      - disk:
          bus: sata
        name: pvcdisk
      interfaces:
      - masquerade: {}
        model: e1000
        name: default
    features:
      acpi: {}
      apic: {}
      hyperv:
        relaxed: {}
        spinlocks:
          spinlocks: 8191
        vapic: {}
    firmware:
      uuid: 5d307ca9-b3ef-428c-8861-06e72d69f223
    machine:
      type: q35
    resources:
      requests:
        memory: 2Gi
  networks:
  - name: default
    pod: {}
  terminationGracePeriodSeconds: 0
  volumes:
  - name: pvcdisk
    persistentVolumeClaim:
      claimName: disk-windows
6.13.2. Attaching a virtual machine to multiple networks
Container-native virtualization provides layer-2 networking capabilities that allow you to connect virtual machines to multiple networks. You can import virtual machines with existing workloads that depend on access to multiple interfaces. You can also configure a PXE network so that you can boot machines over the network.
To get started, a network administrator configures a bridge NetworkAttachmentDefinition for a namespace in the web console or CLI. Users can then create a NIC to attach Pods and virtual machines in that namespace to the bridge network.
6.13.2.1. Container-native virtualization networking glossary
Container-native virtualization provides advanced networking functionality by using custom resources and plug-ins.
The following terms are used throughout container-native virtualization documentation:
- Container Network Interface (CNI)
- a Cloud Native Computing Foundation project, focused on container network connectivity. Container-native virtualization uses CNI plug-ins to build upon the basic Kubernetes networking functionality.
- Multus
- a "meta" CNI plug-in that allows multiple CNIs to exist so that a Pod or virtual machine can use the interfaces it needs.
- Custom Resource Definition (CRD)
- a Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
- NetworkAttachmentDefinition
- a CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
- Preboot eXecution Environment (PXE)
- an interface that enables an administrator to boot a client machine from a server over the network. Network booting allows you to remotely load operating systems and other software onto the client.
6.13.2.2. Creating a NetworkAttachmentDefinition
6.13.2.3. Prerequisites
- A Linux bridge must be configured and attached on every node. See the node networking section for more information.
6.13.2.3.1. Creating a Linux bridge NetworkAttachmentDefinition in the web console
The NetworkAttachmentDefinition is a custom resource that exposes layer-2 devices to a specific namespace in your container-native virtualization cluster.
Network administrators can create NetworkAttachmentDefinitions to provide existing layer-2 networking to pods and virtual machines.
Procedure
-
In the web console, click Networking
Network Attachment Definitions. - Click Create Network Attachment Definition .
- Enter a unique Name and optional Description.
- Click the Network Type list and select CNV Linux bridge.
- Enter the name of the bridge in the Bridge Name field.
- (Optional) If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field.
- Click Create.
6.13.2.3.2. Creating a Linux bridge NetworkAttachmentDefinition in the CLI
As a network administrator, you can configure a NetworkAttachmentDefinition of type cnv-bridge
to provide Layer-2 networking to pods and virtual machines.
The NetworkAttachmentDefinition must be in the same namespace as the Pod or virtual machine.
Procedure
Create a new file for the NetworkAttachmentDefinition in any local directory. The file must have the following contents, modified to match your configuration:
apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: a-bridge-network annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br0 1 spec: config: '{ "cniVersion": "0.3.1", "name": "a-bridge-network", 2 "plugins": [ { "type": "cnv-bridge", 3 "bridge": "br0" 4 }, { "type": "cnv-tuning" 5 } ] }'
- 1
- If you add this annotation to your NetworkAttachmentDefinition, your virtual machine instances will only run on nodes that have the br0 bridge connected.
- 2
- Required. A name for the configuration. It is recommended to match the configuration name to the name value of the NetworkAttachmentDefinition.
- 3
- The actual name of the Container Network Interface (CNI) plug-in that provides the network for this NetworkAttachmentDefinition. Do not change this field unless you want to use a different CNI.
- 4
- You must substitute the actual name of the bridge, if it is not br0.
- 5
- Required. This allows the MAC pool manager to assign a unique MAC address to the connection.
Create the NetworkAttachmentDefinition:

$ oc create -f <resource_spec.yaml>
Edit the configuration file of a virtual machine or virtual machine instance that you want to connect to the bridge network, for example:
apiVersion: v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  domain:
    devices:
      interfaces:
        - masquerade: {}
          name: default
        - bridge: {}
          name: bridge-net 1
  ...
  networks:
    - name: default
      pod: {}
    - name: bridge-net 2
      multus:
        networkName: a-bridge-network 3
  ...
Note: The virtual machine instance will be connected to both the default Pod network and bridge-net, which is defined by a NetworkAttachmentDefinition named a-bridge-network.

Apply the configuration file to the resource:
$ oc create -f <local/path/to/network-attachment-definition.yaml>
When defining the NIC in the next section, ensure that the NETWORK value is the bridge network name from the NetworkAttachmentDefinition you created in the previous section.
6.13.2.4. Creating a NIC for a virtual machine
Create and attach additional NICs to a virtual machine from the web console.
Procedure
- In the correct project in the container-native virtualization console, click Workloads → Virtual Machines.
- Select a virtual machine.
- Click Network Interfaces to display the NICs already attached to the virtual machine.
- Click Create Network Interface to create a new slot in the list.
- Fill in the Name, Model, Network, Type, and MAC Address for the new NIC.
- Click the ✓ button to save and attach the NIC to the virtual machine.
6.13.2.5. Networking fields
Name | Description |
---|---|
Name | Name for the Network Interface Card. |
Model | Indicates the model of the Network Interface Card. Supported values are e1000, e1000e, ne2k_pci, pcnet, rtl8139, and virtIO. |
Network | List of available NetworkAttachmentDefinition objects. |
Type | List of available binding methods. For the default Pod network, masquerade is the only recommended binding method. |
MAC Address | MAC address for the Network Interface Card. If a MAC address is not specified, an ephemeral address is generated for the session. |
Install the optional QEMU guest agent on the virtual machine so that the host can display relevant information about the additional networks.
6.13.3. Installing the QEMU guest agent on virtual machines
The QEMU guest agent is a daemon that runs on the virtual machine. The agent passes network information about the virtual machine, notably the IP address of additional networks, to the host.
6.13.3.1. Prerequisites
Verify that the guest agent is installed and running by entering the following command:
$ systemctl status qemu-guest-agent
6.13.3.2. Installing QEMU guest agent on a Linux virtual machine
The qemu-guest-agent package is widely available and is installed by default in Red Hat virtual machines. Install the agent and start the service.
Procedure
- Access the virtual machine command line through one of the consoles or by SSH.
Install the QEMU guest agent on the virtual machine:
$ yum install -y qemu-guest-agent
Start the QEMU guest agent service:
$ systemctl start qemu-guest-agent
Ensure the service is persistent:
$ systemctl enable qemu-guest-agent
You can also install and start the QEMU guest agent by using the custom script field in the cloud-init section of the wizard when creating either virtual machines or virtual machine templates in the web console, as in the sketch below.
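The following is a minimal sketch of the kind of cloud-init content you could paste into that field, assuming a RHEL or Fedora guest that can reach the qemu-guest-agent package; adjust it to your image and package manager:

#cloud-config
packages:
  - qemu-guest-agent
runcmd:
  # enable and start the agent immediately after first boot
  - [systemctl, enable, --now, qemu-guest-agent]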
6.13.3.3. Installing QEMU guest agent on a Windows virtual machine
For Windows virtual machines, the QEMU guest agent is included in the VirtIO drivers, which can be installed using one of the following procedures:
6.13.3.3.1. Installing VirtIO drivers on an existing Windows virtual machine
Install the VirtIO drivers from the attached SATA CD drive to an existing Windows virtual machine.
This procedure uses a generic approach to adding drivers to Windows. The process might differ slightly between versions of Windows. Refer to the installation documentation for your version of Windows for specific installation steps.
Procedure
- Start the virtual machine and connect to a graphical console.
- Log in to a Windows user session.
- Open Device Manager and expand Other devices to list any Unknown device.
- Open the Device Properties to identify the unknown device. Right-click the device and select Properties.
- Click the Details tab and select Hardware Ids in the Property list.
- Compare the Value for the Hardware Ids with the supported VirtIO drivers.
- Right-click the device and select Update Driver Software.
- Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
- Click Next to install the driver.
- Repeat this process for all the necessary VirtIO drivers.
- After the driver installs, click Close to close the window.
- Reboot the virtual machine to complete the driver installation.
6.13.3.3.2. Installing VirtIO drivers during Windows installation
Install the VirtIO drivers from the attached SATA CD drive during Windows installation.
This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. Refer to the documentation for the version of Windows that you are installing.
Procedure
- Start the virtual machine and connect to a graphical console.
- Begin the Windows installation process.
- Select the Advanced installation.
- The storage destination will not be recognized until the driver is loaded. Click Load driver.
- The drivers are attached as a SATA CD drive. Click OK and browse the CD drive for the storage driver to load. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture.
- Repeat the previous two steps for all required drivers.
- Complete the Windows installation.
6.13.4. Viewing the IP address of NICs on a virtual machine
The QEMU guest agent runs on the virtual machine and passes the IP address of attached NICs to the host, allowing you to view the IP address from both the web console and the oc
client.
6.13.4.1. Prerequisites
Verify that the guest agent is installed and running by entering the following command:
$ systemctl status qemu-guest-agent
- If the guest agent is not installed and running, install and run the guest agent on the virtual machine.
6.13.4.2. Viewing the IP address of a virtual machine interface in the CLI
The network interface configuration is included in the oc describe vmi <vmi_name> command.
You can also view the IP address information by running ip addr on the virtual machine, or by running oc get vmi <vmi_name> -o yaml.
Procedure
Use the oc describe command to display the virtual machine interface configuration:

$ oc describe vmi <vmi_name>
...
Interfaces:
   Interface Name:  eth0
   Ip Address:      10.244.0.37/24
   Ip Addresses:
     10.244.0.37/24
     fe80::858:aff:fef4:25/64
   Mac:             0a:58:0a:f4:00:25
   Name:            default
   Interface Name:  v2
   Ip Address:      1.1.1.7/24
   Ip Addresses:
     1.1.1.7/24
     fe80::f4d9:70ff:fe13:9089/64
   Mac:             f6:d9:70:13:90:89
   Interface Name:  v1
   Ip Address:      1.1.1.1/24
   Ip Addresses:
     1.1.1.1/24
     1.1.1.2/24
     1.1.1.4/24
     2001:de7:0:f101::1/64
     2001:db8:0:f101::1/64
     fe80::1420:84ff:fe10:17aa/64
   Mac:             16:20:84:10:17:aa
6.13.4.3. Viewing the IP address of a virtual machine interface in the web console
The IP information displays in the Virtual Machine Overview screen for the virtual machine.
Procedure
- In the container-native virtualization console, click Workloads → Virtual Machines.
- Click the virtual machine name to open the Virtual Machine Overview screen.
The information for each attached NIC is displayed under IP ADDRESSES.
6.14. Virtual machine disks
6.14.1. Configuring local storage for virtual machines
You can configure local storage for your virtual machines by using the hostpath provisioner feature.
6.14.1.1. About the hostpath provisioner
The hostpath provisioner is a local storage provisioner designed for container-native virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.
When you install the container-native virtualization Operator, the hostpath provisioner Operator is automatically installed. To use it, you must:
Configure SELinux:
- If you use Red Hat Enterprise Linux CoreOS 8 workers, you must create a MachineConfig object on each node.
- Otherwise, apply the SELinux label container_file_t to the PersistentVolume (PV) backing directory on each node.
- Create a HostPathProvisioner custom resource.
- Create a StorageClass object for the hostpath provisioner.
The hostpath provisioner Operator deploys the provisioner as a DaemonSet on each node when you create its custom resource. In the custom resource file, you specify the backing directory for the PersistentVolumes that the hostpath provisioner creates.
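As an optional check, assuming the custom resource is created in the openshift-cnv namespace as shown later in this section, you can confirm that the provisioner Pods are running on your nodes; the exact object names may vary by release:

$ oc get daemonset -n openshift-cnv | grep hostpath
$ oc get pods -n openshift-cnv -o wide | grep hostpath-provisioner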
6.14.1.2. Configuring SELinux for the hostpath provisioner on Red Hat Enterprise Linux CoreOS 8
You must configure SELinux before you create the HostPathProvisioner custom resource. To configure SELinux on Red Hat Enterprise Linux CoreOS 8 workers, you must create a MachineConfig object on each node.
If you do not use Red Hat Enterprise Linux CoreOS workers, skip this procedure.
Prerequisites
- Create a backing directory on each node for the PersistentVolumes (PVs) that the hostpath provisioner creates.
Procedure
Create the MachineConfig file. For example:
$ touch machineconfig.yaml
Edit the file, ensuring that you include the directory where you want the hostpath provisioner to create PVs. For example:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 50-set-selinux-for-hostpath-provisioner
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 2.2.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Set SELinux chcon for hostpath provisioner
            Before=kubelet.service

            [Service]
            ExecStart=/usr/bin/chcon -Rt container_file_t <path/to/backing/directory> 1

            [Install]
            WantedBy=multi-user.target
          enabled: true
          name: hostpath-provisioner.service
- 1
- Specify the backing directory where you want the provisioner to create PVs.
Create the MachineConfig object:

$ oc create -f machineconfig.yaml -n <namespace>
6.14.1.3. Using the hostpath provisioner to enable local storage
To deploy the hostpath provisioner and enable your virtual machines to use local storage, first create a HostPathProvisioner custom resource.
Prerequisites
- Create a backing directory on each node for the PersistentVolumes (PVs) that the hostpath provisioner creates.
- Apply the SELinux context container_file_t to the PV backing directory on each node. For example:

$ sudo chcon -t container_file_t -R </path/to/backing/directory>
Note: If you use Red Hat Enterprise Linux CoreOS 8 workers, you must configure SELinux by using a MachineConfig manifest instead.
Procedure
Create the HostPathProvisioner custom resource file. For example:
$ touch hostpathprovisioner_cr.yaml
Edit the file, ensuring that the spec.pathConfig.path value is the directory where you want the hostpath provisioner to create PVs. For example:

apiVersion: hostpathprovisioner.kubevirt.io/v1alpha1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>" 1
    useNamingPrefix: "false" 2
Note: If you did not create the backing directory, the provisioner attempts to create it for you. If you did not apply the container_file_t SELinux context, this can cause Permission denied errors.

Create the custom resource in the openshift-cnv namespace:

$ oc create -f hostpathprovisioner_cr.yaml -n openshift-cnv
6.14.1.4. Creating a StorageClass object
When you create a StorageClass object, you set parameters that affect the dynamic provisioning of PersistentVolumes (PVs) that belong to that storage class.
You cannot update a StorageClass object’s parameters after you create it.
Procedure
Create a YAML file for defining the storage class. For example:
$ touch storageclass.yaml
Edit the file. For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-provisioner 1
provisioner: kubevirt.io/hostpath-provisioner
reclaimPolicy: Delete 2
volumeBindingMode: WaitForFirstConsumer 3
- 1
- You can optionally rename the storage class by changing this value.
- 2
- The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the storage class defaults to Delete.
- 3
- The volumeBindingMode value determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a PV until after a Pod that uses the PersistentVolumeClaim (PVC) is created. This ensures that the PV meets the Pod’s scheduling requirements.
Create the StorageClass object:

$ oc create -f storageclass.yaml
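The following hypothetical PersistentVolumeClaim illustrates how the WaitForFirstConsumer mode behaves with this storage class; the claim name and size are placeholders. The claim remains in the Pending state until a Pod or virtual machine that uses it is scheduled, at which point the hostpath provisioner creates a PV on that node:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hostpath-example-pvc            # hypothetical name for illustration
spec:
  storageClassName: hostpath-provisioner
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi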
6.14.2. Uploading local disk images by using the virtctl tool
You can upload a locally stored disk image to a new or existing DataVolume by using the virtctl
command-line utility.
6.14.2.1. Prerequisites
- Install the kubevirt-virtctl package.
- If you require scratch space according to the CDI supported operations matrix, you must first define a StorageClass or prepare CDI scratch space for this operation to complete successfully.
6.14.2.2. About DataVolumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
6.14.2.3. Creating an upload DataVolume
You can manually create a DataVolume with an upload
data source to use for uploading local disk images.
Procedure
Create a DataVolume configuration that specifies an upload source, spec.source.upload: {}:

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: <upload-datavolume> 1
spec:
  source:
    upload: {}
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: <2Gi> 2
Create the DataVolume by running the following command:
$ oc create -f <upload-datavolume>.yaml
6.14.2.4. Uploading a local disk image to a DataVolume
You can use the virtctl
CLI utility to upload a local disk image from a client machine to a DataVolume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure.
After you upload a local disk image, you can add it to a virtual machine.
Prerequisites
- A virtual machine disk image, in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
- The kubevirt-virtctl package must be installed on the client machine.
- The client machine must be configured to trust the OpenShift Container Platform router’s certificate.
Procedure
Identify the following items:
- The name of the upload DataVolume that you want to use. If this DataVolume does not exist, it is created automatically.
- The size of the DataVolume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image.
- The file location of the virtual machine disk image that you want to upload.
Upload the disk image by running the virtctl image-upload command. Specify the parameters that you identified in the previous step. For example:

$ virtctl image-upload dv <datavolume_name> \ 1
  --size=<datavolume_size> \ 2
  --image-path=</path/to/image> 3
Note:
- If you do not want to create a new DataVolume, omit the --size parameter and include the --no-create flag.
- To allow insecure server connections when using HTTPS, use the --insecure parameter. Be aware that when you use the --insecure flag, the authenticity of the upload endpoint is not verified.
Optional. To verify that a DataVolume was created, view all DataVolume objects by running the following command:
$ oc get dvs
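After the upload completes, you can reference the DataVolume as a disk of a virtual machine. The following excerpt is a sketch only; the disk name is a placeholder and the exact structure of your virtual machine specification may differ:

# Excerpt from a virtual machine specification
      domain:
        devices:
          disks:
            - name: uploaded-disk        # placeholder disk name
              disk:
                bus: virtio
      volumes:
        - name: uploaded-disk
          dataVolume:
            name: <upload-datavolume>    # the upload DataVolume created above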
6.14.2.5. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
Archive+ | ✓ TAR | ✓ TAR | ✓ TAR | □ TAR | □ TAR |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
+ Archive does not support block mode DVs
6.14.3. Uploading a local disk image to a block storage DataVolume
You can upload a local disk image into a block DataVolume by using the virtctl
command-line utility.
In this workflow, you create a local block device to use as a PersistentVolume, associate this block volume with an upload
DataVolume, and use virtctl
to upload the local disk image into the DataVolume.
6.14.3.1. Prerequisites
- Install the kubevirt-virtctl package.
- If you require scratch space according to the CDI supported operations matrix, you must first define a StorageClass or prepare CDI scratch space for this operation to complete successfully.
6.14.3.2. About DataVolumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
6.14.3.3. About block PersistentVolumes
A block PersistentVolume (PV) is a PV that is backed by a raw block device. These volumes do not have a filesystem and can provide performance benefits for virtual machines that either write to the disk directly or implement their own storage service.
Raw block volumes are provisioned by specifying volumeMode: Block
in the PV and PersistentVolumeClaim (PVC) specification.
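For example, a minimal PersistentVolumeClaim that requests a raw block volume might look like the following sketch; the claim name, storage class, and size are placeholders, and the storage class must support block mode:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-example-pvc      # hypothetical name for illustration
spec:
  volumeMode: Block            # request a raw block device instead of a filesystem
  storageClassName: local      # placeholder; use a storage class that supports block mode
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi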
6.14.3.4. Creating a local block PersistentVolume
Create a local block PersistentVolume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV configuration as a Block
volume and use it as a block device for a virtual machine image.
Procedure
- Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples.

Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file loop10 with a size of 2 GB (20 blocks of 100 MB each):

$ dd if=/dev/zero of=<loop10> bs=100M count=20
Mount the loop10 file as a loop device:

$ losetup </dev/loop10> <loop10> 1 2
Create a PersistentVolume configuration that references the mounted loop device:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: <local-block-pv10>
  annotations:
spec:
  local:
    path: </dev/loop10> 1
  capacity:
    storage: <2Gi>
  volumeMode: Block 2
  storageClassName: local 3
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <node01> 4
Create the block PV.
# oc create -f <local-block-pv10.yaml> 1
- 1
- The filename of the PersistentVolume created in the previous step.
6.14.3.5. Creating an upload DataVolume
You can manually create a DataVolume with an upload
data source to use for uploading local disk images.
Procedure
Create a DataVolume configuration that specifies an upload source, spec.source.upload: {}:

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: <upload-datavolume> 1
spec:
  source:
    upload: {}
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: <2Gi> 2
Create the DataVolume by running the following command:
$ oc create -f <upload-datavolume>.yaml
6.14.3.6. Uploading a local disk image to a DataVolume
You can use the virtctl
CLI utility to upload a local disk image from a client machine to a DataVolume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure.
After you upload a local disk image, you can add it to a virtual machine.
Prerequisites
- A virtual machine disk image, in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
- The kubevirt-virtctl package must be installed on the client machine.
- The client machine must be configured to trust the OpenShift Container Platform router’s certificate.
Procedure
Identify the following items:
- The name of the upload DataVolume that you want to use. If this DataVolume does not exist, it is created automatically.
- The size of the DataVolume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image.
- The file location of the virtual machine disk image that you want to upload.
Upload the disk image by running the virtctl image-upload command. Specify the parameters that you identified in the previous step. For example:

$ virtctl image-upload dv <datavolume_name> \ 1
  --size=<datavolume_size> \ 2
  --image-path=</path/to/image> 3
Note:
- If you do not want to create a new DataVolume, omit the --size parameter and include the --no-create flag.
- To allow insecure server connections when using HTTPS, use the --insecure parameter. Be aware that when you use the --insecure flag, the authenticity of the upload endpoint is not verified.
Optional. To verify that a DataVolume was created, view all DataVolume objects by running the following command:
$ oc get dvs
6.14.3.7. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
Archive+ | ✓ TAR | ✓ TAR | ✓ TAR | □ TAR | □ TAR |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
+ Archive does not support block mode DVs
6.14.4. Moving a local virtual machine disk to a different node
Virtual machines that use local volume storage can be moved so that they run on a specific node.
You might want to move the virtual machine to a specific node for the following reasons:
- The current node has limitations to the local storage configuration.
- The new node is better optimized for the workload of that virtual machine.
To move a virtual machine that uses local storage, you must clone the underlying volume by using a DataVolume. After the cloning operation is complete, you can edit the virtual machine configuration so that it uses the new DataVolume, or add the new DataVolume to another virtual machine.
Users without the cluster-admin
role require additional user permissions in order to clone volumes across namespaces.
6.14.4.1. Cloning a local volume to another node
You can move a virtual machine disk so that it runs on a specific node by cloning the underlying PersistentVolumeClaim (PVC).
To ensure the virtual machine disk is cloned to the correct node, you must either create a new PersistentVolume (PV) or identify one on the correct node. Apply a unique label to the PV so that it can be referenced by the DataVolume.
The destination PV must be the same size or larger than the source PVC. If the destination PV is smaller than the source PVC, the cloning operation fails.
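To confirm the sizes before you start, you can compare the requested size of the source PVC with the capacity of the destination PV; the object names below are placeholders:

$ oc get pvc <source-vm-disk> -n <source-namespace> -o jsonpath='{.spec.resources.requests.storage}'
$ oc get pv <destination-pv> -o jsonpath='{.spec.capacity.storage}'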
Prerequisites
- The virtual machine must not be running. Power down the virtual machine before cloning the virtual machine disk.
Procedure
Either create a new local PV on the node, or identify a local PV already on the node:
Create a local PV that includes the nodeAffinity.nodeSelectorTerms parameters. The following manifest creates a 10Gi local PV on node01:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: <destination-pv> 1
  annotations:
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi 2
  local:
    path: /mnt/local-storage/local/disk1 3
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node01 4
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local
  volumeMode: Filesystem
Identify a PV that already exists on the target node. You can identify the node where a PV is provisioned by viewing the nodeAffinity field in its configuration:

$ oc get pv <destination-pv> -o yaml

The following snippet shows that the PV is on node01:

...
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname 1
              operator: In
              values:
                - node01 2
...
Add a unique label to the PV:
$ oc label pv <destination-pv> node=node01
Create a DataVolume manifest that references the following:
- The PVC name and namespace of the virtual machine.
- The label you applied to the PV in the previous step.
- The size of the destination PV.

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: <clone-datavolume> 1
spec:
  source:
    pvc:
      name: "<source-vm-disk>" 2
      namespace: "<source-namespace>" 3
  pvc:
    accessModes:
      - ReadWriteOnce
    selector:
      matchLabels:
        node: node01 4
    resources:
      requests:
        storage: <10Gi> 5
- 1
- The name of the new DataVolume.
- 2
- The name of the source PVC. If you do not know the PVC name, you can find it in the virtual machine configuration:
spec.volumes.persistentVolumeClaim.claimName
. - 3
- The namespace where the source PVC exists.
- 4
- The label that you applied to the PV in the previous step.
- 5
- The size of the destination PV.
Start the cloning operation by applying the DataVolume manifest to your cluster:
$ oc apply -f <clone-datavolume.yaml>
The DataVolume clones the PVC of the virtual machine into the PV on the specific node.
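You can then edit the virtual machine so that it uses the cloned volume. The following excerpt is a sketch only; the volume name is a placeholder and your virtual machine specification may reference the disk differently:

# Excerpt from the virtual machine specification: replace the volume that
# referenced the source PVC with the cloned DataVolume.
      volumes:
        - name: rootdisk                  # placeholder volume name
          dataVolume:
            name: <clone-datavolume>      # the DataVolume created by the clone operation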
6.14.5. Expanding virtual storage by adding blank disk images
You can increase your storage capacity or create new data partitions by adding blank disk images to container-native virtualization.
6.14.5.1. About DataVolumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
6.14.5.2. Creating a blank disk image with DataVolumes
You can create a new blank disk image in a PersistentVolumeClaim by customizing and deploying a DataVolume configuration file.
Prerequisites
- At least one available PersistentVolume.
- Install the OpenShift CLI (oc).
Procedure
Edit the DataVolume configuration file:
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: blank-image-datavolume
spec:
  source:
    blank: {}
  pvc:
    # Optional: Set the storage class or omit to accept the default
    # storageClassName: "hostpath"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 500Mi
Create the blank disk image by running the following command:
$ oc create -f <blank-image-datavolume>.yaml
6.14.5.3. Template: DataVolume configuration file for blank disk images
blank-image-datavolume.yaml
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: blank-image-datavolume
spec:
  source:
    blank: {}
  pvc:
    # Optional: Set the storage class or omit to accept the default
    # storageClassName: "hostpath"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 500Mi
6.14.6. Storage defaults for DataVolumes
The kubevirt-storage-class-defaults
ConfigMap provides access mode and volume mode defaults for DataVolumes. You can edit or add storage class defaults to the ConfigMap in order to create DataVolumes in the web console that better match the underlying storage.
6.14.6.1. About storage settings for DataVolumes
DataVolumes require a defined access mode and volume mode to be created in the web console. These storage settings are configured by default with a ReadWriteOnce
access mode and Filesystem
volume mode.
You can modify these settings by editing the kubevirt-storage-class-defaults
ConfigMap in the openshift-cnv
namespace. You can also add settings for other storage classes in order to create DataVolumes in the web console for different storage types.
You must configure storage settings that are supported by the underlying storage.
All DataVolumes that you create in the web console use the default storage settings unless you specify a storage class that is also defined in the ConfigMap.
6.14.6.1.1. Access modes
DataVolumes support the following access modes:
- ReadWriteOnce: The volume can be mounted as read-write by a single node. ReadWriteOnce has greater versatility and is the default setting.
- ReadWriteMany: The volume can be mounted as read-write by many nodes. ReadWriteMany is required for some features, such as live migration of virtual machines between nodes.

ReadWriteMany is recommended if the underlying storage supports it.
6.14.6.1.2. Volume modes
The volume mode defines if a volume is intended to be used with a formatted filesystem or to remain in raw block state. DataVolumes support the following volume modes:
- Filesystem: Creates a filesystem on the DataVolume. This is the default setting.
- Block: Creates a block DataVolume. Only use Block if the underlying storage supports it (see the sketch below).
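For example, a DataVolume that explicitly requests block mode and a shared access mode might look like the following sketch; the name, storage class, and size are placeholders, and the underlying storage class must actually support these settings:

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: block-example-dv          # hypothetical name for illustration
spec:
  source:
    blank: {}
  pvc:
    storageClassName: block-sc    # placeholder storage class that supports block volumes
    volumeMode: Block
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 10Gi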
6.14.6.2. Editing the kubevirt-storage-class-defaults ConfigMap in the web console
Modify the storage settings for DataVolumes by editing the kubevirt-storage-class-defaults
ConfigMap in the openshift-cnv
namespace. You can also add settings for other storage classes in order to create DataVolumes in the web console for different storage types.
You must configure storage settings that are supported by the underlying storage.
Procedure
- Click Workloads → Config Maps from the side menu.
- In the Project list, select openshift-cnv.
- Click kubevirt-storage-class-defaults to open the Config Map Overview.
- Click the YAML tab to display the editable configuration.
Update the data values with the storage configuration that is appropriate for your underlying storage:

...
data:
  accessMode: ReadWriteOnce 1
  volumeMode: Filesystem 2
  <new>.accessMode: ReadWriteMany 3
  <new>.volumeMode: Block 4
- 1
- The default accessMode is ReadWriteOnce.
- 2
- The default volumeMode is Filesystem.
- 3
- If you add an access mode for a storage class, replace the <new> part of the parameter with the storage class name.
- 4
- If you add a volume mode for a storage class, replace the <new> part of the parameter with the storage class name.
- Click Save to update the ConfigMap.
6.14.6.3. Editing the kubevirt-storage-class-defaults ConfigMap in the CLI
Modify the storage settings for DataVolumes by editing the kubevirt-storage-class-defaults
ConfigMap in the openshift-cnv
namespace. You can also add settings for other storage classes in order to create DataVolumes in the web console for different storage types.
You must configure storage settings that are supported by the underlying storage.
Procedure
Use oc edit to edit the ConfigMap:

$ oc edit configmap kubevirt-storage-class-defaults -n openshift-cnv
Update the data values of the ConfigMap:

...
data:
  accessMode: ReadWriteOnce 1
  volumeMode: Filesystem 2
  <new>.accessMode: ReadWriteMany 3
  <new>.volumeMode: Block 4
- 1
- The default accessMode is ReadWriteOnce.
- 2
- The default volumeMode is Filesystem.
- 3
- If you add an access mode for a storage class, replace the <new> part of the parameter with the storage class name.
- 4
- If you add a volume mode for a storage class, replace the <new> part of the parameter with the storage class name.
- Save and exit the editor to update the ConfigMap.
6.14.6.4. Example of multiple storage class defaults
The following YAML file is an example of a kubevirt-storage-class-defaults ConfigMap that has storage settings configured for two storage classes, nfs-sc and block-sc.
Ensure that all settings are supported by your underlying storage before you update the ConfigMap.
kind: ConfigMap
apiVersion: v1
metadata:
  name: kubevirt-storage-class-defaults
  namespace: openshift-cnv
...
data:
  accessMode: ReadWriteOnce
  volumeMode: Filesystem
  nfs-sc.accessMode: ReadWriteMany
  nfs-sc.volumeMode: Filesystem
  block-sc.accessMode: ReadWriteMany
  block-sc.volumeMode: Block
6.14.7. Preparing CDI scratch space
6.14.7.1. About DataVolumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
6.14.7.2. Understanding scratch space
The Containerized Data Importer (CDI) requires scratch space (temporary storage) to complete some operations, such as importing and uploading virtual machine images. During this process, the CDI provisions a scratch space PVC equal to the size of the PVC backing the destination DataVolume (DV). The scratch space PVC is deleted after the operation completes or aborts.
The CDIConfig object allows you to define which StorageClass to use to bind the scratch space PVC by setting the scratchSpaceStorageClass
in the spec:
section of the CDIConfig object.
If the defined StorageClass does not match a StorageClass in the cluster, then the default StorageClass defined for the cluster is used. If there is no default StorageClass defined in the cluster, the StorageClass used to provision the original DV or PVC is used.
The CDI requires requesting scratch space with a file
volume mode, regardless of the PVC backing the origin DataVolume. If the origin PVC is backed by block
volume mode, you must define a StorageClass capable of provisioning file
volume mode PVCs.
Manual provisioning
If there are no storage classes, the CDI will use any PVCs in the project that match the size requirements for the image. If there are no PVCs that match these requirements, the CDI import pod will remain in a Pending state until an appropriate PVC is made available or until a timeout function kills the pod.
6.14.7.3. CDI operations that require scratch space
Type | Reason |
---|---|
Registry imports | The CDI must download the image to a scratch space and extract the layers to find the image file. The image file is then passed to QEMU-IMG for conversion to a raw disk. |
Upload image | QEMU-IMG does not accept input from STDIN. Instead, the image to upload is saved in scratch space before it can be passed to QEMU-IMG for conversion. |
HTTP imports of archived images | QEMU-IMG does not know how to handle the archive formats CDI supports. Instead, the image is unarchived and saved into scratch space before it is passed to QEMU-IMG. |
HTTP imports of authenticated images | QEMU-IMG inadequately handles authentication. Instead, the image is saved to scratch space and authenticated before it is passed to QEMU-IMG. |
HTTP imports of custom certificates | QEMU-IMG inadequately handles custom certificates of HTTPS endpoints. Instead, the CDI downloads the image to scratch space before passing the file to QEMU-IMG. |
6.14.7.4. Defining a StorageClass in the CDI configuration
Define a StorageClass in the CDI configuration to dynamically provision scratch space for CDI operations.
Procedure
Use the oc client to edit the cdiconfig/config and add or edit the spec.scratchSpaceStorageClass value to match a StorageClass in the cluster:

$ oc edit cdiconfig/config

apiVersion: cdi.kubevirt.io/v1alpha1
kind: CDIConfig
metadata:
  name: config
...
spec:
  scratchSpaceStorageClass: "<storage_class>"
...
6.14.7.5. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
Archive+ | ✓ TAR | ✓ TAR | ✓ TAR | □ TAR | □ TAR |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
+ Archive does not support block mode DVs
Additional resources
- See the Dynamic provisioning section for more information on StorageClasses and how these are defined in the cluster.
6.14.8. Deleting DataVolumes
You can manually delete a DataVolume by using the oc
command-line interface.
When you delete a virtual machine, the DataVolume it uses is automatically deleted.
6.14.8.1. About DataVolumes
DataVolume
objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
6.14.8.2. Listing all DataVolumes
You can list the DataVolumes in your cluster by using the oc
command-line interface.
Procedure
List all DataVolumes by running the following command:
$ oc get dvs
6.14.8.3. Deleting a DataVolume
You can delete a DataVolume by using the oc
command-line interface (CLI).
Prerequisites
- Identify the name of the DataVolume that you want to delete.
Procedure
Delete the DataVolume by running the following command:
$ oc delete dv <datavolume_name>
Note: This command only deletes objects that exist in the current project. Specify the -n <project_name> option if the object you want to delete is in a different project or namespace.