7.15. Virtual machine disks
7.15.1. Storage features
Use the following table to determine feature availability for local and shared persistent storage in OpenShift Virtualization.
Storage type | Virtual machine live migration | Host-assisted virtual machine disk cloning | Storage-assisted virtual machine disk cloning | Virtual machine snapshots |
---|---|---|---|---|
OpenShift Container Storage: RBD block-mode volumes | Yes | Yes | Yes | Yes |
OpenShift Virtualization hostpath provisioner | No | Yes | No | No |
Other multi-node writable storage | Yes [1] | Yes | Yes [2] | Yes [2] |
Other single-node writable storage | No | Yes | Yes [2] | Yes [2] |
- [1] PVCs must request a ReadWriteMany access mode.
- [2] The storage provider must support both the Kubernetes and CSI snapshot APIs.
You cannot live migrate virtual machines that use:
- A storage class with ReadWriteOnce (RWO) access mode
- Passthrough features such as SR-IOV and GPU
Do not set the evictionStrategy field to LiveMigrate for these virtual machines.
7.15.2. Configuring local storage for virtual machines
You can configure local storage for your virtual machines by using the hostpath provisioner feature.
7.15.2.1. About the hostpath provisioner
The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.
When you install the OpenShift Virtualization Operator, the hostpath provisioner Operator is automatically installed. To use it, you must:
Configure SELinux:
- If you use Red Hat Enterprise Linux CoreOS 8 workers, you must create a MachineConfig object on each node.
- Otherwise, apply the SELinux label container_file_t to the PersistentVolume (PV) backing directory on each node.
- Create a HostPathProvisioner custom resource.
- Create a StorageClass object for the hostpath provisioner.
The hostpath provisioner Operator deploys the provisioner as a DaemonSet on each node when you create its custom resource. In the custom resource file, you specify the backing directory for the PersistentVolumes that the hostpath provisioner creates.
7.15.2.2. Configuring SELinux for the hostpath provisioner on Red Hat Enterprise Linux CoreOS 8
You must configure SELinux before you create the HostPathProvisioner custom resource. To configure SELinux on Red Hat Enterprise Linux CoreOS 8 workers, you must create a MachineConfig object on each node.
If you do not use Red Hat Enterprise Linux CoreOS workers, skip this procedure.
Prerequisites
- Create a backing directory on each node for the PersistentVolumes (PVs) that the hostpath provisioner creates.
Procedure
Create the MachineConfig file. For example:
$ touch machineconfig.yaml
Edit the file, ensuring that you include the directory where you want the hostpath provisioner to create PVs. For example:
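The manifest below is a minimal sketch of such a MachineConfig object; it assumes a systemd unit that runs chcon on the backing directory at boot, and the object name, Ignition version, and unit name are illustrative values that must match your cluster:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 50-set-selinux-for-hostpath-provisioner
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.1.0
    systemd:
      units:
        - name: hostpath-provisioner.service
          enabled: true
          contents: |
            [Unit]
            Description=Set SELinux context for the hostpath provisioner backing directory
            Before=kubelet.service

            [Service]
            # Replace </path/to/backing/directory> with the backing directory on the node
            ExecStart=/usr/bin/chcon -Rt container_file_t </path/to/backing/directory>

            [Install]
            WantedBy=multi-user.target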
In the manifest, specify the backing directory where you want the provisioner to create PVs.
Create the MachineConfig object:
$ oc create -f machineconfig.yaml -n <namespace>
7.15.2.3. Using the hostpath provisioner to enable local storage
To deploy the hostpath provisioner and enable your virtual machines to use local storage, first create a HostPathProvisioner custom resource.
Prerequisites
- Create a backing directory on each node for the PersistentVolumes (PVs) that the hostpath provisioner creates.
- Apply the SELinux context container_file_t to the PV backing directory on each node. For example:
$ sudo chcon -t container_file_t -R </path/to/backing/directory>
Note: If you use Red Hat Enterprise Linux CoreOS 8 workers, you must configure SELinux by using a MachineConfig manifest instead.
Procedure
Create the HostPathProvisioner custom resource file. For example:
$ touch hostpathprovisioner_cr.yaml
Edit the file, ensuring that the spec.pathConfig.path value is the directory where you want the hostpath provisioner to create PVs. For example:
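A minimal sketch of the custom resource follows; the API version shown (hostpathprovisioner.kubevirt.io/v1beta1) and the field values are illustrative and might differ in your release:
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>"  # the backing directory created on each node
    useNamingPrefix: false  # set to true to prefix created PV directories with the PVC name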
Note: If you did not create the backing directory, the provisioner attempts to create it for you. If you did not apply the container_file_t SELinux context, this can cause Permission denied errors.
Create the custom resource in the openshift-cnv namespace:
$ oc create -f hostpathprovisioner_cr.yaml -n openshift-cnv
7.15.2.4. Creating a StorageClass object
When you create a StorageClass object, you set parameters that affect the dynamic provisioning of PersistentVolumes (PVs) that belong to that storage class.
You cannot update a StorageClass object’s parameters after you create it.
Procedure
Create a YAML file for defining the storage class. For example:
$ touch storageclass.yaml
Edit the file. For example:
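A minimal sketch of the storage class follows; the provisioner value assumes the hostpath provisioner, and the callout numbers match the notes below:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-provisioner  # 1
provisioner: kubevirt.io/hostpath-provisioner
reclaimPolicy: Delete  # 2
volumeBindingMode: WaitForFirstConsumer  # 3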
1. You can optionally rename the storage class by changing this value.
2. The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the storage class defaults to Delete.
3. The volumeBindingMode value determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a PV until after a Pod that uses the PersistentVolumeClaim (PVC) is created. This ensures that the PV meets the Pod’s scheduling requirements.
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.
To solve this problem, use the Kubernetes pod scheduler to bind the PVC to a PV on the correct node. By using a StorageClass with volumeBindingMode set to WaitForFirstConsumer, the binding and provisioning of the PV is delayed until a Pod is created using the PVC.
Create the StorageClass object:
$ oc create -f storageclass.yaml
7.15.3. Configuring CDI to work with namespaces that have a compute resource quota
You can use the Containerized Data Importer (CDI) to import, upload, and clone virtual machine disks into namespaces that are subject to CPU and memory resource restrictions.
7.15.3.1. About CPU and memory quotas in a namespace
A resource quota, defined by the ResourceQuota object, imposes restrictions on a namespace that limit the total amount of compute resources that can be consumed by resources within that namespace.
The CDIConfig object defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values for the CDIConfig object are set to a default value of 0. This ensures that pods created by CDI that make no compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota.
7.15.3.2. Overriding CPU and memory defaults
Modify the default settings for CPU and memory requests and limits for your use case by editing the spec attribute of the CDIConfig object.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Edit the cdiconfig/config by running the following command:
$ oc edit cdiconfig/config
Change the default CPU and memory requests and limits by editing the spec: podResourceRequirements property of the CDIConfig object:
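An illustrative sketch of the relevant section follows; the CPU and memory values are placeholders that you replace with values suited to your namespace quota:
apiVersion: cdi.kubevirt.io/v1beta1
kind: CDIConfig
metadata:
  name: config
spec:
  podResourceRequirements:
    limits:
      cpu: "4"        # example limit; set to fit your namespace quota
      memory: "1Gi"
    requests:
      cpu: "1"        # example request
      memory: "250Mi"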
- Save and exit the editor to update the CDIConfig object.
Verification
View the CDIConfig status and verify your changes by running the following command:
$ oc get cdiconfig config -o yaml
7.15.4. Uploading local disk images by using the virtctl tool
You can upload a locally stored disk image to a new or existing DataVolume by using the virtctl command-line utility.
7.15.4.1. Prerequisites
- Install the kubevirt-virtctl package.
- If you require scratch space according to the CDI supported operations matrix, you must first define a StorageClass or prepare CDI scratch space for this operation to complete successfully.
7.15.4.2. About DataVolumes
DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
7.15.4.3. Creating an upload DataVolume
You can manually create a DataVolume with an upload data source to use for uploading local disk images.
Procedure
Create a DataVolume configuration that specifies spec: source: upload{}:
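For example, a DataVolume manifest with an upload source might look like the following sketch; the name and storage size are placeholders:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <upload-datavolume>
spec:
  source:
    upload: {}   # an empty upload source; data is supplied later by virtctl image-upload
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: <2Gi>  # must be at least as large as the uncompressed disk image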
Create the DataVolume by running the following command:
$ oc create -f <upload-datavolume>.yaml
7.15.4.4. Uploading a local disk image to a DataVolume
You can use the virtctl CLI utility to upload a local disk image from a client machine to a DataVolume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure.
After you upload a local disk image, you can add it to a virtual machine.
Prerequisites
- A virtual machine disk image, in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
- The kubevirt-virtctl package must be installed on the client machine.
- The client machine must be configured to trust the OpenShift Container Platform router’s certificate.
Procedure
Identify the following items:
- The name of the upload DataVolume that you want to use. If this DataVolume does not exist, it is created automatically.
- The size of the DataVolume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image.
- The file location of the virtual machine disk image that you want to upload.
Upload the disk image by running the virtctl image-upload command. Specify the parameters that you identified in the previous step. For example:
$ virtctl image-upload dv <datavolume_name> \
  --size=<datavolume_size> \
  --image-path=</path/to/image>
Note:
- If you do not want to create a new DataVolume, omit the --size parameter and include the --no-create flag.
- When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk.
- To allow insecure server connections when using HTTPS, use the --insecure parameter. Be aware that when you use the --insecure flag, the authenticity of the upload endpoint is not verified.
Optional: To verify that a DataVolume was created, view all DataVolume objects by running the following command:
$ oc get dvs
7.15.4.5. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
7.15.5. Uploading a local disk image to a block storage DataVolume
You can upload a local disk image into a block DataVolume by using the virtctl command-line utility.
In this workflow, you create a local block device to use as a PersistentVolume, associate this block volume with an upload DataVolume, and use virtctl to upload the local disk image into the DataVolume.
7.15.5.1. Prerequisites
- Install the kubevirt-virtctl package.
- If you require scratch space according to the CDI supported operations matrix, you must first define a StorageClass or prepare CDI scratch space for this operation to complete successfully.
7.15.5.2. About DataVolumes
DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
7.15.5.3. About block PersistentVolumes
A block PersistentVolume (PV) is a PV that is backed by a raw block device. These volumes do not have a filesystem and can provide performance benefits for virtual machines by reducing overhead.
Raw block volumes are provisioned by specifying volumeMode: Block in the PV and PersistentVolumeClaim (PVC) specification.
7.15.5.4. Creating a local block PersistentVolume
Create a local block PersistentVolume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV configuration as a Block volume and use it as a block device for a virtual machine image.
Procedure
- Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples.
Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file loop10 with a size of 2GB (20 100MB blocks):
$ dd if=/dev/zero of=<loop10> bs=100M count=20
Mount the loop10 file as a loop device:
$ losetup </dev/loop10> <loop10>
Create a PersistentVolume configuration that references the mounted loop device:
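The configuration might look like the following sketch; the PV name, storage class name, capacity, and node name are placeholders for your environment:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <local-block-pv10>
spec:
  capacity:
    storage: 2Gi
  storageClassName: local   # assumed local storage class name
  volumeMode: Block          # expose the device as a raw block volume
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  local:
    path: </dev/loop10>      # the loop device mounted in the previous step
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node01     # the node where the loop device exists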
Create the block PV.
# oc create -f <local-block-pv10.yaml>
Here, <local-block-pv10.yaml> is the file name of the PersistentVolume configuration created in the previous step.
7.15.5.5. Creating an upload DataVolume
You can manually create a DataVolume with an upload data source to use for uploading local disk images.
Procedure
Create a DataVolume configuration that specifies spec: source: upload{}:
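For example, a DataVolume manifest with an upload source and a Block volume mode might look like the following sketch; the name and storage size are placeholders:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <upload-datavolume>
spec:
  source:
    upload: {}   # an empty upload source; data is supplied later by virtctl image-upload
  pvc:
    volumeMode: Block   # request the block PV created earlier
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: <2Gi>  # must be at least as large as the uncompressed disk image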
Create the DataVolume by running the following command:
$ oc create -f <upload-datavolume>.yaml
7.15.5.6. Uploading a local disk image to a DataVolume
You can use the virtctl CLI utility to upload a local disk image from a client machine to a DataVolume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure.
After you upload a local disk image, you can add it to a virtual machine.
Prerequisites
- A virtual machine disk image, in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
- The kubevirt-virtctl package must be installed on the client machine.
- The client machine must be configured to trust the OpenShift Container Platform router’s certificate.
Procedure
Identify the following items:
- The name of the upload DataVolume that you want to use. If this DataVolume does not exist, it is created automatically.
- The size of the DataVolume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image.
- The file location of the virtual machine disk image that you want to upload.
Upload the disk image by running the virtctl image-upload command. Specify the parameters that you identified in the previous step. For example:
$ virtctl image-upload dv <datavolume_name> \
  --size=<datavolume_size> \
  --image-path=</path/to/image>
Note:
- If you do not want to create a new DataVolume, omit the --size parameter and include the --no-create flag.
- When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk.
- To allow insecure server connections when using HTTPS, use the --insecure parameter. Be aware that when you use the --insecure flag, the authenticity of the upload endpoint is not verified.
Optional: To verify that a DataVolume was created, view all DataVolume objects by running the following command:
$ oc get dvs
7.15.5.7. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
7.15.6. Moving a local virtual machine disk to a different node
Virtual machines that use local volume storage can be moved so that they run on a specific node.
You might want to move the virtual machine to a specific node for the following reasons:
- The current node has limitations to the local storage configuration.
- The new node is better optimized for the workload of that virtual machine.
To move a virtual machine that uses local storage, you must clone the underlying volume by using a DataVolume. After the cloning operation is complete, you can edit the virtual machine configuration so that it uses the new DataVolume, or add the new DataVolume to another virtual machine.
Users without the cluster-admin role require additional user permissions in order to clone volumes across namespaces.
7.15.6.1. Cloning a local volume to another node
You can move a virtual machine disk so that it runs on a specific node by cloning the underlying PersistentVolumeClaim (PVC).
To ensure the virtual machine disk is cloned to the correct node, you must either create a new PersistentVolume (PV) or identify one on the correct node. Apply a unique label to the PV so that it can be referenced by the DataVolume.
The destination PV must be the same size or larger than the source PVC. If the destination PV is smaller than the source PVC, the cloning operation fails.
Prerequisites
- The virtual machine must not be running. Power down the virtual machine before cloning the virtual machine disk.
Procedure
Either create a new local PV on the node, or identify a local PV already on the node:
Create a local PV that includes the nodeAffinity.nodeSelectorTerms parameters. The following manifest creates a 10Gi local PV on node01.
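A sketch of such a manifest follows; the PV name, backing path, and storage class name are placeholders:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <destination-pv>
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local   # assumed local storage class name
  local:
    path: </path/to/backing/directory>   # local path on node01 that backs the PV
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node01   # pins the PV to node01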
Identify a PV that already exists on the target node. You can identify the node where a PV is provisioned by viewing the nodeAffinity field in its configuration:
$ oc get pv <destination-pv> -o yaml
The following snippet shows that the PV is on node01:
Example output
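An illustrative excerpt of the output; the nodeAffinity stanza names the node that the PV is pinned to:
...
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node01
...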
Add a unique label to the PV:
$ oc label pv <destination-pv> node=node01
Create a DataVolume manifest that references the following:
- The PVC name and namespace of the virtual machine.
- The label you applied to the PV in the previous step.
- The size of the destination PV.
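The manifest might look like the following sketch; the placeholder names are illustrative and the callout numbers match the notes below:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <clone-datavolume>   # 1
spec:
  source:
    pvc:
      name: "<source-vm-disk>"         # 2
      namespace: "<source-namespace>"  # 3
  pvc:
    accessModes:
      - ReadWriteOnce
    selector:
      matchLabels:
        node: node01                   # 4
    resources:
      requests:
        storage: <10Gi>                # 5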
1. The name of the new DataVolume.
2. The name of the source PVC. If you do not know the PVC name, you can find it in the virtual machine configuration: spec.volumes.persistentVolumeClaim.claimName.
3. The namespace where the source PVC exists.
4. The label that you applied to the PV in the previous step.
5. The size of the destination PV.
Start the cloning operation by applying the DataVolume manifest to your cluster:
$ oc apply -f <clone-datavolume.yaml>
The DataVolume clones the PVC of the virtual machine into the PV on the specific node.
7.15.7. Adding blank disk images
You can increase your storage capacity or create new data partitions by adding blank disk images to OpenShift Virtualization.
7.15.7.1. About DataVolumes
DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
7.15.7.2. Creating a blank disk image with DataVolumes
You can create a new blank disk image in a PersistentVolumeClaim by customizing and deploying a DataVolume configuration file.
Prerequisites
- At least one available PersistentVolume.
- Install the OpenShift CLI (oc).
Procedure
Edit the DataVolume configuration file:
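A sketch of such a configuration follows; the DataVolume name and storage size are illustrative:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: blank-image-datavolume
spec:
  source:
    blank: {}   # a blank source creates an empty disk image
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 500Mi   # size of the blank disk image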
Create the blank disk image by running the following command:
$ oc create -f <blank-image-datavolume>.yaml
7.15.8. Storage defaults for DataVolumes
The kubevirt-storage-class-defaults ConfigMap provides access mode and volume mode defaults for DataVolumes. You can edit or add storage class defaults to the ConfigMap in order to create DataVolumes in the web console that better match the underlying storage.
7.15.8.1. About storage settings for DataVolumes
DataVolumes require a defined access mode and volume mode to be created in the web console. These storage settings are configured by default with a ReadWriteOnce access mode and Filesystem volume mode.
You can modify these settings by editing the kubevirt-storage-class-defaults ConfigMap in the openshift-cnv namespace. You can also add settings for other storage classes in order to create DataVolumes in the web console for different storage types.
You must configure storage settings that are supported by the underlying storage.
All DataVolumes that you create in the web console use the default storage settings unless you specify a storage class that is also defined in the ConfigMap.
7.15.8.1.1. Access modes
DataVolumes support the following access modes:
- ReadWriteOnce: The volume can be mounted as read-write by a single node. ReadWriteOnce has greater versatility and is the default setting.
- ReadWriteMany: The volume can be mounted as read-write by many nodes. ReadWriteMany is required for some features, such as live migration of virtual machines between nodes.
ReadWriteMany is recommended if the underlying storage supports it.
7.15.8.1.2. Volume modes
The volume mode defines if a volume is intended to be used with a formatted filesystem or to remain in raw block state. DataVolumes support the following volume modes:
- Filesystem: Creates a filesystem on the DataVolume. This is the default setting.
- Block: Creates a block DataVolume. Only use Block if the underlying storage supports it.
7.15.8.2. Editing the kubevirt-storage-class-defaults config map in the web console
Modify the storage settings for DataVolumes by editing the kubevirt-storage-class-defaults ConfigMap in the openshift-cnv namespace. You can also add settings for other storage classes in order to create DataVolumes in the web console for different storage types.
You must configure storage settings that are supported by the underlying storage.
Procedure
- Click Workloads → Config Maps from the side menu.
- In the Project list, select openshift-cnv.
- Click kubevirt-storage-class-defaults to open the Config Map Overview.
- Click the YAML tab to display the editable configuration.
Update the data values with the storage configuration that is appropriate for your underlying storage:
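The config map might look like the following sketch; the callout numbers match the notes below and <new> is a placeholder for an additional storage class name:
kind: ConfigMap
apiVersion: v1
metadata:
  name: kubevirt-storage-class-defaults
  namespace: openshift-cnv
data:
  accessMode: ReadWriteOnce        # 1
  volumeMode: Filesystem           # 2
  <new>.accessMode: ReadWriteMany  # 3
  <new>.volumeMode: Block          # 4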
1. The default accessMode is ReadWriteOnce.
2. The default volumeMode is Filesystem.
3. If you add an access mode for a storage class, replace the <new> part of the parameter with the storage class name.
4. If you add a volume mode for a storage class, replace the <new> part of the parameter with the storage class name.
- Click Save to update the config map.
7.15.8.3. Editing the kubevirt-storage-class-defaults config map in the CLI
Modify the storage settings for DataVolumes by editing the kubevirt-storage-class-defaults ConfigMap in the openshift-cnv namespace. You can also add settings for other storage classes in order to create DataVolumes in the web console for different storage types.
You must configure storage settings that are supported by the underlying storage.
Procedure
Edit the ConfigMap by running the following command:
$ oc edit configmap kubevirt-storage-class-defaults -n openshift-cnv
Update the data values of the ConfigMap:
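The data section might look like the following sketch; the callout numbers match the notes below and <new> is a placeholder for an additional storage class name:
data:
  accessMode: ReadWriteOnce        # 1
  volumeMode: Filesystem           # 2
  <new>.accessMode: ReadWriteMany  # 3
  <new>.volumeMode: Block          # 4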
1. The default accessMode is ReadWriteOnce.
2. The default volumeMode is Filesystem.
3. If you add an access mode for a storage class, replace the <new> part of the parameter with the storage class name.
4. If you add a volume mode for a storage class, replace the <new> part of the parameter with the storage class name.
- Save and exit the editor to update the config map.
7.15.8.4. Example of multiple storage class defaults
The following YAML file is an example of a kubevirt-storage-class-defaults ConfigMap that has storage settings configured for two storage classes, migration and block.
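A sketch of such a config map follows; the access and volume modes shown for the migration and block classes are illustrative:
kind: ConfigMap
apiVersion: v1
metadata:
  name: kubevirt-storage-class-defaults
  namespace: openshift-cnv
data:
  accessMode: ReadWriteOnce
  volumeMode: Filesystem
  migration.accessMode: ReadWriteMany
  migration.volumeMode: Filesystem
  block.accessMode: ReadWriteMany
  block.volumeMode: Block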
Ensure that all settings are supported by your underlying storage before you update the ConfigMap.
7.15.9. Using container disks with virtual machines
You can build a virtual machine image into a container disk and store it in your container registry. You can then import the container disk into persistent storage for a virtual machine or attach it directly to the virtual machine for ephemeral storage.
7.15.9.1. About container disks
A container disk is a virtual machine image that is stored as a container image in a container image registry. You can use container disks to deliver the same disk images to multiple virtual machines and to create large numbers of virtual machine clones.
A container disk can either be imported into a persistent volume claim (PVC) by using a DataVolume that is attached to a virtual machine, or attached directly to a virtual machine as an ephemeral containerDisk volume.
Use the Containerized Data Importer (CDI) to import the container disk into a PVC by using a DataVolume. You can then attach the DataVolume to a virtual machine for persistent storage.
A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. When a virtual machine with a containerDisk volume starts, the container image is pulled from the registry and hosted on the node that is hosting the virtual machine.
Use containerDisk volumes for read-only filesystems such as CD-ROMs or for disposable virtual machines.
Using containerDisk volumes for read-write filesystems is not recommended because the data is temporarily written to local storage on the hosting node. This slows live migration of the virtual machine, such as in the case of node maintenance, because the data must be migrated to the destination node. Additionally, all data is lost if the node loses power or otherwise shuts down unexpectedly.
7.15.9.2. Preparing a container disk for virtual machines
You must build a container disk with a virtual machine image and push it to a container registry before it can be used with a virtual machine. You can then either import the container disk into a PVC using a DataVolume and attach it to a virtual machine, or you can attach the container disk directly to a virtual machine as an ephemeral containerDisk volume.
Prerequisites
- Install podman if it is not already installed.
- The virtual machine image must be either QCOW2 or RAW format.
Procedure
Create a Dockerfile to build the virtual machine image into a container image. The virtual machine image must be owned by QEMU, which has a UID of 107, and placed in the /disk/ directory inside the container. Permissions for the /disk/ directory must then be set to 0440.
The following example uses the Red Hat Universal Base Image (UBI) to handle these configuration changes in the first stage, and uses the minimal scratch image in the second stage to store the result:
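A sketch of such a Dockerfile follows; the UBI base image reference is an assumption and <vm_image>.qcow2 is the local disk image described in the note below:
FROM registry.access.redhat.com/ubi8/ubi:latest AS builder
# Copy the disk image into /disk/ owned by the QEMU user (UID 107)
ADD --chown=107:107 <vm_image>.qcow2 /disk/
# Restrict permissions on the contents of the /disk/ directory
RUN chmod 0440 /disk/*

FROM scratch
# Keep only the prepared disk image in the final, minimal image
COPY --from=builder /disk/* /disk/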
Where <vm_image> is the virtual machine image in either QCOW2 or RAW format. To use a remote virtual machine image, replace <vm_image>.qcow2 with the complete URL for the remote image.
Build and tag the container:
$ podman build -t <registry>/<container_disk_name>:latest .
Push the container image to the registry:
$ podman push <registry>/<container_disk_name>:latest
7.15.9.3. Disabling TLS for a container registry to use as insecure registry
If your container registry does not have TLS, you must add it as an insecure registry before you can import container disks into persistent storage.
You can disable TLS (transport layer security) for a container registry by adding the registry to the cdi-insecure-registries ConfigMap.
Prerequisites
- Log in to the cluster as a user with the cluster-admin role.
Procedure
Add the registry to the cdi-insecure-registries ConfigMap in the cdi namespace:
$ oc patch configmap cdi-insecure-registries -n cdi \
  --type merge -p '{"data":{"mykey": "<insecure-registry-host>:5000"}}'
Replace <insecure-registry-host> with the registry hostname.
7.15.9.4. Next steps
- Import the container disk into persistent storage for a virtual machine.
- Create a virtual machine that uses a containerDisk volume for ephemeral storage.
7.15.10. Preparing CDI scratch space
7.15.10.1. About DataVolumes
DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
7.15.10.2. Understanding scratch space
The Containerized Data Importer (CDI) requires scratch space (temporary storage) to complete some operations, such as importing and uploading virtual machine images. During this process, the CDI provisions a scratch space PVC equal to the size of the PVC backing the destination DataVolume (DV). The scratch space PVC is deleted after the operation completes or aborts.
The CDIConfig object allows you to define which StorageClass to use to bind the scratch space PVC by setting the scratchSpaceStorageClass in the spec: section of the CDIConfig object.
If the defined StorageClass does not match a StorageClass in the cluster, then the default StorageClass defined for the cluster is used. If there is no default StorageClass defined in the cluster, the StorageClass used to provision the original DV or PVC is used.
The CDI requires requesting scratch space with a file volume mode, regardless of the PVC backing the origin DataVolume. If the origin PVC is backed by block volume mode, you must define a StorageClass capable of provisioning file volume mode PVCs.
Manual provisioning
If there are no storage classes, the CDI will use any PVCs in the project that match the size requirements for the image. If there are no PVCs that match these requirements, the CDI import pod will remain in a Pending state until an appropriate PVC is made available or until a timeout function kills the pod.
7.15.10.3. CDI operations that require scratch space
Type | Reason |
---|---|
Registry imports | The CDI must download the image to a scratch space and extract the layers to find the image file. The image file is then passed to QEMU-IMG for conversion to a raw disk. |
Upload image | QEMU-IMG does not accept input from STDIN. Instead, the image to upload is saved in scratch space before it can be passed to QEMU-IMG for conversion. |
HTTP imports of archived images | QEMU-IMG does not know how to handle the archive formats CDI supports. Instead, the image is unarchived and saved into scratch space before it is passed to QEMU-IMG. |
HTTP imports of authenticated images | QEMU-IMG inadequately handles authentication. Instead, the image is saved to scratch space and authenticated before it is passed to QEMU-IMG. |
HTTP imports of custom certificates | QEMU-IMG inadequately handles custom certificates of HTTPS endpoints. Instead, the CDI downloads the image to scratch space before passing the file to QEMU-IMG. |
7.15.10.4. Defining a StorageClass in the CDI configuration
Define a StorageClass in the CDI configuration to dynamically provision scratch space for CDI operations.
Procedure
Use the oc client to edit the cdiconfig/config and add or edit the spec: scratchSpaceStorageClass to match a StorageClass in the cluster:
$ oc edit cdiconfig/config
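The relevant fields might look like the following sketch; <storage_class> is a placeholder for a StorageClass that provisions Filesystem-mode PVCs:
apiVersion: cdi.kubevirt.io/v1beta1
kind: CDIConfig
metadata:
  name: config
spec:
  scratchSpaceStorageClass: "<storage_class>"   # StorageClass used for scratch space PVCs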
7.15.10.5. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
Additional resources
- See the Dynamic provisioning section for more information on StorageClasses and how these are defined in the cluster.
7.15.11. Re-using persistent volumes
In order to re-use a statically provisioned persistent volume (PV), you must first reclaim the volume. This involves deleting the PV so that the storage configuration can be re-used.
When you reclaim a persistent volume (PV), you unbind the PV from a persistent volume claim (PVC) and delete the PV. Depending on the underlying storage, you might need to manually delete the shared storage.
You can then re-use the PV configuration to create a PV with a different name.
Statically provisioned PVs must have a reclaim policy of Retain to be reclaimed. If they do not, the PV enters a failed state when the PVC is unbound from the PV.
The Recycle reclaim policy is deprecated in OpenShift Container Platform 4.
7.15.11.1. Reclaiming a statically provisioned persistent volume
Reclaim a statically provisioned persistent volume (PV) by unbinding the persistent volume claim (PVC) and deleting the PV. You might also need to manually delete the shared storage.
Reclaiming a statically provisioned PV is dependent on the underlying storage. This procedure provides a general approach that might need to be customized depending on your storage.
Procedure
Ensure that the reclaim policy of the PV is set to Retain:
Check the reclaim policy of the PV:
$ oc get pv <pv_name> -o yaml | grep 'persistentVolumeReclaimPolicy'
If the persistentVolumeReclaimPolicy is not set to Retain, edit the reclaim policy with the following command:
$ oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
Ensure that no resources are using the PV:
$ oc describe pvc <pvc_name> | grep 'Mounted By:'
Remove any resources that use the PVC before continuing.
Delete the PVC to release the PV:
$ oc delete pvc <pvc_name>
Optional: Export the PV configuration to a YAML file. If you manually remove the shared storage later in this procedure, you can refer to this configuration. You can also use spec parameters in this file as the basis to create a new PV with the same storage configuration after you reclaim the PV:
$ oc get pv <pv_name> -o yaml > <file_name>.yaml
Delete the PV:
$ oc delete pv <pv_name>
Optional: Depending on the storage type, you might need to remove the contents of the shared storage folder:
$ rm -rf <path_to_share_storage>
Optional: Create a PV that uses the same storage configuration as the deleted PV. If you exported the reclaimed PV configuration earlier, you can use the spec parameters of that file as the basis for a new PV manifest:
Note: To avoid possible conflict, it is good practice to give the new PV object a different name than the one that you deleted.
$ oc create -f <new_pv_name>.yaml
Additional Resources
- Configuring local storage for virtual machines
- The OpenShift Container Platform Storage documentation has more information on Persistent Storage.
7.15.12. Deleting DataVolumes
You can manually delete a DataVolume by using the oc command-line interface.
When you delete a virtual machine, the DataVolume it uses is automatically deleted.
7.15.12.1. About DataVolumes
DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
7.15.12.2. Listing all DataVolumes
You can list the DataVolumes in your cluster by using the oc command-line interface.
Procedure
List all DataVolumes by running the following command:
$ oc get dvs
7.15.12.3. Deleting a DataVolume
You can delete a DataVolume by using the oc command-line interface (CLI).
Prerequisites
- Identify the name of the DataVolume that you want to delete.
Procedure
Delete the DataVolume by running the following command:
$ oc delete dv <datavolume_name>
Note: This command only deletes objects that exist in the current project. Specify the -n <project_name> option if the object you want to delete is in a different project or namespace.