7.15. Virtual machine disks
7.15.1. Storage features
Use the following table to determine feature availability for local and shared persistent storage in OpenShift Virtualization.
7.15.1.1. OpenShift Virtualization storage feature matrix
Storage type | Virtual machine live migration | Host-assisted virtual machine disk cloning | Storage-assisted virtual machine disk cloning | Virtual machine snapshots |
---|---|---|---|---|
OpenShift Container Storage: RBD block-mode volumes | Yes | Yes | Yes | Yes |
OpenShift Virtualization hostpath provisioner | No | Yes | No | No |
Other multi-node writable storage | Yes [1] | Yes | Yes [2] | Yes [2] |
Other single-node writable storage | No | Yes | Yes [2] | Yes [2] |
1. PVCs must request a ReadWriteMany access mode.
2. The storage provider must support both Kubernetes and CSI snapshot APIs.
You cannot live migrate virtual machines that use:
- A storage class with ReadWriteOnce (RWO) access mode
- Passthrough features such as SR-IOV and GPU
Do not set the evictionStrategy field to LiveMigrate for these virtual machines.
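For reference, the evictionStrategy field lives in the virtual machine instance template of the VirtualMachine specification. The following minimal sketch shows where the field appears; the API version, resource names, and memory size are illustrative assumptions rather than values taken from this documentation:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: example-vm                 # hypothetical name
spec:
  running: false
  template:
    spec:
      # Do not set this field for virtual machines that use RWO storage
      # or passthrough features such as SR-IOV and GPU.
      evictionStrategy: LiveMigrate
      domain:
        resources:
          requests:
            memory: 1Gi            # illustrative memory request
        devices: {}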
7.15.2. Configuring local storage for virtual machines
You can configure local storage for your virtual machines by using the hostpath provisioner feature.
7.15.2.1. About the hostpath provisioner
The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.
When you install the OpenShift Virtualization Operator, the hostpath provisioner Operator is automatically installed. To use it, you must:
- Configure SELinux:
  - If you use Red Hat Enterprise Linux CoreOS 8 workers, you must create a MachineConfig object on each node.
  - Otherwise, apply the SELinux label container_file_t to the PersistentVolume (PV) backing directory on each node.
- Create a HostPathProvisioner custom resource.
- Create a StorageClass object for the hostpath provisioner.
The hostpath provisioner Operator deploys the provisioner as a DaemonSet on each node when you create its custom resource. In the custom resource file, you specify the backing directory for the PersistentVolumes that the hostpath provisioner creates.
7.15.2.2. Configuring SELinux for the hostpath provisioner on Red Hat Enterprise Linux CoreOS 8
You must configure SELinux before you create the HostPathProvisioner custom resource. To configure SELinux on Red Hat Enterprise Linux CoreOS 8 workers, you must create a MachineConfig object on each node.
If you do not use Red Hat Enterprise Linux CoreOS workers, skip this procedure.
Prerequisites
- Create a backing directory on each node for the PersistentVolumes (PVs) that the hostpath provisioner creates.
Procedure
Create the MachineConfig file. For example:
$ touch machineconfig.yaml
Edit the file, ensuring that you include the directory where you want the hostpath provisioner to create PVs. For example:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 50-set-selinux-for-hostpath-provisioner
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 2.2.0
    systemd:
      units:
        - contents: |
            [Unit]
            Description=Set SELinux chcon for hostpath provisioner
            Before=kubelet.service

            [Service]
            ExecStart=/usr/bin/chcon -Rt container_file_t <path/to/backing/directory> 1

            [Install]
            WantedBy=multi-user.target
          enabled: true
          name: hostpath-provisioner.service
- 1
- Specify the backing directory where you want the provisioner to create PVs.
Create the MachineConfig object:
$ oc create -f machineconfig.yaml -n <namespace>
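Optionally, after the MachineConfig object is applied and the node has restarted, you can check that the backing directory carries the expected SELinux label. This is a sketch of one way to verify it, using the same placeholder path as above:
$ ls -dZ </path/to/backing/directory>
If the chcon command in the systemd unit succeeded, the printed SELinux context includes container_file_t.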
7.15.2.3. Using the hostpath provisioner to enable local storage
To deploy the hostpath provisioner and enable your virtual machines to use local storage, first create a HostPathProvisioner custom resource.
Prerequisites
- Create a backing directory on each node for the PersistentVolumes (PVs) that the hostpath provisioner creates.
- Apply the SELinux context container_file_t to the PV backing directory on each node. For example:
$ sudo chcon -t container_file_t -R </path/to/backing/directory>
Note: If you use Red Hat Enterprise Linux CoreOS 8 workers, you must configure SELinux by using a MachineConfig manifest instead.
Procedure
Create the HostPathProvisioner custom resource file. For example:
$ touch hostpathprovisioner_cr.yaml
Edit the file, ensuring that the spec.pathConfig.path value is the directory where you want the hostpath provisioner to create PVs. For example:
apiVersion: hostpathprovisioner.kubevirt.io/v1alpha1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "</path/to/backing/directory>" 1
    useNamingPrefix: "false" 2
Note: If you did not create the backing directory, the provisioner attempts to create it for you. If you did not apply the container_file_t SELinux context, this can cause Permission denied errors.
Create the custom resource in the openshift-cnv namespace:
$ oc create -f hostpathprovisioner_cr.yaml -n openshift-cnv
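To confirm that the Operator deployed the provisioner as a DaemonSet, you can list the pods in the openshift-cnv namespace. The grep pattern below assumes that the provisioner pods contain hostpath-provisioner in their names, which might differ between versions:
$ oc get pods -n openshift-cnv | grep hostpath-provisioner
If the DaemonSet is running, one provisioner pod is listed per schedulable node.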
7.15.2.4. Creating a StorageClass object
When you create a StorageClass object, you set parameters that affect the dynamic provisioning of PersistentVolumes (PVs) that belong to that storage class.
You cannot update a StorageClass object’s parameters after you create it.
Procedure
Create a YAML file for defining the storage class. For example:
$ touch storageclass.yaml
Edit the file. For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-provisioner 1
provisioner: kubevirt.io/hostpath-provisioner
reclaimPolicy: Delete 2
volumeBindingMode: WaitForFirstConsumer 3
- 1
- You can optionally rename the storage class by changing this value.
- 2
- The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the storage class defaults to Delete.
- 3
- The volumeBindingMode value determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a PV until after a Pod that uses the PersistentVolumeClaim (PVC) is created. This ensures that the PV meets the Pod’s scheduling requirements.
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.
To solve this problem, use the Kubernetes pod scheduler to bind the PVC to a PV on the correct node. By using a StorageClass with volumeBindingMode set to WaitForFirstConsumer, the binding and provisioning of the PV is delayed until a Pod that uses the PVC is created.
Create the StorageClass object:
$ oc create -f storageclass.yaml
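As a usage sketch, a PersistentVolumeClaim that requests this storage class remains unbound until a pod that uses the claim is scheduled, because of the WaitForFirstConsumer binding mode. The claim name and size below are illustrative assumptions:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hostpath-example-pvc       # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                # illustrative size
  storageClassName: hostpath-provisioner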
7.15.3. Configuring CDI to work with namespaces that have a compute resource quota
You can use the Containerized Data Importer (CDI) to import, upload, and clone virtual machine disks into namespaces that are subject to CPU and memory resource restrictions.
7.15.3.1. About CPU and memory quotas in a namespace
A resource quota, defined by the ResourceQuota object, imposes restrictions on a namespace that limit the total amount of compute resources that can be consumed by resources within that namespace.
The CDIConfig object defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values for the CDIConfig object are set to a default value of 0. This ensures that pods created by CDI that make no compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota.
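For context, the following is a minimal sketch of a ResourceQuota that restricts compute resources in a namespace; the object name, namespace, and limits are illustrative assumptions:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota              # hypothetical name
  namespace: <restricted_namespace>
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
Because the CDI defaults are 0, CDI pods can be admitted into a namespace with such a quota unless you override the defaults as described in the next section.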
7.15.3.2. Editing the CDIConfig object to override CPU and memory defaults
Modify the default settings for CPU and memory requests and limits for your use case by editing the spec attribute of the CDIConfig object.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Edit the cdiconfig/config by running the following command:
$ oc edit cdiconfig/config
Change the default CPU and memory requests and limits by editing the spec: podResourceRequirements property of the CDIConfig object:
apiVersion: cdi.kubevirt.io/v1alpha1
kind: CDIConfig
metadata:
  labels:
    app: containerized-data-importer
    cdi.kubevirt.io: ""
  name: config
spec:
  podResourceRequirements:
    limits:
      cpu: "4"
      memory: "1Gi"
    requests:
      cpu: "1"
      memory: "250Mi"
...
- Save and exit the editor to update the CDIConfig object.
Verification
View the CDIConfig status and verify your changes by running the following command:
$ oc get cdiconfig config -o yaml
7.15.3.3. Additional resources
7.15.4. Uploading local disk images by using the virtctl tool
You can upload a locally stored disk image to a new or existing DataVolume by using the virtctl command-line utility.
7.15.4.1. Prerequisites
- Install the kubevirt-virtctl package.
- If you require scratch space according to the CDI supported operations matrix, you must first define a StorageClass or prepare CDI scratch space for this operation to complete successfully.
7.15.4.2. About DataVolumes
DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
7.15.4.3. Creating an upload DataVolume
You can manually create a DataVolume with an upload data source to use for uploading local disk images.
Procedure
Create a DataVolume configuration that specifies an upload: {} source:
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: <upload-datavolume> 1
spec:
  source:
    upload: {}
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: <2Gi> 2
Create the DataVolume by running the following command:
$ oc create -f <upload-datavolume>.yaml
7.15.4.4. Uploading a local disk image to a DataVolume
You can use the virtctl CLI utility to upload a local disk image from a client machine to a DataVolume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure.
After you upload a local disk image, you can add it to a virtual machine.
Prerequisites
- A virtual machine disk image, in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
- The kubevirt-virtctl package must be installed on the client machine.
- The client machine must be configured to trust the OpenShift Container Platform router’s certificate.
Procedure
Identify the following items:
- The name of the upload DataVolume that you want to use. If this DataVolume does not exist, it is created automatically.
- The size of the DataVolume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image.
- The file location of the virtual machine disk image that you want to upload.
Upload the disk image by running the virtctl image-upload command. Specify the parameters that you identified in the previous step. For example:
$ virtctl image-upload dv <datavolume_name> \ 1
  --size=<datavolume_size> \ 2
  --image-path=</path/to/image> 3
Note:
- If you do not want to create a new DataVolume, omit the --size parameter and include the --no-create flag.
- When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk.
- To allow insecure server connections when using HTTPS, use the --insecure parameter. Be aware that when you use the --insecure flag, the authenticity of the upload endpoint is not verified.
Optional. To verify that a DataVolume was created, view all DataVolume objects by running the following command:
$ oc get dvs
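After the upload completes, you can reference the DataVolume from a virtual machine. The following is a minimal sketch of the relevant disk and volume entries; the API version, names, bus, and memory size are illustrative assumptions:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vm-uploaded-disk           # hypothetical name
spec:
  running: false
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 1Gi            # illustrative memory request
        devices:
          disks:
            - name: uploaded-disk
              disk:
                bus: virtio
      volumes:
        - name: uploaded-disk
          dataVolume:
            name: <upload-datavolume>   # the DataVolume that received the upload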
7.15.4.5. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
7.15.5. Uploading a local disk image to a block storage DataVolume
You can upload a local disk image into a block DataVolume by using the virtctl command-line utility.
In this workflow, you create a local block device to use as a PersistentVolume, associate this block volume with an upload DataVolume, and use virtctl to upload the local disk image into the DataVolume.
7.15.5.1. Prerequisites
- Install the kubevirt-virtctl package.
- If you require scratch space according to the CDI supported operations matrix, you must first define a StorageClass or prepare CDI scratch space for this operation to complete successfully.
7.15.5.2. About DataVolumes
DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
7.15.5.3. About block PersistentVolumes
A block PersistentVolume (PV) is a PV that is backed by a raw block device. These volumes do not have a filesystem and can provide performance benefits for virtual machines by reducing overhead.
Raw block volumes are provisioned by specifying volumeMode: Block in the PV and PersistentVolumeClaim (PVC) specification.
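For illustration, a PVC that requests a raw block volume sets volumeMode: Block in its specification. The claim name, storage class, and size in this sketch are assumptions:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-example-pvc          # hypothetical name
spec:
  volumeMode: Block                # request a raw block device instead of a filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: local          # must name a storage class that supports block volumes
  resources:
    requests:
      storage: 2Gi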
7.15.5.4. Creating a local block PersistentVolume
Create a local block PersistentVolume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV configuration as a Block volume and use it as a block device for a virtual machine image.
Procedure
- Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples.
Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file loop10 with a size of 2 GB (20 blocks of 100 MB):
$ dd if=/dev/zero of=<loop10> bs=100M count=20
Mount the loop10 file as a loop device.
$ losetup </dev/loop10> <loop10> 1 2
Create a PersistentVolume configuration that references the mounted loop device.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: <local-block-pv10>
  annotations:
spec:
  local:
    path: </dev/loop10> 1
  capacity:
    storage: <2Gi>
  volumeMode: Block 2
  storageClassName: local 3
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <node01> 4
Create the block PV.
# oc create -f <local-block-pv10.yaml> 1
- 1
- The filename of the PersistentVolume created in the previous step.
7.15.5.5. Creating an upload DataVolume
You can manually create a DataVolume with an upload data source to use for uploading local disk images.
Procedure
Create a DataVolume configuration that specifies an upload: {} source:
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: <upload-datavolume> 1
spec:
  source:
    upload: {}
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: <2Gi> 2
Create the DataVolume by running the following command:
$ oc create -f <upload-datavolume>.yaml
7.15.5.6. Uploading a local disk image to a DataVolume
You can use the virtctl CLI utility to upload a local disk image from a client machine to a DataVolume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure.
After you upload a local disk image, you can add it to a virtual machine.
Prerequisites
- A virtual machine disk image, in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
- The kubevirt-virtctl package must be installed on the client machine.
- The client machine must be configured to trust the OpenShift Container Platform router’s certificate.
Procedure
Identify the following items:
- The name of the upload DataVolume that you want to use. If this DataVolume does not exist, it is created automatically.
- The size of the DataVolume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image.
- The file location of the virtual machine disk image that you want to upload.
Upload the disk image by running the virtctl image-upload command. Specify the parameters that you identified in the previous step. For example:
$ virtctl image-upload dv <datavolume_name> \ 1
  --size=<datavolume_size> \ 2
  --image-path=</path/to/image> 3
Note:
- If you do not want to create a new DataVolume, omit the --size parameter and include the --no-create flag.
- When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk.
- To allow insecure server connections when using HTTPS, use the --insecure parameter. Be aware that when you use the --insecure flag, the authenticity of the upload endpoint is not verified.
Optional. To verify that a DataVolume was created, view all DataVolume objects by running the following command:
$ oc get dvs
7.15.5.7. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
7.15.6. Moving a local virtual machine disk to a different node
Virtual machines that use local volume storage can be moved so that they run on a specific node.
You might want to move the virtual machine to a specific node for the following reasons:
- The current node has limitations to the local storage configuration.
- The new node is better optimized for the workload of that virtual machine.
To move a virtual machine that uses local storage, you must clone the underlying volume by using a DataVolume. After the cloning operation is complete, you can edit the virtual machine configuration so that it uses the new DataVolume, or add the new DataVolume to another virtual machine.
Users without the cluster-admin role require additional user permissions in order to clone volumes across namespaces.
7.15.6.1. Cloning a local volume to another node
You can move a virtual machine disk so that it runs on a specific node by cloning the underlying PersistentVolumeClaim (PVC).
To ensure the virtual machine disk is cloned to the correct node, you must either create a new PersistentVolume (PV) or identify one on the correct node. Apply a unique label to the PV so that it can be referenced by the DataVolume.
The destination PV must be the same size or larger than the source PVC. If the destination PV is smaller than the source PVC, the cloning operation fails.
Prerequisites
- The virtual machine must not be running. Power down the virtual machine before cloning the virtual machine disk.
Procedure
Either create a new local PV on the node, or identify a local PV already on the node:
Create a local PV that includes the nodeAffinity.nodeSelectorTerms parameters. The following manifest creates a 10Gi local PV on node01.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: <destination-pv> 1
  annotations:
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi 2
  local:
    path: /mnt/local-storage/local/disk1 3
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node01 4
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local
  volumeMode: Filesystem
Identify a PV that already exists on the target node. You can identify the node where a PV is provisioned by viewing the nodeAffinity field in its configuration:
$ oc get pv <destination-pv> -o yaml
The following snippet shows that the PV is on node01:
Example output
...
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname 1
              operator: In
              values:
                - node01 2
...
Add a unique label to the PV:
$ oc label pv <destination-pv> node=node01
Create a DataVolume manifest that references the following:
- The PVC name and namespace of the virtual machine.
- The label you applied to the PV in the previous step.
- The size of the destination PV.
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: <clone-datavolume> 1
spec:
  source:
    pvc:
      name: "<source-vm-disk>" 2
      namespace: "<source-namespace>" 3
  pvc:
    accessModes:
      - ReadWriteOnce
    selector:
      matchLabels:
        node: node01 4
    resources:
      requests:
        storage: <10Gi> 5
- 1
- The name of the new DataVolume.
- 2
- The name of the source PVC. If you do not know the PVC name, you can find it in the virtual machine configuration: spec.volumes.persistentVolumeClaim.claimName.
- 3
- The namespace where the source PVC exists.
- 4
- The label that you applied to the PV in the previous step.
- 5
- The size of the destination PV.
Start the cloning operation by applying the DataVolume manifest to your cluster:
$ oc apply -f <clone-datavolume.yaml>
The DataVolume clones the PVC of the virtual machine into the PV on the specific node.
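When the clone completes, you can point the virtual machine at the new DataVolume by replacing the volume source in its configuration. The excerpt below is a sketch; the volume name is an assumption and should match the name your virtual machine already uses:
# Excerpt from the VirtualMachine configuration (spec.template.spec.volumes)
volumes:
  - name: rootdisk                 # hypothetical volume name
    dataVolume:
      name: <clone-datavolume>     # the DataVolume created by the cloning operation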
7.15.7. Expanding virtual storage by adding blank disk images
You can increase your storage capacity or create new data partitions by adding blank disk images to OpenShift Virtualization.
7.15.7.1. About DataVolumes
DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
7.15.7.2. Creating a blank disk image with DataVolumes
You can create a new blank disk image in a PersistentVolumeClaim by customizing and deploying a DataVolume configuration file.
Prerequisites
- At least one available PersistentVolume.
- Install the OpenShift CLI (oc).
Procedure
Edit the DataVolume configuration file:
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: blank-image-datavolume
spec:
  source:
    blank: {}
  pvc:
    # Optional: Set the storage class or omit to accept the default
    # storageClassName: "hostpath"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 500Mi
Create the blank disk image by running the following command:
$ oc create -f <blank-image-datavolume>.yaml
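Once the DataVolume is ready, you can attach the blank disk to a virtual machine as an additional data disk. The following excerpt is a sketch; the disk name and bus are illustrative assumptions:
# Excerpt from the VirtualMachine template (spec.template.spec)
domain:
  devices:
    disks:
      - name: blank-disk           # hypothetical disk name
        disk:
          bus: virtio
volumes:
  - name: blank-disk
    dataVolume:
      name: blank-image-datavolume # the DataVolume created above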
7.15.7.3. Template: DataVolume configuration file for blank disk images
blank-image-datavolume.yaml
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: blank-image-datavolume
spec:
  source:
    blank: {}
  pvc:
    # Optional: Set the storage class or omit to accept the default
    # storageClassName: "hostpath"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 500Mi
7.15.8. Storage defaults for DataVolumes
The kubevirt-storage-class-defaults ConfigMap provides access mode and volume mode defaults for DataVolumes. You can edit or add storage class defaults to the ConfigMap in order to create DataVolumes in the web console that better match the underlying storage.
7.15.8.1. About storage settings for DataVolumes
DataVolumes require a defined access mode and volume mode to be created in the web console. These storage settings are configured by default with a ReadWriteOnce access mode and Filesystem volume mode.
You can modify these settings by editing the kubevirt-storage-class-defaults ConfigMap in the openshift-cnv namespace. You can also add settings for other storage classes in order to create DataVolumes in the web console for different storage types.
You must configure storage settings that are supported by the underlying storage.
All DataVolumes that you create in the web console use the default storage settings unless you specify a storage class that is also defined in the ConfigMap.
7.15.8.1.1. Access modes
DataVolumes support the following access modes:
- ReadWriteOnce: The volume can be mounted as read-write by a single node. ReadWriteOnce has greater versatility and is the default setting.
- ReadWriteMany: The volume can be mounted as read-write by many nodes. ReadWriteMany is required for some features, such as live migration of virtual machines between nodes.
ReadWriteMany is recommended if the underlying storage supports it.
7.15.8.1.2. Volume modes
The volume mode defines if a volume is intended to be used with a formatted filesystem or to remain in raw block state. DataVolumes support the following volume modes:
- Filesystem: Creates a filesystem on the DataVolume. This is the default setting.
- Block: Creates a block DataVolume. Only use Block if the underlying storage supports it.
7.15.8.2. Editing the kubevirt-storage-class-defaults config map in the web console
Modify the storage settings for DataVolumes by editing the kubevirt-storage-class-defaults ConfigMap in the openshift-cnv namespace. You can also add settings for other storage classes in order to create DataVolumes in the web console for different storage types.
You must configure storage settings that are supported by the underlying storage.
Procedure
- Click Workloads → Config Maps from the side menu.
- In the Project list, select openshift-cnv.
- Click kubevirt-storage-class-defaults to open the Config Map Overview.
- Click the YAML tab to display the editable configuration.
Update the data values with the storage configuration that is appropriate for your underlying storage:
...
data:
  accessMode: ReadWriteOnce 1
  volumeMode: Filesystem 2
  <new>.accessMode: ReadWriteMany 3
  <new>.volumeMode: Block 4
- 1
- The default accessMode is ReadWriteOnce.
- 2
- The default volumeMode is Filesystem.
- 3
- If you add an access mode for a storage class, replace the <new> part of the parameter with the storage class name.
- 4
- If you add a volume mode for a storage class, replace the <new> part of the parameter with the storage class name.
- Click Save to update the config map.
7.15.8.3. Editing the kubevirt-storage-class-defaults config map in the CLI
Modify the storage settings for DataVolumes by editing the kubevirt-storage-class-defaults ConfigMap in the openshift-cnv namespace. You can also add settings for other storage classes in order to create DataVolumes in the web console for different storage types.
You must configure storage settings that are supported by the underlying storage.
Procedure
Edit the ConfigMap by running the following command:
$ oc edit configmap kubevirt-storage-class-defaults -n openshift-cnv
Update the data values of the ConfigMap:
...
data:
  accessMode: ReadWriteOnce 1
  volumeMode: Filesystem 2
  <new>.accessMode: ReadWriteMany 3
  <new>.volumeMode: Block 4
- 1
- The default accessMode is ReadWriteOnce.
- 2
- The default volumeMode is Filesystem.
- 3
- If you add an access mode for a storage class, replace the <new> part of the parameter with the storage class name.
- 4
- If you add a volume mode for a storage class, replace the <new> part of the parameter with the storage class name.
- Save and exit the editor to update the config map.
7.15.8.4. Example of multiple storage class defaults
The following YAML file is an example of a kubevirt-storage-class-defaults ConfigMap that has storage settings configured for two storage classes, nfs-sc and block-sc.
Ensure that all settings are supported by your underlying storage before you update the ConfigMap.
kind: ConfigMap
apiVersion: v1
metadata:
  name: kubevirt-storage-class-defaults
  namespace: openshift-cnv
...
data:
  accessMode: ReadWriteOnce
  volumeMode: Filesystem
  nfs-sc.accessMode: ReadWriteMany
  nfs-sc.volumeMode: Filesystem
  block-sc.accessMode: ReadWriteMany
  block-sc.volumeMode: Block
7.15.9. Using container disks with virtual machines
You can build a virtual machine image into a container disk and store it in your container registry. You can then import the container disk into persistent storage for a virtual machine or attach it directly to the virtual machine for ephemeral storage.
7.15.9.1. About container disks
A container disk is a virtual machine image that is stored as a container image in a container image registry. You can use container disks to deliver the same disk images to multiple virtual machines and to create large numbers of virtual machine clones.
A container disk can either be imported into a persistent volume claim (PVC) by using a DataVolume that is attached to a virtual machine, or attached directly to a virtual machine as an ephemeral containerDisk volume.
7.15.9.1.1. Importing a container disk into a PVC by using a DataVolume
Use the Containerized Data Importer (CDI) to import the container disk into a PVC by using a DataVolume. You can then attach the DataVolume to a virtual machine for persistent storage.
7.15.9.1.2. Attaching a container disk to a virtual machine as a containerDisk volume
A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. When a virtual machine with a containerDisk volume starts, the container image is pulled from the registry and hosted on the node that is hosting the virtual machine.
Use containerDisk volumes for read-only filesystems such as CD-ROMs or for disposable virtual machines.
Using containerDisk volumes for read-write filesystems is not recommended because the data is temporarily written to local storage on the hosting node. This slows live migration of the virtual machine, such as in the case of node maintenance, because the data must be migrated to the destination node. Additionally, all data is lost if the node loses power or otherwise shuts down unexpectedly.
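As an illustration of this pattern, the following is a minimal sketch of a virtual machine that boots from a containerDisk volume; the image reference, resource names, and memory size are assumptions rather than values from this documentation:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vm-containerdisk           # hypothetical name
spec:
  running: false
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 1Gi            # illustrative memory request
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
      volumes:
        - name: containerdisk
          containerDisk:
            image: <registry>/<container_disk_name>:latest   # container disk built as described below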
7.15.9.2. Preparing a container disk for virtual machines
You must build a container disk with a virtual machine image and push it to a container registry before it can be used with a virtual machine. You can then either import the container disk into a PVC using a DataVolume and attach it to a virtual machine, or you can attach the container disk directly to a virtual machine as an ephemeral containerDisk volume.
Prerequisites
- Install podman if it is not already installed.
- The virtual machine image must be either QCOW2 or RAW format.
Procedure
Create a Dockerfile to build the virtual machine image into a container image. The virtual machine image must be owned by QEMU, which has a UID of 107, and placed in the /disk/ directory inside the container. Permissions for the /disk/ directory must then be set to 0440.
The following example uses the Red Hat Universal Base Image (UBI) to handle these configuration changes in the first stage, and uses the minimal scratch image in the second stage to store the result:
$ cat > Dockerfile << EOF
FROM registry.access.redhat.com/ubi8/ubi:latest AS builder
ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1
RUN chmod 0440 /disk/*

FROM scratch
COPY --from=builder /disk/* /disk/
EOF
- 1
- Where <vm_image> is the virtual machine image in either QCOW2 or RAW format. To use a remote virtual machine image, replace <vm_image>.qcow2 with the complete URL for the remote image.
Build and tag the container:
$ podman build -t <registry>/<container_disk_name>:latest .
Push the container image to the registry:
$ podman push <registry>/<container_disk_name>:latest
If your container registry does not have TLS, you must add it as an insecure registry before you can import container disks into persistent storage.
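To import the pushed container disk into persistent storage, you can point a DataVolume at the registry. This sketch assumes the CDI registry source with a docker:// URL scheme; the DataVolume name and size are illustrative:
apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: containerdisk-import-dv    # hypothetical name
spec:
  source:
    registry:
      url: "docker://<registry>/<container_disk_name>:latest"
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi              # illustrative size; must hold the uncompressed disk
Registry imports require scratch space, as shown in the CDI supported operations matrix.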
7.15.9.3. Disabling TLS for a container registry to use as insecure registry
You can disable TLS (transport layer security) for a container registry by adding the registry to the cdi-insecure-registries ConfigMap.
Prerequisites
- Log in to the cluster as a user with the cluster-admin role.
Procedure
Add the registry to the cdi-insecure-registries ConfigMap in the cdi namespace:
$ oc patch configmap cdi-insecure-registries -n cdi \
  --type merge -p '{"data":{"mykey": "<insecure-registry-host>:5000"}}' 1
- 1
- Replace <insecure-registry-host> with the registry hostname.
7.15.9.4. Next steps
- Import the container disk into persistent storage for a virtual machine.
- Create a virtual machine that uses a containerDisk volume for ephemeral storage.
7.15.10. Preparing CDI scratch space
7.15.10.1. About DataVolumes
DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
7.15.10.2. Understanding scratch space
The Containerized Data Importer (CDI) requires scratch space (temporary storage) to complete some operations, such as importing and uploading virtual machine images. During this process, the CDI provisions a scratch space PVC equal to the size of the PVC backing the destination DataVolume (DV). The scratch space PVC is deleted after the operation completes or aborts.
The CDIConfig object allows you to define which StorageClass to use to bind the scratch space PVC by setting the scratchSpaceStorageClass in the spec: section of the CDIConfig object.
If the defined StorageClass does not match a StorageClass in the cluster, then the default StorageClass defined for the cluster is used. If there is no default StorageClass defined in the cluster, the StorageClass used to provision the original DV or PVC is used.
The CDI requires requesting scratch space with a file volume mode, regardless of the PVC backing the origin DataVolume. If the origin PVC is backed by block volume mode, you must define a StorageClass capable of provisioning file volume mode PVCs.
Manual provisioning
If there are no storage classes, the CDI will use any PVCs in the project that match the size requirements for the image. If there are no PVCs that match these requirements, the CDI import pod will remain in a Pending state until an appropriate PVC is made available or until a timeout function kills the pod.
7.15.10.3. CDI operations that require scratch space
Type | Reason |
---|---|
Registry imports | The CDI must download the image to a scratch space and extract the layers to find the image file. The image file is then passed to QEMU-IMG for conversion to a raw disk. |
Upload image | QEMU-IMG does not accept input from STDIN. Instead, the image to upload is saved in scratch space before it can be passed to QEMU-IMG for conversion. |
HTTP imports of archived images | QEMU-IMG does not know how to handle the archive formats CDI supports. Instead, the image is unarchived and saved into scratch space before it is passed to QEMU-IMG. |
HTTP imports of authenticated images | QEMU-IMG inadequately handles authentication. Instead, the image is saved to scratch space and authenticated before it is passed to QEMU-IMG. |
HTTP imports of custom certificates | QEMU-IMG inadequately handles custom certificates of HTTPS endpoints. Instead, the CDI downloads the image to scratch space before passing the file to QEMU-IMG. |
7.15.10.4. Defining a StorageClass in the CDI configuration
Define a StorageClass in the CDI configuration to dynamically provision scratch space for CDI operations.
Procedure
Use the oc client to edit the cdiconfig/config and add or edit the spec: scratchSpaceStorageClass to match a StorageClass in the cluster.
$ oc edit cdiconfig/config
apiVersion: cdi.kubevirt.io/v1alpha1
kind: CDIConfig
metadata:
  name: config
...
spec:
  scratchSpaceStorageClass: "<storage_class>"
...
7.15.10.5. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
Additional resources
- See the Dynamic provisioning section for more information on StorageClasses and how these are defined in the cluster.
7.15.11. Re-using persistent volumes
In order to re-use a statically provisioned persistent volume (PV), you must first reclaim the volume. This involves deleting the PV so that the storage configuration can be re-used.
7.15.11.1. About reclaiming statically provisioned persistent volumes
When you reclaim a persistent volume (PV), you unbind the PV from a persistent volume claim (PVC) and delete the PV. Depending on the underlying storage, you might need to manually delete the shared storage.
You can then re-use the PV configuration to create a PV with a different name.
Statically provisioned PVs must have a reclaim policy of Retain to be reclaimed. If they do not, the PV enters a failed state when the PVC is unbound from the PV.
The Recycle reclaim policy is deprecated in OpenShift Container Platform 4.
7.15.11.2. Reclaiming statically provisioned persistent volumes
Reclaim a statically provisioned persistent volume (PV) by unbinding the persistent volume claim (PVC) and deleting the PV. You might also need to manually delete the shared storage.
Reclaiming a statically provisioned PV is dependent on the underlying storage. This procedure provides a general approach that might need to be customized depending on your storage.
Procedure
Ensure that the reclaim policy of the PV is set to Retain:
Check the reclaim policy of the PV:
$ oc get pv <pv_name> -o yaml | grep 'persistentVolumeReclaimPolicy'
If the persistentVolumeReclaimPolicy is not set to Retain, edit the reclaim policy with the following command:
$ oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
Ensure that no resources are using the PV:
$ oc describe pvc <pvc_name> | grep 'Mounted By:'
Remove any resources that use the PVC before continuing.
Delete the PVC to release the PV:
$ oc delete pvc <pvc_name>
Optional: Export the PV configuration to a YAML file. If you manually remove the shared storage later in this procedure, you can refer to this configuration. You can also use spec parameters in this file as the basis to create a new PV with the same storage configuration after you reclaim the PV:
$ oc get pv <pv_name> -o yaml > <file_name>.yaml
Delete the PV:
$ oc delete pv <pv_name>
Optional: Depending on the storage type, you might need to remove the contents of the shared storage folder:
$ rm -rf <path_to_share_storage>
Optional: Create a PV that uses the same storage configuration as the deleted PV. If you exported the reclaimed PV configuration earlier, you can use the spec parameters of that file as the basis for a new PV manifest:
Note: To avoid possible conflict, it is good practice to give the new PV object a different name than the one that you deleted.
$ oc create -f <new_pv_name>.yaml
Additional Resources
- Configuring local storage for virtual machines
- The OpenShift Container Platform Storage documentation has more information on Persistent Storage.
7.15.12. Deleting DataVolumes
You can manually delete a DataVolume by using the oc command-line interface.
When you delete a virtual machine, the DataVolume it uses is automatically deleted.
7.15.12.1. About DataVolumes
DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.
7.15.12.2. Listing all DataVolumes
You can list the DataVolumes in your cluster by using the oc command-line interface.
Procedure
List all DataVolumes by running the following command:
$ oc get dvs
7.15.12.3. Deleting a DataVolume
You can delete a DataVolume by using the oc command-line interface (CLI).
Prerequisites
- Identify the name of the DataVolume that you want to delete.
Procedure
Delete the DataVolume by running the following command:
$ oc delete dv <datavolume_name>
Note: This command only deletes objects that exist in the current project. Specify the -n <project_name> option if the object you want to delete is in a different project or namespace.