Chapter 9. Storage
9.1. Storage configuration overview
You can configure a default storage class, storage profiles, Containerized Data Importer (CDI), data volumes, and automatic boot source updates.
9.1.1. Storage
The following storage configuration tasks are mandatory:
- Configure a default storage class
- You must configure a default storage class for your cluster. Otherwise, the cluster cannot receive automated boot source updates.
- Configure storage profiles
- You must configure storage profiles if your storage provider is not recognized by CDI. A storage profile provides recommended storage settings based on the associated storage class.
The following storage configuration tasks are optional:
- Reserve additional PVC space for file system overhead
- By default, 5.5% of a file system PVC is reserved for overhead, reducing the space available for VM disks by that amount. You can configure a different overhead value.
- Configure local storage by using the hostpath provisioner
- You can configure local storage for virtual machines by using the hostpath provisioner (HPP). When you install the OpenShift Virtualization Operator, the HPP Operator is automatically installed.
- Configure user permissions to clone data volumes between namespaces
- You can configure RBAC roles to enable users to clone data volumes between namespaces.
9.1.2. Containerized Data Importer
You can perform the following Containerized Data Importer (CDI) configuration tasks:
- Override the resource request limits of a namespace
- You can configure CDI to import, upload, and clone VM disks into namespaces that are subject to CPU and memory resource restrictions.
- Configure CDI scratch space
- CDI requires scratch space (temporary storage) to complete some operations, such as importing and uploading VM images. During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV).
9.1.3. Data volumes
You can perform the following data volume configuration tasks:
- Enable preallocation for data volumes
- CDI can preallocate disk space to improve write performance when creating data volumes. You can enable preallocation for specific data volumes.
- Manage data volume annotations
- Data volume annotations allow you to manage pod behavior. You can add one or more annotations to a data volume, which then propagates to the created importer pods.
9.1.4. Boot source updates
You can perform the following boot source update configuration task:
- Manage automatic boot source updates
- Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, CDI imports, polls, and updates the images so that they are ready to be cloned for new VMs. By default, CDI automatically updates Red Hat boot sources. You can enable automatic updates for custom boot sources.
9.2. Configuring storage profiles
A storage profile provides recommended storage settings based on the associated storage class. A storage profile is allocated for each storage class.
The Containerized Data Importer (CDI) recognizes a storage provider if it has been configured to identify and interact with the storage provider’s capabilities.
For recognized storage types, the CDI provides values that optimize the creation of PVCs. You can also configure automatic settings for the storage class by customizing the storage profile. If the CDI does not recognize your storage provider, you must configure storage profiles.
When using OpenShift Virtualization with Red Hat OpenShift Data Foundation, specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks. RBD block mode volumes are more efficient and provide better performance than Ceph FS or RBD filesystem-mode PVCs.
To specify RBD block mode PVCs, use the ocs-storagecluster-ceph-rbd storage class and set volumeMode to Block.
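For reference, a PVC that requests an RBD block mode volume might look like the following sketch; the claim name, access mode, and size are illustrative and should be adjusted for your environment:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-rbd                     # illustrative name
spec:
  storageClassName: ocs-storagecluster-ceph-rbd
  volumeMode: Block                     # RBD block mode, as recommended above
  accessModes:
  - ReadWriteMany                       # assumption: shared access, for example for live migration
  resources:
    requests:
      storage: 30Gi                     # illustrative size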
9.2.1. Customizing the storage profile
You can specify default parameters by editing the StorageProfile object for the provisioner’s storage class. These default parameters only apply to the persistent volume claim (PVC) if they are not configured in the DataVolume object.
You cannot modify storage class parameters. To make changes, delete and re-create the storage class. You must then reapply any customizations that were previously made to the storage profile.
An empty status section in a storage profile indicates that the storage provisioner is not recognized by the Containerized Data Importer (CDI). Customizing a storage profile is necessary if your storage provisioner is not recognized by CDI. In this case, the administrator sets appropriate values in the storage profile to ensure successful allocations.
If you create a data volume that omits YAML attributes, and these attributes are not defined in the storage profile, then the requested storage is not allocated and the underlying persistent volume claim (PVC) is not created.
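As a minimal illustration, the following data volume sketch relies entirely on the storage profile for accessModes and volumeMode; if the profile does not define them, the PVC is not created. The name and size are placeholders:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: profile-backed-dv        # placeholder name
spec:
  source:
    blank: {}                    # blank disk image; any supported source works here
  storage:                       # no accessModes or volumeMode: values come from the storage profile
    resources:
      requests:
        storage: 10Gi            # placeholder size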
Prerequisites
- Ensure that your planned configuration is supported by the storage class and its provider. Specifying an incompatible configuration in a storage profile causes volume provisioning to fail.
Procedure
Edit the storage profile. In this example, the provisioner is not recognized by CDI.
$ oc edit storageprofile <storage_class>
Example storage profile
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <unknown_provisioner_class>
# ...
spec: {}
status:
  provisioner: <unknown_provisioner>
  storageClass: <unknown_provisioner_class>
Provide the needed attribute values in the storage profile:
Example storage profile
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <unknown_provisioner_class>
# ...
spec:
  claimPropertySets:
  - accessModes:
    - ReadWriteOnce 1
    volumeMode: Filesystem 2
status:
  provisioner: <unknown_provisioner>
  storageClass: <unknown_provisioner_class>
After you save your changes, the selected values appear in the storage profile status element.
9.2.1.1. Setting a default cloning strategy using a storage profile
You can use storage profiles to set a default cloning method for a storage class, creating a cloning strategy. Setting cloning strategies can be helpful, for example, if your storage vendor only supports certain cloning methods. It also allows you to select a method that limits resource usage or maximizes performance.
Cloning strategies can be specified by setting the cloneStrategy attribute in a storage profile to one of these values:

- snapshot is used by default when snapshots are configured. CDI uses the snapshot method if it recognizes the storage provider and the provider supports Container Storage Interface (CSI) snapshots. This cloning strategy uses a temporary volume snapshot to clone the volume.
- copy uses a source pod and a target pod to copy data from the source volume to the target volume. Host-assisted cloning is the least efficient method of cloning.
- csi-clone uses the CSI clone API to efficiently clone an existing volume without using an interim volume snapshot. Unlike snapshot or copy, which are used by default if no storage profile is defined, CSI volume cloning is only used when you specify it in the StorageProfile object for the provisioner’s storage class.
You can also set clone strategies by using the CLI without modifying the default claimPropertySets in your YAML spec section.
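For example, one way to do this is with an oc patch command; this sketch assumes your storage class name and the cloning strategy that you want to apply:

$ oc patch storageprofile <storage_class> --type merge -p '{"spec": {"cloneStrategy": "copy"}}'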
Example storage profile
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <provisioner_class>
# ...
spec:
  claimPropertySets:
  - accessModes:
    - ReadWriteOnce 1
    volumeMode: Filesystem 2
  cloneStrategy: csi-clone 3
status:
  provisioner: <provisioner>
  storageClass: <provisioner_class>
Storage provider | Default behavior |
---|---|
rook-ceph.rbd.csi.ceph.com | Snapshot |
openshift-storage.rbd.csi.ceph.com | Snapshot |
csi-vxflexos.dellemc.com | CSI Clone |
csi-isilon.dellemc.com | CSI Clone |
csi-powermax.dellemc.com | CSI Clone |
csi-powerstore.dellemc.com | CSI Clone |
hspc.csi.hitachi.com | CSI Clone |
csi.hpe.com | CSI Clone |
spectrumscale.csi.ibm.com | CSI Clone |
rook-ceph.rbd.csi.ceph.com | CSI Clone |
openshift-storage.rbd.csi.ceph.com | CSI Clone |
cephfs.csi.ceph.com | CSI Clone |
openshift-storage.cephfs.csi.ceph.com | CSI Clone |
9.3. Managing automatic boot source updates
You can manage automatic updates for Red Hat boot sources and custom boot sources.
Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, the Containerized Data Importer (CDI) imports, polls, and updates the images so that they are ready to be cloned for new VMs. By default, CDI automatically updates Red Hat boot sources.
9.3.1. Managing Red Hat boot source updates
You can opt out of automatic updates for all system-defined boot sources by disabling the enableCommonBootImageImport feature gate. If you disable this feature gate, all DataImportCron objects are deleted. This does not remove previously imported boot source objects that store operating system images, though administrators can delete them manually.

When the enableCommonBootImageImport feature gate is disabled, DataSource objects are reset so that they no longer point to the original boot source. An administrator can manually provide a boot source by creating a new persistent volume claim (PVC) or volume snapshot for the DataSource object, then populating it with an operating system image.
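For example, a manually provided boot source might point a DataSource object at a PVC that you have populated with an operating system image; the names and namespace in this sketch are illustrative:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataSource
metadata:
  name: rhel8                                    # illustrative DataSource name
  namespace: openshift-virtualization-os-images  # illustrative namespace
spec:
  source:
    pvc:
      name: rhel8-custom                         # PVC that you created and populated
      namespace: openshift-virtualization-os-images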
9.3.1.1. Managing automatic updates for all system-defined boot sources
Disabling automatic boot source imports and updates can lower resource usage. In disconnected environments, disabling automatic boot source updates prevents CDIDataImportCronOutdated alerts from filling up logs.

To disable automatic updates for all system-defined boot sources, turn off the enableCommonBootImageImport feature gate by setting the value to false. Setting this value to true re-enables the feature gate and turns automatic updates back on.
Custom boot sources are not affected by this setting.
Procedure
Toggle the feature gate for automatic boot source updates by editing the HyperConverged custom resource (CR).

To disable automatic boot source updates, set the spec.featureGates.enableCommonBootImageImport field in the HyperConverged CR to false. For example:

$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  --type json -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": false}]'

To re-enable automatic boot source updates, set the spec.featureGates.enableCommonBootImageImport field in the HyperConverged CR to true. For example:

$ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  --type json -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": true}]'
9.3.2. Managing custom boot source updates
Custom boot sources that are not provided by OpenShift Virtualization are not controlled by the feature gate. You must manage them individually by editing the HyperConverged custom resource (CR).
You must configure a storage class. Otherwise, the cluster cannot receive automated updates for custom boot sources. See Configuring a storage class for custom boot source updates for details.
9.3.2.1. Configuring a storage class for custom boot source updates
You can override the default storage class by editing the HyperConverged custom resource (CR).

Boot sources are created from storage using the default storage class. If your cluster does not have a default storage class, you must define one before configuring automatic updates for custom boot sources.
Procedure
Open the HyperConverged CR in your default editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Define a new storage class by entering a value in the storageClassName field:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  dataImportCronTemplates:
  - metadata:
      name: rhel8-image-cron
    spec:
      template:
        spec:
          storageClassName: <new_storage_class> 1
      schedule: "0 */12 * * *" 2
      managedDataSource: <data_source> 3
# ...

- 1
- Define the new storage class.
- 2
- Schedule for the job specified in cron format.
- 3
- For the custom image to be detected as an available boot source, the value of the spec.dataVolumeTemplates.spec.sourceRef.name parameter in the VM template must match this value.
Remove the storageclass.kubernetes.io/is-default-class annotation from the current default storage class.

Retrieve the name of the current default storage class by running the following command:
$ oc get storageclass
Example output
NAME                           PROVISIONER                        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
csi-manila-ceph                manila.csi.openstack.org           Delete          Immediate              false                  11d
hostpath-csi-basic (default)   kubevirt.io.hostpath-provisioner   Delete          WaitForFirstConsumer   false                  11d 1
- 1
- In this example, the current default storage class is named hostpath-csi-basic.
Remove the annotation from the current default storage class by running the following command:
$ oc patch storageclass <current_default_storage_class> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' 1
- 1
- Replace <current_default_storage_class> with the storageClassName value of the default storage class.
Set the new storage class as the default by running the following command:
$ oc patch storageclass <new_storage_class> -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' 1
- 1
- Replace <new_storage_class> with the storageClassName value that you added to the HyperConverged CR.
9.3.2.2. Enabling automatic updates for custom boot sources
OpenShift Virtualization automatically updates system-defined boot sources by default, but does not automatically update custom boot sources. You must manually enable automatic updates by editing the HyperConverged custom resource (CR).
Prerequisites
- The cluster has a default storage class.
Procedure
Open the HyperConverged CR in your default editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Edit the HyperConverged CR, adding the appropriate template and boot source in the dataImportCronTemplates section. For example:

Example custom resource

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  dataImportCronTemplates:
  - metadata:
      name: centos7-image-cron
      annotations:
        cdi.kubevirt.io/storage.bind.immediate.requested: "true" 1
      labels:
        instancetype.kubevirt.io/default-preference: centos.7
        instancetype.kubevirt.io/default-instancetype: u1.medium
    spec:
      schedule: "0 */12 * * *" 2
      template:
        spec:
          source:
            registry: 3
              url: docker://quay.io/containerdisks/centos:7-2009
          storage:
            resources:
              requests:
                storage: 30Gi
      garbageCollect: Outdated
      managedDataSource: centos7 4
- 1
- This annotation is required for storage classes with volumeBindingMode set to WaitForFirstConsumer.
- 2
- Schedule for the job specified in cron format.
- 3
- Use to create a data volume from a registry source. Use the default pod pullMethod and not node pullMethod, which is based on the node docker cache. The node docker cache is useful when a registry image is available via Container.Image, but the CDI importer is not authorized to access it.
- 4
- For the custom image to be detected as an available boot source, the name of the image’s managedDataSource must match the name of the template’s DataSource, which is found under spec.dataVolumeTemplates.spec.sourceRef.name in the VM template YAML file.
- Save the file.
9.3.2.3. Enabling volume snapshot boot sources
Enable volume snapshot boot sources by setting the parameter in the StorageProfile associated with the storage class that stores operating system base images. Although DataImportCron was originally designed to maintain only PVC sources, VolumeSnapshot sources scale better than PVC sources for certain storage types.
Use volume snapshots on a storage profile that is proven to scale better when cloning from a single snapshot.
Prerequisites
- You must have access to a volume snapshot with the operating system image.
- The storage must support snapshotting.
Procedure
Open the storage profile object that corresponds to the storage class used to provision boot sources by running the following command:
$ oc edit storageprofile <storage_class>
- Review the dataImportCronSourceFormat specification of the StorageProfile to confirm whether the VM uses a PVC or a volume snapshot source by default.

Edit the storage profile, if needed, by updating the dataImportCronSourceFormat specification to snapshot.

Example storage profile

apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
# ...
spec:
  dataImportCronSourceFormat: snapshot
Verification
Open the storage profile object that corresponds to the storage class used to provision boot sources by running the following command:

$ oc get storageprofile <storage_class> -o yaml
- Confirm that the dataImportCronSourceFormat specification of the StorageProfile is set to snapshot, and that any DataSource objects that the DataImportCron points to now reference volume snapshots.
You can now use these boot sources to create virtual machines.
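For reference, a DataSource that references a volume snapshot source might look like the following sketch; the names and namespace are illustrative:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataSource
metadata:
  name: centos-stream9                           # illustrative name
  namespace: openshift-virtualization-os-images  # illustrative namespace
spec:
  source:
    snapshot:                                    # snapshot source instead of a PVC source
      name: centos-stream9-snapshot
      namespace: openshift-virtualization-os-images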
9.3.3. Disabling automatic updates for a single boot source
You can disable automatic updates for an individual boot source, whether it is custom or system-defined, by editing the HyperConverged custom resource (CR).
Procedure
Open the HyperConverged CR in your default editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Disable automatic updates for an individual boot source by editing the spec.dataImportCronTemplates field.

- Custom boot source
- Remove the boot source from the spec.dataImportCronTemplates field. Automatic updates are disabled for custom boot sources by default.
- System-defined boot source
- Add the boot source to spec.dataImportCronTemplates.

Note: Automatic updates are enabled by default for system-defined boot sources, but these boot sources are not listed in the CR unless you add them.

Set the value of the dataimportcrontemplate.kubevirt.io/enable annotation to 'false'. For example:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  dataImportCronTemplates:
  - metadata:
      annotations:
        dataimportcrontemplate.kubevirt.io/enable: 'false'
      name: rhel8-image-cron
# ...
- Save the file.
9.3.4. Verifying the status of a boot source
You can determine if a boot source is system-defined or custom by viewing the HyperConverged custom resource (CR).
Procedure
View the contents of the HyperConverged CR by running the following command:

$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o yaml
Example output
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
# ...
status:
# ...
  dataImportCronTemplates:
  - metadata:
      annotations:
        cdi.kubevirt.io/storage.bind.immediate.requested: "true"
      name: centos-7-image-cron
    spec:
      garbageCollect: Outdated
      managedDataSource: centos7
      schedule: 55 8/12 * * *
      template:
        metadata: {}
        spec:
          source:
            registry:
              url: docker://quay.io/containerdisks/centos:7-2009
          storage:
            resources:
              requests:
                storage: 30Gi
        status: {}
    status:
      commonTemplate: true 1
# ...
  - metadata:
      annotations:
        cdi.kubevirt.io/storage.bind.immediate.requested: "true"
      name: user-defined-dic
    spec:
      garbageCollect: Outdated
      managedDataSource: user-defined-centos-stream8
      schedule: 55 8/12 * * *
      template:
        metadata: {}
        spec:
          source:
            registry:
              pullMethod: node
              url: docker://quay.io/containerdisks/centos-stream:8
          storage:
            resources:
              requests:
                storage: 30Gi
        status: {}
    status: {} 2
# ...
Verify the status of the boot source by reviewing the status.dataImportCronTemplates.status field.

- If the field contains commonTemplate: true, it is a system-defined boot source.
- If the status.dataImportCronTemplates.status field has the value {}, it is a custom boot source.
9.4. Reserving PVC space for file system overhead
When you add a virtual machine disk to a persistent volume claim (PVC) that uses the Filesystem volume mode, you must ensure that there is enough space on the PVC for the VM disk and for file system overhead, such as metadata.
By default, OpenShift Virtualization reserves 5.5% of the PVC space for overhead, reducing the space available for virtual machine disks by that amount.
You can configure a different overhead value by editing the HCO object. You can change the value globally and you can specify values for specific storage classes.
9.4.1. Overriding the default file system overhead value
Change the amount of persistent volume claim (PVC) space that OpenShift Virtualization reserves for file system overhead by editing the spec.filesystemOverhead attribute of the HCO object.
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Open the HCO object for editing by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Edit the spec.filesystemOverhead fields, populating them with your chosen values:

# ...
spec:
  filesystemOverhead:
    global: "<new_global_value>" 1
    storageClass:
      <storage_class_name>: "<new_value_for_this_storage_class>" 2
- 1
- The default file system overhead percentage used for any storage classes that do not already have a set value. For example, global: "0.07" reserves 7% of the PVC for file system overhead.
- 2
- The file system overhead percentage for the specified storage class. For example, mystorageclass: "0.04" changes the default overhead value for PVCs in the mystorageclass storage class to 4%.
- Save and exit the editor to update the HCO object.
Verification
View the CDIConfig status and verify your changes by running one of the following commands:

To generally verify changes to CDIConfig:

$ oc get cdiconfig -o yaml

To view your specific changes to CDIConfig:

$ oc get cdiconfig -o jsonpath='{.items..status.filesystemOverhead}'
9.5. Configuring local storage by using the hostpath provisioner
You can configure local storage for virtual machines by using the hostpath provisioner (HPP).
When you install the OpenShift Virtualization Operator, the Hostpath Provisioner Operator is automatically installed. HPP is a local storage provisioner designed for OpenShift Virtualization that is created by the Hostpath Provisioner Operator. To use HPP, you create an HPP custom resource (CR) with a basic storage pool.
9.5.1. Creating a hostpath provisioner with a basic storage pool
You configure a hostpath provisioner (HPP) with a basic storage pool by creating an HPP custom resource (CR) with a storagePools stanza. The storage pool specifies the name and path used by the CSI driver.
Do not create storage pools in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable.
Prerequisites
- The directories specified in spec.storagePools.path must have read/write access.
Procedure
Create an hpp_cr.yaml file with a storagePools stanza as in the following example:

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools: 1
  - name: any_name
    path: "/var/myvolumes" 2
  workload:
    nodeSelector:
      kubernetes.io/os: linux
- Save the file and exit.
Create the HPP by running the following command:
$ oc create -f hpp_cr.yaml
9.5.1.1. About creating storage classes
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object’s parameters after you create it.
To use the hostpath provisioner (HPP), you must create an associated storage class for the CSI driver with the storagePools stanza.
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.
To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using a StorageClass with the volumeBindingMode parameter set to WaitForFirstConsumer, the binding and provisioning of the PV is delayed until a pod is created using the PVC.
9.5.1.2. Creating a storage class for the CSI driver with the storagePools stanza
To use the hostpath provisioner (HPP) you must create an associated storage class for the Container Storage Interface (CSI) driver.
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object’s parameters after you create it.
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While a disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.
To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using a StorageClass with the volumeBindingMode parameter set to WaitForFirstConsumer, the binding and provisioning of the PV is delayed until a pod is created using the PVC.
Procedure
Create a storageclass_csi.yaml file to define the storage class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi
provisioner: kubevirt.io.hostpath-provisioner
reclaimPolicy: Delete 1
volumeBindingMode: WaitForFirstConsumer 2
parameters:
  storagePool: my-storage-pool 3
- 1
- The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the default value is Delete.
- 2
- The volumeBindingMode parameter determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a persistent volume (PV) until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod’s scheduling requirements.
- 3
- Specify the name of the storage pool defined in the HPP CR.
- Save the file and exit.
Create the StorageClass object by running the following command:

$ oc create -f storageclass_csi.yaml
9.5.2. About storage pools created with PVC templates
If you have a single, large persistent volume (PV), you can create a storage pool by defining a PVC template in the hostpath provisioner (HPP) custom resource (CR).
A storage pool created with a PVC template can contain multiple HPP volumes. Splitting a PV into smaller volumes provides greater flexibility for data allocation.
The PVC template is based on the spec stanza of the PersistentVolumeClaim object:
Example PersistentVolumeClaim object
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: iso-pvc
spec:
volumeMode: Block 1
storageClassName: my-storage-class
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
- 1
- This value is only required for block volume mode PVs.
You define a storage pool using a pvcTemplate specification in the HPP CR. The Operator creates a PVC from the pvcTemplate specification for each node containing the HPP CSI driver. The PVC created from the PVC template consumes the single large PV, allowing the HPP to create smaller dynamic volumes.
You can combine basic storage pools with storage pools created from PVC templates.
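For example, an HPP CR that combines a basic storage pool with a PVC template storage pool might look like the following sketch; the pool names, paths, storage class, and size are illustrative:

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools:
  - name: basic-pool                  # basic storage pool backed by a node directory
    path: "/var/myvolumes"
  - name: template-pool               # storage pool carved out of a single large PV
    path: "/var/mytemplatevolumes"
    pvcTemplate:
      volumeMode: Block
      storageClassName: my-storage-class
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
  workload:
    nodeSelector:
      kubernetes.io/os: linux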
9.5.2.1. Creating a storage pool with a PVC template
You can create a storage pool for multiple hostpath provisioner (HPP) volumes by specifying a PVC template in the HPP custom resource (CR).
Do not create storage pools in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable.
Prerequisites
- The directories specified in spec.storagePools.path must have read/write access.
Procedure
Create an hpp_pvc_template_pool.yaml file for the HPP CR that specifies a persistent volume claim (PVC) template in the storagePools stanza according to the following example:

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools: 1
  - name: my-storage-pool
    path: "/var/myvolumes" 2
    pvcTemplate:
      volumeMode: Block 3
      storageClassName: my-storage-class 4
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi 5
  workload:
    nodeSelector:
      kubernetes.io/os: linux
- 1
- The storagePools stanza is an array that can contain both basic and PVC template storage pools.
- 2
- Specify the storage pool directories under this node path.
- 3
- Optional: The volumeMode parameter can be either Block or Filesystem as long as it matches the provisioned volume format. If no value is specified, the default is Filesystem. If the volumeMode is Block, the mounting pod creates an XFS file system on the block volume before mounting it.
- 4
- If the storageClassName parameter is omitted, the default storage class is used to create PVCs. If you omit storageClassName, ensure that the HPP storage class is not the default storage class.
- 5
- You can specify statically or dynamically provisioned storage. In either case, ensure the requested storage size is appropriate for the volume you want to virtually divide or the PVC cannot be bound to the large PV. If the storage class you are using uses dynamically provisioned storage, pick an allocation size that matches the size of a typical request.
- Save the file and exit.
Create the HPP with a storage pool by running the following command:
$ oc create -f hpp_pvc_template_pool.yaml
9.6. Enabling user permissions to clone data volumes across namespaces
The isolating nature of namespaces means that users cannot by default clone resources between namespaces.
To enable a user to clone a virtual machine to another namespace, a user with the cluster-admin role must create a new cluster role. Bind this cluster role to a user to enable them to clone virtual machines to the destination namespace.
9.6.1. Creating RBAC resources for cloning data volumes
Create a new cluster role that enables permissions for all actions for the datavolumes resource.
Prerequisites
- You must have cluster admin privileges.
Procedure
Create a ClusterRole manifest:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <datavolume-cloner> 1
rules:
- apiGroups: ["cdi.kubevirt.io"]
  resources: ["datavolumes/source"]
  verbs: ["*"]
- 1
- Unique name for the cluster role.
Create the cluster role in the cluster:
$ oc create -f <datavolume-cloner.yaml> 1
- 1
- The file name of the ClusterRole manifest created in the previous step.
Create a RoleBinding manifest that applies to both the source and destination namespaces and references the cluster role created in the previous step.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <allow-clone-to-user> 1
  namespace: <Source namespace> 2
subjects:
- kind: ServiceAccount
  name: default
  namespace: <Destination namespace> 3
roleRef:
  kind: ClusterRole
  name: datavolume-cloner 4
  apiGroup: rbac.authorization.k8s.io
Create the role binding in the cluster:
$ oc create -f <datavolume-cloner.yaml> 1
- 1
- The file name of the RoleBinding manifest created in the previous step.
9.7. Configuring CDI to override CPU and memory quotas
You can configure the Containerized Data Importer (CDI) to import, upload, and clone virtual machine disks into namespaces that are subject to CPU and memory resource restrictions.
9.7.1. About CPU and memory quotas in a namespace
A resource quota, defined by the ResourceQuota object, imposes restrictions on a namespace that limit the total amount of compute resources that can be consumed by resources within that namespace.
The HyperConverged custom resource (CR) defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values are set to a default value of 0. This ensures that pods created by CDI that do not specify compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota.
When the AutoResourceLimits feature gate is enabled, OpenShift Virtualization automatically manages CPU and memory limits. If a namespace has both CPU and memory quotas, the memory limit is set to double the base allocation and the CPU limit is one per vCPU.
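For context, a namespace quota of this kind is an ordinary Kubernetes ResourceQuota object; the following sketch, with illustrative names and values, restricts the total CPU and memory that pods in the namespace can request and consume:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota                 # illustrative name
  namespace: <restricted_namespace>   # namespace where the CDI worker pods run
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi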
9.7.2. Overriding CPU and memory defaults
Modify the default settings for CPU and memory requests and limits for your use case by adding the spec.resourceRequirements.storageWorkloads stanza to the HyperConverged custom resource (CR).
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Edit the HyperConverged CR by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add the spec.resourceRequirements.storageWorkloads stanza to the CR, setting the values based on your use case. For example:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  resourceRequirements:
    storageWorkloads:
      limits:
        cpu: "500m"
        memory: "2Gi"
      requests:
        cpu: "250m"
        memory: "1Gi"
- Save and exit the editor to update the HyperConverged CR.
9.8. Preparing CDI scratch space
9.8.1. About scratch space
The Containerized Data Importer (CDI) requires scratch space (temporary storage) to complete some operations, such as importing and uploading virtual machine images. During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV). The scratch space PVC is deleted after the operation completes or aborts.
You can define the storage class that is used to bind the scratch space PVC in the spec.scratchSpaceStorageClass field of the HyperConverged custom resource.
If the defined storage class does not match a storage class in the cluster, then the default storage class defined for the cluster is used. If there is no default storage class defined in the cluster, the storage class used to provision the original DV or PVC is used.
CDI requires requesting scratch space with a file volume mode, regardless of the PVC backing the origin data volume. If the origin PVC is backed by block volume mode, you must define a storage class capable of provisioning file volume mode PVCs.
Manual provisioning
If there are no storage classes, CDI uses any PVCs in the project that match the size requirements for the image. If there are no PVCs that match these requirements, the CDI import pod remains in a Pending state until an appropriate PVC is made available or until a timeout function kills the pod.
9.8.2. CDI operations that require scratch space
Type | Reason |
---|---|
Registry imports | CDI must download the image to a scratch space and extract the layers to find the image file. The image file is then passed to QEMU-IMG for conversion to a raw disk. |
Upload image | QEMU-IMG does not accept input from STDIN. Instead, the image to upload is saved in scratch space before it can be passed to QEMU-IMG for conversion. |
HTTP imports of archived images | QEMU-IMG does not know how to handle the archive formats CDI supports. Instead, the image is unarchived and saved into scratch space before it is passed to QEMU-IMG. |
HTTP imports of authenticated images | QEMU-IMG inadequately handles authentication. Instead, the image is saved to scratch space and authenticated before it is passed to QEMU-IMG. |
HTTP imports of custom certificates | QEMU-IMG inadequately handles custom certificates of HTTPS endpoints. Instead, CDI downloads the image to scratch space before passing the file to QEMU-IMG. |
9.8.3. Defining a storage class
You can define the storage class that the Containerized Data Importer (CDI) uses when allocating scratch space by adding the spec.scratchSpaceStorageClass field to the HyperConverged custom resource (CR).
Prerequisites
- Install the OpenShift CLI (oc).
Procedure
Edit the HyperConverged CR by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
Add the spec.scratchSpaceStorageClass field to the CR, setting the value to the name of a storage class that exists in the cluster:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  scratchSpaceStorageClass: "<storage_class>" 1
- 1
- If you do not specify a storage class, CDI uses the storage class of the persistent volume claim that is being populated.
- Save and exit your default editor to update the HyperConverged CR.
9.8.4. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
---|---|---|---|---|---|
KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
9.9. Using preallocation for data volumes
The Containerized Data Importer can preallocate disk space to improve write performance when creating data volumes.
You can enable preallocation for specific data volumes.
9.9.1. About preallocation
The Containerized Data Importer (CDI) can use the QEMU preallocate mode for data volumes to improve write performance. You can use preallocation mode for importing and uploading operations and when creating blank data volumes.
If preallocation is enabled, CDI uses the optimal preallocation method for the underlying file system and device type:

- fallocate
- If the file system supports it, CDI uses the operating system’s fallocate call to preallocate space by using the posix_fallocate function, which allocates blocks and marks them as uninitialized.
- full
- If fallocate mode cannot be used, full mode allocates space for the image by writing data to the underlying storage. Depending on the storage location, all the empty allocated space might be zeroed.
9.9.2. Enabling preallocation for a data volume
You can enable preallocation for specific data volumes by including the spec.preallocation field in the data volume manifest. You can enable preallocation mode by using either the web console or the OpenShift CLI (oc).
Preallocation mode is supported for all CDI source types.
Procedure
Specify the spec.preallocation field in the data volume manifest:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: preallocated-datavolume
spec:
  source: 1
    registry:
      url: <image_url> 2
  storage:
    resources:
      requests:
        storage: 1Gi
  preallocation: true
# ...
9.10. Managing data volume annotations
Data volume (DV) annotations allow you to manage pod behavior. You can add one or more annotations to a data volume, which then propagates to the created importer pods.
9.10.1. Example: Data volume annotations
This example shows how you can configure data volume (DV) annotations to control which network the importer pod uses. The v1.multus-cni.io/default-network: bridge-network annotation causes the pod to use the multus network named bridge-network as its default network. If you want the importer pod to use both the default network from the cluster and the secondary multus network, use the k8s.v1.cni.cncf.io/networks: <network_name> annotation.
Multus network annotation example
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
name: datavolume-example
annotations:
v1.multus-cni.io/default-network: bridge-network 1
# ...
- 1
- Multus network annotation
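If you instead want the importer pod to keep the cluster default network and also attach the secondary multus network, a sketch like the following uses the k8s.v1.cni.cncf.io/networks annotation; the network name is illustrative:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: datavolume-example
  annotations:
    k8s.v1.cni.cncf.io/networks: bridge-network   # secondary network in addition to the cluster default
# ...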