Chapter 9. Storage
9.1. Storage configuration overview
You can configure a default storage class, storage profiles, Containerized Data Importer (CDI), data volumes (DVs), and automatic boot source updates.
9.1.1. Storage
The following storage configuration tasks are mandatory:
- Configure a default storage class
- You must configure a default storage class for the cluster. Otherwise, OpenShift Virtualization cannot automatically import boot source images. `DataVolume` objects (DVs) and `PersistentVolumeClaim` objects (PVCs) that do not explicitly specify a storage class remain in the `Pending` state until you set a default storage class.
- Configure storage profiles
- You must configure storage profiles if your storage provider is not recognized by CDI. A storage profile provides recommended storage settings based on the associated storage class.
The following storage configuration tasks are optional:
- Reserve additional PVC space for file system overhead
- By default, 5.5% of a file system PVC is reserved for overhead, reducing the space available for VM disks by that amount. You can configure a different overhead value.
- Configure local storage by using the hostpath provisioner
- You can configure local storage for virtual machines by using the hostpath provisioner (HPP). When you install the OpenShift Virtualization Operator, the HPP Operator is automatically installed.
- Configure user permissions to clone data volumes between namespaces
- You can configure RBAC roles to enable users to clone data volumes between namespaces.
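The 5.5% file system overhead reservation mentioned above can be illustrated with a quick arithmetic sketch; this is not a command from the product, and the 30Gi PVC size is a hypothetical value:

```shell
# Estimate the space left for a VM disk on a Filesystem-mode PVC,
# assuming the default 5.5% overhead reservation.
pvc_size_gib=30   # hypothetical PVC size
usable_gib=$(awk -v s="$pvc_size_gib" 'BEGIN { printf "%.2f", s * (1 - 0.055) }')
echo "Usable space on a ${pvc_size_gib}Gi PVC: ${usable_gib}Gi"
# prints: Usable space on a 30Gi PVC: 28.35Gi
```

When sizing PVCs for VM disks, account for this reservation so the disk image fits in the remaining space.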
9.1.2. Containerized Data Importer
You can perform the following Containerized Data Importer (CDI) configuration tasks:
- Override the resource request limits of a namespace
- You can configure CDI to import, upload, and clone VM disks into namespaces that are subject to CPU and memory resource restrictions.
- Configure CDI scratch space
- CDI requires scratch space (temporary storage) to complete some operations, such as importing and uploading VM images. During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV).
9.1.3. Data volumes
You can perform the following data volume configuration tasks:
- Enable preallocation for data volumes
- CDI can preallocate disk space to improve write performance when creating data volumes. You can enable preallocation for specific data volumes.
- Manage data volume annotations
- Data volume annotations allow you to manage pod behavior. You can add one or more annotations to a data volume, which then propagates to the created importer pods.
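As a sketch of the two tasks above, the following hypothetical data volume enables preallocation and carries an annotation that propagates to the importer pods CDI creates. The data volume name, network annotation value, and image URL are illustrative assumptions, not values from this chapter's procedures:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-dv                                     # hypothetical name
  annotations:
    v1.multus-cni.io/default-network: bridge-network   # propagated to the importer pods
spec:
  preallocation: true    # CDI preallocates disk space to improve write performance
  source:
    registry:
      url: docker://quay.io/containerdisks/centos:7-2009   # illustrative image
  storage:
    resources:
      requests:
        storage: 10Gi    # hypothetical size
```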
9.1.4. Boot source updates
You can perform the following boot source update configuration task:
- Manage automatic boot source updates
- Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, CDI imports, polls, and updates the images so that they are ready to be cloned for new VMs. By default, CDI automatically updates Red Hat boot sources. You can enable automatic updates for custom boot sources.
9.2. Configuring storage profiles
A storage profile provides recommended storage settings based on the associated storage class. A storage profile is allocated for each storage class.
The Containerized Data Importer (CDI) recognizes a storage provider if it has been configured to identify and interact with the storage provider’s capabilities.
For recognized storage types, the CDI provides values that optimize the creation of PVCs. You can also configure automatic settings for the storage class by customizing the storage profile. If the CDI does not recognize your storage provider, you must configure storage profiles.
When using OpenShift Virtualization with Red Hat OpenShift Data Foundation, specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks. RBD block mode volumes are more efficient and provide better performance than Ceph FS or RBD filesystem-mode PVCs.
To specify RBD block mode PVCs, use the `ocs-storagecluster-ceph-rbd` storage class and `VolumeMode: Block`.
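As a sketch, a data volume requesting an RBD block mode PVC might look like the following. The data volume name, image URL, and size are hypothetical; the storage class and volume mode are the values recommended above:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: rbd-block-dv    # hypothetical name
spec:
  source:
    registry:
      url: docker://quay.io/containerdisks/centos:7-2009   # illustrative image
  storage:
    storageClassName: ocs-storagecluster-ceph-rbd
    volumeMode: Block
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 30Gi   # hypothetical size
```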
9.2.1. Customizing the storage profile
You can specify default parameters by editing the `StorageProfile` object. These default parameters apply to a persistent volume claim (PVC) only if they are not defined in the `DataVolume` object. An empty `status` section in a storage profile indicates that a storage provisioner is not recognized by the Containerized Data Importer (CDI).
If you create a data volume and omit YAML attributes and these attributes are not defined in the storage profile, then the requested storage will not be allocated and the underlying persistent volume claim (PVC) will not be created.
Prerequisites
- Ensure that your planned configuration is supported by the storage class and its provider. Specifying an incompatible configuration in a storage profile causes volume provisioning to fail.
Procedure
Edit the storage profile. In this example, the provisioner is not recognized by CDI:

$ oc edit storageprofile <storage_class>

Example storage profile

apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <unknown_provisioner_class>
# ...
spec: {}
status:
  provisioner: <unknown_provisioner>
  storageClass: <unknown_provisioner_class>

Provide the needed attribute values in the storage profile:

Example storage profile

apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <unknown_provisioner_class>
# ...
spec:
  claimPropertySets:
  - accessModes:
    - ReadWriteOnce
    volumeMode: Filesystem
status:
  provisioner: <unknown_provisioner>
  storageClass: <unknown_provisioner_class>

After you save your changes, the selected values appear in the storage profile `status` element.
9.2.1.1. Setting a default cloning strategy using a storage profile
You can use storage profiles to set a default cloning method for a storage class, creating a cloning strategy. Setting cloning strategies can be helpful, for example, if your storage vendor only supports certain cloning methods. It also allows you to select a method that limits resource usage or maximizes performance.
Cloning strategies can be specified by setting the `cloneStrategy` attribute in a storage profile to one of the following values:

- `snapshot` is used by default when snapshots are configured. The CDI will use the snapshot method if it recognizes the storage provider and the provider supports Container Storage Interface (CSI) snapshots. This cloning strategy uses a temporary volume snapshot to clone the volume.
- `copy` uses a source pod and a target pod to copy data from the source volume to the target volume. Host-assisted cloning is the least efficient method of cloning.
- `csi-clone` uses the CSI clone API to efficiently clone an existing volume without using an interim volume snapshot. Unlike `snapshot` or `copy`, which are used by default if no storage profile is defined, CSI volume cloning is only used when you specify it in the `StorageProfile` object for the provisioner's storage class.
You can also set clone strategies using the CLI without modifying the default `claimPropertySets` in your YAML `spec` section.
Example storage profile

apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <provisioner_class>
# ...
spec:
  claimPropertySets:
  - accessModes:
    - ReadWriteOnce
    volumeMode: Filesystem
  cloneStrategy: csi-clone
status:
  provisioner: <provisioner>
  storageClass: <provisioner_class>
| Storage provider | Default behavior |
|---|---|
| rook-ceph.rbd.csi.ceph.com | Snapshot |
| openshift-storage.rbd.csi.ceph.com | Snapshot |

| Storage provider | Default behavior |
|---|---|
| csi-vxflexos.dellemc.com | CSI Clone |
| csi-isilon.dellemc.com | CSI Clone |
| csi-powermax.dellemc.com | CSI Clone |
| csi-powerstore.dellemc.com | CSI Clone |
| hspc.csi.hitachi.com | CSI Clone |
| csi.hpe.com | CSI Clone |
| spectrumscale.csi.ibm.com | CSI Clone |
| rook-ceph.rbd.csi.ceph.com | CSI Clone |
| openshift-storage.rbd.csi.ceph.com | CSI Clone |
| cephfs.csi.ceph.com | CSI Clone |
| openshift-storage.cephfs.csi.ceph.com | CSI Clone |
9.3. Managing automatic boot source updates
You can manage automatic updates for the following boot sources:
- Red Hat boot sources
- Custom boot sources
Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, the Containerized Data Importer (CDI) imports, polls, and updates the images so that they are ready to be cloned for new VMs. By default, CDI automatically updates Red Hat boot sources.
9.3.1. Managing Red Hat boot source updates
You can opt out of automatic updates for all system-defined boot sources by disabling the `enableCommonBootImageImport` feature gate. If you disable this feature gate, all `DataImportCron` objects are deleted. This does not remove previously imported boot source objects that store operating system images, although administrators can delete them manually.
When the `enableCommonBootImageImport` feature gate is disabled, `DataSource` objects are reset so that they no longer point to the original boot source images. An administrator can manually provide a boot source image for a `DataSource` object.
9.3.1.1. Managing automatic updates for all system-defined boot sources
Disabling automatic boot source imports and updates can lower resource usage. In disconnected environments, disabling automatic boot source updates prevents `CDIDataImportCronOutdated` alerts from filling up logs.
To disable automatic updates for all system-defined boot sources, set the `enableCommonBootImageImport` feature gate to `false`. Setting it to `true` re-enables the feature gate and turns automatic updates back on.
Custom boot sources are not affected by this setting.
Procedure
Toggle the feature gate for automatic boot source updates by editing the `HyperConverged` custom resource (CR).

- To disable automatic boot source updates, set the `spec.featureGates.enableCommonBootImageImport` field in the `HyperConverged` CR to `false`. For example:

  $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
    --type json \
    -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": false}]'

- To re-enable automatic boot source updates, set the `spec.featureGates.enableCommonBootImageImport` field in the `HyperConverged` CR to `true`. For example:

  $ oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
    --type json \
    -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": true}]'
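Because the JSON patch payload is easy to mangle when typed across shell line continuations, you can sanity-check it locally before applying it with `oc patch`. This sketch does not need cluster access and assumes only that `python3` is on the PATH:

```shell
# Validate the JSON patch payload and echo the operation it performs.
patch='[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": false}]'
python3 -c 'import json, sys
p = json.loads(sys.argv[1])      # fails loudly if the JSON is malformed
print(p[0]["op"], p[0]["path"])' "$patch"
# prints: replace /spec/featureGates/enableCommonBootImageImport
```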
9.3.2. Managing custom boot source updates
Custom boot sources that are not provided by OpenShift Virtualization are not controlled by the feature gate. You must manage them individually by editing the `HyperConverged` custom resource (CR).
You must configure a storage class. Otherwise, the cluster cannot receive automated updates for custom boot sources. See Defining a storage class for details.
9.3.2.1. Configuring a storage class for custom boot source updates
You can override the default storage class by editing the `HyperConverged` custom resource (CR).
Boot sources are created from storage using the default storage class. If your cluster does not have a default storage class, you must define one before configuring automatic updates for custom boot sources.
Procedure
Open the `HyperConverged` CR in your default editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Define a new storage class by entering a value in the `storageClassName` field:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  dataImportCronTemplates:
  - metadata:
      name: rhel8-image-cron
    spec:
      template:
        spec:
          storageClassName: <new_storage_class>
      schedule: "0 */12 * * *"
      managedDataSource: <data_source>
# ...

- `spec.dataImportCronTemplates.spec.template.spec.storageClassName` specifies the storage class.
- `spec.dataImportCronTemplates.spec.schedule` is a required field that specifies the schedule for the job in cron format.
- `spec.dataImportCronTemplates.spec.managedDataSource` is a required field that specifies the data source to use.

Note: For the custom image to be detected as an available boot source, the value of the `spec.dataVolumeTemplates.spec.sourceRef.name` parameter in the VM template must match this value.
Remove the `storageclass.kubernetes.io/is-default-class` annotation from the current default storage class.

- Retrieve the name of the current default storage class by running the following command:

  $ oc get storageclass

  Example output

  NAME                          PROVISIONER                        RECLAIMPOLICY  VOLUMEBINDINGMODE     ALLOWVOLUMEEXPANSION  AGE
  csi-manila-ceph               manila.csi.openstack.org           Delete         Immediate             false                 11d
  hostpath-csi-basic (default)  kubevirt.io.hostpath-provisioner   Delete         WaitForFirstConsumer  false                 11d

  In this example, the current default storage class is named `hostpath-csi-basic`.

- Remove the annotation from the current default storage class by running the following command:

  $ oc patch storageclass <current_default_storage_class> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

  Replace `<current_default_storage_class>` with the `storageClassName` value of the default storage class.

Set the new storage class as the default by running the following command:

$ oc patch storageclass <new_storage_class> -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Replace `<new_storage_class>` with the `storageClassName` value that you added to the `HyperConverged` CR.
9.3.2.2. Enabling automatic updates for custom boot sources
OpenShift Virtualization automatically updates system-defined boot sources by default, but does not automatically update custom boot sources. You must manually enable automatic updates by editing the `HyperConverged` custom resource (CR).
Prerequisites
- The cluster has a default storage class.
Procedure
Open the `HyperConverged` CR in your default editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Edit the `HyperConverged` CR, adding the appropriate template and boot source in the `dataImportCronTemplates` section. For example:

Example custom resource

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  dataImportCronTemplates:
  - metadata:
      name: centos7-image-cron
      annotations:
        cdi.kubevirt.io/storage.bind.immediate.requested: "true"
    spec:
      schedule: "0 */12 * * *"
      template:
        spec:
          source:
            registry:
              url: docker://quay.io/containerdisks/centos:7-2009
          storage:
            resources:
              requests:
                storage: 10Gi
      managedDataSource: centos7
      retentionPolicy: "None"

- `spec.dataImportCronTemplates.metadata.annotations` specifies the `cdi.kubevirt.io/storage.bind.immediate.requested` annotation, which is required for storage classes with `volumeBindingMode` set to `WaitForFirstConsumer`.
- `spec.dataImportCronTemplates.spec.schedule` specifies the schedule for the job, specified in cron format.
- `spec.dataImportCronTemplates.spec.template.spec.source.registry` specifies the registry source to use to create a data volume. Use the default `pod` `pullMethod` and not `node`, which is based on the `node` docker cache. The `node` docker cache is useful when a registry image is available via `Container.Image`, but the CDI importer is not authorized to access it.
- `spec.dataImportCronTemplates.spec.managedDataSource` specifies the name of the managed data source. For the custom image to be detected as an available boot source, the name of the image's `managedDataSource` must match the name of the template's `DataSource`, which is found under `spec.dataVolumeTemplates.spec.sourceRef.name` in the VM template YAML file.
- `spec.dataImportCronTemplates.spec.retentionPolicy` specifies whether to retain data volumes and data sources after the cron job is deleted. Use `All` to retain data volumes and data sources. Use `None` to delete data volumes and data sources.

Save the file.
9.3.2.3. Enabling volume snapshot boot sources
Enable volume snapshot boot sources by setting the parameter in the `StorageProfile` associated with the storage class that stores operating system base images. Although `DataImportCron` was originally designed to maintain only PVC sources, `VolumeSnapshot` sources scale better than PVC sources for certain storage types.
Use volume snapshots on a storage profile that is proven to scale better when cloning from a single snapshot.
Prerequisites
- You must have access to a volume snapshot with the operating system image.
- The storage must support snapshotting.
Procedure
Open the storage profile object that corresponds to the storage class used to provision boot sources by running the following command:
$ oc edit storageprofile <storage_class>

- Review the `dataImportCronSourceFormat` specification of the `StorageProfile` to confirm whether the VM is using PVC or volume snapshot sources by default.
- Edit the storage profile, if needed, by updating the `dataImportCronSourceFormat` specification to `snapshot`.

Example storage profile

apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
# ...
spec:
  dataImportCronSourceFormat: snapshot
Verification
Open the storage profile object that corresponds to the storage class used to provision boot sources.
$ oc get storageprofile <storage_class> -o yaml

- Confirm that the `dataImportCronSourceFormat` specification of the `StorageProfile` is set to `snapshot`, and that any `DataSource` objects that the `DataImportCron` points to now reference volume snapshots.
You can now use these boot sources to create virtual machines.
9.3.3. Disabling automatic updates for a single boot source
You can disable automatic updates for an individual boot source, whether it is custom or system-defined, by editing the `HyperConverged` custom resource (CR).
Procedure
Open the `HyperConverged` CR in your default editor by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Disable automatic updates for an individual boot source by editing the `spec.dataImportCronTemplates` field.

- Custom boot source
  - Remove the boot source from the `spec.dataImportCronTemplates` field. Automatic updates are disabled for custom boot sources by default.
- System-defined boot source
  - Add the boot source to `spec.dataImportCronTemplates`.
    Note: Automatic updates are enabled by default for system-defined boot sources, but these boot sources are not listed in the CR unless you add them.
  - Set the value of the `dataimportcrontemplate.kubevirt.io/enable` annotation to `'false'`. For example:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      dataImportCronTemplates:
      - metadata:
          annotations:
            dataimportcrontemplate.kubevirt.io/enable: 'false'
          name: rhel8-image-cron
    # ...

- Save the file.
9.3.4. Verifying the status of a boot source
You can determine if a boot source is system-defined or custom by viewing the `status.dataImportCronTemplates.status` field of the `HyperConverged` custom resource (CR).
Procedure
View the contents of the `HyperConverged` CR by running the following command:

$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o yaml

Example output

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
# ...
status:
# ...
  dataImportCronTemplates:
  - metadata:
      annotations:
        cdi.kubevirt.io/storage.bind.immediate.requested: "true"
      name: centos-7-image-cron
    spec:
      garbageCollect: Outdated
      managedDataSource: centos7
      schedule: 55 8/12 * * *
      template:
        metadata: {}
        spec:
          source:
            registry:
              url: docker://quay.io/containerdisks/centos:7-2009
          storage:
            resources:
              requests:
                storage: 30Gi
        status: {}
    status:
      commonTemplate: true
# ...
  - metadata:
      annotations:
        cdi.kubevirt.io/storage.bind.immediate.requested: "true"
      name: user-defined-dic
    spec:
      garbageCollect: Outdated
      managedDataSource: user-defined-centos-stream8
      schedule: 55 8/12 * * *
      template:
        metadata: {}
        spec:
          source:
            registry:
              pullMethod: node
              url: docker://quay.io/containerdisks/centos-stream:8
          storage:
            resources:
              requests:
                storage: 30Gi
        status: {}
    status: {}
# ...

- `status.dataImportCronTemplates.status.commonTemplate` specifies a system-defined boot source.
- `status.dataImportCronTemplates.status` specifies a custom boot source.

Verify the status of the boot source by reviewing the `status.dataImportCronTemplates.status` field:

- If the field contains `commonTemplate: true`, it is a system-defined boot source.
- If the `status.dataImportCronTemplates.status` field has the value `{}`, it is a custom boot source.
9.4. Reserving PVC space for file system overhead
When you add a virtual machine disk to a persistent volume claim (PVC) that uses the `Filesystem` volume mode, you must ensure that there is enough space on the PVC for the VM disk and for file system overhead, such as metadata.
By default, OpenShift Virtualization reserves 5.5% of the PVC space for overhead, reducing the space available for virtual machine disks by that amount.
You can configure a different overhead value by editing the `HCO` object.
9.4.1. Overriding the default file system overhead value
Change the amount of persistent volume claim (PVC) space that OpenShift Virtualization reserves for file system overhead by editing the `spec.filesystemOverhead` attribute of the `HCO` object.
Prerequisites
- Install the OpenShift CLI (`oc`).
Procedure
Open the `HCO` object for editing by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Edit the `spec.filesystemOverhead` fields, populating them with your chosen values:

# ...
spec:
  filesystemOverhead:
    global: "<new_global_value>"
    storageClass:
      <storage_class_name>: "<new_value_for_this_storage_class>"

- `spec.filesystemOverhead.global` specifies the default file system overhead percentage used for any storage classes that do not already have a set value. For example, `global: "0.07"` reserves 7% of the PVC for file system overhead.
- `spec.filesystemOverhead.storageClass` specifies the file system overhead percentage for the specified storage class. For example, `mystorageclass: "0.04"` changes the default overhead value for PVCs in the `mystorageclass` storage class to 4%.
-
- Save and exit the editor to update the `HCO` object.
Verification
View the `CDIConfig` status and verify your changes by running one of the following commands:

- To generally verify changes to `CDIConfig`:

  $ oc get cdiconfig -o yaml

- To view your specific changes to `CDIConfig`:

  $ oc get cdiconfig -o jsonpath='{.items..status.filesystemOverhead}'
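As an arithmetic sketch of what an overhead value means for capacity (the `0.07` overhead and the 10Gi PVC size are hypothetical values, not cluster defaults):

```shell
# Show how much of a PVC a given filesystemOverhead value reserves.
overhead=0.07   # hypothetical global overhead value
pvc_gib=10      # hypothetical PVC size
awk -v o="$overhead" -v s="$pvc_gib" \
  'BEGIN { printf "reserved: %.2fGi, available: %.2fGi\n", s * o, s * (1 - o) }'
# prints: reserved: 0.70Gi, available: 9.30Gi
```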
9.5. Configuring local storage by using the hostpath provisioner
You can configure local storage for virtual machines by using the hostpath provisioner (HPP).
When you install the OpenShift Virtualization Operator, the Hostpath Provisioner Operator is automatically installed. HPP is a local storage provisioner designed for OpenShift Virtualization that is created by the Hostpath Provisioner Operator. To use HPP, you create an HPP custom resource (CR) with a basic storage pool.
9.5.1. Creating a hostpath provisioner with a basic storage pool
You configure a hostpath provisioner (HPP) with a basic storage pool by creating an HPP custom resource (CR) with a `storagePools` stanza.
Do not create storage pools in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable.
Prerequisites
- The directories specified in `spec.storagePools.path` must have read/write access.
Procedure
Create an `hpp_cr.yaml` file with a `storagePools` stanza as in the following example:

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools:
  - name: any_name
    path: "/var/myvolumes"
  workload:
    nodeSelector:
      kubernetes.io/os: linux

- The `storagePools` stanza is an array to which you can add multiple entries.
- `path` specifies the storage pool directories under this node path.

Save the file and exit.
Create the HPP by running the following command:
$ oc create -f hpp_cr.yaml
9.5.1.1. About creating storage classes
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a `StorageClass` object's parameters after you create it.
In order to use the hostpath provisioner (HPP), you must create an associated storage class for the CSI driver with the `storagePools` stanza.
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.
To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using a `StorageClass` with the `volumeBindingMode` parameter set to `WaitForFirstConsumer`, the binding and provisioning of the PV is delayed until a pod is created using the PVC.
9.5.1.2. Creating a storage class for the CSI driver with the storagePools stanza Link kopierenLink in die Zwischenablage kopiert!
To use the hostpath provisioner (HPP) you must create an associated storage class for the Container Storage Interface (CSI) driver.
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a `StorageClass` object's parameters after you create it.
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While a disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.
To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using a `StorageClass` with the `volumeBindingMode` parameter set to `WaitForFirstConsumer`, the binding and provisioning of the PV is delayed until a pod is created using the PVC.
Procedure
Create a `storageclass_csi.yaml` file to define the storage class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi
provisioner: kubevirt.io.hostpath-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  storagePool: my-storage-pool

- `reclaimPolicy` specifies whether the underlying storage is deleted or retained when a user deletes a PVC. The two possible `reclaimPolicy` values are `Delete` and `Retain`. If you do not specify a value, the default value is `Delete`.
- `volumeBindingMode` specifies the timing of PV creation. The `WaitForFirstConsumer` configuration in this example means that PV creation is delayed until a pod is scheduled to a specific node.
- `parameters.storagePool` specifies the name of the storage pool defined in the HPP custom resource (CR).
- Save the file and exit.
- Create the `StorageClass` object by running the following command:

  $ oc create -f storageclass_csi.yaml
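A minimal sketch of a PVC that consumes the storage class created above (the claim name and size are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-vm-disk      # hypothetical name
spec:
  storageClassName: hostpath-csi
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi     # hypothetical size
```

Because the storage class sets `volumeBindingMode: WaitForFirstConsumer`, this PVC remains in the `Pending` state until a pod that uses it is scheduled to a node.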
9.5.2. About storage pools created with PVC templates
If you have a single, large persistent volume (PV), you can create a storage pool by defining a PVC template in the hostpath provisioner (HPP) custom resource (CR).
A storage pool created with a PVC template can contain multiple HPP volumes. Splitting a PV into smaller volumes provides greater flexibility for data allocation.
The PVC template is based on the `spec` section of a `PersistentVolumeClaim` object.
Example PersistentVolumeClaim object
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: iso-pvc
spec:
volumeMode: Block
storageClassName: my-storage-class
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
The `spec.volumeMode` parameter of the PVC template can be either `Block` or `Filesystem`, as long as it matches the provisioned volume format.

You define a storage pool by using a `pvcTemplate` specification in the HPP CR. The Operator creates a PVC from the `pvcTemplate` specification for each node containing the HPP CSI driver. The PVC created from the `pvcTemplate` consumes the single large PV, allowing the HPP to create smaller dynamic volumes.
You can combine basic storage pools with storage pools created from PVC templates.
9.5.2.1. Creating a storage pool with a PVC template
You can create a storage pool for multiple hostpath provisioner (HPP) volumes by specifying a PVC template in the HPP custom resource (CR).
Do not create storage pools in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable.
Prerequisites
- The directories specified in `spec.storagePools.path` must have read/write access.
Procedure
Create an `hpp_pvc_template_pool.yaml` file for the HPP CR that specifies a persistent volume claim (PVC) template in the `storagePools` stanza according to the following example:

apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  storagePools:
  - name: my-storage-pool
    path: "/var/myvolumes"
    pvcTemplate:
      volumeMode: Block
      storageClassName: my-storage-class
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
  workload:
    nodeSelector:
      kubernetes.io/os: linux

- The `storagePools` stanza is an array that can contain both basic and PVC template storage pools.
- `path` specifies the storage pool directories under this node path.
- Optional: The `volumeMode` parameter can be either `Block` or `Filesystem` as long as it matches the provisioned volume format. If no value is specified, the default is `Filesystem`. If the `volumeMode` is `Block`, the mounting pod creates an XFS file system on the block volume before mounting it.
- If the `storageClassName` parameter is omitted, the default storage class is used to create PVCs. If you omit `storageClassName`, ensure that the HPP storage class is not the default storage class.
- You can specify statically or dynamically provisioned storage. In either case, ensure the requested storage size is appropriate for the volume you want to virtually divide or the PVC cannot be bound to the large PV. If the storage class you are using uses dynamically provisioned storage, pick an allocation size that matches the size of a typical request.

Save the file and exit.
Create the HPP with a storage pool by running the following command:
$ oc create -f hpp_pvc_template_pool.yaml
9.6. Enabling user permissions to clone data volumes across namespaces
The isolating nature of namespaces means that users cannot by default clone resources between namespaces.
To enable a user to clone a virtual machine to another namespace, a user with the `cluster-admin` role must create a new cluster role. Bind this cluster role to a user to enable them to clone virtual machines to the destination namespace.
9.6.1. Creating RBAC resources for cloning data volumes
Create a new cluster role that enables permissions for all actions for the `datavolumes` resource.
Prerequisites
- You must have cluster admin privileges.
If you are a non-admin user who is an administrator for both the source and target namespaces, you can create a `Role` in those namespaces instead of a `ClusterRole`.
Procedure
Create a `ClusterRole` manifest:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <datavolume-cloner>
rules:
- apiGroups: ["cdi.kubevirt.io"]
  resources: ["datavolumes/source"]
  verbs: ["*"]

`<datavolume-cloner>` is a unique name for the cluster role.

Create the cluster role in the cluster:

$ oc create -f <datavolume-cloner.yaml>

`<datavolume-cloner.yaml>` is the file name of the `ClusterRole` manifest created in the previous step.

Create a `RoleBinding` manifest that applies to both the source and destination namespaces and references the cluster role created in the previous step:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <allow-clone-to-user>
  namespace: <source_namespace>
subjects:
- kind: ServiceAccount
  name: default
  namespace: <destination_namespace>
roleRef:
  kind: ClusterRole
  name: datavolume-cloner
  apiGroup: rbac.authorization.k8s.io

`<allow-clone-to-user>` is a unique name for the role binding, and `datavolume-cloner` is the name of the cluster role created in the previous step.

Create the role binding in the cluster:

$ oc create -f <rolebinding.yaml>

`<rolebinding.yaml>` is the file name of the `RoleBinding` manifest created in the previous step.
9.7. Configuring CDI to override CPU and memory quotas
You can configure the Containerized Data Importer (CDI) to import, upload, and clone virtual machine disks into namespaces that are subject to CPU and memory resource restrictions.
9.7.1. About CPU and memory quotas in a namespace
A resource quota, defined by the `ResourceQuota` object, imposes restrictions on a namespace that limit the total amount of compute resources that can be consumed by resources within that namespace.
The `HyperConverged` custom resource (CR) defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values are set to a default value of `0`. This ensures that pods created by CDI that do not specify compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota.
9.7.2. Overriding CPU and memory defaults
Modify the default settings for CPU and memory requests and limits for your use case by adding the `spec.resourceRequirements.storageWorkloads` stanza to the `HyperConverged` custom resource (CR).
Prerequisites
- Install the OpenShift CLI (`oc`).
Procedure
Edit the `HyperConverged` CR by running the following command:

$ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv

Add the `spec.resourceRequirements.storageWorkloads` stanza to the CR, setting the values based on your use case. For example:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  resourceRequirements:
    storageWorkloads:
      limits:
        cpu: "500m"
        memory: "2Gi"
      requests:
        cpu: "250m"
        memory: "1Gi"
Save and exit the editor to update the CR.
HyperConverged
9.8. Preparing CDI scratch space
9.8.1. About scratch space
The Containerized Data Importer (CDI) requires scratch space (temporary storage) to complete some operations, such as importing and uploading virtual machine images. During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV). The scratch space PVC is deleted after the operation completes or aborts.
You can define the storage class that is used to bind the scratch space PVC in the `spec.scratchSpaceStorageClass` field of the `HyperConverged` custom resource.
If the defined storage class does not match a storage class in the cluster, then the default storage class defined for the cluster is used. If there is no default storage class defined in the cluster, the storage class used to provision the original DV or PVC is used.
CDI requires requesting scratch space with a `file` volume mode, regardless of the PVC backing the origin data volume. If the origin PVC is backed by `block` volume mode, you must define a storage class capable of provisioning `file` volume mode PVCs.

- Manual provisioning
- If there are no storage classes, CDI uses any PVCs in the project that match the size requirements for the image. If there are no PVCs that match these requirements, the CDI import pod remains in a Pending state until an appropriate PVC is made available or until a timeout function kills the pod.
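In the manual provisioning case, a PVC that CDI could bind for scratch space might look like the following sketch; the name, size, and access mode are assumptions for illustration:

```yaml
# Hypothetical manually provisioned PVC for CDI scratch space.
# It must use Filesystem volume mode and be at least as large as
# the PVC backing the destination data volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cdi-scratch-pvc    # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem   # CDI requires file volume mode for scratch space
  resources:
    requests:
      storage: 10Gi        # match the size of the destination DV's PVC
```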
9.8.2. CDI operations that require scratch space
| Type | Reason |
|---|---|
| Registry imports | CDI must download the image to a scratch space and extract the layers to find the image file. The image file is then passed to QEMU-IMG for conversion to a raw disk. |
| Upload image | QEMU-IMG does not accept input from STDIN. Instead, the image to upload is saved in scratch space before it can be passed to QEMU-IMG for conversion. |
| HTTP imports of archived images | QEMU-IMG does not know how to handle the archive formats CDI supports. Instead, the image is unarchived and saved into scratch space before it is passed to QEMU-IMG. |
| HTTP imports of authenticated images | QEMU-IMG inadequately handles authentication. Instead, the image is saved to scratch space and authenticated before it is passed to QEMU-IMG. |
| HTTP imports of custom certificates | QEMU-IMG inadequately handles custom certificates of HTTPS endpoints. Instead, CDI downloads the image to scratch space before passing the file to QEMU-IMG. |
9.8.3. Defining a storage class
You can define the storage class that the Containerized Data Importer (CDI) uses when allocating scratch space by adding the `spec.scratchSpaceStorageClass` field to the `HyperConverged` custom resource (CR).
Prerequisites

- Install the OpenShift CLI (`oc`).
Procedure

1. Edit the `HyperConverged` CR by running the following command:

   ```terminal
   $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv
   ```

2. Add the `spec.scratchSpaceStorageClass` field to the CR, setting the value to the name of a storage class that exists in the cluster:

   ```yaml
   apiVersion: hco.kubevirt.io/v1beta1
   kind: HyperConverged
   metadata:
     name: kubevirt-hyperconverged
   spec:
     scratchSpaceStorageClass: "<storage_class>" 1
   ```

   1. If you do not specify a storage class, CDI uses the storage class of the persistent volume claim that is being populated.

3. Save and exit your default editor to update the `HyperConverged` CR.
9.8.4. CDI supported operations matrix
This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.
| Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
|---|---|---|---|---|---|
| KubeVirt (QCOW2) | ✓ QCOW2 | ✓ QCOW2** | ✓ QCOW2 | ✓ QCOW2* | ✓ QCOW2* |
| KubeVirt (RAW) | ✓ RAW | ✓ RAW | ✓ RAW | ✓ RAW* | ✓ RAW* |
✓ Supported operation
□ Unsupported operation
* Requires scratch space
** Requires scratch space if a custom certificate authority is required
9.9. Using preallocation for data volumes
The Containerized Data Importer can preallocate disk space to improve write performance when creating data volumes.
You can enable preallocation for specific data volumes.
9.9.1. About preallocation
The Containerized Data Importer (CDI) can use the QEMU preallocate mode for data volumes to improve write performance. You can use preallocation mode for importing and uploading operations and when creating blank data volumes.
If preallocation is enabled, CDI uses the best preallocation method depending on the underlying file system and device type:

- `fallocate`
- If the file system supports it, CDI uses the operating system's `fallocate` call to preallocate space by using the `posix_fallocate` function, which allocates blocks and marks them as uninitialized.
- `full`
- If `fallocate` mode cannot be used, `full` mode allocates space for the image by writing data to the underlying storage. Depending on the storage location, all the empty allocated space might be zeroed.
9.9.2. Enabling preallocation for a data volume
You can enable preallocation for specific data volumes by including the `spec.preallocation` field in the data volume manifest. You can create a preallocated data volume by using the OpenShift CLI (`oc`).

Preallocation mode is supported for all CDI source types.
Procedure

- Specify the `spec.preallocation` field in the data volume manifest:

  ```yaml
  apiVersion: cdi.kubevirt.io/v1beta1
  kind: DataVolume
  metadata:
    name: preallocated-datavolume
  spec:
    source:
      registry:
        url: <image_url>
    preallocation: true
    storage:
      resources:
        requests:
          storage: 1Gi
  # ...
  ```
9.10. Managing data volume annotations
Data volume (DV) annotations allow you to manage pod behavior. You can add one or more annotations to a data volume, which then propagates to the created importer pods.
9.10.1. Example: Data volume annotations
This example shows how you can configure data volume (DV) annotations to control which network the importer pod uses. The `v1.multus-cni.io/default-network: bridge-network` annotation causes the pod to use the multus network named `bridge-network` as its default network. If you want the importer pod to use both the default network from the cluster and the secondary multus network, use the `k8s.v1.cni.cncf.io/networks: <network_name>` annotation.
Multus network annotation example

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: datavolume-example
  annotations:
    v1.multus-cni.io/default-network: bridge-network 1
# ...
```

1. Multus network annotation
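Similarly, to attach the importer pod to a secondary multus network while keeping the cluster default network, the `k8s.v1.cni.cncf.io/networks` annotation described above could be applied as in this sketch; `<network_name>` is a placeholder for an existing network attachment definition:

```yaml
# Sketch: keep the cluster default network and attach a secondary
# multus network. <network_name> stands in for a real
# NetworkAttachmentDefinition name in your cluster.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: datavolume-example
  annotations:
    k8s.v1.cni.cncf.io/networks: <network_name>
# ...
```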