7.15. Virtual machine disks

7.15.1. Storage features

Use the following table to determine feature availability for local and shared persistent storage in OpenShift Virtualization.

7.15.1.1. OpenShift Virtualization storage feature matrix

Table 7.8. OpenShift Virtualization storage feature matrix

| | Virtual machine live migration | Host-assisted virtual machine disk cloning | Storage-assisted virtual machine disk cloning | Virtual machine snapshots |
| --- | --- | --- | --- | --- |
| OpenShift Container Storage: RBD block-mode volumes | Yes | Yes | Yes | Yes |
| OpenShift Virtualization hostpath provisioner | No | Yes | No | No |
| Other multi-node writable storage | Yes [1] | Yes | Yes [2] | Yes [2] |
| Other single-node writable storage | No | Yes | Yes [2] | Yes [2] |

  1. PVCs must request a ReadWriteMany access mode.
  2. Storage provider must support both Kubernetes and CSI snapshot APIs.
Note

You cannot live migrate virtual machines that use:

  • A storage class with ReadWriteOnce (RWO) access mode
  • Passthrough features such as SR-IOV and GPU

Do not set the evictionStrategy field to LiveMigrate for these virtual machines.
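
For reference, the following sketch shows where the evictionStrategy field appears in a VirtualMachine manifest. This is a minimal, illustrative snippet: the VM name is a placeholder, the API version is assumed to be kubevirt.io/v1alpha3, and the domain and volume definitions are omitted.

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: <vm-example>
spec:
  template:
    spec:
      evictionStrategy: LiveMigrate  # remove this field for VMs that use RWO storage or passthrough devices
      # domain, devices, and volumes omitted for brevity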

7.15.2. Configuring local storage for virtual machines

You can configure local storage for your virtual machines by using the hostpath provisioner feature.

7.15.2.1. About the hostpath provisioner

The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.

When you install the OpenShift Virtualization Operator, the hostpath provisioner Operator is automatically installed. To use it, you must:

  • Configure SELinux:

    • If you use Red Hat Enterprise Linux CoreOS 8 workers, you must create a MachineConfig object on each node.
    • Otherwise, apply the SELinux label container_file_t to the PersistentVolume (PV) backing directory on each node.
  • Create a HostPathProvisioner custom resource.
  • Create a StorageClass object for the hostpath provisioner.

The hostpath provisioner Operator deploys the provisioner as a DaemonSet on each node when you create its custom resource. In the custom resource file, you specify the backing directory for the PersistentVolumes that the hostpath provisioner creates.

7.15.2.2. Configuring SELinux for the hostpath provisioner on Red Hat Enterprise Linux CoreOS 8

You must configure SELinux before you create the HostPathProvisioner custom resource. To configure SELinux on Red Hat Enterprise Linux CoreOS 8 workers, you must create a MachineConfig object on each node.

Note

If you do not use Red Hat Enterprise Linux CoreOS workers, skip this procedure.

Prerequisites

  • Create a backing directory on each node for the PersistentVolumes (PVs) that the hostpath provisioner creates.

Procedure

  1. Create the MachineConfig file. For example:

    $ touch machineconfig.yaml
  2. Edit the file, ensuring that you include the directory where you want the hostpath provisioner to create PVs. For example:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 50-set-selinux-for-hostpath-provisioner
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 2.2.0
        systemd:
          units:
            - contents: |
                [Unit]
                Description=Set SELinux chcon for hostpath provisioner
                Before=kubelet.service
    
                [Service]
                ExecStart=/usr/bin/chcon -Rt container_file_t <path/to/backing/directory> 1
    
                [Install]
                WantedBy=multi-user.target
              enabled: true
              name: hostpath-provisioner.service
    1
    Specify the backing directory where you want the provisioner to create PVs.
  3. Create the MachineConfig object:

    $ oc create -f machineconfig.yaml -n <namespace>

7.15.2.3. Using the hostpath provisioner to enable local storage

To deploy the hostpath provisioner and enable your virtual machines to use local storage, first create a HostPathProvisioner custom resource.

Prerequisites

  • Create a backing directory on each node for the PersistentVolumes (PVs) that the hostpath provisioner creates.
  • Apply the SELinux context container_file_t to the PV backing directory on each node. For example:

    $ sudo chcon -t container_file_t -R </path/to/backing/directory>
    Note

    If you use Red Hat Enterprise Linux CoreOS 8 workers, you must configure SELinux by using a MachineConfig manifest instead.

Procedure

  1. Create the HostPathProvisioner custom resource file. For example:

    $ touch hostpathprovisioner_cr.yaml
  2. Edit the file, ensuring that the spec.pathConfig.path value is the directory where you want the hostpath provisioner to create PVs. For example:

    apiVersion: hostpathprovisioner.kubevirt.io/v1alpha1
    kind: HostPathProvisioner
    metadata:
      name: hostpath-provisioner
    spec:
      imagePullPolicy: IfNotPresent
      pathConfig:
        path: "</path/to/backing/directory>" 1
        useNamingPrefix: "false" 2
    1
    Specify the backing directory where you want the provisioner to create PVs.
    2
    Change this value to true if you want to use the name of the PersistentVolumeClaim (PVC) that is bound to the created PV as the prefix of the directory name.
    Note

    If you did not create the backing directory, the provisioner attempts to create it for you. If you did not apply the container_file_t SELinux context, this can cause Permission denied errors.

  3. Create the custom resource in the openshift-cnv namespace:

    $ oc create -f hostpathprovisioner_cr.yaml -n openshift-cnv

7.15.2.4. Creating a StorageClass object

When you create a StorageClass object, you set parameters that affect the dynamic provisioning of PersistentVolumes (PVs) that belong to that storage class.

Note

You cannot update a StorageClass object’s parameters after you create it.

Procedure

  1. Create a YAML file for defining the storage class. For example:

    $ touch storageclass.yaml
  2. Edit the file. For example:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: hostpath-provisioner 1
    provisioner: kubevirt.io/hostpath-provisioner
    reclaimPolicy: Delete 2
    volumeBindingMode: WaitForFirstConsumer 3
    1
    You can optionally rename the storage class by changing this value.
    2
    The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the storage class defaults to Delete.
    3
    The volumeBindingMode value determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a PV until after a Pod that uses the PersistentVolumeClaim (PVC) is created. This ensures that the PV meets the Pod’s scheduling requirements.
Note

Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.

To solve this problem, use the Kubernetes pod scheduler to bind the PVC to a PV on the correct node. By using a StorageClass with volumeBindingMode set to WaitForFirstConsumer, the binding and provisioning of the PV are delayed until a Pod that uses the PVC is created.

  3. Create the StorageClass object:

    $ oc create -f storageclass.yaml

Additional information

7.15.3. Configuring CDI to work with namespaces that have a compute resource quota

You can use the Containerized Data Importer (CDI) to import, upload, and clone virtual machine disks into namespaces that are subject to CPU and memory resource restrictions.

7.15.3.1. About CPU and memory quotas in a namespace

A resource quota, defined by the ResourceQuota object, imposes restrictions on a namespace that limit the total amount of compute resources that can be consumed by resources within that namespace.

The CDIConfig object defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values for the CDIConfig object are set to a default value of 0. This ensures that pods created by CDI that make no compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota.
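
For illustration, a minimal ResourceQuota that restricts compute resources in a namespace might look like the following sketch; the name, namespace, and values are examples only:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: example-compute-quota    # example name
  namespace: <restricted-namespace>
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi

Because such a quota constrains CPU and memory, every pod in the namespace must declare compute resources. The zero-value defaults in the CDIConfig object described above ensure that pods created by CDI satisfy this requirement.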

7.15.3.2. Editing the CDIConfig object to override CPU and memory defaults

Modify the default settings for CPU and memory requests and limits for your use case by editing the spec attribute of the CDIConfig object.

Prerequisites

  • Install the OpenShift CLI (oc).

Procedure

  1. Edit the cdiconfig/config by running the following command:

    $ oc edit cdiconfig/config
  2. Change the default CPU and memory requests and limits by editing the spec: podResourceRequirements property of the CDIConfig object:

    apiVersion: cdi.kubevirt.io/v1alpha1
    kind: CDIConfig
    metadata:
      labels:
        app: containerized-data-importer
        cdi.kubevirt.io: ""
      name: config
    spec:
      podResourceRequirements:
        limits:
          cpu: "4"
          memory: "1Gi"
        requests:
          cpu: "1"
          memory: "250Mi"
    ...
  3. Save and exit the editor to update the CDIConfig object.

Verification

  • View the CDIConfig status and verify your changes by running the following command:

    $ oc get cdiconfig config -o yaml

7.15.3.3. Additional resources

7.15.4. Uploading local disk images by using the virtctl tool

You can upload a locally stored disk image to a new or existing DataVolume by using the virtctl command-line utility.

7.15.4.1. Prerequisites

7.15.4.2. About DataVolumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.

7.15.4.3. Creating an upload DataVolume

You can manually create a DataVolume with an upload data source to use for uploading local disk images.

Procedure

  1. Create a DataVolume configuration that specifies an empty upload object under spec.source:

    apiVersion: cdi.kubevirt.io/v1alpha1
    kind: DataVolume
    metadata:
      name: <upload-datavolume> 1
    spec:
      source:
          upload: {}
      pvc:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: <2Gi> 2
    1
    The name of the DataVolume.
    2
    The size of the DataVolume. Ensure that this value is greater than or equal to the size of the disk that you upload.
  2. Create the DataVolume by running the following command:

    $ oc create -f <upload-datavolume>.yaml

7.15.4.4. Uploading a local disk image to a DataVolume

You can use the virtctl CLI utility to upload a local disk image from a client machine to a DataVolume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure.

Note

After you upload a local disk image, you can add it to a virtual machine.

Prerequisites

  • A virtual machine disk image, in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
  • The kubevirt-virtctl package must be installed on the client machine.
  • The client machine must be configured to trust the OpenShift Container Platform router’s certificate.

Procedure

  1. Identify the following items:

    • The name of the upload DataVolume that you want to use. If this DataVolume does not exist, it is created automatically.
    • The size of the DataVolume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image.
    • The file location of the virtual machine disk image that you want to upload.
  2. Upload the disk image by running the virtctl image-upload command. Specify the parameters that you identified in the previous step. For example:

    $ virtctl image-upload dv <datavolume_name> \ 1
    --size=<datavolume_size> \ 2
    --image-path=</path/to/image> 3
    1
    The name of the DataVolume.
    2
    The size of the DataVolume. For example: --size=500Mi, --size=1G
    3
    The file path of the virtual machine disk image.
    Note
    • If you do not want to create a new DataVolume, omit the --size parameter and include the --no-create flag.
    • When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk.
    • To allow insecure server connections when using HTTPS, use the --insecure parameter. Be aware that when you use the --insecure flag, the authenticity of the upload endpoint is not verified.
  3. Optional: To verify that a DataVolume was created, view all DataVolume objects by running the following command:

    $ oc get dvs

7.15.4.5. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

| Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
| --- | --- | --- | --- | --- | --- |
| KubeVirt (QCOW2) | ✓ QCOW2, ✓ GZ*, ✓ XZ* | ✓ QCOW2**, ✓ GZ*, ✓ XZ* | ✓ QCOW2, ✓ GZ*, ✓ XZ* | ✓ QCOW2*, □ GZ, □ XZ | ✓ QCOW2*, ✓ GZ*, ✓ XZ* |
| KubeVirt (RAW) | ✓ RAW, ✓ GZ, ✓ XZ | ✓ RAW, ✓ GZ, ✓ XZ | ✓ RAW, ✓ GZ, ✓ XZ | ✓ RAW*, □ GZ, □ XZ | ✓ RAW*, ✓ GZ*, ✓ XZ* |

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

7.15.5. Uploading a local disk image to a block storage DataVolume

You can upload a local disk image into a block DataVolume by using the virtctl command-line utility.

In this workflow, you create a local block device to use as a PersistentVolume, associate this block volume with an upload DataVolume, and use virtctl to upload the local disk image into the DataVolume.

7.15.5.1. Prerequisites

7.15.5.2. About DataVolumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.

7.15.5.3. About block PersistentVolumes

A block PersistentVolume (PV) is a PV that is backed by a raw block device. These volumes do not have a filesystem and can provide performance benefits for virtual machines by reducing overhead.

Raw block volumes are provisioned by specifying volumeMode: Block in the PV and PersistentVolumeClaim (PVC) specification.
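
For example, a PersistentVolumeClaim that requests a raw block volume might look like the following sketch; the claim name, storage class, and size are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <block-pvc>
spec:
  volumeMode: Block
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi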

7.15.5.4. Creating a local block PersistentVolume

Create a local block PersistentVolume (PV) on a node by populating a file and mounting it as a loop device. You can then reference this loop device in a PV configuration as a Block volume and use it as a block device for a virtual machine image.

Procedure

  1. Log in as root to the node on which to create the local PV. This procedure uses node01 for its examples.
  2. Create a file and populate it with null characters so that it can be used as a block device. The following example creates a file named loop10 with a size of 2 GB (20 blocks of 100 MB):

    $ dd if=/dev/zero of=<loop10> bs=100M count=20
  3. Mount the loop10 file as a loop device.

    $ losetup </dev/loop10> <loop10> 1 2
    1
    File path where the loop device is mounted.
    2
    The file created in the previous step to be mounted as the loop device.
  4. Create a PersistentVolume configuration that references the mounted loop device.

    kind: PersistentVolume
    apiVersion: v1
    metadata:
      name: <local-block-pv10>
      annotations:
    spec:
      local:
        path: </dev/loop10> 1
      capacity:
        storage: <2Gi>
      volumeMode: Block 2
      storageClassName: local 3
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - <node01> 4
    1
    The path of the loop device on the node.
    2
    Specifies it is a block PV.
    3
    Optional: Set a StorageClass for the PV. If you omit it, the cluster default is used.
    4
    The node on which the block device was mounted.
  5. Create the block PV.

    # oc create -f <local-block-pv10.yaml> 1
    1
    The filename of the PersistentVolume created in the previous step.

7.15.5.5. Creating an upload DataVolume

You can manually create a DataVolume with an upload data source to use for uploading local disk images.

Procedure

  1. Create a DataVolume configuration that specifies an empty upload object under spec.source:

    apiVersion: cdi.kubevirt.io/v1alpha1
    kind: DataVolume
    metadata:
      name: <upload-datavolume> 1
    spec:
      source:
          upload: {}
      pvc:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: <2Gi> 2
    1
    The name of the DataVolume.
    2
    The size of the DataVolume. Ensure that this value is greater than or equal to the size of the disk that you upload.
  2. Create the DataVolume by running the following command:

    $ oc create -f <upload-datavolume>.yaml

7.15.5.6. Uploading a local disk image to a DataVolume

You can use the virtctl CLI utility to upload a local disk image from a client machine to a DataVolume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure.

Note

After you upload a local disk image, you can add it to a virtual machine.

Prerequisites

  • A virtual machine disk image, in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.
  • The kubevirt-virtctl package must be installed on the client machine.
  • The client machine must be configured to trust the OpenShift Container Platform router’s certificate.

Procedure

  1. Identify the following items:

    • The name of the upload DataVolume that you want to use. If this DataVolume does not exist, it is created automatically.
    • The size of the DataVolume, if you want it to be created during the upload procedure. The size must be greater than or equal to the size of the disk image.
    • The file location of the virtual machine disk image that you want to upload.
  2. Upload the disk image by running the virtctl image-upload command. Specify the parameters that you identified in the previous step. For example:

    $ virtctl image-upload dv <datavolume_name> \ 1
    --size=<datavolume_size> \ 2
    --image-path=</path/to/image> 3
    1
    The name of the DataVolume.
    2
    The size of the DataVolume. For example: --size=500Mi, --size=1G
    3
    The file path of the virtual machine disk image.
    Note
    • If you do not want to create a new DataVolume, omit the --size parameter and include the --no-create flag.
    • When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk.
    • To allow insecure server connections when using HTTPS, use the --insecure parameter. Be aware that when you use the --insecure flag, the authenticity of the upload endpoint is not verified.
  3. Optional: To verify that a DataVolume was created, view all DataVolume objects by running the following command:

    $ oc get dvs

7.15.5.7. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

| Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
| --- | --- | --- | --- | --- | --- |
| KubeVirt (QCOW2) | ✓ QCOW2, ✓ GZ*, ✓ XZ* | ✓ QCOW2**, ✓ GZ*, ✓ XZ* | ✓ QCOW2, ✓ GZ*, ✓ XZ* | ✓ QCOW2*, □ GZ, □ XZ | ✓ QCOW2*, ✓ GZ*, ✓ XZ* |
| KubeVirt (RAW) | ✓ RAW, ✓ GZ, ✓ XZ | ✓ RAW, ✓ GZ, ✓ XZ | ✓ RAW, ✓ GZ, ✓ XZ | ✓ RAW*, □ GZ, □ XZ | ✓ RAW*, ✓ GZ*, ✓ XZ* |

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

7.15.6. Moving a local virtual machine disk to a different node

Virtual machines that use local volume storage can be moved so that they run on a specific node.

You might want to move the virtual machine to a specific node for the following reasons:

  • The current node has limitations to the local storage configuration.
  • The new node is better optimized for the workload of that virtual machine.

To move a virtual machine that uses local storage, you must clone the underlying volume by using a DataVolume. After the cloning operation is complete, you can edit the virtual machine configuration so that it uses the new DataVolume, or add the new DataVolume to another virtual machine.
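
For example, after the cloning operation completes, you might update the volumes section of the virtual machine configuration to reference the new DataVolume, as in the following sketch; the volume and DataVolume names are placeholders:

spec:
  template:
    spec:
      volumes:
        - name: <vm-disk>
          dataVolume:
            name: <clone-datavolume>  # the DataVolume created by the cloning operation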

Note

Users without the cluster-admin role require additional user permissions in order to clone volumes across namespaces.
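
One common way to grant such permissions is a ClusterRole that allows use of the datavolumes/source subresource, bound to the user in the source namespace. The following is a sketch only and the role name is a placeholder; verify the exact permissions that your cluster requires:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <datavolume-cloner>  # placeholder name
rules:
  - apiGroups: ["cdi.kubevirt.io"]
    resources: ["datavolumes/source"]
    verbs: ["create"]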

7.15.6.1. Cloning a local volume to another node

You can move a virtual machine disk so that it runs on a specific node by cloning the underlying PersistentVolumeClaim (PVC).

To ensure the virtual machine disk is cloned to the correct node, you must either create a new PersistentVolume (PV) or identify one on the correct node. Apply a unique label to the PV so that it can be referenced by the DataVolume.

Note

The destination PV must be the same size or larger than the source PVC. If the destination PV is smaller than the source PVC, the cloning operation fails.

Prerequisites

  • The virtual machine must not be running. Power down the virtual machine before cloning the virtual machine disk.

Procedure

  1. Either create a new local PV on the node, or identify a local PV already on the node:

    • Create a local PV that includes the nodeAffinity.nodeSelectorTerms parameters. The following manifest creates a 10Gi local PV on node01.

      kind: PersistentVolume
      apiVersion: v1
      metadata:
        name: <destination-pv> 1
        annotations:
      spec:
        accessModes:
        - ReadWriteOnce
        capacity:
          storage: 10Gi 2
        local:
          path: /mnt/local-storage/local/disk1 3
        nodeAffinity:
          required:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - node01 4
        persistentVolumeReclaimPolicy: Delete
        storageClassName: local
        volumeMode: Filesystem
      1
      The name of the PV.
      2
      The size of the PV. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC.
      3
      The mount path on the node.
      4
      The name of the node where you want to create the PV.
    • Identify a PV that already exists on the target node. You can identify the node where a PV is provisioned by viewing the nodeAffinity field in its configuration:

      $ oc get pv <destination-pv> -o yaml

      The following snippet shows that the PV is on node01:

      Example output

      ...
      spec:
        nodeAffinity:
          required:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname 1
                operator: In
                values:
                - node01 2
      ...

      1
      The kubernetes.io/hostname key uses the node host name to select a node.
      2
      The host name of the node.
  2. Add a unique label to the PV:

    $ oc label pv <destination-pv> node=node01
  3. Create a DataVolume manifest that references the following:

    • The PVC name and namespace of the virtual machine.
    • The label you applied to the PV in the previous step.
    • The size of the destination PV.

      apiVersion: cdi.kubevirt.io/v1alpha1
      kind: DataVolume
      metadata:
        name: <clone-datavolume> 1
      spec:
        source:
          pvc:
            name: "<source-vm-disk>" 2
            namespace: "<source-namespace>" 3
        pvc:
          accessModes:
            - ReadWriteOnce
          selector:
            matchLabels:
              node: node01 4
          resources:
            requests:
              storage: <10Gi> 5
      1
      The name of the new DataVolume.
      2
      The name of the source PVC. If you do not know the PVC name, you can find it in the virtual machine configuration: spec.volumes.persistentVolumeClaim.claimName.
      3
      The namespace where the source PVC exists.
      4
      The label that you applied to the PV in the previous step.
      5
      The size of the destination PV.
  4. Start the cloning operation by applying the DataVolume manifest to your cluster:

    $ oc apply -f <clone-datavolume.yaml>

The DataVolume clones the PVC of the virtual machine into the PV on the specific node.
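
You can watch the progress of the operation by checking the phase of the DataVolume that you created, for example:

    $ oc get dv <clone-datavolume>

The clone is complete when the DataVolume phase reports Succeeded.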

7.15.7. Expanding virtual storage by adding blank disk images

You can increase your storage capacity or create new data partitions by adding blank disk images to OpenShift Virtualization.

7.15.7.1. About DataVolumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.

7.15.7.2. Creating a blank disk image with DataVolumes

You can create a new blank disk image in a PersistentVolumeClaim by customizing and deploying a DataVolume configuration file.

Prerequisites

  • At least one available PersistentVolume.
  • Install the OpenShift CLI (oc).

Procedure

  1. Edit the DataVolume configuration file:

    apiVersion: cdi.kubevirt.io/v1alpha1
    kind: DataVolume
    metadata:
      name: blank-image-datavolume
    spec:
      source:
          blank: {}
      pvc:
        # Optional: Set the storage class or omit to accept the default
        # storageClassName: "hostpath"
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi
  2. Create the blank disk image by running the following command:

    $ oc create -f <blank-image-datavolume>.yaml

7.15.7.3. Template: DataVolume configuration file for blank disk images

blank-image-datavolume.yaml

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: blank-image-datavolume
spec:
  source:
      blank: {}
  pvc:
    # Optional: Set the storage class or omit to accept the default
    # storageClassName: "hostpath"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 500Mi

7.15.8. Storage defaults for DataVolumes

The kubevirt-storage-class-defaults ConfigMap provides access mode and volume mode defaults for DataVolumes. You can edit or add storage class defaults to the ConfigMap in order to create DataVolumes in the web console that better match the underlying storage.

7.15.8.1. About storage settings for DataVolumes

DataVolumes require a defined access mode and volume mode to be created in the web console. These storage settings are configured by default with a ReadWriteOnce access mode and Filesystem volume mode.

You can modify these settings by editing the kubevirt-storage-class-defaults ConfigMap in the openshift-cnv namespace. You can also add settings for other storage classes in order to create DataVolumes in the web console for different storage types.

Note

You must configure storage settings that are supported by the underlying storage.

All DataVolumes that you create in the web console use the default storage settings unless you specify a storage class that is also defined in the ConfigMap.

7.15.8.1.1. Access modes

DataVolumes support the following access modes:

  • ReadWriteOnce: The volume can be mounted as read-write by a single node. ReadWriteOnce has greater versatility and is the default setting.
  • ReadWriteMany: The volume can be mounted as read-write by many nodes. ReadWriteMany is required for some features, such as live migration of virtual machines between nodes.

ReadWriteMany is recommended if the underlying storage supports it.

7.15.8.1.2. Volume modes

The volume mode defines if a volume is intended to be used with a formatted filesystem or to remain in raw block state. DataVolumes support the following volume modes:

  • Filesystem: Creates a filesystem on the DataVolume. This is the default setting.
  • Block: Creates a block DataVolume. Only use Block if the underlying storage supports it.

7.15.8.2. Editing the kubevirt-storage-class-defaults config map in the web console

Modify the storage settings for DataVolumes by editing the kubevirt-storage-class-defaults ConfigMap in the openshift-cnv namespace. You can also add settings for other storage classes in order to create DataVolumes in the web console for different storage types.

Note

You must configure storage settings that are supported by the underlying storage.

Procedure

  1. Click Workloads → Config Maps from the side menu.
  2. In the Project list, select openshift-cnv.
  3. Click kubevirt-storage-class-defaults to open the Config Map Overview.
  4. Click the YAML tab to display the editable configuration.
  5. Update the data values with the storage configuration that is appropriate for your underlying storage:

    ...
    data:
      accessMode: ReadWriteOnce 1
      volumeMode: Filesystem 2
      <new>.accessMode: ReadWriteMany 3
      <new>.volumeMode: Block 4
    1
    The default accessMode is ReadWriteOnce.
    2
    The default volumeMode is Filesystem.
    3
    If you add an access mode for a storage class, replace the <new> part of the parameter with the storage class name.
    4
    If you add a volume mode for a storage class, replace the <new> part of the parameter with the storage class name.
  6. Click Save to update the config map.

7.15.8.3. Editing the kubevirt-storage-class-defaults config map in the CLI

Modify the storage settings for DataVolumes by editing the kubevirt-storage-class-defaults ConfigMap in the openshift-cnv namespace. You can also add settings for other storage classes in order to create DataVolumes in the web console for different storage types.

Note

You must configure storage settings that are supported by the underlying storage.

Procedure

  1. Edit the ConfigMap by running the following command:

    $ oc edit configmap kubevirt-storage-class-defaults -n openshift-cnv
  2. Update the data values of the ConfigMap:

    ...
    data:
      accessMode: ReadWriteOnce 1
      volumeMode: Filesystem 2
      <new>.accessMode: ReadWriteMany 3
      <new>.volumeMode: Block 4
    1
    The default accessMode is ReadWriteOnce.
    2
    The default volumeMode is Filesystem.
    3
    If you add an access mode for a storage class, replace the <new> part of the parameter with the storage class name.
    4
    If you add a volume mode for a storage class, replace the <new> part of the parameter with the storage class name.
  3. Save and exit the editor to update the config map.

7.15.8.4. Example of multiple storage class defaults

The following YAML file is an example of a kubevirt-storage-class-defaults ConfigMap that has storage settings configured for two storage classes, nfs-sc and block-sc.

Ensure that all settings are supported by your underlying storage before you update the ConfigMap.

kind: ConfigMap
apiVersion: v1
metadata:
  name: kubevirt-storage-class-defaults
  namespace: openshift-cnv
...
data:
  accessMode: ReadWriteOnce
  volumeMode: Filesystem
  nfs-sc.accessMode: ReadWriteMany
  nfs-sc.volumeMode: Filesystem
  block-sc.accessMode: ReadWriteMany
  block-sc.volumeMode: Block

7.15.9. Using container disks with virtual machines

You can build a virtual machine image into a container disk and store it in your container registry. You can then import the container disk into persistent storage for a virtual machine or attach it directly to the virtual machine for ephemeral storage.

7.15.9.1. About container disks

A container disk is a virtual machine image that is stored as a container image in a container image registry. You can use container disks to deliver the same disk images to multiple virtual machines and to create large numbers of virtual machine clones.

A container disk can either be imported into a persistent volume claim (PVC) by using a DataVolume that is attached to a virtual machine, or attached directly to a virtual machine as an ephemeral containerDisk volume.

7.15.9.1.1. Importing a container disk into a PVC by using a DataVolume

Use the Containerized Data Importer (CDI) to import the container disk into a PVC by using a DataVolume. You can then attach the DataVolume to a virtual machine for persistent storage.
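
For example, a DataVolume that imports a container disk from a registry might look like the following sketch; the DataVolume name, image reference, and storage size are placeholders:

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: <registry-import-datavolume>
spec:
  source:
    registry:
      url: "docker://<registry>/<container_disk_name>:latest"  # container disk image in the registry
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi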

7.15.9.1.2. Attaching a container disk to a virtual machine as a containerDisk volume

A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. When a virtual machine with a containerDisk volume starts, the container image is pulled from the registry and hosted on the node that is hosting the virtual machine.

Use containerDisk volumes for read-only filesystems such as CD-ROMs or for disposable virtual machines.
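
For example, the relevant portion of a virtual machine configuration that attaches a container disk as an ephemeral containerDisk volume might look like the following sketch; the disk name and image reference are placeholders:

spec:
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
      volumes:
        - name: containerdisk
          containerDisk:
            image: <registry>/<container_disk_name>:latest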

Important

Using containerDisk volumes for read-write filesystems is not recommended because the data is temporarily written to local storage on the hosting node. This slows live migration of the virtual machine, such as in the case of node maintenance, because the data must be migrated to the destination node. Additionally, all data is lost if the node loses power or otherwise shuts down unexpectedly.

7.15.9.2. Preparing a container disk for virtual machines

You must build a container disk with a virtual machine image and push it to a container registry before it can be used with a virtual machine. You can then either import the container disk into a PVC using a DataVolume and attach it to a virtual machine, or you can attach the container disk directly to a virtual machine as an ephemeral containerDisk volume.

Prerequisites

  • Install podman if it is not already installed.
  • The virtual machine image must be either QCOW2 or RAW format.

Procedure

  1. Create a Dockerfile to build the virtual machine image into a container image. The virtual machine image must be owned by QEMU, which has a UID of 107, and placed in the /disk/ directory inside the container. Permissions for the /disk/ directory must then be set to 0440.

    The following example uses the Red Hat Universal Base Image (UBI) to handle these configuration changes in the first stage, and uses the minimal scratch image in the second stage to store the result:

    $ cat > Dockerfile << EOF
    FROM registry.access.redhat.com/ubi8/ubi:latest AS builder
    ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1
    RUN chmod 0440 /disk/*
    
    FROM scratch
    COPY --from=builder /disk/* /disk/
    EOF
    1
    Where <vm_image> is the virtual machine image in either QCOW2 or RAW format.
    To use a remote virtual machine image, replace <vm_image>.qcow2 with the complete URL for the remote image.
  2. Build and tag the container:

    $ podman build -t <registry>/<container_disk_name>:latest .
  3. Push the container image to the registry:

    $ podman push <registry>/<container_disk_name>:latest

If your container registry does not have TLS, you must add it as an insecure registry before you can import container disks into persistent storage.

7.15.9.3. Disabling TLS for a container registry to use as insecure registry

You can disable TLS (transport layer security) for a container registry by adding the registry to the cdi-insecure-registries ConfigMap.

Prerequisites

  • Log in to the cluster as a user with the cluster-admin role.

Procedure

  • Add the registry to the cdi-insecure-registries ConfigMap in the cdi namespace.

    $ oc patch configmap cdi-insecure-registries -n cdi \
      --type merge -p '{"data":{"mykey": "<insecure-registry-host>:5000"}}' 1
    1
    Replace <insecure-registry-host> with the registry hostname.

7.15.9.4. Next steps

7.15.10. Preparing CDI scratch space

7.15.10.1. About DataVolumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.

7.15.10.2. Understanding scratch space

The Containerized Data Importer (CDI) requires scratch space (temporary storage) to complete some operations, such as importing and uploading virtual machine images. During this process, the CDI provisions a scratch space PVC equal to the size of the PVC backing the destination DataVolume (DV). The scratch space PVC is deleted after the operation completes or aborts.

The CDIConfig object allows you to define which StorageClass to use to bind the scratch space PVC by setting the scratchSpaceStorageClass in the spec: section of the CDIConfig object.

If the defined StorageClass does not match a StorageClass in the cluster, then the default StorageClass defined for the cluster is used. If there is no default StorageClass defined in the cluster, the StorageClass used to provision the original DV or PVC is used.

Note

CDI requires scratch space with a file volume mode, regardless of the volume mode of the PVC backing the origin DataVolume. If the origin PVC is backed by block volume mode, you must define a StorageClass capable of provisioning file volume mode PVCs.

Manual provisioning

If there are no storage classes, the CDI will use any PVCs in the project that match the size requirements for the image. If there are no PVCs that match these requirements, the CDI import pod will remain in a Pending state until an appropriate PVC is made available or until a timeout function kills the pod.

7.15.10.3. CDI operations that require scratch space

  • Registry imports: CDI must download the image to a scratch space and extract the layers to find the image file. The image file is then passed to QEMU-IMG for conversion to a raw disk.
  • Upload image: QEMU-IMG does not accept input from STDIN. Instead, the image to upload is saved in scratch space before it can be passed to QEMU-IMG for conversion.
  • HTTP imports of archived images: QEMU-IMG does not know how to handle the archive formats CDI supports. Instead, the image is unarchived and saved into scratch space before it is passed to QEMU-IMG.
  • HTTP imports of authenticated images: QEMU-IMG inadequately handles authentication. Instead, the image is saved to scratch space and authenticated before it is passed to QEMU-IMG.
  • HTTP imports of custom certificates: QEMU-IMG inadequately handles custom certificates of HTTPS endpoints. Instead, CDI downloads the image to scratch space before passing the file to QEMU-IMG.

7.15.10.4. Defining a StorageClass in the CDI configuration

Define a StorageClass in the CDI configuration to dynamically provision scratch space for CDI operations.

Procedure

  • Use the oc client to edit cdiconfig/config and add or edit the spec.scratchSpaceStorageClass field to match a StorageClass in the cluster.

    $ oc edit cdiconfig/config
    apiVersion: cdi.kubevirt.io/v1alpha1
    kind: CDIConfig
    metadata:
      name: config
    ...
    spec:
      scratchSpaceStorageClass: "<storage_class>"
    ...

7.15.10.5. CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

| Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
| --- | --- | --- | --- | --- | --- |
| KubeVirt (QCOW2) | ✓ QCOW2, ✓ GZ*, ✓ XZ* | ✓ QCOW2**, ✓ GZ*, ✓ XZ* | ✓ QCOW2, ✓ GZ*, ✓ XZ* | ✓ QCOW2*, □ GZ, □ XZ | ✓ QCOW2*, ✓ GZ*, ✓ XZ* |
| KubeVirt (RAW) | ✓ RAW, ✓ GZ, ✓ XZ | ✓ RAW, ✓ GZ, ✓ XZ | ✓ RAW, ✓ GZ, ✓ XZ | ✓ RAW*, □ GZ, □ XZ | ✓ RAW*, ✓ GZ*, ✓ XZ* |

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required

Additional resources

  • See the Dynamic provisioning section for more information on StorageClasses and how these are defined in the cluster.

7.15.11. Re-using persistent volumes

In order to re-use a statically provisioned persistent volume (PV), you must first reclaim the volume. This involves deleting the PV so that the storage configuration can be re-used.

7.15.11.1. About reclaiming statically provisioned persistent volumes

When you reclaim a persistent volume (PV), you unbind the PV from a persistent volume claim (PVC) and delete the PV. Depending on the underlying storage, you might need to manually delete the shared storage.

You can then re-use the PV configuration to create a PV with a different name.

Statically provisioned PVs must have a reclaim policy of Retain to be reclaimed. If they do not, the PV enters a failed state when the PVC is unbound from the PV.

Important

The Recycle reclaim policy is deprecated in OpenShift Container Platform 4.

7.15.11.2. Reclaiming statically provisioned persistent volumes

Reclaim a statically provisioned persistent volume (PV) by unbinding the persistent volume claim (PVC) and deleting the PV. You might also need to manually delete the shared storage.

Reclaiming a statically provisioned PV is dependent on the underlying storage. This procedure provides a general approach that might need to be customized depending on your storage.

Procedure

  1. Ensure that the reclaim policy of the PV is set to Retain:

    1. Check the reclaim policy of the PV:

      $ oc get pv <pv_name> -o yaml | grep 'persistentVolumeReclaimPolicy'
    2. If the persistentVolumeReclaimPolicy is not set to Retain, edit the reclaim policy with the following command:

      $ oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
  2. Ensure that no resources are using the PV:

    $ oc describe pvc <pvc_name> | grep 'Mounted By:'

    Remove any resources that use the PVC before continuing.

  3. Delete the PVC to release the PV:

    $ oc delete pvc <pvc_name>
  4. Optional: Export the PV configuration to a YAML file. If you manually remove the shared storage later in this procedure, you can refer to this configuration. You can also use spec parameters in this file as the basis to create a new PV with the same storage configuration after you reclaim the PV:

    $ oc get pv <pv_name> -o yaml > <file_name>.yaml
  5. Delete the PV:

    $ oc delete pv <pv_name>
  6. Optional: Depending on the storage type, you might need to remove the contents of the shared storage folder:

    $ rm -rf <path_to_share_storage>
  7. Optional: Create a PV that uses the same storage configuration as the deleted PV. If you exported the reclaimed PV configuration earlier, you can use the spec parameters of that file as the basis for a new PV manifest:

    Note

    To avoid possible conflict, it is good practice to give the new PV object a different name than the one that you deleted.

    $ oc create -f <new_pv_name>.yaml

Additional resources

7.15.12. Deleting DataVolumes

You can manually delete a DataVolume by using the oc command-line interface.

Note

When you delete a virtual machine, the DataVolume it uses is automatically deleted.

7.15.12.1. About DataVolumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. DataVolumes orchestrate import, clone, and upload operations that are associated with an underlying PersistentVolumeClaim (PVC). DataVolumes are integrated with KubeVirt, and they prevent a virtual machine from being started before the PVC has been prepared.

7.15.12.2. Listing all DataVolumes

You can list the DataVolumes in your cluster by using the oc command-line interface.

Procedure

  • List all DataVolumes by running the following command:

    $ oc get dvs

7.15.12.3. Deleting a DataVolume

You can delete a DataVolume by using the oc command-line interface (CLI).

Prerequisites

  • Identify the name of the DataVolume that you want to delete.

Procedure

  • Delete the DataVolume by running the following command:

    $ oc delete dv <datavolume_name>
    Note

    This command only deletes objects that exist in the current project. Specify the -n <project_name> option if the object you want to delete is in a different project or namespace.
