
Chapter 6. Dynamic storage using the LVMS plugin


MicroShift enables dynamic storage provisioning that is ready for immediate use with the logical volume manager storage (LVMS) Container Storage Interface (CSI) provider. The LVMS plugin is the Red Hat downstream version of TopoLVM, a CSI plugin for managing logical volume management (LVM) logical volumes (LVs) for Kubernetes.

LVMS provisions new LVM logical volumes for container workloads with appropriately configured persistent volume claims (PVCs). Each PVC references a storage class that represents an LVM Volume Group (VG) on the host node. LVs are only provisioned for scheduled pods.

6.1. LVMS system requirements

Using LVMS in MicroShift requires the following system specifications.

6.1.1. Volume group name

If you did not configure LVMS in an lvmd.yaml file placed in the /etc/microshift/ directory, MicroShift attempts to assign a default volume group (VG) dynamically by running the vgs command.

  • MicroShift assigns a default VG when only one VG is found.
  • If more than one VG is present, the VG named microshift is assigned as the default.
  • If a VG named microshift does not exist, LVMS is not deployed.

If there are no volume groups on the MicroShift host, LVMS is disabled.
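
For example, you can list the volume groups on the host by running the same vgs command that MicroShift uses. The following output is illustrative only and shows a hypothetical host with a single VG named microshift:

$ sudo vgs

Example output

  VG         #PV #LV #SN Attr   VSize   VFree
  microshift   1   0   0 wz--n- <50.00g <50.00g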

If you want to use a specific VG, LVMS must be configured to select that VG. You can change the default name of the VG in the configuration file. For details, read the "Configuring the LVMS" section of this document.

After MicroShift starts, you can update the lvmd.yaml file to include or remove VGs. To implement the changes, you must restart MicroShift. If the lvmd.yaml file is deleted, MicroShift attempts to find a default VG again.
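
For example, after editing the lvmd.yaml file, apply the changes by restarting MicroShift:

$ sudo systemctl restart microshift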

6.1.2. Volume size increments

The LVMS provisions storage in increments of 1 gigabyte (GB). Storage requests are rounded up to the nearest GB. When the free capacity of a VG is less than 1 GB, the PersistentVolumeClaim registers a ProvisioningFailed event, for example:

Example output

Warning  ProvisioningFailed    3s (x2 over 5s)  topolvm.cybozu.com_topolvm-controller-858c78d96c-xttzp_0fa83aef-2070-4ae2-bcb9-163f818dcd9f failed to provision volume with
StorageClass "topolvm-provisioner": rpc error: code = ResourceExhausted desc = no enough space left on VG: free=(BYTES_INT), requested=(BYTES_INT)

6.2. Disabling and uninstalling LVMS CSI provider and CSI snapshot deployments

You can reduce the use of runtime resources such as RAM, CPU, and storage by disabling or removing storage components in the following ways:

  • You can configure MicroShift to disable the built-in logical volume manager storage (LVMS) Container Storage Interface (CSI) provider.
  • You can configure MicroShift to disable the Container Storage Interface (CSI) snapshot capabilities.
  • You can uninstall the installed CSI implementations using oc commands.
Important

Automated uninstallation is not supported because it can orphan provisioned volumes. Without the LVMS CSI driver, the cluster has no knowledge of the underlying storage interface and cannot perform provisioning, deprovisioning, mounting, or unmounting operations.

Note

You can configure MicroShift to disable the CSI provider and CSI snapshot components only before installing and running MicroShift. After MicroShift is installed and running, you must update the configuration file and uninstall the components.

6.3. Disabling deployments that run CSI snapshot implementations

Use the following procedure to disable installation of the CSI snapshot implementation pods.

Important

This procedure is for users who define the configuration file before installing and running MicroShift. If MicroShift has already started, the CSI snapshot implementation is running and you must remove it manually by following the uninstallation instructions.

Note

MicroShift does not delete CSI snapshot implementation pods. You must configure MicroShift to disable installation of the CSI snapshot implementation pods during the startup process.

Procedure

  1. Disable installation of the CSI snapshot controller by entering the optionalCsiComponents value under the storage section of the MicroShift configuration file in /etc/microshift/config.yaml:

    # ...
      storage: {} 1
    # ...
    1
    Accepted values are:
    • Not defining optionalCsiComponents.
    • Specifying the optionalCsiComponents field with an empty value ([]) or a single empty string element ([""]).
    • Specifying optionalCsiComponents with one or more of the supported values: snapshot-controller, snapshot-webhook, or none. The none value is mutually exclusive with all other values.

      Note

      If the optionalCsiComponents value is empty or null, MicroShift defaults to deploying snapshot-controller and snapshot-webhook.
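
    For example, to disable installation of both snapshot components, you can set optionalCsiComponents to none. The following snippet is a minimal illustration of the accepted syntax:

    # ...
    storage:
      optionalCsiComponents:
        - none
    # ...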

  2. After the optionalCsiComponents field is specified with a supported value in the config.yaml, start MicroShift by running the following command:

    $ sudo systemctl start microshift
    Note

    MicroShift does not redeploy the disabled components after a restart.
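
Verification

  • Optional: Confirm that the snapshot components were not deployed by listing the deployments in the kube-system namespace, where the snapshot controller and webhook otherwise run:

    $ oc get deployments -n kube-system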

6.4. Disabling deployments that run the CSI driver implementations

Use the following procedure to disable installation of the CSI driver implementation pods.

Important

This procedure is for users who define the configuration file before installing and running MicroShift. If MicroShift has already started, the CSI driver implementation is running and you must remove it manually by following the uninstallation instructions.

Note

MicroShift does not delete CSI driver implementation pods. You must configure MicroShift to disable installation of the CSI driver implementation pods during the startup process.

Procedure

  1. Disable installation of the CSI driver by entering the driver value under the storage section of the MicroShift configuration file in /etc/microshift/config.yaml:

    # ...
    storage:
      driver:
        - "none" 1
    # ...
    1
    Valid values are none or lvms.
    Note

    By default, the driver value is empty or null and LVMS is deployed.

  2. Start MicroShift after the driver field is specified with a supported value in the /etc/microshift/config.yaml file by running the following command:

    $ sudo systemctl enable --now microshift
    Note

    MicroShift does not redeploy the disabled components after a restart operation.
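
Verification

  • Optional: Confirm that LVMS was not deployed by listing the pods in the openshift-storage namespace, where LVMS normally runs. An empty result indicates that the driver is disabled:

    $ oc get pods -n openshift-storage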

6.5. Uninstalling the CSI snapshot implementation

To uninstall the installed CSI snapshot implementation, use the following procedure.

Prerequisites

  • MicroShift is installed and running.
  • The CSI snapshot implementation is deployed on the MicroShift cluster.

Procedure

  1. Uninstall the CSI snapshot implementation by running the following command:

    $ oc delete -n kube-system deployment.apps/snapshot-controller deployment.apps/snapshot-webhook

    Example output

    deployment.apps "snapshot-controller" deleted
    deployment.apps "snapshot-webhook" deleted

6.6. Uninstalling the CSI driver implementation

To uninstall the installed CSI driver implementation, use the following procedure.

Prerequisites

  • MicroShift is installed and running.
  • The CSI driver implementation is deployed on the MicroShift cluster.

Procedure

  1. Delete the lvmclusters object by running the following command:

    $ oc delete -n openshift-storage lvmclusters.lvm.topolvm.io/lvms

    Example output

    lvmcluster.lvm.topolvm.io "lvms" deleted

  2. Delete the lvms-operator by running the following command:

    $ oc delete -n openshift-storage deployment.apps/lvms-operator

    Example output

    deployment.apps "lvms-operator" deleted

  3. Delete the topolvm-provisioner StorageClass by running the following command:

    $ oc delete storageclasses.storage.k8s.io/topolvm-provisioner

    Example output

    storageclass.storage.k8s.io "topolvm-provisioner" deleted

6.7. LVMS deployment

LVMS is automatically deployed to the cluster in the openshift-storage namespace after MicroShift starts.

LVMS uses StorageCapacity tracking to ensure that pods with an LVMS PVC are not scheduled if the requested storage is greater than the free storage of the volume group. For more information about StorageCapacity tracking, read Storage Capacity.
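
Because LVMS uses StorageCapacity tracking, the available capacity is published as CSIStorageCapacity objects, which you can inspect, for example:

$ oc get csistoragecapacities -A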

6.8. Limitations for configuring the size of the devices used in LVM Storage

The following limitations apply when you configure the size of the devices that you can use to provision storage with LVM Storage:

  • The total storage size that you can provision is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the over-provisioning factor.
  • The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE).

    • You can define the size of PE and LE during the physical and logical device creation.
    • The default PE and LE size is 4 MB.
  • If the size of the PE is increased, the maximum size of a logical volume is determined by the kernel limits and your disk space.
    • The size limit for Red Hat Enterprise Linux (RHEL) 9 using the default PE and LE size is 8 EB.
    • The following are the minimum storage sizes that you can request for each file system type:

      • block: 8 MiB
      • xfs: 300 MiB
      • ext4: 32 MiB
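
Because storage requests are rounded up to the nearest gigabyte, as described in "Volume size increments", a claim that requests one of these minimums is still provisioned as a 1 GB logical volume. A hypothetical claim requesting the xfs minimum looks like the following:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-xfs-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 300Mi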

6.9. Creating an LVMS configuration file

When MicroShift runs, it uses LVMS configuration from /etc/microshift/lvmd.yaml, if provided. You must place any configuration files that you create into the /etc/microshift/ directory.

Procedure

  • To create the lvmd.yaml configuration file, run the following command:

    $ sudo cp /etc/microshift/lvmd.yaml.default /etc/microshift/lvmd.yaml

6.10. Basic LVMS configuration example

MicroShift supports passing through your LVM configuration and allows you to specify custom volume groups, thin volume provisioning parameters, and reserved unallocated volume group space. You can edit the LVMS configuration file you created at any time. You must restart MicroShift to deploy configuration changes after editing the file.

Note

If you need to take volume snapshots, you must use thin provisioning in your lvmd.yaml file. If you do not need to take volume snapshots, you can use thick volumes.

The following lvmd.yaml example file shows a basic LVMS configuration:

LVMS configuration example

socket-name: 1
device-classes: 2
  - name: "default" 3
    volume-group: "VGNAMEHERE" 4
    spare-gb: 0 5
    default: 6

1
String. The UNIX domain socket endpoint of gRPC. Defaults to '/run/lvmd/lvmd.socket'.
2
A list of maps for the settings for each device-class.
3
String. The name of the device-class.
4
String. The group where the device-class creates the logical volumes.
5
Unsigned 64-bit integer. Storage capacity in GB to be left unallocated in the volume group. Defaults to 0.
6
Boolean. Indicates that the device-class is used by default. Defaults to false. At least one device-class in the file must set this value to true.
Important

A race condition prevents LVMS from accurately tracking the allocated space and preserving the spare-gb for a device class when multiple PVCs are created simultaneously. Use separate volume groups and device classes to protect the storage of highly dynamic workloads from each other.
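
A filled-in version of the same file might look like the following. The volume group name rhel is a hypothetical example and must match a VG that exists on your node:

socket-name: /run/lvmd/lvmd.socket
device-classes:
  - name: default
    volume-group: rhel
    spare-gb: 0
    default: true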

6.11. Using the LVMS

LVMS is deployed with a default StorageClass. Any PersistentVolumeClaim object that does not define .spec.storageClassName automatically has a PersistentVolume provisioned from the default StorageClass. Use the following procedure to provision and mount a logical volume to a pod.

Procedure

  • To provision and mount a logical volume to a pod, run the following command:

    $ cat <<EOF | oc apply -f -
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: my-lv-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1G
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: my-pod
    spec:
      containers:
      - name: nginx
        image: nginx
        command: ["/usr/bin/sh", "-c"]
        args: ["sleep 1h"]
        volumeMounts:
        - mountPath: /mnt
          name: my-volume
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
              - ALL
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
      volumes:
        - name: my-volume
          persistentVolumeClaim:
            claimName: my-lv-pvc
    EOF
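
  • Optional: Verify that the claim is bound and that a volume was provisioned by running the following command. The STATUS column reports Bound, and the STORAGECLASS column shows the default storage class, topolvm-provisioner:

    $ oc get pvc my-lv-pvc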

6.11.1. Device classes

You can create custom device classes by adding a device-classes array to your logical volume manager storage (LVMS) configuration. Add the array to the /etc/microshift/lvmd.yaml configuration file. A single device class must be set as the default. You must restart MicroShift for configuration changes to take effect.

Warning

Removing a device class while there are still persistent volumes or VolumeSnapshotContent objects connected to that device class breaks both thick and thin provisioning.

You can define multiple device classes in the device-classes array. These classes can be a mix of thick and thin volume configurations.

Example of a mixed device-class array

socket-name: /run/topolvm/lvmd.sock
device-classes:
  - name: ssd
    volume-group: ssd-vg
    spare-gb: 0 1
    default: true
  - name: hdd
    volume-group: hdd-vg
    spare-gb: 0
  - name: thin
    spare-gb: 0
    thin-pool:
      name: thin
      overprovision-ratio: 10
    type: thin
    volume-group: ssd
  - name: striped
    volume-group: multi-pv-vg
    spare-gb: 0
    stripe: 2
    stripe-size: "64"
    lvcreate-options: 2

1
When you set the spare capacity to anything other than 0, more space can be allocated than expected.
2
Extra arguments to pass to the lvcreate command, such as --type=<type>. Neither MicroShift nor the LVMS verifies lvcreate-options values. These optional values are passed as is to the lvcreate command. Ensure that the options specified here are correct.
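
For example, a hypothetical device-class could pass a RAID type to the lvcreate command through lvcreate-options. Whether the operation succeeds depends on lvcreate and on the physical volumes in the group:

  - name: raid
    volume-group: multi-pv-vg
    spare-gb: 0
    lvcreate-options:
      - --type=raid1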