MicroShift is Developer Preview software only. For more information about the support scope of Red Hat Developer Preview software, see Developer Preview Support Scope.

Chapter 6. Dynamic storage using the LVMS plugin
Red Hat build of MicroShift enables dynamic storage provisioning that is ready for immediate use with the logical volume manager storage (LVMS) Container Storage Interface (CSI) provider. The LVMS plugin is the Red Hat downstream version of TopoLVM, a CSI plugin for managing LVM volumes for Kubernetes.
LVMS provisions new logical volume management (LVM) logical volumes (LVs) for container workloads with appropriately configured persistent volume claims (PVC). Each PVC references a storage class that represents an LVM Volume Group (VG) on the host node. LVs are only provisioned for scheduled pods.
6.1. LVMS system requirements
Using LVMS in Red Hat build of MicroShift requires the following system specifications.
6.1.1. Volume group name
The default integration of LVMS selects the default volume group (VG) dynamically. If there are no volume groups on the Red Hat build of MicroShift host, LVMS is disabled.
If there is only one VG on the Red Hat build of MicroShift host, that VG is used. If there are multiple volume groups, the group named microshift is used. If the microshift group is not found, LVMS is disabled.
If you want to use a specific VG, LVMS must be configured to select that VG. You can change the default name of the VG in the configuration file. For details, read the "Configuring the LVMS" section of this document.
Prior to launching, the lvmd.yaml configuration file must specify an existing VG on the node with sufficient capacity for workload storage. If the VG does not exist, the node controller fails to start and enters a CrashLoopBackoff state.
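If no suitable VG exists yet, you can create one with the standard LVM tools. A minimal sketch, assuming an unused disk at /dev/sdb (a hypothetical device name; substitute your own spare device):

```
$ sudo pvcreate /dev/sdb
$ sudo vgcreate microshift /dev/sdb
$ sudo vgs microshift
```

Naming the VG microshift means LVMS selects it automatically even when other volume groups exist on the host.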
6.1.2. Volume size increments
LVMS provisions storage in increments of 1 gigabyte (GB). Storage requests are rounded up to the nearest GB. When the capacity of a VG is less than 1 GB, the PersistentVolumeClaim registers a ProvisioningFailed event, for example:
Example output
Warning ProvisioningFailed 3s (x2 over 5s) topolvm.cybozu.com_topolvm-controller-858c78d96c-xttzp_0fa83aef-2070-4ae2-bcb9-163f818dcd9f failed to provision volume with StorageClass "topolvm-provisioner": rpc error: code = ResourceExhausted desc = no enough space left on VG: free=(BYTES_INT), requested=(BYTES_INT)
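The round-up behavior can be illustrated with shell arithmetic. A sketch, assuming binary-gigabyte (GiB, 2^30 byte) increments as used by the upstream TopoLVM provisioner; the request size is an arbitrary example:

```shell
# Sketch of the round-up arithmetic (assumes 1 GiB = 2^30 byte increments).
req_bytes=1500000000                              # a PVC requesting storage: 1.5G
gib=$((1024 * 1024 * 1024))
lv_bytes=$(( (req_bytes + gib - 1) / gib * gib )) # round up to the next whole GiB
echo "$lv_bytes"                                  # size of the provisioned LV: 2147483648
```

A 1.5 GB request therefore consumes 2 GiB of VG capacity; if less than that is free in the VG, a ProvisioningFailed event like the one above is emitted.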
6.2. LVMS deployment
LVMS is automatically deployed onto the cluster in the openshift-storage namespace after Red Hat build of MicroShift starts.

LVMS uses StorageCapacity tracking to ensure that pods with an LVMS PVC are not scheduled if the requested storage is greater than the free storage of the volume group. For more information about StorageCapacity tracking, read Storage Capacity.
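Capacity tracking is implemented with Kubernetes CSIStorageCapacity objects, so you can inspect the capacity that the scheduler sees. A sketch, assuming the default LVMS deployment:

```
$ oc get csistoragecapacities -n openshift-storage
```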
6.3. Creating an LVMS configuration file
When Red Hat build of MicroShift runs, it uses the LVMS configuration from /etc/microshift/lvmd.yaml, if provided. You must place any configuration files that you create in the /etc/microshift/ directory.
Procedure
To create the lvmd.yaml configuration file, run the following command:

$ sudo cp /etc/microshift/lvmd.yaml.default /etc/microshift/lvmd.yaml
6.4. Configuring the LVMS
Red Hat build of MicroShift supports passing through your LVM configuration and allows you to specify custom volume groups, thin volume provisioning parameters, and reserved unallocated volume group space. You can edit the LVMS configuration file you created at any time. You must restart Red Hat build of MicroShift to deploy configuration changes after editing the file.
The following lvmd.yaml
example file shows a basic LVMS configuration:
LVMS configuration example
socket-name: 1
device-classes: 2
  - name: 3
    volume-group: 4
    spare-gb: 5
    default: 6
  - name: hdd
    volume-group: hdd-vg
    spare-gb: 10
  - name: striped
    volume-group: multi-pv-vg
    spare-gb: 10
    stripe: 7
    stripe-size: 8
  - name: raid
    volume-group: raid-vg
    lvcreate-options: 9
      - --type=raid1
1. String. The UNIX domain socket endpoint of gRPC. Defaults to /run/topolvm/lvmd.sock.
2. map[string]DeviceClass. The device-class settings.
3. String. The name of the device-class.
4. String. The volume group where the device-class creates the logical volumes.
5. uint64. Storage capacity in GiB to be left unallocated in the volume group. Defaults to 0.
6. Boolean. Indicates that the device-class is used by default. Defaults to false.
7. uint. The number of stripes in the logical volume.
8. String. The amount of data that is written to one device before moving to the next device.
9. [string]. Extra arguments to pass to lvcreate, for example, ["--type=raid1"].
A race condition prevents LVMS from accurately tracking the allocated space and preserving the spare-gb for a device class when multiple PVCs are created simultaneously. Use separate volume groups and device classes to protect the storage of highly dynamic workloads from each other.
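Such isolation can be expressed directly in lvmd.yaml. A sketch of two device classes on separate volume groups (the names app, batch, app-vg, and batch-vg are hypothetical):

```
device-classes:
  - name: app
    volume-group: app-vg
    spare-gb: 10
    default: true
  - name: batch
    volume-group: batch-vg
    spare-gb: 10
```

Because each device class draws from its own VG, a burst of PVCs against the batch class cannot exhaust the capacity reserved for the app class.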
Striping can be configured by using either the dedicated options (stripe and stripe-size) or lvcreate-options, but not both for the same purpose. Using stripe and stripe-size together with striping flags in lvcreate-options leads to duplicate arguments to lvcreate. Never set lvcreate-options: ["--stripes=n"] and stripe: n at the same time. You can, however, combine them when lvcreate-options is not used for striping. For example:

stripe: 2
lvcreate-options: ["--mirrors=1"]
6.5. Using the LVMS
LVMS is deployed with a default StorageClass. Any PersistentVolumeClaim objects without .spec.storageClassName defined automatically have a PersistentVolume provisioned from the default StorageClass. Use the following procedure to provision and mount a logical volume to a pod.
Procedure
To provision and mount a logical volume to a pod, run the following command:
$ cat <<'EOF' | oc apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-lv-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1G
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx
    command: ["/usr/bin/sh", "-c"]
    args: ["sleep 1h"]
    volumeMounts:
    - mountPath: /mnt
      name: my-volume
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: my-lv-pvc
EOF
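After applying the manifests, you can verify that the claim is bound and the volume is mounted. A sketch, assuming the example names above:

```
$ oc get pvc my-lv-pvc
$ oc get pod my-pod
$ oc exec my-pod -- df -h /mnt
```

The PVC should report a STATUS of Bound, and the df output should show a logical volume device mounted at /mnt.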