Chapter 26. Configuring Local Volumes
26.1. Overview
OpenShift Container Platform can be configured to access local volumes for application data.
Local volumes are persistent volumes (PV) that represent locally-mounted file systems, including raw block devices. A raw device offers a more direct route to the physical device and allows an application more control over the timing of I/O operations to that physical device. This makes raw devices suitable for complex applications such as database management systems that typically do their own caching. Local volumes have unique scheduling behavior: any pod that uses a local volume PV is scheduled on the node where that local volume is mounted.
In addition, local volumes include a provisioner that automatically creates PVs for locally-mounted devices. The provisioner currently scans only pre-configured directories; it cannot dynamically provision volumes, but this feature might be implemented in a future release.
The local volume provisioner allows using local storage within OpenShift Container Platform and supports:
- Volumes
- PVs
Local volumes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on the Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.
26.2. Mounting local volumes
All local volumes must be manually mounted before they can be consumed by OpenShift Container Platform as PVs.
To mount local volumes:
Mount all volumes into the /mnt/local-storage/<storage-class-name>/<volume> path. Administrators must create the local devices as needed using any method, such as disk partitioning or LVM, create suitable file systems on these devices, and mount them using a script or /etc/fstab entries, for example:

# device name   # mount point                  # FS    # options  # extra
/dev/sdb1       /mnt/local-storage/ssd/disk1   ext4    defaults   1 2
/dev/sdb2       /mnt/local-storage/ssd/disk2   ext4    defaults   1 2
/dev/sdb3       /mnt/local-storage/ssd/disk3   ext4    defaults   1 2
/dev/sdc1       /mnt/local-storage/hdd/disk1   ext4    defaults   1 2
/dev/sdc2       /mnt/local-storage/hdd/disk2   ext4    defaults   1 2
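The entries above follow a simple pattern, so a script can generate them. The following sketch only prints fstab lines for the example layout; the device and directory names match the example above and the helper function name is illustrative:

```shell
#!/bin/sh
# Sketch: print /etc/fstab entries for the example layout above.
# fstab_entry DEVICE CLASS VOLUME -> one fstab line
fstab_entry() {
    printf '%s %s ext4 defaults 1 2\n' "$1" "/mnt/local-storage/$2/$3"
}

for i in 1 2 3; do fstab_entry "/dev/sdb$i" ssd "disk$i"; done
for i in 1 2;   do fstab_entry "/dev/sdc$i" hdd "disk$i"; done
# first line printed: /dev/sdb1 /mnt/local-storage/ssd/disk1 ext4 defaults 1 2
```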
Make all volumes accessible to the processes running within the containers. You can change the labels of mounted file systems to allow this, for example:
$ chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /mnt/local-storage/
26.3. Configuring the local provisioner
OpenShift Container Platform depends on an external provisioner to create PVs for local devices and to clean up PVs when they are no longer in use, so that the underlying devices can be reused.
- The local volume provisioner is different from most provisioners and does not support dynamic provisioning.
- The local volume provisioner requires administrators to preconfigure the local volumes on each node and mount them under discovery directories. The provisioner then manages the volumes by creating and cleaning up PVs for each volume.
To configure the local provisioner:
Configure the external provisioner using a ConfigMap to relate directories with storage classes. This configuration must be created before the provisioner is deployed, for example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-volume-config
data:
  storageClassMap: |
    local-ssd:                          # Name of the storage class.
      hostDir: /mnt/local-storage/ssd   # Path to the directory on the host.
      mountDir: /mnt/local-storage/ssd  # Path to the directory in the provisioner pod.
    local-hdd:
      hostDir: /mnt/local-storage/hdd
      mountDir: /mnt/local-storage/hdd
(Optional) Create a standalone namespace for the local volume provisioner and its configuration, for example:

$ oc new-project local-storage
With this configuration, the provisioner creates:
- One PV with storage class local-ssd for every subdirectory mounted in the /mnt/local-storage/ssd directory
- One PV with storage class local-hdd for every subdirectory mounted in the /mnt/local-storage/hdd directory
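For illustration, a PV created by the provisioner for one such subdirectory looks roughly like the following. The PV name, capacity, and node name are illustrative, and the exact fields can vary by release:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-ssd-disk1        # generated name; illustrative
spec:
  capacity:
    storage: 100Gi                # size of the underlying file system; illustrative
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-ssd
  local:
    path: /mnt/local-storage/ssd/disk1
  nodeAffinity:                   # pins pods using this PV to the owning node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1.example.com     # node where the device is mounted; illustrative
```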
26.4. Deploying the local provisioner
Before starting the provisioner, mount all local devices and create a ConfigMap with storage classes and their directories.
To deploy the local provisioner:
- Install the local provisioner from the local-storage-provisioner-template.yaml file.
Create a service account that can run pods as the root user, use hostPath volumes, and use any SELinux context, so that the provisioner can monitor, manage, and clean local volumes:
$ oc create serviceaccount local-storage-admin
$ oc adm policy add-scc-to-user privileged -z local-storage-admin
To allow the provisioner pod to delete content on local volumes created by any pod, root privileges and any SELinux context are required. hostPath is required to access the /mnt/local-storage path on the host.
Install the template:
$ oc create -f https://raw.githubusercontent.com/openshift/origin/release-3.11/examples/storage-examples/local-examples/local-storage-provisioner-template.yaml
Instantiate the template by specifying values for the CONFIGMAP, SERVICE_ACCOUNT, NAMESPACE, and PROVISIONER_IMAGE parameters:

$ oc new-app -p CONFIGMAP=local-volume-config \
  -p SERVICE_ACCOUNT=local-storage-admin \
  -p NAMESPACE=local-storage \
  -p PROVISIONER_IMAGE=registry.redhat.io/openshift3/local-storage-provisioner:v3.11 \
  local-storage-provisioner

In the PROVISIONER_IMAGE tag, provide your OpenShift Container Platform version number, such as v3.11.
Add the necessary storage classes:
$ oc create -f ./storage-class-ssd.yaml
$ oc create -f ./storage-class-hdd.yaml
For example:
storage-class-ssd.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-ssd
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
storage-class-hdd.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hdd
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
See the local storage provisioner template for other configurable options. This template creates a DaemonSet that runs a pod on every node. The pod watches the directories that are specified in the ConfigMap and automatically creates PVs for them.
The provisioner runs with root permissions because it removes all data from the modified directories when a PV is released.
26.5. Adding new devices
Adding a new device is semi-automatic. The provisioner periodically checks for new mounts in configured directories. Administrators must create a new subdirectory, mount a device, and allow pods to use the device by applying the SELinux label, for example:
$ chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /mnt/local-storage/
Omitting any of these steps may result in the wrong PV being created.
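The steps above can be captured in a small helper. This hypothetical script only prints the commands an administrator would run on the node for a new device; the function name and the device, class, and volume names are illustrative:

```shell
#!/bin/sh
# Hypothetical helper: print the commands needed to add a new local device.
# Usage: add_device DEVICE STORAGE_CLASS VOLUME_NAME
add_device() {
    dir="/mnt/local-storage/$2/$3"
    echo "mkdir -p $dir"
    echo "mount $1 $dir"
    echo "chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /mnt/local-storage/"
}

add_device /dev/sdd1 hdd disk3
```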
26.6. Configuring raw block devices
It is possible to statically provision raw block devices using the local volume provisioner. This feature is disabled by default and requires additional configuration.
To configure raw block devices:
Enable the BlockVolume feature gate on all masters. Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and add BlockVolume=true under the apiServerArguments and controllerArguments sections:

apiServerArguments:
  feature-gates:
  - BlockVolume=true
...
controllerArguments:
  feature-gates:
  - BlockVolume=true
...
Enable the feature gate on all nodes by editing the node configuration ConfigMap:
$ oc edit configmap node-config-compute --namespace openshift-node
$ oc edit configmap node-config-master --namespace openshift-node
$ oc edit configmap node-config-infra --namespace openshift-node
Ensure that all ConfigMaps contain BlockVolume=true in the feature gates array of kubeletArguments, for example:

Node ConfigMap feature-gates setting

kubeletArguments:
  feature-gates:
  - RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true,BlockVolume=true
- Restart the master. The nodes restart automatically after the configuration change. This may take several minutes.
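After the edits, it is worth confirming that the gate string is actually present in each ConfigMap. The following sketch checks an exported copy of a node ConfigMap; the file path and contents are illustrative stand-ins for the output of oc get configmap:

```shell
#!/bin/sh
# Sketch: verify the BlockVolume feature gate in an exported node config.
# /tmp/node-config-check.yaml stands in for an exported ConfigMap (illustrative).
cat > /tmp/node-config-check.yaml <<'EOF'
kubeletArguments:
  feature-gates:
  - RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true,BlockVolume=true
EOF

grep -q 'BlockVolume=true' /tmp/node-config-check.yaml && echo "BlockVolume gate present"
```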
26.6.1. Preparing raw block devices
Before you start the provisioner, link all the raw block devices that pods can use into the /mnt/local-storage/<storage class> directory structure. For example, to make the device /dev/dm-36 available:
Create a directory for the device’s storage class in /mnt/local-storage:
$ mkdir -p /mnt/local-storage/block-devices
Create a symbolic link that points to the device:
$ ln -s /dev/dm-36 /mnt/local-storage/block-devices/dm-uuid-LVM-1234
Note: To avoid possible name conflicts, give the symbolic link the same name as the corresponding link in the /dev/disk/by-uuid or /dev/disk/by-id directory.
Create or update the ConfigMap that configures the provisioner:
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-volume-config
data:
  storageClassMap: |
    block-devices:                                # Name of the storage class.
      hostDir: /mnt/local-storage/block-devices   # Path to the directory on the host.
      mountDir: /mnt/local-storage/block-devices  # Path to the directory in the provisioner pod.
Change the SELinux label of the device and of /mnt/local-storage/:

$ chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /mnt/local-storage/
$ chcon unconfined_u:object_r:svirt_sandbox_file_t:s0 /dev/dm-36
Create a storage class for the raw block devices:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: block-devices
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
The block device /dev/dm-36 is now ready to be used by the provisioner and provisioned as a PV.
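For illustration, a PV that the provisioner creates for such a device looks roughly like the following. The PV name, capacity, and node name are illustrative, and the exact fields can vary by release:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-block-example    # generated name; illustrative
spec:
  capacity:
    storage: 100Gi                # size of the underlying device; illustrative
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: block-devices
  volumeMode: Block               # exposed to pods as a raw block device
  local:
    path: /mnt/local-storage/block-devices/dm-uuid-LVM-1234
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1.example.com     # node where the device resides; illustrative
```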
26.6.2. Deploying raw block device provisioners
Deploying the provisioner for raw block devices is similar to deploying the provisioner on local volumes. There are two differences:
- The provisioner must run in a privileged container.
- The provisioner must have access to the /dev file system from the host.
To deploy the provisioner for raw block devices:
- Download the template from the local-storage-provisioner-template.yaml file.
Edit the template:
Set the privileged attribute of the securityContext of the container spec to true:

...
  containers:
...
  - name: provisioner
...
    securityContext:
      privileged: true
...
Mount the host /dev/ file system to the container using hostPath:

...
  containers:
...
  - name: provisioner
...
    volumeMounts:
    - mountPath: /dev
      name: dev
...
  volumes:
  - hostPath:
      path: /dev
    name: dev
...
Create the template from the modified YAML file:
$ oc create -f local-storage-provisioner-template.yaml
Start the provisioner:
$ oc new-app -p CONFIGMAP=local-volume-config \
  -p SERVICE_ACCOUNT=local-storage-admin \
  -p NAMESPACE=local-storage \
  -p PROVISIONER_IMAGE=registry.redhat.io/openshift3/local-storage-provisioner:v3.11 \
  local-storage-provisioner
26.6.3. Using raw block device persistent volumes
To use the raw block device in a pod, create a persistent volume claim (PVC) with volumeMode set to Block and storageClassName set to block-devices, for example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  storageClassName: block-devices
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
Pod using the raw block device PVC
apiVersion: v1
kind: Pod
metadata:
  name: busybox-test
  labels:
    name: busybox-test
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: gcr.io/google_containers/busybox
    command:
    - "/bin/sh"
    - "-c"
    - "while true; do date; sleep 1; done"
    resources:
      limits:
        cpu: 0.5
    volumeDevices:
    - name: vol
      devicePath: /dev/xvda
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: block-pvc
The volume is not mounted in the pod but is exposed as the /dev/xvda raw block device.