This documentation is for a release that is no longer maintained. See the documentation for the latest supported version 3 or the latest supported version 4.
Chapter 22. Configuring for Local Volume
22.1. Overview
OpenShift Container Platform can be configured to access local volumes for application data.
Local volumes are persistent volumes (PVs) representing locally mounted file systems. In the future, they may be extended to raw block devices.
Local volumes are different from hostPath volumes. They have a special annotation that causes any pod that uses the PV to be scheduled on the same node where the local volume is mounted.
In addition, local volumes include a provisioner that automatically creates PVs for locally mounted devices. This provisioner is currently limited: it only scans pre-configured directories and cannot dynamically provision volumes, though this may be implemented in a future release.
The local volume provisioner allows using local storage within OpenShift Container Platform and supports:
- Volumes
- PVs
Local volumes are an alpha feature and may change in a future release of OpenShift Container Platform.
22.1.1. Enable Local Volumes
Enable the PersistentLocalVolumes feature gate on all masters and nodes.

Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and add PersistentLocalVolumes=true under the apiServerArguments and controllerArguments sections (a sketch of this fragment follows the node example below).

On all nodes, edit or create the node configuration file (/etc/origin/node/node-config.yaml by default) and add the PersistentLocalVolumes=true feature gate under kubeletArguments:

kubeletArguments:
  feature-gates:
  - PersistentLocalVolumes=true
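The corresponding master configuration fragment is similar. The following is a minimal sketch that assumes the apiServerArguments and controllerArguments sections sit under kubernetesMasterConfig, as is typical for master-config.yaml; verify it against your existing file:

kubernetesMasterConfig:
  apiServerArguments:
    feature-gates:
    - PersistentLocalVolumes=true
  controllerArguments:
    feature-gates:
    - PersistentLocalVolumes=true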
22.1.2. Mount Local Volumes
All local volumes must be manually mounted before they can be consumed by OpenShift Container Platform as PVs.
All volumes must be mounted into the /mnt/local-storage/<storage-class-name>/<volume> path. Administrators must create the local devices as needed (using any method, such as disk partitions or LVM), create suitable file systems on these devices, and mount them using a script or /etc/fstab entries.

Example /etc/fstab entries
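The entries below are a sketch only; the device names, mount points, and file system type are hypothetical and must be adapted to the devices and storage class directories on each node:

/dev/sdb1    /mnt/local-storage/ssd/disk1    ext4    defaults    1 2
/dev/sdb2    /mnt/local-storage/ssd/disk2    ext4    defaults    1 2
/dev/sdc1    /mnt/local-storage/hdd/disk1    ext4    defaults    1 2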
All volumes must be accessible to processes running within Docker containers. Change the SELinux labels of the mounted file systems to allow this:

$ chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /mnt/local-storage/
22.1.3. Configure Local Provisioner
OpenShift Container Platform depends on an external provisioner to create PVs for local devices and to clean them up when they are not needed (to enable reuse).
- The local volume provisioner is different from most provisioners and does not support dynamic provisioning.
- The local volume provisioner requires administrators to preconfigure the local volumes on each node and mount them under discovery directories. The provisioner then manages the volumes by creating and cleaning up PVs for each volume.
This external provisioner should be configured using a ConfigMap that relates directories to StorageClasses. This configuration must be created before the provisioner is deployed.
(Optional) Create a standalone namespace for the local volume provisioner and its configuration, for example:

$ oc new-project local-storage
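A sketch of such a ConfigMap follows. It assumes a layout where each key is a StorageClass name and the value describes the host and mount directories; the exact schema depends on the provisioner image, so verify it against the provisioner documentation before use:

apiVersion: v1
kind: ConfigMap
metadata:
  name: local-volume-config
data:
  "local-ssd": |
    {
      "hostDir": "/mnt/local-storage/ssd",
      "mountDir": "/mnt/local-storage/ssd"
    }
  "local-hdd": |
    {
      "hostDir": "/mnt/local-storage/hdd",
      "mountDir": "/mnt/local-storage/hdd"
    }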
With this configuration, the provisioner creates:

- One PV with StorageClass local-ssd for every subdirectory in /mnt/local-storage/ssd.
- One PV with StorageClass local-hdd for every subdirectory in /mnt/local-storage/hdd.
The LocalPersistentVolumes alpha feature now also requires the VolumeScheduling alpha feature. This is a breaking change, and the following changes are required:

- The VolumeScheduling feature gate must also be enabled on kube-scheduler and kube-controller-manager components (see the sketch after this list).
- The NoVolumeNodeConflict predicate has been removed. For non-default schedulers, update your scheduler policy.
- The CheckVolumeBinding predicate must be enabled in non-default schedulers.
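When both gates are enabled, the feature-gates value becomes a comma-separated list. A minimal sketch for the node configuration follows; the same value would be added to the master's apiServerArguments, controllerArguments, and schedulerArguments sections (the schedulerArguments placement is an assumption, so verify it against your configuration files):

kubeletArguments:
  feature-gates:
  - PersistentLocalVolumes=true,VolumeScheduling=true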
22.1.4. Deploy Local Provisioner
Before starting the provisioner, mount all local devices and create a ConfigMap with storage classes and their directories.
Install the local provisioner from the local-storage-provisioner-template.yaml file.
Create a service account that can run pods as the root user, use hostPath volumes, and use any SELinux context, so that it can monitor, manage, and clean local volumes:
$ oc create serviceaccount local-storage-admin
$ oc adm policy add-scc-to-user privileged -z local-storage-admin
To allow the provisioner pod to delete content on local volumes created by any pod, root privileges and any SELinux context are required. hostPath is required to access the /mnt/local-storage path on the host.
Install the template:
$ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/storage-examples/local-examples/local-storage-provisioner-template.yaml
Instantiate the template by specifying values for the configmap, account, and provisioner_image parameters:

$ oc new-app -p CONFIGMAP=local-volume-config \
  -p SERVICE_ACCOUNT=local-storage-admin \
  -p NAMESPACE=local-storage \
  -p PROVISIONER_IMAGE=registry.access.redhat.com/openshift3/local-storage-provisioner:v3.9 \
  local-storage-provisioner

Replace v3.9 with the right OpenShift Container Platform version.
Add the necessary storage classes:
$ oc create -f ./storage-class-ssd.yaml
$ oc create -f ./storage-class-hdd.yaml

storage-class-ssd.yaml
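A minimal sketch of this file, assuming the kubernetes.io/no-provisioner placeholder commonly used for statically provisioned local volumes; verify the exact contents against the template documentation:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-ssd
provisioner: kubernetes.io/no-provisioner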
storage-class-hdd.yaml
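And similarly for the local-hdd class, under the same assumption:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hdd
provisioner: kubernetes.io/no-provisioner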
See the template for other configurable options. This template creates a DaemonSet that runs a pod on every node. The pod watches the directories specified in the ConfigMap and creates PVs for them automatically.

The provisioner runs as root to be able to clean up the directories when a PV is released and all data needs to be removed.
22.1.5. Adding New Devices
Adding a new device requires several manual steps (see the sketch after this list):

- Stop the DaemonSet with the provisioner.
- On the node with the new device, create a subdirectory in the appropriate discovery directory and mount the device there.
- Start the DaemonSet with the provisioner.
Omitting any of these steps may result in the wrong PV being created.
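One possible sequence, assuming the provisioner DaemonSet is named local-volume-provisioner in the local-storage project and the new device is /dev/sdd1 (all of these names are hypothetical):

# Stop the provisioner by deleting its DaemonSet; it can be re-created from the template afterwards.
$ oc delete daemonset local-volume-provisioner -n local-storage

# On the node that owns the new device, create a subdirectory under the discovery
# directory of the desired storage class, mount the device, and relabel it.
$ mkdir -p /mnt/local-storage/ssd/disk4
$ mount /dev/sdd1 /mnt/local-storage/ssd/disk4
$ chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /mnt/local-storage/

# Start the provisioner again, for example by re-instantiating the template as described in the Deploy Local Provisioner section.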