Chapter 21. Configuring Local Volumes


21.1. Overview

OpenShift Container Platform can be configured to access local volumes for application data.

Local volumes are persistent volumes (PVs) that represent locally mounted file systems. In the future, they may be extended to raw block devices.

Local volumes are different from hostPath volumes: they carry a special annotation that causes any pod using the PV to be scheduled on the node where the local volume is mounted.
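
For illustration, a PV for a local volume might look like the following sketch. The PV name, node name, capacity, and paths are placeholders; the annotation format follows the upstream Kubernetes alpha local volume example and may change:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-local-pv
      annotations:
        # The alpha annotation pins pods that use this PV to the node
        # where the volume is mounted.
        "volume.alpha.kubernetes.io/node-affinity": |
          {
            "requiredDuringSchedulingIgnoredDuringExecution": {
              "nodeSelectorTerms": [
                { "matchExpressions": [
                    { "key": "kubernetes.io/hostname",
                      "operator": "In",
                      "values": ["node1.example.com"] }
                ]}
              ]
            }
          }
    spec:
      capacity:
        storage: 10Gi
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-ssd
      local:
        path: /mnt/local-storage/ssd/disk1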

In addition, local volumes include a provisioner that automatically creates PVs for locally mounted devices. The provisioner is currently limited: it only scans preconfigured directories and cannot dynamically provision volumes, though this may be implemented in a future release.

The local volume provisioner allows using local storage within OpenShift Container Platform and supports:

  • Volumes
  • PVs
Note

Local volumes are an alpha feature and may change in a future release of OpenShift Container Platform.

21.2. Enabling local volumes

Enable the PersistentLocalVolumes feature gate on all masters and nodes:

  1. Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and add PersistentLocalVolumes=true to the feature-gates list under both the apiServerArguments and controllerArguments sections:

    apiServerArguments:
      feature-gates:
      - PersistentLocalVolumes=true
    ...

    controllerArguments:
      feature-gates:
      - PersistentLocalVolumes=true
    ...
  2. On all nodes, edit or create the node configuration file (/etc/origin/node/node-config.yaml by default) and add the PersistentLocalVolumes=true feature gate under the kubeletArguments section:

    kubeletArguments:
      feature-gates:
      - PersistentLocalVolumes=true
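
For the feature gates to take effect, restart the master and node services after editing the configuration files. A minimal sketch, assuming an RPM-based installation with separate API and controllers master services (adjust the service names to your installation):

    # On each master:
    $ systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers

    # On each node:
    $ systemctl restart atomic-openshift-node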

21.3. Mounting local volumes

All local volumes must be manually mounted before they can be consumed by OpenShift Container Platform as PVs.

  1. Mount all volumes into the /mnt/local-storage/<storage-class-name>/<volume> path. Administrators must create the local devices as needed (using any method, such as disk partitions or LVM), create suitable file systems on them, and mount them using a script or /etc/fstab entries.

    Example /etc/fstab entries

    # device name   # mount point                  # FS    # options   # extra
    /dev/sdb1       /mnt/local-storage/ssd/disk1   ext4    defaults    1 2
    /dev/sdb2       /mnt/local-storage/ssd/disk2   ext4    defaults    1 2
    /dev/sdb3       /mnt/local-storage/ssd/disk3   ext4    defaults    1 2
    /dev/sdc1       /mnt/local-storage/hdd/disk1   ext4    defaults    1 2
    /dev/sdc2       /mnt/local-storage/hdd/disk2   ext4    defaults    1 2
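
    For example, a new partition can be prepared and mounted as follows. The device name and mount point are placeholders; any partitioning or LVM method works:

    # Create a file system on the device and mount it under a discovery directory.
    $ mkfs.ext4 /dev/sdb1
    $ mkdir -p /mnt/local-storage/ssd/disk1
    $ mount /dev/sdb1 /mnt/local-storage/ssd/disk1

    # Or, after adding the /etc/fstab entries above:
    $ mount -a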

  2. Change the SELinux labels of the mounted file systems so that all volumes are accessible to processes that run within Docker containers:

    $ chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /mnt/local-storage/
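
    To verify that the labels were applied, list the SELinux contexts of the mounted volumes:

    $ ls -ldZ /mnt/local-storage/*/*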

21.4. Configuring the local provisioner

OpenShift Container Platform depends on an external provisioner to create PVs for local devices and to clean them up when they are not needed (to enable reuse).

Note
  • The local volume provisioner is different from most provisioners and does not support dynamic provisioning.
  • The local volume provisioner requires that the administrators preconfigure the local volumes on each node and mount them under discovery directories. The provisioner then manages the volumes by creating and cleaning up PVs for each volume.

This external provisioner should be configured using a ConfigMap that maps directories to StorageClasses. This configuration must be created before the provisioner is deployed.

Note

(Optional) Create a standalone namespace for the local volume provisioner and its configuration, for example: oc new-project local-storage

apiVersion: v1
kind: ConfigMap
metadata:
  name: local-volume-config
data:
    "local-ssd": | 1
      {
        "hostDir": "/mnt/local-storage/ssd", 2
        "mountDir": "/mnt/local-storage/ssd" 3
      }
    "local-hdd": |
      {
        "hostDir": "/mnt/local-storage/hdd",
        "mountDir": "/mnt/local-storage/hdd"
      }
1 Name of the StorageClass.
2 Path to the directory on the host. It must be a subdirectory of /mnt/local-storage.
3 Path to the directory in the provisioner pod. We recommend using the same directory structure as used on the host.

With this configuration, the provisioner creates:

  • One PV with StorageClass local-ssd for every subdirectory in /mnt/local-storage/ssd.
  • One PV with StorageClass local-hdd for every subdirectory in /mnt/local-storage/hdd.
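
Save the configuration to a file and create the ConfigMap in the provisioner's namespace. The file name here is arbitrary:

    $ oc create -f local-volume-config.yaml -n local-storage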

21.5. Deploying the local provisioner

Note

Before starting the provisioner, mount all local devices and create a ConfigMap with storage classes and their directories.

Install the local provisioner from the local-storage-provisioner-template.yaml file.

  1. Create a service account that allows running pods as the root user and using hostPath volumes:

    $ oc create serviceaccount local-storage-admin
    $ oc adm policy add-scc-to-user privileged -z local-storage-admin
    Note

    Root privileges are required for the provisioner pod to delete content on local volumes. hostPath is required to access the /mnt/local-storage path on the host.

  2. Install the template:

    $ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/storage-examples/local-examples/local-storage-provisioner-template.yaml
  3. Instantiate the template by specifying values for the ConfigMap, service account, and namespace parameters:

    $ oc new-app -p CONFIGMAP=local-volume-config \
      -p SERVICE_ACCOUNT=local-storage-admin \
      -p NAMESPACE=local-storage local-storage-provisioner
  4. Create the SSD and HDD StorageClass files:

    storage-class-ssd.yaml example

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-ssd
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer

    storage-class-hdd.yaml example

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-hdd
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer

  5. Add the necessary storage classes:

    $ oc create -f ./storage-class-ssd.yaml
    $ oc create -f ./storage-class-hdd.yaml

See the template for other configurable options. This template creates a DaemonSet that runs a pod on every node. The pod watches directories specified in the ConfigMap and creates PVs for them automatically.
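
To verify the deployment, check that a provisioner pod is running on every node and that a PV exists for each mounted volume. The exact pod names depend on the template defaults:

    $ oc get pods -n local-storage
    $ oc get pv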

The provisioner runs as root to be able to clean up the directories when a PV is released and all data needs to be removed.
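
To consume a local volume, create a claim that references one of the storage classes. A minimal sketch, assuming the local-ssd class created above; the claim name and requested size are placeholders:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-local-claim
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: local-ssd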

21.6. Adding new devices

To add a new device (example commands are sketched after these steps):

  1. Stop the DaemonSet with the provisioner.
  2. Create a subdirectory in the correct discovery directory on the node with the new device and mount it there.
  3. Start the DaemonSet with the provisioner.
Important

Omitting any of these steps may result in the wrong PV being created.
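
A minimal sketch of these steps, assuming the DaemonSet created by the template is named local-volume-provisioner in the local-storage namespace and the new device is /dev/sdb4 (all names are placeholders; verify them with oc get daemonset -n local-storage):

    # Stop the provisioner by deleting its DaemonSet. The ConfigMap and existing PVs are unaffected.
    $ oc delete daemonset local-volume-provisioner -n local-storage

    # Prepare the new device and mount it under the matching discovery directory.
    $ mkfs.ext4 /dev/sdb4
    $ mkdir -p /mnt/local-storage/ssd/disk4
    $ mount /dev/sdb4 /mnt/local-storage/ssd/disk4

    # Start the provisioner again by re-instantiating the template.
    $ oc new-app -p CONFIGMAP=local-volume-config \
      -p SERVICE_ACCOUNT=local-storage-admin \
      -p NAMESPACE=local-storage local-storage-provisioner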
