Chapter 5. Creating Persistent Volumes


OpenShift Enterprise clusters can be provisioned with persistent storage using GlusterFS.
Persistent volumes (PVs) and persistent volume claims (PVCs) can share volumes across a single project. While the GlusterFS-specific information contained in a PV definition could also be defined directly in a pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts.
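For comparison, the discouraged inline form embeds the GlusterFS details directly in the pod specification. The following volume stanza is a minimal sketch; the endpoints name and volume path are hypothetical:

    volumes:
      - name: glusterfsvol
        glusterfs:
          # Endpoints object in the same project (hypothetical name)
          endpoints: glusterfs-cluster
          # Red Hat Gluster Storage volume to mount (hypothetical name)
          path: vol_example
          readOnly: false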
To enable persistent volume support in OpenShift and Kubernetes, a few endpoints and a service must be created:

Sample endpoints and service definitions (sample-gluster-endpoint.json and sample-gluster-service.json) are available at /usr/share/heketi/openshift/.
  1. To specify the endpoints you want to create, update the sample-gluster-endpoint.json file with the endpoints for your environment. Each Red Hat Gluster Storage trusted storage pool requires its own endpoints object, containing the IP addresses of the nodes in that trusted storage pool.
    {
      "kind": "Endpoints",
      "apiVersion": "v1",
      "metadata": {
        "name": "glusterfs-cluster"
      },
      "subsets": [
        {
          "addresses": [
            {
              "ip": "192.168.121.168"
            }
          ],
          "ports": [
            {
              "port": 1
            }
          ]
        },
        {
          "addresses": [
            {
              "ip": "192.168.121.172"
            }
          ],
          "ports": [
            {
              "port": 1
            }
          ]
        },
        {
          "addresses": [
            {
              "ip": "192.168.121.233"
            }
          ],
          "ports": [
            {
              "port": 1
            }
          ]
        }
      ]
    }
    
    name: The name of the endpoints object.
    ip: The IP address of a Red Hat Gluster Storage node.
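    If your environment contains more than one trusted storage pool, create an additional endpoints object for each pool under a distinct name. A minimal sketch for a second pool follows; the name and IP address are hypothetical:
    {
      "kind": "Endpoints",
      "apiVersion": "v1",
      "metadata": {
        "name": "glusterfs-cluster-2"
      },
      "subsets": [
        {
          "addresses": [
            {
              "ip": "192.168.122.10"
            }
          ],
          "ports": [
            {
              "port": 1
            }
          ]
        }
      ]
    }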
  2. Execute the following command to create the endpoints:
    # oc create -f <name_of_endpoint_file>
    For example:
    # oc create -f sample-gluster-endpoint.json
    endpoints "glusterfs-cluster" created
  3. To verify that the endpoints are created, execute the following command:
    # oc get endpoints
    For example:
    # oc get endpoints
    NAME                       ENDPOINTS                                                     AGE
    storage-project-router     192.168.121.233:80,192.168.121.233:443,192.168.121.233:1936   2d
    glusterfs-cluster          192.168.121.168:1,192.168.121.172:1,192.168.121.233:1         3s
    heketi                     10.1.1.3:8080                                                 2m
    heketi-storage-endpoints   192.168.121.168:1,192.168.121.172:1,192.168.121.233:1         3m
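    To inspect the full definition of a single endpoints object, you can also request it by name:
    # oc get endpoints glusterfs-cluster -o yaml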
  4. Execute the following command to create a gluster service. The service defines no pod selector; it exists so that the manually created endpoints persist:
    # oc create -f <name_of_service_file>
    For example:
    # oc create -f sample-gluster-service.json
    service "glusterfs-cluster" created
    # cat sample-gluster-service.json
    
    {
      "kind": "Service",
      "apiVersion": "v1",
      "metadata": {
        "name": "glusterfs-cluster"
      },
      "spec": {
        "ports": [
          {"port": 1}
        ]
      }
    }
  5. To verify that the service is created, execute the following command:
    # oc get service
    For example:
    # oc get service
    NAME                       CLUSTER-IP      EXTERNAL-IP   PORT(S)                   AGE
    storage-project-router     172.30.94.109   <none>        80/TCP,443/TCP,1936/TCP   2d
    glusterfs-cluster          172.30.212.6    <none>        1/TCP                     5s
    heketi                     172.30.175.7    <none>        8080/TCP                  2m
    heketi-storage-endpoints   172.30.18.24    <none>        1/TCP                     3m

    Note

    The endpoints and the service must be created in each project that requires persistent storage.
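
    For example, to create the same endpoints and service in a second project, pass the project name with the -n option (the project name below is hypothetical):
    # oc create -f sample-gluster-endpoint.json -n another-project
    # oc create -f sample-gluster-service.json -n another-project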
  6. Create a 100 GB persistent volume with replica 3 from GlusterFS, and output a persistent volume specification describing this volume to the file pv001.json:
    $ heketi-cli volume create --size=100 --persistent-volume-file=pv001.json
    $ cat pv001.json
    {
      "kind": "PersistentVolume",
      "apiVersion": "v1",
      "metadata": {
        "name": "glusterfs-4fc22ff9",
        "creationTimestamp": null
      },
      "spec": {
        "capacity": {
          "storage": "100Gi"
        },
        "glusterfs": {
          "endpoints": "TYPE ENDPOINT HERE",
          "path": "vol_4fc22ff934e531dec3830cfbcad1eeae"
        },
        "accessModes": [
          "ReadWriteMany"
        ],
        "persistentVolumeReclaimPolicy": "Retain"
      },
      "status": {}
    }
    name: The name of the volume.
    storage: The amount of storage allocated to this volume.
    glusterfs: The volume type being used, in this case the glusterfs plug-in.
    endpoints: The name of the endpoints object that defines the trusted storage pool.
    path: The Red Hat Gluster Storage volume that will be accessed from the trusted storage pool.
    accessModes: accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.

    Note

    • heketi-cli also accepts the endpoint name on the command line (--persistent-volume-endpoint="TYPE ENDPOINT HERE"). The output can then be piped to oc create -f - to create the persistent volume immediately; see the sketch after this list.
    • Creation of more than 100 volumes per 3 nodes per cluster is not supported.
    • If there are multiple Red Hat Gluster Storage trusted storage pools in your environment, you can check on which trusted storage pool the volume is created using the heketi-cli volume list command. This command lists the cluster name. You can then update the endpoint information in the pv001.json file accordingly.
    • When creating a Heketi volume on only two nodes while the replica count is set to the default value of three (replica 3), Heketi displays a "No space" error, because a replica set of three disks cannot be created across three different nodes.
    • If all heketi-cli write operations (for example, volume create and cluster create) fail while the read operations (for example, topology info and volume info) succeed, the underlying gluster volume is likely operating in read-only mode.
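
    A minimal sketch of the piped form referenced above, assuming the endpoints object created in step 1 is named glusterfs-cluster:
    # heketi-cli volume create --size=100 \
        --persistent-volume \
        --persistent-volume-endpoint=glusterfs-cluster | oc create -f -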
  7. Edit the pv001.json file and enter the name of the endpoint in the endpoints section, then create the persistent volume:
    # oc create -f pv001.json
    For example:
    # oc create -f pv001.json
    persistentvolume "glusterfs-4fc22ff9" created
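    The manual edit can also be scripted; a minimal sketch, assuming the endpoints object is named glusterfs-cluster:
    # sed -i 's/TYPE ENDPOINT HERE/glusterfs-cluster/' pv001.json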
  8. To verify that the persistent volume is created, execute the following command:
    # oc get pv
    For example:
    # oc get pv
    
    NAME                 CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
    glusterfs-4fc22ff9   100Gi      RWX           Available                       4s
  9. Create a persistent volume claim. OpenShift binds the claim to an available persistent volume that satisfies its request; here, the 8 Gi claim binds to the 100 Gi volume created above:
    # oc create -f pvc.json
    For example:
    # oc create -f pvc.json
    persistentvolumeclaim "glusterfs-claim" created
    # cat pvc.json 
    
    {
      "apiVersion": "v1",
      "kind": "PersistentVolumeClaim",
      "metadata": {
        "name": "glusterfs-claim"
      },
      "spec": {
        "accessModes": [
          "ReadWriteMany"
        ],
        "resources": {
          "requests": {
            "storage": "8Gi"
          }
        }
      }
    }
    
  10. To verify that the persistent volume and the persistent volume claim are bound, execute the following commands:
    # oc get pv
    # oc get pvc
    For example:
    # oc get pv
    
    NAME                 CAPACITY   ACCESSMODES   STATUS    CLAIM                             REASON    AGE
    glusterfs-4fc22ff9   100Gi      RWX           Bound     storage-project/glusterfs-claim             1m
    # oc get pvc
    
    NAME              STATUS    VOLUME               CAPACITY   ACCESSMODES   AGE
    glusterfs-claim   Bound     glusterfs-4fc22ff9   100Gi      RWX           11s
  11. The claim can now be used in the application:
    For example:
    # cat app.yml
    
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
    spec:
      containers:
        - image: busybox
          command:
            - sleep
            - "3600"
          name: busybox
          volumeMounts:
            - mountPath: /usr/share/busybox
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: glusterfs-claim
    # oc create -f app.yml
    pod "busybox" created
  12. To verify that the pod is created, execute the following command:
    # oc get pods
  13. To verify that the persistent volume is mounted inside the container, execute the following command:
    # oc rsh busybox
    / $ df -h
    Filesystem                Size      Used Available Use% Mounted on
    /dev/mapper/docker-253:0-1310998-81732b5fd87c197f627a24bcd2777f12eec4ee937cc2660656908b2fa6359129
                          100.0G     34.1M     99.9G   0% /
    tmpfs                     1.5G         0      1.5G   0% /dev
    tmpfs                     1.5G         0      1.5G   0% /sys/fs/cgroup
    192.168.121.168:vol_4fc22ff934e531dec3830cfbcad1eeae
                           99.9G     66.1M     99.9G   0% /usr/share/busybox
    tmpfs                     1.5G         0      1.5G   0% /run/secrets
    /dev/mapper/vg_vagrant-lv_root
                           37.7G      3.8G     32.0G  11% /dev/termination-log
    tmpfs                     1.5G     12.0K      1.5G   0% /var/run/secrets/kubernetes.io/serviceaccount
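    Optionally, confirm from the same session that the mount is writable; the file name below is arbitrary:
    / $ touch /usr/share/busybox/write-test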

Note

If you encounter a permission denied error on the mount point, then refer to section Gluster Volume Security at: https://access.redhat.com/documentation/en/openshift-enterprise/3.2/single/installation-and-configuration/#gluster-volume-security.