
9.2. Block Storage


Block storage allows the creation of high-performance individual storage units. Unlike the traditional file storage capability that GlusterFS supports, each storage volume (block device) can be treated as an independent disk drive, so each block device can host its own file system.
gluster-block is a distributed management framework for block devices. It aims to make creation and maintenance of Gluster-backed block storage as simple as possible. gluster-block can provision block devices and export them as iSCSI LUNs across multiple nodes, using the iSCSI protocol to transfer data as SCSI block commands.
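The following sketch shows the upstream gluster-block CLI purely to illustrate the kind of block device and iSCSI export that the framework manages; in Container-Native Storage these operations are driven by Heketi and the dynamic provisioner rather than run manually, and the hosting volume name, node addresses, and size below are placeholders:

    # gluster-block create block-hosting-vol/block1 ha 3 192.168.121.11,192.168.121.12,192.168.121.13 5GiB
    # gluster-block list block-hosting-vol
    # gluster-block info block-hosting-vol/block1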

Note

Static provisioning of volumes is not supported for Block storage. Dynamic provisioning of volumes is the only method supported.
Block volume expansion is not supported in Container-Native Storage 3.6.

9.2.1. Dynamic Provisioning of Volumes for Block Storage

Dynamic provisioning enables provisioning of a Red Hat Gluster Storage volume to a running application container without having to pre-create the volume. The volume is created dynamically as the claim request comes in, and a volume of exactly the requested size is provisioned to the application container.

Note

If you are upgrading from Container-Native Storage 3.5 to Container-Native Storage 3.6, ensure that you refer to Chapter 13, Upgrading your Container-Native Storage Environment, before proceeding with the following steps.

9.2.1.1. Configuring Dynamic Provisioning of Volumes

To configure dynamic provisioning of volumes, the administrator must define StorageClass objects that describe named "classes" of storage offered in a cluster. After creating a storage class, a secret for Heketi authentication must be created before proceeding with the creation of a persistent volume claim.
9.2.1.1.1. Configuring Multipathing on all Initiators
To ensure that the iSCSI initiator can communicate with the iSCSI targets and achieve HA using multipathing, execute the following steps on all the OpenShift nodes (iSCSI initiators) where the application pods are hosted:
  1. To install the initiator-related packages on all the nodes where the initiator has to be configured, execute the following command:
    # yum install iscsi-initiator-utils device-mapper-multipath
  2. To enable multipath, execute the following command:
    # mpathconf --enable
  3. Create and add the following content to the multipath.conf file:
    # cat > /etc/multipath.conf <<EOF
    # LIO iSCSI
    devices {
            device {
                    vendor "LIO-ORG"
                    user_friendly_names "yes" # names like mpatha
                    path_grouping_policy "failover" # one path per group
                    path_selector "round-robin 0"
                    failback immediate
                    path_checker "tur"
                    prio "const"
                    no_path_retry 120
                    rr_weight "uniform"
            }
    }
    EOF
  4. Execute the following command to restart the multipath service:
    # systemctl restart multipathd
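Note that multipath devices are only created after a node logs in to the iSCSI targets, which happens when a pod that uses a gluster-block backed claim is scheduled on that node. At that point the sessions and paths can be inspected on the node; the number of paths reported should match the hacount value of the storage class, and device names vary by environment:

    # iscsiadm -m session
    # multipath -ll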
9.2.1.1.2. Creating Secret for Heketi Authentication
To create a secret for Heketi authentication, execute the following commands:

Note

If the admin-key value (secret to access heketi to get the volume details) was not set during the deployment of Container-Native Storage, then the following steps can be omitted.
  1. Create an encoded value for the password by executing the following command:
    # echo -n "<key>" | base64
    where “key” is the value of admin-key that was set while deploying Container-Native Storage.
    For example:
    # echo -n "mypassword" | base64
    bXlwYXNzd29yZA==
  2. Create a secret file. A sample secret file is provided below:
    # cat glusterfs-secret.yaml
                                   
    apiVersion: v1
    kind: Secret
    metadata:
      name: heketi-secret
      namespace: default
    data:
      # base64 encoded password. E.g.: echo -n "mypassword" | base64
      key: bXlwYXNzd29yZA==
    type: gluster.org/glusterblock
  3. Register the secret on OpenShift by executing the following command:
    # oc create -f glusterfs-secret.yaml
    secret "heketi-secret" created
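Optionally, confirm that the secret exists in the namespace that the storage class will reference; the TYPE column of the output should show gluster.org/glusterblock, matching the type field in the sample secret file:

    # oc get secret heketi-secret --namespace=default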
9.2.1.1.3. Registering a Storage Class
When configuring a StorageClass object for persistent volume provisioning, the administrator must describe the type of provisioner to use and the parameters that will be used by the provisioner when it provisions a PersistentVolume belonging to the class.
  1. Create a storage class. A sample storage class file is presented below:
    # cat glusterfs-block-storageclass.yaml
    
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: gluster-block
    provisioner: gluster.org/glusterblock
    parameters:
      resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
      restuser: "admin"
      restsecretnamespace: "default"
      restsecretname: "heketi-secret"
      hacount: "3"
      clusterids: "630372ccdc720a92c681fb928f27b53f,796e6db1981f369ea0340913eeea4c9a"
      chapauthenabled: "true"
    
    
    where,
    resturl: The Gluster REST service/Heketi service URL that provisions gluster volumes on demand. The general format must be IPaddress:Port, and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in the OpenShift/Kubernetes setup, it can have a format similar to http://heketi-storage-project.cloudapps.mystorage.com, where the FQDN is a resolvable Heketi service URL (see the optional connectivity check at the end of this procedure).
    restuser: The Gluster REST service/Heketi user who has access to create volumes in the trusted storage pool.
    restsecretnamespace + restsecretname: Identification of the Secret instance that contains the user password to use when talking to the Gluster REST service. These parameters are optional. An empty password is used when both restsecretnamespace and restsecretname are omitted.
    hacount: The number of paths to the block target server. hacount provides high availability through the multipathing capability of iSCSI. If a path fails, the I/O is not interrupted and is served through another available path.
    clusterids: The ID of the cluster that Heketi uses when provisioning the volume. It can also be a comma-separated list of cluster IDs. This is an optional parameter.

    Note

    To get the cluster ID, execute the following command:
    # heketi-cli cluster list
    chapauthenabled: If you want to provision a block volume with CHAP authentication enabled, set this value to true. This is an optional parameter.
  2. To register the storage class with OpenShift, execute the following command:
    # oc create -f glusterfs-block-storageclass.yaml
    storageclass "gluster-block" created
  3. To get the details of the storage class, execute the following command:
    # oc describe storageclass gluster-block
    Name:        gluster-block
    IsDefaultClass:    No
    Annotations:    <none>
    Provisioner:    gluster.org/glusterblock
    Parameters:    chapauthenabled=true,hacount=3,opmode=heketi,restsecretname=heketi-secret,restsecretnamespace=default,resturl=http://heketi-storage-project.cloudapps.mystorage.com,restuser=admin
    Events:        <none>
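The resturl and credentials used by the storage class can also be checked directly. Heketi answers on a /hello endpoint with a short greeting, and heketi-cli accepts the same admin user and key stored in the secret; the route below is the sample value used in this chapter, and <admin-key> is the value chosen at deployment time:

    # curl http://heketi-storage-project.cloudapps.mystorage.com/hello
    # heketi-cli -s http://heketi-storage-project.cloudapps.mystorage.com --user admin --secret <admin-key> cluster list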
9.2.1.1.4. Creating a Persistent Volume Claim
To create a persistent volume claim, execute the following commands:
  1. Create a Persistent Volume Claim file. A sample persistent volume claim is provided below:
    # cat glusterfs-block-pvc-claim.yaml
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: claim1
      annotations:
        volume.beta.kubernetes.io/storage-class: gluster-block
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
    
  2. Register the claim by executing the following command:
    # oc create -f glusterfs-block-pvc-claim.yaml
    persistentvolumeclaim "claim1" created
  3. To get the details of the claim, execute the following command:
    # oc describe pvc <claim_name>
    For example:
    # oc describe pvc claim1
    
    Name:        claim1
    Namespace:    block-test
    StorageClass:    gluster-block
    Status:        Bound
    Volume:        pvc-ee30ff43-7ddc-11e7-89da-5254002ec671
    Labels:        <none>
    Annotations:    control-plane.alpha.kubernetes.io/leader={"holderIdentity":"8d7fecb4-7dba-11e7-a347-0a580a830002","leaseDurationSeconds":15,"acquireTime":"2017-08-10T15:02:30Z","renewTime":"2017-08-10T15:02:58Z","lea...
           pv.kubernetes.io/bind-completed=yes
           pv.kubernetes.io/bound-by-controller=yes
           volume.beta.kubernetes.io/storage-class=gluster-block
           volume.beta.kubernetes.io/storage-provisioner=gluster.org/glusterblock
    Capacity:    5Gi
    Access Modes:    RWO
    Events:
     FirstSeen    LastSeen    Count    From                            SubObjectPath    Type        Reason            Message
     ---------    --------    -----    ----                            -------------    --------    ------            -------
     1m        1m        1    gluster.org/glusterblock 8d7fecb4-7dba-11e7-a347-0a580a830002            Normal        Provisioning        External provisioner is provisioning volume for claim "block-test/claim1"
     1m        1m        18    persistentvolume-controller                Normal        ExternalProvisioning    cannot find provisioner "gluster.org/glusterblock", expecting that a volume for the claim is provisioned either manually or via external software
     1m        1m        1    gluster.org/glusterblock 8d7fecb4-7dba-11e7-a347-0a580a830002            Normal        ProvisioningSucceeded    Successfully provisioned volume pvc-ee30ff43-7ddc-11e7-89da-5254002ec671
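If the claim remains in the Pending state, the events shown above and the logs of the external provisioner pod normally indicate the cause. The pod name below is taken from the example pod listing shown later in this chapter (under Using the Claim in a Pod) and will differ in your environment:

    # oc get pvc claim1
    # oc logs glusterblock-provisioner-1-bjpz4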
9.2.1.1.5. Verifying Claim Creation
To verify if the claim is created, execute the following commands:
  1. To get the details of the persistent volume claim and persistent volume, execute the following command:
    # oc get pv,pvc
    
    NAME                                          CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM               STORAGECLASS    REASON    AGE
    pv/pvc-ee30ff43-7ddc-11e7-89da-5254002ec671   5Gi        RWO           Delete          Bound     block-test/claim1   gluster-block             3m
    
    NAME         STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS    AGE
    pvc/claim1   Bound     pvc-ee30ff43-7ddc-11e7-89da-5254002ec671   5Gi        RWO           gluster-block   4m
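The same block volume can also be confirmed on the Heketi side. This assumes heketi-cli version 5 or later, which includes the blockvolume subcommands, and the server URL and admin credentials used in the storage class:

    # heketi-cli -s http://heketi-storage-project.cloudapps.mystorage.com --user admin --secret <admin-key> blockvolume list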
9.2.1.1.6. Using the Claim in a Pod
Execute the following steps to use the claim in a pod.
  1. To use the claim in an application, reference the claim name in the pod specification. For example:
    # cat app.yaml
    
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
    spec:
      containers:
        - image: busybox
          command:
            - sleep
            - "3600"
          name: busybox
          volumeMounts:
            - mountPath: /usr/share/busybox
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: claim1
    # oc create -f app.yaml
    pod "busybox" created
  2. To verify that the pod is created, execute the following command:
    # oc get pods
    
    NAME                               READY     STATUS    RESTARTS   AGE
    block-test-router-1-deploy         0/1       Running     0          4h
    busybox                            1/1       Running   0          43s
    glusterblock-provisioner-1-bjpz4   1/1       Running   0          4h
    glusterfs-7l5xf                    1/1       Running   0          4h
    glusterfs-hhxtk                    1/1       Running   3          4h
    glusterfs-m4rbc                    1/1       Running   0          4h
    heketi-1-3h9nb                     1/1       Running   0          4h
  3. To verify that the persistent volume is mounted inside the container, execute the following command:
    # oc rsh busybox
    /  # df -h
    Filesystem                Size      Used Available Use% Mounted on
    /dev/mapper/docker-253:1-11438-39febd9d64f3a3594fc11da83d6cbaf5caf32e758eb9e2d7bdd798752130de7e
                            10.0G     33.9M      9.9G   0% /
    tmpfs                     3.8G         0      3.8G   0% /dev
    tmpfs                     3.8G         0      3.8G   0% /sys/fs/cgroup
    /dev/mapper/VolGroup00-LogVol00
                             7.7G      2.8G      4.5G  39% /dev/termination-log
    /dev/mapper/VolGroup00-LogVol00
                             7.7G      2.8G      4.5G  39% /run/secrets
    /dev/mapper/VolGroup00-LogVol00
                             7.7G      2.8G      4.5G  39% /etc/resolv.conf
    /dev/mapper/VolGroup00-LogVol00
                             7.7G      2.8G      4.5G  39% /etc/hostname
    /dev/mapper/VolGroup00-LogVol00
                             7.7G      2.8G      4.5G  39% /etc/hosts
    shm                      64.0M         0     64.0M   0% /dev/shm
    /dev/mpatha                  5.0G     32.2M      5.0G   1% /usr/share/busybox
    tmpfs                     3.8G     16.0K      3.8G   0% /var/run/secrets/kubernetes.io/serviceaccount
    tmpfs                     3.8G         0      3.8G   0% /proc/kcore
    tmpfs                     3.8G         0      3.8G   0% /proc/timer_list
    tmpfs                     3.8G         0      3.8G   0% /proc/timer_stats
    tmpfs                     3.8G         0      3.8G   0% /proc/sched_debug
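To confirm that the volume is writable from the pod, create a file under the mount path defined in the pod specification; the new file should then appear in the directory listing:

    # oc rsh busybox
    / # touch /usr/share/busybox/test1
    / # ls /usr/share/busybox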
9.2.1.1.7. Deleting a Persistent Volume Claim
  1. To delete a claim, execute the following command:
    # oc delete pvc <claim-name>
    For example:
    # oc delete pvc claim1
    persistentvolumeclaim "claim1" deleted
  2. To verify if the claim is deleted, execute the following command:
    # oc get pvc <claim-name>
    For example:
    # oc get pvc claim1
    No resources found.
    When a user deletes a persistent volume claim that is bound to a persistent volume created by dynamic provisioning, in addition to deleting the persistent volume claim, Kubernetes also deletes the persistent volume, endpoints, service, and the underlying volume. Execute the following commands to verify this:
    • To verify if the persistent volume is deleted, execute the following command:
      # oc get pv <pv-name>
      For example:
      # oc get pv pvc-962aa6d1-bddb-11e6-be23-5254009fc65b 
      No resources found.
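Because dynamic deletion also removes the backing block volume, the Heketi side can be checked as well; with the same heketi-cli assumptions as above, the deleted volume should no longer appear in the listing:

    # heketi-cli -s http://heketi-storage-project.cloudapps.mystorage.com --user admin --secret <admin-key> blockvolume list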