
Chapter 5. S3 Compatible Object Store in a Red Hat OpenShift Container Storage Environment


Important

Support for S3 compatible Object Store in Container-Native Storage is under Technology Preview. Technology Preview features are not fully supported under Red Hat service-level agreements (SLAs), may not be functionally complete, and are not intended for production use.
Technology Preview features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.
As Red Hat considers making future iterations of Technology Preview features generally available, we will provide commercially reasonable efforts to resolve any reported issues that customers experience when using these features.
Object Store provides a system for data storage that enables users to access the same data both as an object and as a file, thus simplifying management and controlling storage costs. The S3 API is the de facto standard for HTTP-based access to object storage services.

Note

Ensure that the cns-deploy package has been installed before setting up the S3 Compatible Object Store. For more information on how to install the cns-deploy package, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.10/html-single/deployment_guide/#part-Appendix
Execute the following steps from the /usr/share/heketi/templates/ directory to set up an S3-compatible object store for Red Hat OpenShift Container Storage:
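Before starting, you can optionally confirm that the gluster-s3 template files referenced in the steps below are present on the node from which you run the commands. This is only a sanity check; the listing should include the three files used throughout this chapter, similar to:
  # ls /usr/share/heketi/templates/ | grep gluster-s3
  gluster-s3-pvcs.yaml
  gluster-s3-storageclass.yaml
  gluster-s3-template.yaml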
  1. (Optional): If you want to create a secret for heketi, execute the following command:
    # oc create secret generic heketi-${NAMESPACE}-admin-secret \
    --from-literal=key=${ADMIN_KEY} --type=kubernetes.io/glusterfs
    For example:
    # oc create secret generic heketi-storage-project-admin-secret \
    --from-literal=key=  --type=kubernetes.io/glusterfs
    1. Execute the following command to label the secret:
      # oc label --overwrite secret heketi-${NAMESPACE}-admin-secret \
      glusterfs=s3-heketi-${NAMESPACE}-admin-secret \
      gluster-s3=heketi-${NAMESPACE}-admin-secret
      For example:
      # oc label --overwrite secret heketi-storage-project-admin-secret \
      glusterfs=s3-heketi-storage-project-admin-secret \
      gluster-s3=heketi-storage-project-admin-secret
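      To confirm that the secret exists and carries both labels applied above, you can list it together with its labels (the name below matches the example; substitute your own namespace as needed):
      # oc get secret heketi-storage-project-admin-secret --show-labels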
  2. Create a GlusterFS StorageClass file. Use the HEKETI_URL and NAMESPACE from the current setup and set a STORAGE_CLASS name.
    # sed -e 's/${HEKETI_URL}/heketi-storage-project.cloudapps.mystorage.com/g'  -e 's/${STORAGE_CLASS}/gluster-s3-store/g' -e  's/${NAMESPACE}/storage-project/g'   /usr/share/heketi/templates/gluster-s3-storageclass.yaml | oc create -f -
    For example:
    # sed -e 's/${HEKETI_URL}/heketi-storage-project.cloudapps.mystorage.com/g' -e 's/${STORAGE_CLASS}/gluster-s3-store/g' -e 's/${NAMESPACE}/storage-project/g' /usr/share/heketi/templates/gluster-s3-storageclass.yaml | oc create -f -
    storageclass "gluster-s3-store" created

    Note

    • You can run the following command to obtain the HEKETI_URL:
      # oc get routes --all-namespaces | grep heketi
      A sample output of the command is as follows:
      glusterfs   heketi-storage   heketi-storage-glusterfs.router.default.svc.cluster.local   heketi-storage   <all>   None
      If there are multiple lines in the output, choose the most relevant one.
    • You can run the following command to obtain the NAMESPACE:
      # oc project
      A sample output of the command is as follows:
      # oc project
                Using project "glusterfs" on server "master.example.com:8443"
      where glusterfs is the NAMESPACE.
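    To confirm that the StorageClass was created with the intended values, you can inspect it; the output should show the kubernetes.io/glusterfs provisioner and a resturl pointing at the HEKETI_URL substituted above (class name as in the example):
    # oc describe storageclass gluster-s3-store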
  3. Create the Persistent Volume Claims using the storage class.
    # sed -e 's/${VOLUME_CAPACITY}/2Gi/g'  -e  's/${STORAGE_CLASS}/gluster-s3-store/g'  /usr/share/heketi/templates/gluster-s3-pvcs.yaml | oc create -f -
    
    For example:
              # sed -e 's/${VOLUME_CAPACITY}/2Gi/g'  -e  's/${STORAGE_CLASS}/gluster-s3-store/g'  /usr/share/heketi/templates/gluster-s3-pvcs.yaml | oc create -f -
    persistentvolumeclaim "gluster-s3-claim" created
    persistentvolumeclaim "gluster-s3-meta-claim" created
    
    Use the STORAGE_CLASS created in the previous step and modify VOLUME_CAPACITY according to the environment requirements. Wait until the PVCs are bound, then verify them using the following command:
    # oc get pvc
    NAME                    STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
    gluster-s3-claim        Bound     pvc-0b7f75ef-9920-11e7-9309-00151e000016   2Gi        RWX           2m
    gluster-s3-meta-claim   Bound     pvc-0b87a698-9920-11e7-9309-00151e000016   1Gi        RWX           2m
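    If a claim remains in Pending instead of Bound, the events attached to it usually explain why (for example, the heketi route being unreachable or a wrong admin secret). A quick way to see them:
    # oc describe pvc gluster-s3-claim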
  4. Start the gluster-s3 object storage service using the template:

    Note

    Set the S3_ACCOUNT name, S3_USER name, and S3_PASSWORD. PVC and META_PVC are obtained from the previous step.
    # oc new-app  /usr/share/heketi/templates/gluster-s3-template.yaml \
    --param=S3_ACCOUNT=testvolume  --param=S3_USER=adminuser \
    --param=S3_PASSWORD=itsmine --param=PVC=gluster-s3-claim \
    --param=META_PVC=gluster-s3-meta-claim
    --> Deploying template "storage-project/gluster-s3" for "/usr/share/heketi/templates/gluster-s3-template.yaml" to project storage-project
    
         gluster-s3
         ---------
         Gluster s3 service template
    
    
         * With parameters:
            * S3 Account Name=testvolume
            * S3 User=adminuser
            * S3 User Password=itsmine
            * Primary GlusterFS-backed PVC=gluster-s3-claim
            * Metadata GlusterFS-backed PVC=gluster-s3-meta-claim
    
    --> Creating resources ...
        service "gluster-s3-service" created
        route "gluster-s3-route" created
        deploymentconfig "gluster-s3-dc" created
    --> Success
        Run 'oc status' to view your app.
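    Before moving on, you can also confirm that the deployment configuration created by the template has rolled out and that its pod is running (resource names are the ones reported in the output above):
    # oc get dc gluster-s3-dc
    # oc get pods | grep gluster-s3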
  5. Execute the following command to verify that the route for the S3 service has been created:
    # oc get route
    NAME              HOST/PORT                                                            PATH      SERVICES           PORT      TERMINATION   WILDCARD
    gluster-s3-route   gluster-s3-route-storage-project.cloudapps.mystorage.com ... 1 more             gluster-s3-service   <all>                   None
    heketi            heketi-storage-project.cloudapps.mystorage.com ... 1 more                      heketi             <all>
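Once the route responds, the object store can be exercised with any S3-compatible client. The commands below are only an illustrative sketch using the AWS CLI: the endpoint is the gluster-s3 route host shown in the previous step, and they assume the gluster-s3 convention of using S3_ACCOUNT:S3_USER as the access key and S3_PASSWORD as the secret key (testvolume, adminuser, and itsmine are the example values from step 4; the region value is only a placeholder required by the client):
    # export AWS_ACCESS_KEY_ID='testvolume:adminuser'
    # export AWS_SECRET_ACCESS_KEY='itsmine'
    # export AWS_DEFAULT_REGION=us-east-1
    # aws s3 mb s3://bucket1 --endpoint-url http://gluster-s3-route-storage-project.cloudapps.mystorage.com
    # aws s3 ls --endpoint-url http://gluster-s3-route-storage-project.cloudapps.mystorage.com
If the client signs requests with Signature Version 4 and the service only accepts Version 2, configure the client to use the older S3 signature style before retrying.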