Chapter 5. S3 Compatible Object Store in a Red Hat OpenShift Container Storage Environment
Support for S3 compatible Object Store in Container-Native Storage is under Technology Preview. Technology Preview features are not fully supported under Red Hat service-level agreements (SLAs), may not be functionally complete, and are not intended for production use.
Technology Preview features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process.
As Red Hat considers making future iterations of Technology Preview features generally available, we will provide commercially reasonable efforts to resolve any reported issues that customers experience when using these features.
Object Store provides a system for data storage that enables users to access the same data, both as an object and as a file, thus simplifying management and controlling storage costs. The S3 API is the de facto standard for HTTP-based access to object storage services.
The S3 compatible object store is available only with Red Hat OpenShift Container Storage 3.11.4 and older releases.
5.1. Setting up S3 Compatible Object Store for Red Hat OpenShift Container Storage
Ensure that the cns-deploy package is installed before setting up the S3 compatible object store. For more information on how to install the cns-deploy package, see https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/deployment_guide/#part-Appendix
Execute the following steps from the /usr/share/heketi/templates/ directory to set up the S3 compatible object store for Red Hat OpenShift Container Storage:
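Before you begin, you can optionally confirm that the gluster-s3 template files provided by the cns-deploy package (gluster-s3-storageclass.yaml, gluster-s3-pvcs.yaml, and gluster-s3-template.yaml, all used in the steps below) are present:
# ls /usr/share/heketi/templates/gluster-s3-*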
(Optional) If you want to create a secret for heketi, execute the following command:
# oc create secret generic heketi-${NAMESPACE}-admin-secret --from-literal=key=${ADMIN_KEY} --type=kubernetes.io/glusterfs
For example:
# oc create secret generic heketi-storage-project-admin-secret --from-literal=key=abcd --type=kubernetes.io/glusterfs
Execute the following command to label the secret:
# oc label --overwrite secret heketi-${NAMESPACE}-admin-secret glusterfs=s3-heketi-${NAMESPACE}-admin-secret gluster-s3=heketi-${NAMESPACE}-admin-secret
For example:
# oc label --overwrite secret heketi-storage-project-admin-secret glusterfs=s3-heketi-storage-project-admin-secret gluster-s3=heketi-storage-project-admin-secret
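Optionally, confirm that the secret exists and carries the expected labels (the secret name below follows the preceding example):
# oc get secret heketi-storage-project-admin-secret --show-labels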
Create a GlusterFS StorageClass file. Use the HEKETI_URL and NAMESPACE from the current setup and set a STORAGE_CLASS name.
# sed -e 's/${HEKETI_URL}/<HEKETI_URL>/g' -e 's/${STORAGE_CLASS}/<STORAGE_CLASSNAME>/g' -e 's/${NAMESPACE}/<NAMESPACE_NAME>/g' /usr/share/heketi/templates/gluster-s3-storageclass.yaml | oc create -f -
For example:
# sed -e 's/${HEKETI_URL}/heketi-storage-project.cloudapps.mystorage.com/g' -e 's/${STORAGE_CLASS}/gluster-s3-store/g' -e 's/${NAMESPACE}/storage-project/g' /usr/share/heketi/templates/gluster-s3-storageclass.yaml | oc create -f -
storageclass "gluster-s3-store" created
Note: You can run the following command to obtain the HEKETI_URL:
# oc get routes --all-namespaces | grep heketi
A sample output of the command is as follows:
glusterfs heketi-storage heketi-storage-glusterfs.router.default.svc.cluster.local heketi-storage <all> None
If there are multiple lines in the output, choose the one that is most relevant to your setup.
You can run the following command to obtain the NAMESPACE:
# oc project
A sample output of the command is as follows:
Using project "glusterfs" on server "master.example.com:8443"
where glusterfs is the NAMESPACE.
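After the StorageClass is created, you can optionally verify it (the name below follows the preceding example):
# oc get storageclass gluster-s3-store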
Create the Persistent Volume Claims using the storage class.
# sed -e 's/${VOLUME_CAPACITY}/<NEW SIZE in Gi>/g' -e 's/${STORAGE_CLASS}/<STORAGE_CLASSNAME>/g' /usr/share/heketi/templates/gluster-s3-pvcs.yaml | oc create -f -
For example:
# sed -e 's/${VOLUME_CAPACITY}/2Gi/g' -e 's/${STORAGE_CLASS}/gluster-s3-store/g' /usr/share/heketi/templates/gluster-s3-pvcs.yaml | oc create -f -
persistentvolumeclaim "gluster-s3-claim" created
persistentvolumeclaim "gluster-s3-meta-claim" created
Use the STORAGE_CLASS created in the previous step. Modify the VOLUME_CAPACITY as per the environment requirements. Wait until the PVCs are bound and verify them using the following command:
# oc get pvc
NAME                    STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
gluster-s3-claim        Bound     pvc-0b7f75ef-9920-11e7-9309-00151e000016   2Gi        RWX           2m
gluster-s3-meta-claim   Bound     pvc-0b87a698-9920-11e7-9309-00151e000016   1Gi        RWX           2m
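If a claim remains in the Pending state, you can inspect its provisioning events to troubleshoot (an optional check; the claim name below follows the example above):
# oc describe pvc gluster-s3-claim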
Start the gluster-s3 object storage service using the template. Set the S3_ACCOUNT name, S3_USER name, and S3_PASSWORD. PVC and META_PVC are obtained from the previous step.
# oc new-app /usr/share/heketi/templates/gluster-s3-template.yaml \
--param=S3_ACCOUNT=testvolume --param=S3_USER=adminuser \
--param=S3_PASSWORD=itsmine --param=PVC=gluster-s3-claim \
--param=META_PVC=gluster-s3-meta-claim
--> Deploying template "storage-project/gluster-s3" for "/usr/share/heketi/templates/gluster-s3-template.yaml" to project storage-project

     gluster-s3
     ---------
     Gluster s3 service template

     * With parameters:
        * S3 Account Name=testvolume
        * S3 User=adminuser
        * S3 User Password=itsmine
        * Primary GlusterFS-backed PVC=gluster-s3-claim
        * Metadata GlusterFS-backed PVC=gluster-s3-meta-claim

--> Creating resources ...
    service "gluster-s3-service" created
    route "gluster-s3-route" created
    deploymentconfig "gluster-s3-dc" created
--> Success
    Run 'oc status' to view your app.
Execute the following command to verify that the S3 pod is up:
# oc get pods -o wide
NAME               READY     STATUS    RESTARTS   AGE       IP            NODE
gluster-s3-azkys   1/1       Running   0          4m        10.130.0.29   node3
..
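Optionally, you can also confirm that the deployment has rolled out and that the route exists (the resource names below are taken from the template output above):
# oc rollout status dc/gluster-s3-dc
# oc get route gluster-s3-route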
5.2. Object Operations
This section lists some of the object operations that can be performed:
Get the URL of the route that provides the S3 object store:
# s3_storage_url=$(oc get routes | grep "gluster.*s3" | awk '{print $2}')
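As an optional check, confirm that the variable is set and that the endpoint responds. An unauthenticated request to an S3 compatible endpoint usually returns an HTTP error code such as 403, which still confirms that the route is reachable; the exact code can vary.
# echo $s3_storage_url
# curl -k -s -o /dev/null -w "%{http_code}\n" http://$s3_storage_url/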
Note: Ensure that you download the s3curl tool from https://aws.amazon.com/code/128. This tool is used to verify the object operations.
s3curl.pl requires Digest::HMAC_SHA1 and Digest::MD5, which are provided by the perl-Digest-HMAC package. Install the package by running the following command:
# yum install perl-Digest-HMAC
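You can quickly verify that the required Perl modules are available after the installation:
# perl -MDigest::HMAC_SHA1 -MDigest::MD5 -e 'print "modules found\n"'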
Update the s3curl.pl Perl script with the gluster-s3 object route URL that was retrieved earlier:
For example:
my @endpoints = ( 'glusters3object-storage-project.cloudapps.mystorage.com');
To perform a PUT operation on a bucket:
s3curl.pl --debug --id "testvolume:adminuser" --key "itsmine" --put /dev/null -- -k -v http://$s3_storage_url/bucket1
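As an optional check, and assuming the deployment supports the S3 GET Service (list buckets) operation, you can list the buckets for the account to confirm that the bucket was created (GET is the default method used by s3curl.pl; the credentials follow the earlier template example):
s3curl.pl --debug --id "testvolume:adminuser" --key "itsmine" -- -k -v -s http://$s3_storage_url/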
To perform a PUT operation on an object inside the bucket:
s3curl.pl --debug --id "testvolume:adminuser" --key "itsmine" --put my_object.jpg -- -k -v -s http://$s3_storage_url/bucket1/my_object.jpg
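To download the object again and confirm the upload (the local file name my_object_downloaded.jpg is only an example; GET is the default method used by s3curl.pl):
s3curl.pl --debug --id "testvolume:adminuser" --key "itsmine" -- -k -v -s -o my_object_downloaded.jpg http://$s3_storage_url/bucket1/my_object.jpg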
To list the objects in the bucket:
s3curl.pl --debug --id "testvolume:adminuser" --key "itsmine" -- -k -v -s http://$s3_storage_url/bucket1/