Chapter 18. Persistent Storage Examples
18.1. Overview
The following sections provide detailed, comprehensive instructions on setting up and configuring common storage use cases. These examples cover both the administration of persistent volumes and their security, and how to claim against the volumes as a user of the system.
18.3. Complete Example Using Ceph RBD
18.3.1. Overview
This topic provides an end-to-end example of using an existing Ceph cluster as an OpenShift Enterprise persistent store. It is assumed that a working Ceph cluster is already set up. If not, consult the Overview of Red Hat Ceph Storage.
Persistent Storage Using Ceph Rados Block Device provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using Ceph RBD as persistent storage.
All oc … commands are executed on the OpenShift Enterprise master host.
18.3.2. Installing the ceph-common Package
The ceph-common library must be installed on all schedulable OpenShift Enterprise nodes:
The OpenShift Enterprise all-in-one host is not often used to run pod workloads and, thus, is not included as a schedulable node.
# yum install -y ceph-common
18.3.3. Creating the Ceph Secret
The ceph auth get-key command is run on a Ceph MON node to display the key value for the client.admin user:
Example 18.5. Ceph Secret Definition
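A minimal sketch of such a secret, assuming the name ceph-secret used throughout this example; the key value is a placeholder for the base64-encoded client.admin key:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: <base64-encoded client.admin key>  # 1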
1. This base64 key is generated on one of the Ceph MON nodes using the ceph auth get-key client.admin | base64 command; the output is then copied and pasted as the secret key’s value.
Save the secret definition to a file, for example ceph-secret.yaml, then create the secret:
$ oc create -f ceph-secret.yaml
secret "ceph-secret" created
Verify that the secret was created:
# oc get secret ceph-secret
NAME TYPE DATA AGE
ceph-secret Opaque 1 23d
18.3.4. Creating the Persistent Volume
Next, before creating the PV object in OpenShift Enterprise, define the persistent volume file:
Example 18.6. Persistent Volume Object Definition Using Ceph RBD
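A minimal sketch of such a definition; the monitor address, pool, image name, and file system type are assumptions that must be adjusted for your Ceph cluster. The numbered comments correspond to the callouts below:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv                # 1
spec:
  capacity:
    storage: 2Gi               # 2
  accessModes:                 # 3
    - ReadWriteOnce
  rbd:                         # 4
    monitors:                  # 5
      - 192.168.122.133:6789
    pool: rbd
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret        # 6
    fsType: ext4               # 7
    readOnly: false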
1. The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
2. The amount of storage allocated to this volume.
3. accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control. All block storage is defined to be single user (non-shared storage).
4. This defines the volume type being used. In this case, the rbd plug-in is defined.
5. This is an array of Ceph monitor IP addresses and ports.
6. This is the Ceph secret, defined above. It is used to create a secure connection from OpenShift Enterprise to the Ceph server.
7. This is the file system type mounted on the Ceph RBD block device.
Save the PV definition to a file, for example ceph-pv.yaml, and create the persistent volume:
# oc create -f ceph-pv.yaml
persistentvolume "ceph-pv" created
Verify that the persistent volume was created:
# oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
ceph-pv <none> 2147483648 RWO Available 2s
18.3.5. Creating the Persistent Volume Claim
A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.
Example 18.7. PVC Object Definition
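A minimal sketch of such a claim, assuming it requests the full 2Gi of the ceph-pv volume with ReadWriteOnce access:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi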
Save the PVC definition to a file, for example ceph-claim.yaml, and create the PVC:
1. The claim was bound to the ceph-pv PV.
18.3.6. Creating the Pod
A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Ceph RBD volume for read-write access:
Example 18.8. Pod Object Definition
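A minimal sketch of such a pod; the volume name and mount path are illustrative and can be changed, as long as the volume name matches in both sections. The numbered comments correspond to the callouts below:

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1                   # 1
spec:
  containers:
  - name: ceph-busybox
    image: busybox                  # 2
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-vol1               # 3
      mountPath: /usr/share/busybox # 4
      readOnly: false
  volumes:
  - name: ceph-vol1                 # 5
    persistentVolumeClaim:
      claimName: ceph-claim         # 6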
1. The name of this pod as displayed by oc get pod.
2. The image run by this pod. In this case, we are telling busybox to sleep.
3, 5. The name of the volume. This name must be the same in both the containers and volumes sections.
4. The mount path as seen in the container.
6. The PVC that is bound to the Ceph RBD cluster.
Save the pod definition to a file, for example ceph-pod1.yaml, and create the pod:
1. After a minute or so, the pod will be in the Running state.
18.3.7. Defining Group and Owner IDs (Optional)
When using block storage, such as Ceph RBD, the physical block storage is managed by the pod. The group ID defined in the pod becomes the group ID of both the Ceph RBD mount inside the container and the actual storage itself. Thus, it is usually unnecessary to define a group ID in the pod specification. However, if a group ID is desired, it can be defined using fsGroup, as shown in the following pod definition fragment:
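A minimal sketch of such a fragment, assuming an arbitrary group ID of 7777:

spec:
  containers:
  - name: ...
  securityContext:
    fsGroup: 7777   # group ID applied to the RBD mount and the underlying storage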
18.4. Complete Example Using GlusterFS
18.4.1. Overview
This topic provides an end-to-end example of how to use an existing Gluster cluster as an OpenShift Enterprise persistent store. It is assumed that a working Gluster cluster is already set up. If not, consult the Red Hat Gluster Storage Administration Guide.
Persistent Storage Using GlusterFS provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using GlusterFS as persistent storage.
All oc … commands are executed on the OpenShift Enterprise master host.
18.4.2. Installing the glusterfs-fuse Package
The glusterfs-fuse library must be installed on all schedulable OpenShift Enterprise nodes:
# yum install -y glusterfs-fuse
The OpenShift Enterprise all-in-one host is often not used to run pod workloads and, thus, is not included as a schedulable node.
18.4.3. Creating Gluster Endpoints and the Gluster Service
The named endpoints define each node in the Gluster-trusted storage pool:
Example 18.10. GlusterFS Endpoint Definition
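A sketch of such a definition, using the two pool member addresses that appear in the verification output below; a port value is required by the endpoints object, but its value is not significant for GlusterFS:

apiVersion: v1
kind: Endpoints
metadata:
  name: gluster-endpoints
subsets:
  - addresses:
      - ip: 192.168.122.21
    ports:
      - port: 1
        protocol: TCP
  - addresses:
      - ip: 192.168.122.22
    ports:
      - port: 1
        protocol: TCP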
Save the endpoints definition to a file, for example gluster-endpoints.yaml, then create the endpoints object:
# oc create -f gluster-endpoints.yaml
endpoints "gluster-endpoints" created
Verify that the endpoints were created:
# oc get endpoints gluster-endpoints
NAME ENDPOINTS AGE
gluster-endpoints 192.168.122.21:1,192.168.122.22:1 1m
To persist the Gluster endpoints, you also need to create a service.
Example 18.11. GlusterFS Service Definition
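A minimal sketch of such a service, assuming it only needs to declare the same placeholder port used by the endpoints so that they persist:

apiVersion: v1
kind: Service
metadata:
  name: gluster-service
spec:
  ports:
    - port: 1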
Save the service definition to a file, for example gluster-service.yaml, then create the service:
# oc create -f gluster-service.yaml
endpoints "gluster-service" created
Verify that the service was created:
# oc get service gluster-service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gluster-service 10.0.0.130 <none> 1/TCP 9s
18.4.4. Creating the Persistent Volume
Next, before creating the PV object, define the persistent volume in OpenShift Enterprise:
Example 18.12. Persistent Volume Object Definition Using GlusterFS
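A sketch of such a definition, assuming the Gluster volume is named HadoopVol (the volume referenced later in this example). The numbered comments correspond to the callouts below:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv                        # 1
spec:
  capacity:
    storage: 1Gi                          # 2
  accessModes:                            # 3
    - ReadWriteMany
  glusterfs:                              # 4
    endpoints: gluster-endpoints          # 5
    path: /HadoopVol                      # 6
    readOnly: false
  persistentVolumeReclaimPolicy: Retain   # 7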
1. The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
2. The amount of storage allocated to this volume.
3. accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
4. This defines the volume type being used. In this case, the glusterfs plug-in is defined.
5. This references the endpoints named above.
6. This is the Gluster volume name, preceded by /.
7. A volume reclaim policy of Retain indicates that the volume will be preserved after the pods accessing it terminate. Accepted values include Retain, Delete, and Recycle.
Save the PV definition to a file, for example gluster-pv.yaml, and create the persistent volume:
# oc create -f gluster-pv.yaml
persistentvolume "gluster-pv" created
Verify that the persistent volume was created:
# oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
gluster-pv <none> 1Gi RWX Available 37s
18.4.5. Creating the Persistent Volume Claim
A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.
Example 18.13. PVC Object Definition
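A minimal sketch of such a claim, matching the 1Gi capacity and ReadWriteMany access mode of the gluster-pv volume:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi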
Save the PVC definition to a file, for example gluster-claim.yaml, and create the PVC:
# oc create -f gluster-claim.yaml
persistentvolumeclaim "gluster-claim" created
Verify the PVC was created and bound to the expected PV:
# oc get pvc
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
gluster-claim <none> Bound gluster-pv 1Gi RWX 24s
1. The claim was bound to the gluster-pv PV.
18.4.6. Defining GlusterFS Volume Access
Access to a node in the Gluster-trusted storage pool is required. On this node, examine the glusterfs-fuse mount:
In order to access the HadoopVol volume, the container must match the SELinux label and either run with a UID of 592 or with 590 in its supplemental groups. It is recommended to gain access to the volume by matching the Gluster mount’s group, which is defined in the pod definition below.
By default, SELinux does not allow writing from a pod to a remote Gluster server. To enable writing to GlusterFS volumes with SELinux enforcing on each node, run:
# setsebool -P virt_sandbox_use_fusefs on
The virt_sandbox_use_fusefs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.
18.4.7. Creating the Pod Using the NGINX Web Server Image
A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Gluster volume for read-write access:
The NGINX image may need to run in privileged mode to create the mount and run properly. An easy way to accomplish this is to simply add your user to the privileged Security Context Constraint (SCC):
$ oadm policy add-scc-to-user privileged myuser
Then, add privileged: true to the container’s securityContext section of the YAML file (as seen in the example below).
Managing Security Context Constraints provides additional information regarding SCCs.
Example 18.14. Pod Object Definition using NGINX image
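A sketch of such a pod, assuming a stock nginx image and an illustrative mount path. The numbered comments correspond to the callouts below:

apiVersion: v1
kind: Pod
metadata:
  name: gluster-pod1                      # 1
  labels:
    name: gluster-pod1
spec:
  containers:
  - name: gluster-pod1
    image: nginx                          # 2
    securityContext:
      privileged: true
    volumeMounts:
    - name: gluster-vol1                  # 3
      mountPath: /usr/share/nginx/html    # 4
      readOnly: false
  securityContext:
    supplementalGroups: [590]             # 5
  volumes:
  - name: gluster-vol1                    # 6
    persistentVolumeClaim:
      claimName: gluster-claim            # 7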
1. The name of this pod as displayed by oc get pod.
2. The image run by this pod. In this case, we are using a standard NGINX image.
3, 6. The name of the volume. This name must be the same in both the containers and volumes sections.
4. The mount path as seen in the container.
5. The supplementalGroups ID (Linux groups) to be assigned at the pod level. As discussed, this should match the POSIX permissions on the Gluster volume.
7. The PVC that is bound to the Gluster cluster.
Save the pod definition to a file, for example gluster-pod1.yaml, and create the pod:
# oc create -f gluster-pod1.yaml
pod "gluster-pod1" created
Verify the pod was created:
# oc get pod
NAME READY STATUS RESTARTS AGE
gluster-pod1 1/1 Running 0 31s
1. After a minute or so, the pod will be in the Running state.
More details are shown in the oc describe pod command:
More internal information, including the SCC used to authorize the pod, the pod’s user and group IDs, and the SELinux label, is shown in the output of the oc get pod <name> -o yaml command:
18.5. Backing Docker Registry with GlusterFS Storage
18.5.1. Overview
This topic reviews how to attach a GlusterFS persistent volume to the Docker Registry.
It is assumed that the Docker registry service has already been started and the Gluster volume has been created.
18.5.2. Prerequisites
- The docker-registry was deployed without configuring storage.
- A Gluster volume exists and glusterfs-fuse is installed on schedulable nodes.
- Definitions written for GlusterFS endpoints and service, persistent volume (PV), and persistent volume claim (PVC). For this guide, these will be:
  - gluster-endpoints-service.yaml
  - gluster-endpoints.yaml
  - gluster-pv.yaml
  - gluster-pvc.yaml
- A user with the cluster-admin role binding. For this guide, that user is admin.
All oc commands are executed on the master node as the admin user.
18.5.3. Create the Gluster Persistent Volume
First, make the Gluster volume available to the registry.
$ oc create -f gluster-endpoints-service.yaml
$ oc create -f gluster-endpoints.yaml
$ oc create -f gluster-pv.yaml
$ oc create -f gluster-pvc.yaml
Check to make sure the PV and PVC were created and bound successfully. The expected output should resemble the following. Note that the PVC status is Bound, indicating that it has bound to the PV.
If either the PVC or PV failed to create or the PVC failed to bind, refer back to the GlusterFS Persistent Storage guide. Do not proceed until they initialize and the PVC status is Bound.
18.5.4. Attach the PVC to the Docker Registry
Before moving forward, ensure that the docker-registry service is running.
$ oc get svc
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
docker-registry 172.30.167.194 <none> 5000/TCP docker-registry=default 18m
If either the docker-registry service or its associated pod is not running, refer back to the docker-registry setup instructions for troubleshooting before continuing.
Then, attach the PVC:
$ oc volume deploymentconfigs/docker-registry --add --name=v1 -t pvc \
--claim-name=gluster-claim --overwrite
Deploying a Docker Registry provides more information on using the Docker registry.
18.5.5. Known Issues
18.5.5.1. Pod Cannot Resolve the Volume Host
In non-production cases where the dnsmasq server is located on the same node as the OpenShift Enterprise master service, pods might not be able to resolve the host machines when mounting the volume, causing errors in the docker-registry-1-deploy pod. This can happen when dnsmasq.service fails to start because of a collision with OpenShift Enterprise DNS on port 53. To run the DNS server on the master host, some configurations need to be changed.
In /etc/dnsmasq.conf, add:
With these settings, dnsmasq will pull from the /etc/hosts file on the master node.
Add the appropriate host names and IPs for all necessary hosts.
In master-config.yaml, change bindAddress to:
dnsConfig:
bindAddress: 127.0.0.1:8053
When pods are created, they receive a copy of /etc/resolv.conf, which typically contains only the master DNS server so they can resolve external DNS requests. To enable internal DNS resolution, insert the dnsmasq server at the top of the server list. This way, dnsmasq will attempt to resolve requests internally first.
In /etc/resolv.conf on all scheduled nodes:
nameserver 192.168.1.100
nameserver 192.168.1.1
Once the configurations are changed, restart the OpenShift Enterprise master and dnsmasq services.
$ systemctl restart atomic-openshift-master
$ systemctl restart dnsmasq
18.6. Mounting Volumes on Privileged Pods
18.6.1. Overview
Persistent volumes can be mounted to pods with the privileged security context constraint (SCC) attached.
While this topic uses GlusterFS as a sample use-case for mounting volumes onto privileged pods, it can be adapted to use any supported storage plug-in.
18.6.2. Prerequisites
- An existing Gluster volume.
- glusterfs-fuse installed on all hosts.
- Definitions for GlusterFS:
  - Endpoints and services: gluster-endpoints-service.yaml and gluster-endpoints.yaml
  - Persistent volumes: gluster-pv.yaml
  - Persistent volume claims: gluster-pvc.yaml
  - Privileged pods: gluster-S3-pod.yaml
- A user with the cluster-admin role binding. For this guide, that user is called admin.
18.6.3. Creating the Persistent Volume
Creating the PersistentVolume makes the storage accessible to users, regardless of projects.
As the admin, create the service, endpoint object, and persistent volume:
$ oc create -f gluster-endpoints-service.yaml
$ oc create -f gluster-endpoints.yaml
$ oc create -f gluster-pv.yaml
Verify that the objects were created:
$ oc get svc
NAME              CLUSTER_IP      EXTERNAL_IP   PORT(S)   SELECTOR   AGE
gluster-cluster   172.30.151.58   <none>        1/TCP     <none>     24s

$ oc get ep
NAME              ENDPOINTS                           AGE
gluster-cluster   192.168.59.102:1,192.168.59.103:1   2m

$ oc get pv
NAME                     LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
gluster-default-volume   <none>    2Gi        RWX           Available                       2d
18.6.4. Creating a Regular User
Adding a regular user to the privileged SCC (or to a group given access to the SCC) allows them to run privileged pods:
- As the admin, add a user to the SCC:
$ oadm policy add-scc-to-user privileged <username>
- Log in as the regular user:
$ oc login -u <username> -p <password>
- Then, create a new project:
$ oc new-project <project_name>
18.6.5. Creating the Persistent Volume Claim
As a regular user, create the PersistentVolumeClaim to access the volume:
$ oc create -f gluster-pvc.yaml -n <project_name>
Define your pod to access the claim:
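A sketch of such a pod definition (gluster-S3-pod.yaml), with the container image and claim name as assumptions; the mount path matches the volume mount verified later in this topic:

apiVersion: v1
kind: Pod
metadata:
  name: gluster-S3-pod
spec:
  containers:
  - name: gluster-S3-pod
    image: busybox              # assumption: any image that can exercise the mount
    command: ["sleep", "60000"]
    volumeMounts:
    - name: gluster-vol1
      mountPath: /mnt/gluster
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster-claim  # assumption: the claim created from gluster-pvc.yaml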
Upon pod creation, the mount directory is created and the volume is attached to that mount point.
As the regular user, create a pod from the definition:
$ oc create -f gluster-S3-pod.yaml
Verify that the pod was created successfully:
$ oc get pods
NAME             READY     STATUS    RESTARTS   AGE
gluster-S3-pod   1/1       Running   0          36m

It can take several minutes for the pod to be created.
18.6.6. Verifying the Setup
18.6.6.1. Checking the Pod SCC
Export the pod configuration:
$ oc export pod <pod_name>
Examine the output. Check that openshift.io/scc has the value of privileged:
Example 18.16. Export Snippet
metadata:
  annotations:
    openshift.io/scc: privileged
18.6.6.2. Verifying the Mount
Access the pod and check that the volume is mounted:
$ oc rsh <pod_name>
[root@gluster-S3-pvc /]# mount
Examine the output for the Gluster volume:
Example 18.17. Volume Mount
192.168.59.102:gv0 on /mnt/gluster type fuse.gluster (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)