Chapter 16. Persistent Storage Examples
16.1. Overview
The following sections provide detailed, comprehensive instructions on setting up and configuring common storage use cases. These examples cover both the administration of persistent volumes and their security, and how to claim against the volumes as a user of the system.
16.3. Complete Example Using Ceph RBD
16.3.1. Overview
This topic provides an end-to-end example of using an existing Ceph cluster as an OpenShift Enterprise persistent store. It is assumed that a working Ceph cluster is already set up. If not, consult the Overview of Red Hat Ceph Storage.
Persistent Storage Using Ceph Rados Block Device provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using Ceph RBD as persistent storage.
All oc commands are executed on the OpenShift Enterprise master host.
16.3.2. Installing the ceph-common Package
The ceph-common library must be installed on all schedulable OpenShift Enterprise nodes:
The OpenShift Enterprise all-in-one host is often not used to run pod workloads and, thus, is not included as a schedulable node.
# yum install -y ceph-common
16.3.3. Creating the Ceph Secret
The ceph auth get-key command is run on a Ceph MON node to display the key value for the client.admin user:
Example 16.5. Ceph Secret Definition
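The secret definition itself is not reproduced in this extract; the following is a minimal sketch of what ceph-secret.yaml could look like, with the key value left as a placeholder (callout 1 refers to the key field):

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: <base64-encoded-client.admin-key>   # 1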
- 1
- This base64 key is generated on one of the Ceph MON nodes using the ceph auth get-key client.admin | base64 command; copy the output and paste it as the secret key’s value.
Save the secret definition to a file, for example ceph-secret.yaml, then create the secret:
$ oc create -f ceph-secret.yaml
secret "ceph-secret" created
Verify that the secret was created:
# oc get secret ceph-secret
NAME TYPE DATA AGE
ceph-secret Opaque 1 23d
16.3.4. Creating the Persistent Volume
Next, before creating the PV object in OpenShift Enterprise, define the persistent volume file:
Example 16.6. Persistent Volume Object Definition Using Ceph RBD
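The PV definition is not reproduced in this extract; the following is a sketch of a ceph-pv.yaml consistent with the callouts below. The monitor address, pool, and image names are placeholders for your environment, and the numbered comments correspond to the callouts:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv                        # 1
spec:
  capacity:
    storage: 2Gi                       # 2
  accessModes:
    - ReadWriteOnce                    # 3
  rbd:                                 # 4
    monitors:                          # 5
      - <mon-host>:6789
    pool: rbd
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret                # 6
    fsType: ext4                       # 7
    readOnly: false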
- 1
- The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
- 2
- The amount of storage allocated to this volume.
- 3
- accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control. All block storage is defined to be single user (non-shared storage).
- 4
- This defines the volume type being used. In this case, the rbd plug-in is defined.
- 5
- This is an array of Ceph monitor IP addresses and ports.
- 6
- This is the Ceph secret, defined above. It is used to create a secure connection from OpenShift Enterprise to the Ceph server.
- 7
- This is the file system type mounted on the Ceph RBD block device.
Save the PV definition to a file, for example ceph-pv.yaml, and create the persistent volume:
# oc create -f ceph-pv.yaml
persistentvolume "ceph-pv" created
Verify that the persistent volume was created:
# oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
ceph-pv <none> 2147483648 RWO Available 2s
16.3.5. Creating the Persistent Volume Claim
A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.
Example 16.7. PVC Object Definition
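The PVC definition is also not reproduced in this extract; a minimal sketch of ceph-claim.yaml, assuming the capacity and access mode of the PV above:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi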
Save the PVC definition to a file, for example ceph-claim.yaml, and create the PVC:
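The create and verification commands are not shown in this extract; they would look roughly like the following, where the output is illustrative and callout 1 below refers to the VOLUME column:

# oc create -f ceph-claim.yaml
persistentvolumeclaim "ceph-claim" created

# oc get pvc
NAME         LABELS    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
ceph-claim   <none>    Bound     ceph-pv   2Gi        RWO           21s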
- 1
- The claim was bound to the ceph-pv PV.
16.3.6. Creating the Pod
A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Ceph RBD volume for read-write access:
Example 16.8. Pod Object Definition
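The pod definition is not reproduced in this extract; a sketch consistent with the callouts below. The container name and mount path are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1                      # 1
spec:
  containers:
  - name: ceph-busybox
    image: busybox                     # 2
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-vol1                  # 3
      mountPath: /usr/share/busybox    # 4
      readOnly: false
  volumes:
  - name: ceph-vol1                    # 5
    persistentVolumeClaim:
      claimName: ceph-claim            # 6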
- 1
- The name of this pod as displayed by oc get pod.
- 2
- The image run by this pod. In this case, we are telling busybox to sleep.
- 3 5
- The name of the volume. This name must be the same in both the containers and volumes sections.
- 4
- The mount path as seen in the container.
- 6
- The PVC that is bound to the Ceph RBD cluster.
Save the pod definition to a file, for example ceph-pod1.yaml, and create the pod:
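The commands and output are not shown in this extract; they would resemble the following, where callout 1 below refers to the STATUS column:

# oc create -f ceph-pod1.yaml
pod "ceph-pod1" created

# oc get pod
NAME        READY     STATUS    RESTARTS   AGE
ceph-pod1   1/1       Running   0          2m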
- 1
- After a minute or so, the pod will be in the Running state.
16.3.7. Defining Group and Owner IDs (Optional)
When using block storage, such as Ceph RBD, the physical block storage is managed by the pod. The group ID defined in the pod becomes the group ID of both the Ceph RBD mount inside the container, and the group ID of the actual storage itself. Thus, it is usually unnecessary to define a group ID in the pod specification. However, if a group ID is desired, it can be defined using fsGroup, as shown in the following pod definition fragment:
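The fragment itself is not included in this extract; a minimal sketch, with the group ID value chosen only for illustration:

spec:
  containers:
  - name: ...
    ...
  securityContext:
    fsGroup: 7777    # illustrative group ID applied to the RBD mount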
16.4. Complete Example Using GlusterFS
16.4.1. Overview
This topic provides an end-to-end example of how to use an existing Gluster cluster as an OpenShift Enterprise persistent store. It is assumed that a working Gluster cluster is already set up. If not, consult the Red Hat Gluster Storage Administration Guide.
Persistent Storage Using GlusterFS provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using GlusterFS as persistent storage.
All oc commands are executed on the OpenShift Enterprise master host.
16.4.2. Installing the glusterfs-fuse Package
The glusterfs-fuse library must be installed on all schedulable OpenShift Enterprise nodes:
# yum install -y glusterfs-fuse
The OpenShift Enterprise all-in-one host is often not used to run pod workloads and, thus, is not included as a schedulable node.
16.4.3. Creating the Gluster Endpoints
The named endpoints define each node in the Gluster-trusted storage pool:
Example 16.10. GlusterFS Endpoint Definition
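The definition is not reproduced in this extract; a sketch of gluster-endpoints.yaml using the addresses shown in the verification output below. The port value is required but not used by the GlusterFS plug-in, so any legal value (such as 1) works:

apiVersion: v1
kind: Endpoints
metadata:
  name: gluster-endpoints
subsets:
  - addresses:
      - ip: 192.168.122.21
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.122.22
    ports:
      - port: 1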
Save the endpoints definition to a file, for example gluster-endpoints.yaml, then create the endpoints object:
# oc create -f gluster-endpoints.yaml
endpoints "gluster-endpoints" created
Verify that the endpoints were created:
# oc get endpoints gluster-endpoints
NAME ENDPOINTS AGE
gluster-endpoints 192.168.122.21:1,192.168.122.22:1 1m
16.4.4. Creating the Persistent Volume
Next, before creating the PV object, define the persistent volume in OpenShift Enterprise:
Example 16.11. Persistent Volume Object Definition Using GlusterFS
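The PV definition is not reproduced in this extract; a sketch of gluster-pv.yaml consistent with the callouts below and with the HadoopVol volume referenced later in this example. The numbered comments correspond to the callouts:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv                       # 1
spec:
  capacity:
    storage: 1Gi                         # 2
  accessModes:
    - ReadWriteMany                      # 3
  glusterfs:                             # 4
    endpoints: gluster-endpoints         # 5
    path: /HadoopVol                     # 6
    readOnly: false
  persistentVolumeReclaimPolicy: Retain  # 7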
- 1
- The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
- 2
- The amount of storage allocated to this volume.
- 3
- accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
- 4
- This defines the volume type being used. In this case, the glusterfs plug-in is defined.
- 5
- This references the endpoints named above.
- 6
- This is the Gluster volume name, preceded by /.
- 7
- A volume reclaim policy of retain indicates that the volume will be preserved after the pods accessing it terminate.
Save the PV definition to a file, for example gluster-pv.yaml, and create the persistent volume:
# oc create -f gluster-pv.yaml
persistentvolume "gluster-pv" created
Verify that the persistent volume was created:
# oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
gluster-pv <none> 1Gi RWX Available 37s
16.4.5. Creating the Persistent Volume Claim
A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.
Example 16.12. PVC Object Definition
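The PVC definition is not reproduced in this extract; a minimal sketch of gluster-claim.yaml, matching the capacity and access mode of the PV above:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi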
Save the PVC definition to a file, for example gluster-claim.yaml, and create the PVC:
# oc create -f gluster-claim.yaml
persistentvolumeclaim "gluster-claim" created
Verify the PVC was created and bound to the expected PV:
# oc get pvc
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
gluster-claim <none> Bound gluster-pv 1Gi RWX 24s
- 1
- The claim was bound to the gluster-pv PV.
16.4.6. Defining GlusterFS Volume Access
Access to a node in the Gluster-trusted storage pool is necessary. On this node, examine the glusterfs-fuse mount:
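The inspection output is not included in this extract; it might look like the following. The mount path and the user and group names are illustrative, but the numeric IDs match those referenced in the next paragraph:

# mount | grep gluster
192.168.122.21:HadoopVol on /mnt/glusterfs/HadoopVol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072)

# ls -lZ /mnt/glusterfs/
drwxrwx---. yarn hadoop system_u:object_r:fusefs_t:s0    HadoopVol

# id yarn
uid=592(yarn) gid=590(hadoop) groups=590(hadoop)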
In order to access the HadoopVol volume, the container must match the SELinux label, and either run with a UID of 592, or with 590 in its supplemental groups. It is recommended to gain access to the volume by matching the Gluster mount’s groups, which is defined in the pod definition below.
By default, SELinux does not allow writing from a pod to a remote Gluster server. To enable writing to GlusterFS volumes with SELinux enforcing on each node, run:
# setsebool -P virt_sandbox_use_fusefs on
The virt_sandbox_use_fusefs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.
16.4.7. Creating the Pod
A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Gluster volume for read-write access:
Example 16.13. Pod Object Definition
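The pod definition is not reproduced in this extract; a sketch consistent with the callouts below, using the supplemental group ID (590) of the Gluster mount described above. The container name and mount path are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: gluster-pod1                   # 1
spec:
  containers:
  - name: gluster-busybox
    image: busybox                     # 2
    command: ["sleep", "60000"]
    volumeMounts:
    - name: gluster-vol1               # 3
      mountPath: /usr/share/busybox    # 4
      readOnly: false
  securityContext:
    supplementalGroups: [590]          # 5
  volumes:
  - name: gluster-vol1                 # 6
    persistentVolumeClaim:
      claimName: gluster-claim         # 7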
- 1
- The name of this pod as displayed by oc get pod.
- 2
- The image run by this pod. In this case, we are telling busybox to sleep.
- 3 6
- The name of the volume. This name must be the same in both the containers and volumes sections.
- 4
- The mount path as seen in the container.
- 5
- The group ID to be assigned to the container.
- 7
- The PVC that is bound to the Gluster cluster.
Save the pod definition to a file, for example gluster-pod1.yaml, and create the pod:
# oc create -f gluster-pod1.yaml
pod "gluster-pod1" created
Verify the pod was created:
# oc get pod
NAME READY STATUS RESTARTS AGE
gluster-pod1 1/1 Running 0 31s
- 1
- After a minute or so, the pod will be in the Running state.
More details are shown by the oc describe pod command.
More internal information, including the SCC used to authorize the pod, the pod’s user and group IDs, the SELinux label, and more, is shown by the oc get pod <name> -o yaml command:
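The YAML output is omitted from this extract; the following abbreviated snippet is only illustrative, with every value a placeholder, and shows where the fields referenced by the callouts appear:

metadata:
  annotations:
    openshift.io/scc: restricted       # 1
  namespace: default                   # 2
spec:
  containers:
  - name: gluster-busybox
    securityContext:
      runAsUser: 1000090000            # 3
      seLinuxOptions:
        level: s0:c8,c2                # 4
  securityContext:
    seLinuxOptions:
      level: s0:c8,c2                  # 5
    supplementalGroups:
    - 590                              # 6
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster-claim         # 7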
- 1
- The SCC used by the pod.
- 2
- The project (namespace) name.
- 3
- The UID of the busybox container.
- 4 5
- The SELinux label for the container, and the default SELinux label for the entire pod, which happen to be the same here.
- 6
- The supplemental group ID for the pod (all containers).
- 7
- The PVC name used by the pod.
16.5. Backing Docker Registry with GlusterFS Storage
16.5.1. Overview
This topic reviews how to attach a GlusterFS persistent volume to the Docker Registry.
It is assumed that the Docker registry service has already been started and the Gluster volume has been created.
16.5.2. Prerequisites
- The docker-registry was deployed without configuring storage.
- A Gluster volume exists and glusterfs-fuse is installed on schedulable nodes.
Definitions written for GlusterFS endpoints and service, persistent volume (PV), and persistent volume claim (PVC).
For this guide, these will be:
- gluster-endpoints-service.yaml
- gluster-endpoints.yaml
- gluster-pv.yaml
- gluster-pvc.yaml
A user with the cluster-admin role binding.
- For this guide, that user is admin.
All oc commands are executed on the master node as the admin user.
16.5.3. Create the Gluster Persistent Volume
First, make the Gluster volume available to the registry.
$ oc create -f gluster-endpoints-service.yaml
$ oc create -f gluster-endpoints.yaml
$ oc create -f gluster-pv.yaml
$ oc create -f gluster-pvc.yaml
Check to make sure the PV and PVC were created and bound successfully. The expected output should resemble the following. Note that the PVC status is Bound, indicating that it has bound to the PV.
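A sketch of that output, assuming the object names from the definition files listed above (your capacities, claims, and ages will differ):

$ oc get pv
NAME         LABELS    CAPACITY   ACCESSMODES   STATUS    CLAIM                   REASON    AGE
gluster-pv   <none>    1Gi        RWX           Bound     default/gluster-claim             37s

$ oc get pvc
NAME            LABELS    STATUS    VOLUME       CAPACITY   ACCESSMODES   AGE
gluster-claim   <none>    Bound     gluster-pv   1Gi        RWX           24s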
If either the PVC or PV failed to create or the PVC failed to bind, refer back to the GlusterFS Persistent Storage guide. Do not proceed until they initialize and the PVC status is Bound.
16.5.4. Attach the PVC to the Docker Registry
Before moving forward, ensure that the docker-registry service is running.
$ oc get svc
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
docker-registry 172.30.167.194 <none> 5000/TCP docker-registry=default 18m
If either the docker-registry service or its associated pod is not running, refer back to the docker-registry setup instructions for troubleshooting before continuing.
Then, attach the PVC:
$ oc volume deploymentconfigs/docker-registry --add --name=v1 -t pvc \
--claim-name=gluster-claim --overwrite
Deploying a Docker Registry provides more information on using the Docker registry.
16.5.5. Known Issues
16.5.5.1. Pod Cannot Resolve the Volume Host
In non-production cases where the dnsmasq server is located on the same node as the OpenShift Enterprise master service, pods might not resolve to the host machines when mounting the volume, causing errors in the docker-registry-1-deploy pod. This can happen when dnsmasq.service fails to start because of a collision with OpenShift DNS on port 53. To run the DNS server on the master host, some configurations need to be changed.
In /etc/dnsmasq.conf, add:
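The exact directives are not included in this extract; as a rough sketch, the intent is to forward cluster-internal names to the OpenShift DNS server (moved to port 8053 below) while letting dnsmasq answer other names, including entries from /etc/hosts, which it reads by default. The cluster domain and service network shown here are assumptions; adjust them for your environment:

# Forward cluster-internal queries to the OpenShift (SkyDNS) server,
# which is reconfigured below to listen on 127.0.0.1:8053.
server=/cluster.local/127.0.0.1#8053
# Forward reverse lookups for the service network as well
# (default SDN service network shown; change as needed).
server=/30.172.in-addr.arpa/127.0.0.1#8053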
With these settings, dnsmasq will pull from the /etc/hosts file on the master node.
Add the appropriate host names and IPs for all necessary hosts.
In master-config.yaml, change bindAddress to:
dnsConfig:
  bindAddress: 127.0.0.1:8053
When pods are created, they receive a copy of /etc/resolv.conf, which typically contains only the master DNS server so they can resolve external DNS requests. To enable internal DNS resolution, insert the dnsmasq server at the top of the server list. This way, dnsmasq will attempt to resolve requests internally first.
In /etc/resolv.conf on all schedulable nodes:
nameserver 192.168.1.100
nameserver 192.168.1.1
Once the configurations are changed, restart the OpenShift Enterprise master and dnsmasq services.
$ systemctl restart atomic-openshift-master
$ systemctl restart dnsmasq
16.6. Mounting Volumes on Privileged Pods
16.6.1. Overview
Persistent volumes can be mounted to pods with the privileged security context constraint (SCC) attached.
While this topic uses GlusterFS as a sample use-case for mounting volumes onto privileged pods, it can be adapted to use any supported storage plug-in.
16.6.2. Prerequisites
- An existing Gluster volume.
- glusterfs-fuse installed on all hosts.
Definitions for GlusterFS:
- Endpoints and services: gluster-endpoints-service.yaml and gluster-endpoints.yaml
- Persistent volumes: gluster-pv.yaml
- Persistent volume claims: gluster-pvc.yaml
- Privileged pods: gluster-nginx-pod.yaml
- A user with the cluster-admin role binding. For this guide, that user is called admin.
16.6.3. Creating the Persistent Volume
Creating the PersistentVolume makes the storage accessible to users, regardless of projects.
As the admin, create the service, endpoint object, and persistent volume:
$ oc create -f gluster-endpoints-service.yaml
$ oc create -f gluster-endpoints.yaml
$ oc create -f gluster-pv.yaml
Verify that the objects were created:
$ oc get svc
NAME              CLUSTER_IP      EXTERNAL_IP   PORT(S)   SELECTOR   AGE
gluster-cluster   172.30.151.58   <none>        1/TCP     <none>     24s
$ oc get ep
NAME              ENDPOINTS                            AGE
gluster-cluster   192.168.59.102:1,192.168.59.103:1    2m
$ oc get pv
NAME                     LABELS   CAPACITY   ACCESSMODES   STATUS      CLAIM   REASON   AGE
gluster-default-volume   <none>   2Gi        RWX           Available                    2d
16.6.4. Creating a Regular User
Adding a regular user to the privileged SCC (or to a group given access to the SCC) allows them to run privileged pods:
- As the admin, add a user to the SCC:
$ oadm policy add-scc-to-user privileged <username>
- Log in as the regular user:
$ oc login -u <username> -p <password>
- Then, create a new project:
$ oc new-project <project_name>
16.6.5. Creating the Persistent Volume Claim
As a regular user, create the PersistentVolumeClaim to access the volume:
$ oc create -f gluster-pvc.yaml -n <project_name>
Define your pod to access the claim:
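The pod definition file (gluster-nginx-pod.yaml) is not reproduced in this extract; a sketch of a privileged pod that mounts the claim follows. The claim name is assumed to match the PVC created from gluster-pvc.yaml, and the mount path matches the verification step later in this topic:

apiVersion: v1
kind: Pod
metadata:
  name: gluster-nginx-pod
spec:
  containers:
  - name: gluster-nginx-pod
    image: nginx
    securityContext:
      privileged: true             # requires the privileged SCC granted above
    volumeMounts:
    - name: gluster-volume
      mountPath: /mnt/gluster      # matches the mount shown in the verification step
  volumes:
  - name: gluster-volume
    persistentVolumeClaim:
      claimName: gluster-claim     # assumed name of the PVC from gluster-pvc.yaml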
Upon pod creation, the mount directory is created and the volume is attached to that mount point.
As the regular user, create the pod from the definition:
$ oc create -f gluster-nginx-pod.yaml
Verify that the pod was created successfully:
$ oc get pods
NAME                READY     STATUS    RESTARTS   AGE
gluster-nginx-pod   1/1       Running   0          36m
It can take several minutes for the pod to be created.
16.6.6. Verifying the Setup
16.6.6.1. Checking the Pod SCC
Export the pod configuration:
$ oc export pod <pod_name>
Examine the output. Check that openshift.io/scc has the value of privileged:
Example 16.15. Export Snippet
metadata:
  annotations:
    openshift.io/scc: privileged
16.6.6.2. Verifying the Mount
Access the pod and check that the volume is mounted:
$ oc rsh <pod_name>
[root@gluster-nginx-pvc /]# mount
Examine the output for the Gluster volume:
Example 16.16. Volume Mount
192.168.59.102:gv0 on /mnt/gluster type fuse.gluster (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)