Chapter 18. Persistent Storage Examples


18.1. Overview

The following sections provide detailed, comprehensive instructions on setting up and configuring common storage use cases. These examples cover both the administration of persistent volumes and their security, and how to claim against the volumes as a user of the system.

18.2. Sharing an NFS mount across two persistent volume claims

18.2.1. Overview

The following use case describes how a cluster administrator wanting to leverage shared storage for use by two separate containers would configure the solution. This example highlights the use of NFS, but can easily be adapted to other shared storage types, such as GlusterFS. In addition, this example will show configuration of pod security as it relates to shared storage.

Persistent Storage Using NFS provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using NFS as persistent storage. This topic shows an end-to-end example of using an existing NFS cluster as an OpenShift Enterprise persistent store, and assumes that an NFS server and exports already exist in your OpenShift Enterprise infrastructure.

Note

All oc commands are executed on the OpenShift Enterprise master host.

18.2.2. Creating the Persistent Volume

Before creating the PV object in OpenShift Enterprise, the persistent volume (PV) file is defined:

Example 18.1. Persistent Volume Object Definition Using NFS

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv 1
spec:
  capacity:
    storage: 1Gi 2
  accessModes:
    - ReadWriteMany 3
  persistentVolumeReclaimPolicy: Retain 4
  nfs: 5
    path: /opt/nfs 6
    server: nfs.f22 7
    readOnly: false
1
The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
2
The amount of storage allocated to this volume.
3
accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
4
A volume reclaim policy of Retain indicates that the volume will be preserved after the pods accessing it terminate.
5
This defines the volume type being used, in this case the NFS plug-in.
6
This is the NFS mount path.
7
This is the NFS server. This can also be specified by IP address.
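
This example assumes that the /opt/nfs export already exists on the NFS server. If it does not, a minimal sketch of creating it on the NFS server is shown below; the export options are illustrative assumptions, so adjust them to your environment:

# mkdir -p /opt/nfs
# echo "/opt/nfs *(rw,sync)" >> /etc/exports
# exportfs -ra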

Save the PV definition to a file, for example nfs-pv.yaml, and create the persistent volume:

# oc create -f nfs-pv.yaml
persistentvolume "nfs-pv" created

Verify that the persistent volume was created:

# oc get pv
NAME         LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
nfs-pv       <none>    1Gi        RWX           Available                       37s

18.2.3. Creating the Persistent Volume Claim

A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC. This is the use case we are highlighting in this example.

Example 18.2. PVC Object Definition

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc  1
spec:
  accessModes:
  - ReadWriteMany      2
  resources:
     requests:
       storage: 1Gi    3
1
The claim name is referenced by the pod under its volumes section.
2
As mentioned above for PVs, the accessModes do not enforce access rights, but rather act as labels to match a PV to a PVC.
3
This claim will look for PVs offering 1Gi or greater capacity.

Save the PVC definition to a file, for example nfs-pvc.yaml, and create the PVC:

# oc create -f nfs-pvc.yaml
persistentvolumeclaim "nfs-pvc" created

Verify that the PVC was created and bound to the expected PV:

# oc get pvc
NAME            LABELS    STATUS    VOLUME       CAPACITY   ACCESSMODES   AGE
nfs-pvc         <none>    Bound     nfs-pv       1Gi        RWX           24s
                                    1
1
The claim, nfs-pvc, was bound to the nfs-pv PV.

18.2.4. Ensuring NFS Volume Access

Access is necessary to a node in the NFS cluster. On this node, examine the NFS export mount:

[root@nfs nfs]# ls -lZ /opt/nfs/
total 8
-rw-r--r--. 1 root 100003  system_u:object_r:usr_t:s0     10 Oct 12 23:27 test2b
              1
                     2
1
The owner has ID 0.
2
The group has ID 100003.

In order to access the NFS mount, the container must match the SELinux label, and either run with a UID of 0, or with 100003 in its supplemental groups. It is recommended to gain access to the volume by matching the NFS mount’s groups, which are defined in the pod definition below.
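
For example, a minimal sketch (assuming root access on the NFS server and the /opt/nfs export from this example) of setting the export’s group ownership and permissions so that a pod running with supplemental group 100003 can write to it:

# chgrp 100003 /opt/nfs
# chmod 775 /opt/nfs
# ls -lZd /opt/nfs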

By default, SELinux does not allow writing from a pod to a remote NFS server. To enable writing to NFS volumes with SELinux enforcing on each node, run:

# setsebool -P virt_sandbox_use_nfs on
# setsebool -P virt_use_nfs on
Note

The virt_sandbox_use_nfs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.
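
A quick, hedged check on a node that the booleans exist, and a yum-based install of the package if they do not:

# getsebool virt_sandbox_use_nfs virt_use_nfs
# yum install -y docker-selinux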

18.2.5. Creating the Pod

A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the NFS volume for read-write access:

Example 18.3. Pod Object Definition

apiVersion: v1
kind: Pod
metadata:
  name: hello-openshift-nfs-pod 1
  labels:
    name: hello-openshift-nfs-pod
spec:
  containers:
    - name: hello-openshift-nfs-pod
      image: openshift/hello-openshift 2
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - name: nfsvol 3
          mountPath: /usr/share/nginx/html 4
  securityContext:
      supplementalGroups: [100003] 5
      privileged: false
  volumes:
    - name: nfsvol
      persistentVolumeClaim:
        claimName: nfs-pvc 6
1
The name of this pod as displayed by oc get pod.
2
The image run by this pod.
3
The name of the volume. This name must be the same in both the containers and volumes sections.
4
The mount path as seen in the container.
5
The group ID to be assigned to the container.
6
The PVC that was created in the previous step.

Save the pod definition to a file, for example nfs.yaml, and create the pod:

# oc create -f nfs.yaml
pod "hello-openshift-nfs-pod" created

Verify that the pod was created:

# oc get pods
NAME                          READY     STATUS    RESTARTS   AGE
hello-openshift-nfs-pod       1/1       Running   0          4s

More details are shown in the oc describe pod command:

[root@ose70 nfs]# oc describe pod hello-openshift-nfs-pod
Name:				hello-openshift-nfs-pod
Namespace:			default 1
Image(s):			fedora/S3
Node:				ose70.rh7/192.168.234.148 2
Start Time:			Mon, 21 Mar 2016 09:59:47 -0400
Labels:				name=hello-openshift-nfs-pod
Status:				Running
Reason:
Message:
IP:				10.1.0.4
Replication Controllers:	<none>
Containers:
  hello-openshift-nfs-pod:
    Container ID:	docker://a3292104d6c28d9cf49f440b2967a0fc5583540fc3b062db598557b93893bc6f
    Image:		fedora/S3
    Image ID:		docker://403d268c640894cbd76d84a1de3995d2549a93af51c8e16e89842e4c3ed6a00a
    QoS Tier:
      cpu:		BestEffort
      memory:		BestEffort
    State:		Running
      Started:		Mon, 21 Mar 2016 09:59:49 -0400
    Ready:		True
    Restart Count:	0
    Environment Variables:
Conditions:
  Type		Status
  Ready 	True
Volumes:
  nfsvol:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	nfs-pvc 3
    ReadOnly:	false
  default-token-a06zb:
    Type:	Secret (a secret that should populate this volume)
    SecretName:	default-token-a06zb
Events: 4
  FirstSeen	LastSeen	Count	From			SubobjectPath				                      Reason		Message
  ─────────	────────	─────	────			─────────────				                      ──────		───────
  4m		4m		1	{scheduler }							                                      Scheduled	Successfully assigned hello-openshift-nfs-pod to ose70.rh7
  4m		4m		1	{kubelet ose70.rh7}	implicitly required container POD	          Pulled		Container image "openshift3/ose-pod:v3.1.0.4" already present on machine
  4m		4m		1	{kubelet ose70.rh7}	implicitly required container POD	          Created		Created with docker id 866a37108041
  4m		4m		1	{kubelet ose70.rh7}	implicitly required container POD	          Started		Started with docker id 866a37108041
  4m		4m		1	{kubelet ose70.rh7}	spec.containers{hello-openshift-nfs-pod}		Pulled		Container image "fedora/S3" already present on machine
  4m		4m		1	{kubelet ose70.rh7}	spec.containers{hello-openshift-nfs-pod}		Created		Created with docker id a3292104d6c2
  4m		4m		1	{kubelet ose70.rh7}	spec.containers{hello-openshift-nfs-pod}		Started		Started with docker id a3292104d6c2
1
The project (namespace) name.
2
The IP address of the OpenShift Enterprise node running the pod.
3
The PVC name used by the pod.
4
The list of events resulting in the pod being launched and the NFS volume being mounted. The container will not start correctly if the volume cannot mount.
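
If the volume fails to mount, it can help to confirm the NFS mount directly on the node that is running the pod (not on the master). For example, a hedged check using the NFS server name from this example:

# mount | grep nfs.f22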

More internal information, including the SCC used to authorize the pod, the pod’s user and group IDs, and the SELinux label, is shown in the output of the oc get pod <name> -o yaml command:

[root@ose70 nfs]# oc get pod hello-openshift-nfs-pod -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: restricted 1
  creationTimestamp: 2016-03-21T13:59:47Z
  labels:
    name: hello-openshift-nfs-pod
  name: hello-openshift-nfs-pod
  namespace: default 2
  resourceVersion: "2814411"
  selfLink: /api/v1/namespaces/default/pods/hello-openshift-nfs-pod
  uid: 2c22d2ea-ef6d-11e5-adc7-000c2900f1e3
spec:
  containers:
  - image: fedora/S3
    imagePullPolicy: IfNotPresent
    name: hello-openshift-nfs-pod
    ports:
    - containerPort: 80
      name: web
      protocol: TCP
    resources: {}
    securityContext:
      privileged: false
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /usr/share/S3/html
      name: nfsvol
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-a06zb
      readOnly: true
  dnsPolicy: ClusterFirst
  host: ose70.rh7
  imagePullSecrets:
  - name: default-dockercfg-xvdew
  nodeName: ose70.rh7
  restartPolicy: Always
  securityContext:
    supplementalGroups:
    - 100003 3
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: nfsvol
    persistentVolumeClaim:
      claimName: nfs-pvc 4
  - name: default-token-a06zb
    secret:
      secretName: default-token-a06zb
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2016-03-21T13:59:49Z
    status: "True"
    type: Ready
  containerStatuses:
  - containerID: docker://a3292104d6c28d9cf49f440b2967a0fc5583540fc3b062db598557b93893bc6f
    image: fedora/S3
    imageID: docker://403d268c640894cbd76d84a1de3995d2549a93af51c8e16e89842e4c3ed6a00a
    lastState: {}
    name: hello-openshift-nfs-pod
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2016-03-21T13:59:49Z
  hostIP: 192.168.234.148
  phase: Running
  podIP: 10.1.0.4
  startTime: 2016-03-21T13:59:47Z
1
The SCC used by the pod.
2
The project (namespace) name.
3
The supplemental group ID for the pod (all containers).
4
The PVC name used by the pod.

18.2.6. Creating an Additional Pod to Reference the Same PVC

This pod definition, created in the same namespace, uses a different container. However, we can use the same backing storage by specifying the claim name in the volumes section below:

Example 18.4. Pod Object Definition

apiVersion: v1
kind: Pod
metadata:
  name: busybox-nfs-pod 1
  labels:
    name: busybox-nfs-pod
spec:
  containers:
  - name: busybox-nfs-pod
    image: busybox 2
    command: ["sleep", "60000"]
    volumeMounts:
    - name: nfsvol-2 3
      mountPath: /usr/share/busybox  4
      readOnly: false
  securityContext:
    supplementalGroups: [100003] 5
    privileged: false
  volumes:
  - name: nfsvol-2
    persistentVolumeClaim:
      claimName: nfs-pvc 6
1
The name of this pod as displayed by oc get pod.
2
The image run by this pod.
3
The name of the volume. This name must be the same in both the containers and volumes sections.
4
The mount path as seen in the container.
5
The group ID to be assigned to the container.
6
The PVC that was created earlier and is also being used by a different container.

Save the pod definition to a file, for example nfs-2.yaml, and create the pod:

# oc create -f nfs-2.yaml
pod "busybox-nfs-pod" created

Verify that the pod was created:

# oc get pods
NAME                READY     STATUS    RESTARTS   AGE
busybox-nfs-pod     1/1       Running   0          3s

More details are shown in the oc describe pod command:

[root@ose70 nfs]# oc describe pod busybox-nfs-pod
Name:				busybox-nfs-pod
Namespace:			default
Image(s):			busybox
Node:				ose70.rh7/192.168.234.148
Start Time:			Mon, 21 Mar 2016 10:19:46 -0400
Labels:				name=busybox-nfs-pod
Status:				Running
Reason:
Message:
IP:				10.1.0.5
Replication Controllers:	<none>
Containers:
  busybox-nfs-pod:
    Container ID:	docker://346d432e5a4824ebf5a47fceb4247e0568ecc64eadcc160e9bab481aecfb0594
    Image:		busybox
    Image ID:		docker://17583c7dd0dae6244203b8029733bdb7d17fccbb2b5d93e2b24cf48b8bfd06e2
    QoS Tier:
      cpu:		BestEffort
      memory:		BestEffort
    State:		Running
      Started:		Mon, 21 Mar 2016 10:19:48 -0400
    Ready:		True
    Restart Count:	0
    Environment Variables:
Conditions:
  Type		Status
  Ready 	True
Volumes:
  nfsvol-2:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	nfs-pvc
    ReadOnly:	false
  default-token-32d2z:
    Type:	Secret (a secret that should populate this volume)
    SecretName:	default-token-32d2z
Events:
  FirstSeen	LastSeen	Count	From			SubobjectPath				Reason		Message
  ─────────	────────	─────	────			─────────────				──────		───────
  4m		4m		1	{scheduler }							Scheduled	Successfully assigned busybox-nfs-pod to ose70.rh7
  4m		4m		1	{kubelet ose70.rh7}	implicitly required container POD	Pulled		Container image "openshift3/ose-pod:v3.1.0.4" already present on machine
  4m		4m		1	{kubelet ose70.rh7}	implicitly required container POD	Created		Created with docker id 249b7d7519b1
  4m		4m		1	{kubelet ose70.rh7}	implicitly required container POD	Started		Started with docker id 249b7d7519b1
  4m		4m		1	{kubelet ose70.rh7}	spec.containers{busybox-nfs-pod}	Pulled		Container image "busybox" already present on machine
  4m		4m		1	{kubelet ose70.rh7}	spec.containers{busybox-nfs-pod}	Created		Created with docker id 346d432e5a48
  4m		4m		1	{kubelet ose70.rh7}	spec.containers{busybox-nfs-pod}	Started		Started with docker id 346d432e5a48

As you can see, both containers are using the same storage claim that is attached to the same NFS mount on the back end.
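
To confirm that the storage really is shared, a simple hedged check (the file name is arbitrary) is to write a file from the busybox pod and then verify that it appears in the export on the NFS server, and under the other pod’s mount path:

# oc exec busybox-nfs-pod -- touch /usr/share/busybox/shared-test
# oc exec busybox-nfs-pod -- ls -l /usr/share/busybox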

18.3. Complete Example Using Ceph RBD

18.3.1. Overview

This topic provides an end-to-end example of using an existing Ceph cluster as an OpenShift Enterprise persistent store. It is assumed that a working Ceph cluster is already set up. If not, consult the Overview of Red Hat Ceph Storage.

Persistent Storage Using Ceph Rados Block Device provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using Ceph RBD as persistent storage.

Note

All oc …​ commands are executed on the OpenShift Enterprise master host.

18.3.2. Installing the ceph-common Package

The ceph-common library must be installed on all schedulable OpenShift Enterprise nodes:

Note

The OpenShift Enterprise all-in-one host is not often used to run pod workloads and, thus, is not included as a schedulable node.

# yum install -y ceph-common

18.3.3. Creating the Ceph Secret

The ceph auth get-key command is run on a Ceph MON node to display the key value for the client.admin user:

Example 18.5. Ceph Secret Definition

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ== 1
1
This base64-encoded key is generated on one of the Ceph MON nodes by running ceph auth get-key client.admin | base64, then copying the output and pasting it as the secret key’s value.
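
For example, a minimal sketch of generating the value on a MON node (client.admin is the user noted above):

# ceph auth get-key client.admin | base64
QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ==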

Save the secret definition to a file, for example ceph-secret.yaml, then create the secret:

$ oc create -f ceph-secret.yaml
secret "ceph-secret" created

Verify that the secret was created:

# oc get secret ceph-secret
NAME          TYPE      DATA      AGE
ceph-secret   Opaque    1         23d

18.3.4. Creating the Persistent Volume

Next, before creating the PV object in OpenShift Enterprise, define the persistent volume file:

Example 18.6. Persistent Volume Object Definition Using Ceph RBD

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv     1
spec:
  capacity:
    storage: 2Gi    2
  accessModes:
    - ReadWriteOnce 3
  rbd:              4
    monitors:       5
      - 192.168.122.133:6789
    pool: rbd
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret 6
    fsType: ext4        7
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
1
The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
2
The amount of storage allocated to this volume.
3
accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control. All block storage is defined to be single user (non-shared storage).
4
This defines the volume type being used. In this case, the rbd plug-in is defined.
5
This is an array of Ceph monitor IP addresses and ports.
6
This is the Ceph secret, defined above. It is used to create a secure connection from OpenShift Enterprise to the Ceph server.
7
This is the file system type mounted on the Ceph RBD block device.
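
The PV above references an RBD image named ceph-image in the rbd pool. This example assumes that image already exists; if it does not, a hedged sketch of creating it on the Ceph cluster (sized 2048 MB to match the 2Gi capacity) is:

# rbd create rbd/ceph-image --size 2048
# rbd info rbd/ceph-image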

Save the PV definition to a file, for example ceph-pv.yaml, and create the persistent volume:

# oc create -f ceph-pv.yaml
persistentvolume "ceph-pv" created

Verify that the persistent volume was created:

# oc get pv
NAME                     LABELS    CAPACITY     ACCESSMODES   STATUS      CLAIM     REASON    AGE
ceph-pv                  <none>    2147483648   RWO           Available                       2s

18.3.5. Creating the Persistent Volume Claim

A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.

Example 18.7. PVC Object Definition

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:     1
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi 2
1
As mentioned above for PVs, the accessModes do not enforce access right, but rather act as labels to match a PV to a PVC.
2
This claim will look for PVs offering 2Gi or greater capacity.

Save the PVC definition to a file, for example ceph-claim.yaml, and create the PVC:

# oc create -f ceph-claim.yaml
persistentvolumeclaim "ceph-claim" created

Verify that the PVC was created and bound to the expected PV:

# oc get pvc
NAME         LABELS    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
ceph-claim   <none>    Bound     ceph-pv   2Gi        RWO           21s
                                 1
1
The claim, ceph-claim, was bound to the ceph-pv PV.

18.3.6. Creating the Pod

A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Ceph RBD volume for read-write access:

Example 18.8. Pod Object Definition

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1           1
spec:
  containers:
  - name: ceph-busybox
    image: busybox          2
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-vol1       3
      mountPath: /usr/share/busybox 4
      readOnly: false
  volumes:
  - name: ceph-vol1         5
    persistentVolumeClaim:
      claimName: ceph-claim 6
1
The name of this pod as displayed by oc get pod.
2
The image run by this pod. In this case, we are telling busybox to sleep.
3 5
The name of the volume. This name must be the same in both the containers and volumes sections.
4
The mount path as seen in the container.
6
The PVC that is bound to the Ceph RBD cluster.

Save the pod definition to a file, for example ceph-pod1.yaml, and create the pod:

# oc create -f ceph-pod1.yaml
pod "ceph-pod1" created

Verify that the pod was created:

# oc get pod
NAME        READY     STATUS    RESTARTS   AGE
ceph-pod1   1/1       Running   0          2m
                      1
1
After a minute or so, the pod will be in the Running state.
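
As an optional, hedged check, the busybox image provides enough tooling to confirm that the RBD-backed file system is mounted inside the container:

# oc exec ceph-pod1 -- df -h /usr/share/busybox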

18.3.7. Defining Group and Owner IDs (Optional)

When using block storage, such as Ceph RBD, the physical block storage is managed by the pod. The group ID defined in the pod becomes the group ID of both the Ceph RBD mount inside the container, and the group ID of the actual storage itself. Thus, it is usually unnecessary to define a group ID in the pod specification. However, if a group ID is desired, it can be defined using fsGroup, as shown in the following pod definition fragment:

Example 18.9. Group ID Pod Definition

...
spec:
  containers:
    - name:
    ...
  securityContext: 1
    fsGroup: 7777  2
...
1
securityContext must be defined at the pod level, not under a specific container.
2
All containers in the pod will have the same fsGroup ID.
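
If a pod is created with this fragment applied, a quick hedged verification (assuming the container image provides the id command, as busybox does) is to check the groups of the container process; the fsGroup value, 7777 in this fragment, should appear in the groups list:

# oc exec <pod_name> -- id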

18.4. Complete Example Using GlusterFS

18.4.1. Overview

This topic provides an end-to-end example of how to use an existing Gluster cluster as an OpenShift Enterprise persistent store. It is assumed that a working Gluster cluster is already set up. If not, consult the Red Hat Gluster Storage Administration Guide.

Persistent Storage Using GlusterFS provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using GlusterFS as persistent storage.

Note

All oc …​ commands are executed on the OpenShift Enterprise master host.

18.4.2. Installing the glusterfs-fuse Package

The glusterfs-fuse library must be installed on all schedulable OpenShift Enterprise nodes:

# yum install -y glusterfs-fuse
Note

The OpenShift Enterprise all-in-one host is often not used to run pod workloads and, thus, is not included as a schedulable node.

18.4.3. Creating the Gluster Endpoints and Gluster Service for Persistence

The named endpoints define each node in the Gluster-trusted storage pool:

Example 18.10. GlusterFS Endpoint Definition

apiVersion: v1
kind: Endpoints
metadata:
  name: gluster-endpoints 1
subsets:
- addresses:              2
  - ip: 192.168.122.21
  ports:                  3
  - port: 1
    protocol: TCP
- addresses:
  - ip: 192.168.122.22
  ports:
  - port: 1
    protocol: TCP
1
The name of the endpoints is used in the PV definition below.
2
An array of IP addresses for each node in the Gluster pool. Currently, host names are not supported.
3
The port numbers are ignored, but must be legal port numbers. The value 1 is commonly used.

Save the endpoints definition to a file, for example gluster-endpoints.yaml, then create the endpoints object:

# oc create -f gluster-endpoints.yaml
endpoints "gluster-endpoints" created

Verify that the endpoints were created:

# oc get endpoints gluster-endpoints
NAME                ENDPOINTS                           AGE
gluster-endpoints   192.168.122.21:1,192.168.122.22:1   1m
Note

To persist the Gluster endpoints, you also need to create a service.

Example 18.11. GlusterFS Service Definition

apiVersion: v1
kind: Service
metadata:
  name: gluster-service 1
spec:
  ports:
  - port: 1 2
1
The name of the service.
2
The port should match the same port used in the endpoints.

Save the service definition to a file, for example gluster-service.yaml, then create the service object:

# oc create -f gluster-service.yaml
endpoints "gluster-service" created

Verify that the service was created:

# oc get service gluster-service
NAME                CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
gluster-service   10.0.0.130   <none>        1/TCP     9s

18.4.4. Creating the Persistent Volume

Next, before creating the PV object in OpenShift Enterprise, define the persistent volume file:

Example 18.12. Persistent Volume Object Definition Using GlusterFS

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv   1
spec:
  capacity:
    storage: 1Gi     2
  accessModes:
  - ReadWriteMany    3
  glusterfs:         4
    endpoints: gluster-endpoints        5
    path: /HadoopVol 6
    readOnly: false
  persistentVolumeReclaimPolicy: Retain 7
1
The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
2
The amount of storage allocated to this volume.
3
accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
4
This defines the volume type being used. In this case, the glusterfs plug-in is defined.
5
This references the endpoints named above.
6
This is the Gluster volume name, preceded by /.
7
A volume reclaim policy of retain indicates that the volume will be preserved after the pods accessing it terminate. Accepted values include Retain, Delete, and Recycle.
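
This example assumes that the HadoopVol Gluster volume already exists and is started. A quick hedged check, run on any node in the trusted storage pool, is:

# gluster volume info HadoopVol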

Save the PV definition to a file, for example gluster-pv.yaml, and create the persistent volume:

# oc create -f gluster-pv.yaml
persistentvolume "gluster-pv" created

Verify that the persistent volume was created:

# oc get pv
NAME         LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
gluster-pv   <none>    1Gi        RWX           Available                       37s

18.4.5. Creating the Persistent Volume Claim

A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.

Example 18.13. PVC Object Definition

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim  1
spec:
  accessModes:
  - ReadWriteMany      2
  resources:
     requests:
       storage: 1Gi    3
1
The claim name is referenced by the pod under its volumes section.
2
As mentioned above for PVs, the accessModes do not enforce access rights, but rather act as labels to match a PV to a PVC.
3
This claim will look for PVs offering 1Gi or greater capacity.

Save the PVC definition to a file, for example gluster-claim.yaml, and create the PVC:

# oc create -f gluster-claim.yaml
persistentvolumeclaim "gluster-claim" created

Verify the PVC was created and bound to the expected PV:

# oc get pvc
NAME            LABELS    STATUS    VOLUME       CAPACITY   ACCESSMODES   AGE
gluster-claim   <none>    Bound     gluster-pv   1Gi        RWX           24s
                                    1
1
The claim was bound to the gluster-pv PV.

18.4.6. Defining GlusterFS Volume Access

Access is necessary to a node in the Gluster-trusted storage pool. On this node, examine the glusterfs-fuse mount:

# ls -lZ /mnt/glusterfs/
drwxrwx---. yarn hadoop system_u:object_r:fusefs_t:s0    HadoopVol

# id yarn
uid=592(yarn) gid=590(hadoop) groups=590(hadoop)
    1
                  2
1
The owner has ID 592.
2
The group has ID 590.

In order to access the HadoopVol volume, the container must match the SELinux label, and either run with a UID of 592, or with 590 in its supplemental groups. It is recommended to gain access to the volume by matching the Gluster mount’s groups, which are defined in the pod definition below.
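
Once the pod from the next section is running, a quick hedged check that the supplemental group was applied is to inspect the container process’s groups; 590 should appear in the list:

# oc exec gluster-pod1 -- id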

By default, SELinux does not allow writing from a pod to a remote Gluster server. To enable writing to GlusterFS volumes with SELinux enforcing on each node, run:

# setsebool -P virt_sandbox_use_fusefs on
Note

The virt_sandbox_use_fusefs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.

18.4.7. Creating the Pod using NGINX Web Server image

A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Gluster volume for read-write access:

Note

The NGINX image may need to run in privileged mode to create the mount and run properly. An easy way to accomplish this is to add your user to the privileged Security Context Constraint (SCC):

$ oadm policy add-scc-to-user privileged myuser

Then, add privileged: true to the container’s securityContext: section of the YAML file (as seen in the example below).

Managing Security Context Constraints provides additional information regarding SCCs.

Example 18.14. Pod Object Definition using NGINX image

apiVersion: v1
kind: Pod
metadata:
  name: gluster-pod1
  labels:
    name: gluster-pod1   1
spec:
  containers:
  - name: gluster-pod1
    image: nginx       2
    ports:
    - name: web
      containerPort: 80
    securityContext:
      privileged: true
    volumeMounts:
    - name: gluster-vol1 3
      mountPath: /usr/share/nginx/html 4
      readOnly: false
  securityContext:
    supplementalGroups: [590]       5
  volumes:
  - name: gluster-vol1   6
    persistentVolumeClaim:
      claimName: gluster-claim      7
1
The name of this pod as displayed by oc get pod.
2
The image run by this pod. In this case, we are using a standard NGINX image.
3 6
The name of the volume. This name must be the same in both the containers and volumes sections.
4
The mount path as seen in the container.
5
The supplementalGroups ID (Linux group) to be assigned at the pod level. As discussed above, this should match the POSIX group permissions on the Gluster volume.
7
The PVC that is bound to the Gluster cluster.

Save the pod definition to a file, for example gluster-pod1.yaml, and create the pod:

# oc create -f gluster-pod1.yaml
pod "gluster-pod1" created

Verify the pod was created:

# oc get pod
NAME           READY     STATUS    RESTARTS   AGE
gluster-pod1   1/1       Running   0          31s

                         1
1
After a minute or so, the pod will be in the Running state.

More details are shown in the oc describe pod command:

# oc describe pod gluster-pod1
Name:			gluster-pod1
Namespace:		default  1
Security Policy:	privileged
Node:			ose1.rhs/192.168.122.251
Start Time:		Wed, 24 Aug 2016 12:37:45 -0400
Labels:			name=gluster-pod1
Status:			Running
IP:			172.17.0.2  2
Controllers:		<none>
Containers:
  gluster-pod1:
    Container ID:	docker://e67ed01729e1dc7369c5112d07531a27a7a02a7eb942f17d1c5fce32d8c31a2d
    Image:		nginx
    Image ID:		docker://sha256:4efb2fcdb1ab05fb03c9435234343c1cc65289eeb016be86193e88d3a5d84f6b
    Port:		80/TCP
    State:		Running
      Started:		Wed, 24 Aug 2016 12:37:52 -0400
    Ready:		True
    Restart Count:	0
    Volume Mounts:
      /usr/share/nginx/html/test from glustervol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-1n70u (ro)
    Environment Variables:	<none>
Conditions:
  Type		Status
  Initialized 	True
  Ready 	True
  PodScheduled 	True
Volumes:
  glustervol:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	gluster-claim  3
    ReadOnly:	false
  default-token-1n70u:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-1n70u
QoS Tier:	BestEffort
Events:    4
  FirstSeen	LastSeen	Count	From			SubobjectPath			Type		Reason		Message
  ---------	--------	-----	----			-------------			--------	------		-------
  10s		10s		1	{default-scheduler }					Normal		Scheduled	Successfully assigned gluster-pod1 to ose1.rhs
  9s		9s		1	{kubelet ose1.rhs}	spec.containers{gluster-pod1}	Normal		Pulling		pulling image "nginx"
  4s		4s		1	{kubelet ose1.rhs}	spec.containers{gluster-pod1}	Normal		Pulled		Successfully pulled image "nginx"
  3s		3s		1	{kubelet ose1.rhs}	spec.containers{gluster-pod1}	Normal		Created		Created container with docker id e67ed01729e1
  3s		3s		1	{kubelet ose1.rhs}	spec.containers{gluster-pod1}	Normal		Started		Started container with docker id e67ed01729e1
1
The project (namespace) name.
2
The IP address of the OpenShift Enterprise node running the pod.
3
The PVC name used by the pod.
4
The list of events resulting in the pod being launched and the Gluster volume being mounted.
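
Because the NGINX document root is the Gluster mount, a simple hedged smoke test (the file name and content are arbitrary) is to write an index.html through the pod and request it from the pod IP shown above, from a host with access to the pod network:

# oc exec gluster-pod1 -- sh -c 'echo Hello Gluster > /usr/share/nginx/html/index.html'
# curl http://172.17.0.2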

More internal information, including the SCC used to authorize the pod, the pod’s user and group IDs, and the SELinux label, is shown in the output of the oc get pod <name> -o yaml command:

# oc get pod gluster-pod1 -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: privileged  1
  creationTimestamp: 2016-08-24T16:37:45Z
  labels:
    name: gluster-pod1
  name: gluster-pod1
  namespace: default  2
  resourceVersion: "482"
  selfLink: /api/v1/namespaces/default/pods/gluster-pod1
  uid: 15afda77-6a19-11e6-aadb-525400f7256d
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: gluster-pod1
    ports:
    - containerPort: 80
      name: web
      protocol: TCP
    resources: {}
    securityContext:
      privileged: true  3
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: glustervol
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-1n70u
      readOnly: true
  dnsPolicy: ClusterFirst
  host: ose1.rhs
  imagePullSecrets:
  - name: default-dockercfg-20xg9
  nodeName: ose1.rhs
  restartPolicy: Always
  securityContext:
    supplementalGroups:
    - 590   4
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: glustervol
    persistentVolumeClaim:
      claimName: gluster-claim  5
  - name: default-token-1n70u
    secret:
      secretName: default-token-1n70u
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2016-08-24T16:37:45Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2016-08-24T16:37:53Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2016-08-24T16:37:45Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://e67ed01729e1dc7369c5112d07531a27a7a02a7eb942f17d1c5fce32d8c31a2d
    image: nginx
    imageID: docker://sha256:4efb2fcdb1ab05fb03c9435234343c1cc65289eeb016be86193e88d3a5d84f6b
    lastState: {}
    name: gluster-pod1
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2016-08-24T16:37:52Z
  hostIP: 192.168.122.251
  phase: Running
  podIP: 172.17.0.2
  startTime: 2016-08-24T16:37:45Z
1
The SCC used by the pod.
2
The project (namespace) name.
3
The security context level requested, in this case privileged.
4
The supplemental group ID for the pod (all containers).
5
The PVC name used by the pod.

18.5. Backing Docker Registry with GlusterFS Storage

18.5.1. Overview

This topic reviews how to attach a GlusterFS persistent volume to the Docker Registry.

It is assumed that the Docker registry service has already been started and the Gluster volume has been created.

18.5.2. Prerequisites

Note

All oc commands are executed on the master node as the admin user.

18.5.3. Create the Gluster Persistent Volume

First, make the Gluster volume available to the registry.

$ oc create -f gluster-endpoints-service.yaml
$ oc create -f gluster-endpoints.yaml
$ oc create -f gluster-pv.yaml
$ oc create -f gluster-pvc.yaml

Check to make sure the PV and PVC were created and bound successfully. The expected output should resemble the following. Note that the PVC status is Bound, indicating that it has bound to the PV.

$ oc get pv
NAME         LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
gluster-pv   <none>    1Gi        RWX           Available                       37s
$ oc get pvc
NAME            LABELS    STATUS    VOLUME       CAPACITY   ACCESSMODES   AGE
gluster-claim   <none>    Bound     gluster-pv   1Gi        RWX           24s
Note

If either the PVC or PV failed to create or the PVC failed to bind, refer back to the GlusterFS Persistent Storage guide. Do not proceed until they initialize and the PVC status is Bound.

18.5.4. Attach the PVC to the Docker Registry

Before moving forward, ensure that the docker-registry service is running.

$ oc get svc
NAME              CLUSTER_IP       EXTERNAL_IP   PORT(S)                 SELECTOR                  AGE
docker-registry   172.30.167.194   <none>        5000/TCP                docker-registry=default   18m
Note

If either the docker-registry service or its associated pod is not running, refer back to the docker-registry setup instructions for troubleshooting before continuing.

Then, attach the PVC:

$ oc volume deploymentconfigs/docker-registry --add --name=v1 -t pvc \
     --claim-name=gluster-claim --overwrite
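
Adding the volume triggers a new deployment of the registry. A hedged way to confirm that the claim was attached and that the new registry pod is running:

$ oc volume deploymentconfigs/docker-registry
$ oc get pods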

Deploying a Docker Registry provides more information on using the Docker registry.

18.5.5. Known Issues

18.5.5.1. Pod Cannot Resolve the Volume Host

In non-production cases where the dnsmasq server is located on the same node as the OpenShift Enterprise master service, pods might not resolve to the host machines when mounting the volume, causing errors in the docker-registry-1-deploy pod. This can happen when dnsmasq.service fails to start because of a collision with OpenShift Enterprise DNS on port 53. To run the DNS server on the master host, some configurations need to be changed.

In /etc/dnsmasq.conf, add:

# Reverse DNS record for master
host-record=master.example.com,<master-IP>
# Wildcard DNS for OpenShift Applications - Points to Router
address=/apps.example.com/<master-IP>
# Forward .local queries to SkyDNS
server=/local/127.0.0.1#8053
# Forward reverse queries for service network to SkyDNS.
# This is for default OpenShift SDN - change as needed.
server=/17.30.172.in-addr.arpa/127.0.0.1#8053

With these settings, dnsmasq will pull from the /etc/hosts file on the master node.

Add the appropriate host names and IPs for all necessary hosts.
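
For example, a hedged sketch of /etc/hosts entries on the master; the node names and addresses are placeholders for your environment, and 192.168.1.100 corresponds to the internal DNS server used below:

192.168.1.100   master.example.com
192.168.1.101   node1.example.com
192.168.1.102   node2.example.com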

In master-config.yaml, change bindAddress to:

dnsConfig:
 bindAddress: 127.0.0.1:8053

When pods are created, they receive a copy of /etc/resolv.conf, which typically contains only the master DNS server so they can resolve external DNS requests. To enable internal DNS resolution, insert the dnsmasq server at the top of the server list. This way, dnsmasq will attempt to resolve requests internally first.

In /etc/resolv.conf on all scheduled nodes:

nameserver 192.168.1.100  1
nameserver 192.168.1.1    2
1
Add the internal DNS server.
2
Pre-existing external DNS server.

Once the configurations are changed, restart the OpenShift Enterprise master and dnsmasq services.

$ systemctl restart atomic-openshift-master
$ systemctl restart dnsmasq

18.6. Mounting Volumes on Privileged Pods

18.6.1. Overview

Persistent volumes can be mounted to pods with the privileged security context constraint (SCC) attached.

Note

While this topic uses GlusterFS as a sample use-case for mounting volumes onto privileged pods, it can be adapted to use any supported storage plug-in.

18.6.2. Prerequisites

18.6.3. Creating the Persistent Volume

Creating the PersistentVolume makes the storage accessible to users, regardless of projects.

  1. As the admin, create the service, endpoint object, and persistent volume:

    $ oc create -f gluster-endpoints-service.yaml
    $ oc create -f gluster-endpoints.yaml
    $ oc create -f gluster-pv.yaml
  2. Verify that the objects were created:

    $ oc get svc
    NAME              CLUSTER_IP      EXTERNAL_IP   PORT(S)   SELECTOR   AGE
    gluster-cluster   172.30.151.58   <none>        1/TCP     <none>     24s
    $ oc get ep
    NAME              ENDPOINTS                           AGE
    gluster-cluster   192.168.59.102:1,192.168.59.103:1   2m
    $ oc get pv
    NAME                     LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
    gluster-default-volume   <none>    2Gi        RWX           Available                       2d

18.6.4. Creating a Regular User

Adding a regular user to the privileged SCC (or to a group given access to the SCC) allows them to run privileged pods:

  1. As the admin, add a user to the SCC:

    $ oadm policy add-scc-to-user privileged <username>
  2. Log in as the regular user:

    $ oc login -u <username> -p <password>
  3. Then, create a new project:

    $ oc new-project <project_name>

18.6.5. Creating the Persistent Volume Claim

  1. As a regular user, create the PersistentVolumeClaim to access the volume:

    $ oc create -f gluster-pvc.yaml -n <project_name>
  2. Define your pod to access the claim:

    Example 18.15. Pod Definition

    apiVersion: v1
    id: gluster-S3-pvc
    kind: Pod
    metadata:
      name: gluster-nginx-priv
    spec:
      containers:
        - name: gluster-nginx-priv
          image: fedora/nginx
          volumeMounts:
            - mountPath: /mnt/gluster 1
              name: gluster-volume-claim
          securityContext:
            privileged: true
      volumes:
        - name: gluster-volume-claim
          persistentVolumeClaim:
            claimName: gluster-claim 2
    1
    Volume mount within the pod.
    2
    The claimName must match the name of the PersistentVolumeClaim created in the previous step, in this case gluster-claim.
  3. Upon pod creation, the mount directory is created and the volume is attached to that mount point.

    As the regular user, create a pod from the definition:

    $ oc create -f gluster-S3-pod.yaml
  4. Verify that the pod was created successfully:

    $ oc get pods
    NAME                 READY     STATUS    RESTARTS   AGE
    gluster-S3-pod   1/1       Running   0          36m

    It can take several minutes for the pod to be created.

18.6.6. Verifying the Setup

18.6.6.1. Checking the Pod SCC

  1. Export the pod configuration:

    $ oc export pod <pod_name>
  2. Examine the output. Check that openshift.io/scc has the value of privileged:

    Example 18.16. Export Snippet

    metadata:
      annotations:
        openshift.io/scc: privileged

18.6.6.2. Verifying the Mount

  1. Access the pod and check that the volume is mounted:

    $ oc rsh <pod_name>
    [root@gluster-S3-pvc /]# mount
  2. Examine the output for the Gluster volume:

    Example 18.17. Volume Mount

    192.168.59.102:gv0 on /mnt/gluster type fuse.gluster (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)