Chapter 22. Persistent Storage Examples
22.1. Overview
The following sections provide detailed, comprehensive instructions on setting up and configuring common storage use cases. These examples cover both the administration of persistent volumes and their security, and how to claim against the volumes as a user of the system.
22.3. Complete Example Using Ceph RBD
22.3.1. Overview
This topic provides an end-to-end example of using an existing Ceph cluster as an OpenShift Container Platform persistent store. It is assumed that a working Ceph cluster is already set up. If not, consult the Overview of Red Hat Ceph Storage.
Persistent Storage Using Ceph Rados Block Device provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using Ceph RBD as persistent storage.
All oc … commands are executed on the OpenShift Container Platform master host.
22.3.2. Installing the ceph-common Package
The ceph-common library must be installed on all schedulable OpenShift Container Platform nodes:
The OpenShift Container Platform all-in-one host is not often used to run pod workloads and, thus, is not included as a schedulable node.
# yum install -y ceph-common
22.3.3. Creating the Ceph Secret
The ceph auth get-key command is run on a Ceph MON node to display the key value for the client.admin user:
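For reference, the base64 value used in the secret below can be produced on a MON node by piping the key through base64 (this assumes the default client.admin user; the value shown is the sample key from the secret definition that follows):
# ceph auth get-key client.admin | base64
QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ==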
Example 22.5. Ceph Secret Definition
apiVersion: v1
kind: Secret
metadata:
name: ceph-secret
data:
key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ==
1. This base64 key is generated on one of the Ceph MON nodes by running ceph auth get-key client.admin | base64, then copying the output and pasting it as the secret key's value.
Save the secret definition to a file, for example ceph-secret.yaml, then create the secret:
$ oc create -f ceph-secret.yaml
secret "ceph-secret" created
Verify that the secret was created:
# oc get secret ceph-secret
NAME TYPE DATA AGE
ceph-secret Opaque 1 23d
22.3.4. Creating the Persistent Volume
Next, before creating the PV object in OpenShift Container Platform, define the persistent volume file:
Example 22.6. Persistent Volume Object Definition Using Ceph RBD
apiVersion: v1
kind: PersistentVolume
metadata:
name: ceph-pv
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
rbd:
monitors:
- 192.168.122.133:6789
pool: rbd
image: ceph-image
user: admin
secretRef:
name: ceph-secret
fsType: ext4
readOnly: false
persistentVolumeReclaimPolicy: Recycle
1. The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
2. The amount of storage allocated to this volume.
3. accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control. All block storage is defined to be single user (non-shared storage).
4. This defines the volume type being used. In this case, the rbd plug-in is defined.
5. This is an array of Ceph monitor IP addresses and ports.
6. This is the Ceph secret, defined above. It is used to create a secure connection from OpenShift Container Platform to the Ceph server.
7. This is the file system type mounted on the Ceph RBD block device.
Save the PV definition to a file, for example ceph-pv.yaml, and create the persistent volume:
# oc create -f ceph-pv.yaml
persistentvolume "ceph-pv" created
Verify that the persistent volume was created:
# oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
ceph-pv <none> 2147483648 RWO Available 2s
22.3.5. Creating the Persistent Volume Claim
A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.
Example 22.7. PVC Object Definition
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: ceph-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
Save the PVC definition to a file, for example ceph-claim.yaml, and create the PVC:
# oc create -f ceph-claim.yaml
persistentvolumeclaim "ceph-claim" created
Verify that the PVC was created and bound to the expected PV:
# oc get pvc
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
ceph-claim   <none>    Bound     ceph-pv   2Gi        RWO           21s
1. The claim was bound to the ceph-pv PV.
22.3.6. Creating the Pod
A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Ceph RBD volume for read-write access:
Example 22.8. Pod Object Definition
apiVersion: v1
kind: Pod
metadata:
name: ceph-pod1
spec:
containers:
- name: ceph-busybox
image: busybox
command: ["sleep", "60000"]
volumeMounts:
- name: ceph-vol1
mountPath: /usr/share/busybox
readOnly: false
volumes:
- name: ceph-vol1
persistentVolumeClaim:
claimName: ceph-claim
1. The name of this pod as displayed by oc get pod.
2. The image run by this pod. In this case, we are telling busybox to sleep.
3, 5. The name of the volume. This name must be the same in both the containers and volumes sections.
4. The mount path as seen in the container.
6. The PVC that is bound to the Ceph RBD cluster.
Save the pod definition to a file, for example ceph-pod1.yaml, and create the pod:
# oc create -f ceph-pod1.yaml
pod "ceph-pod1" created
Verify that the pod was created:
# oc get pod
NAME READY STATUS RESTARTS AGE
ceph-pod1 1/1 Running 0 2m
1. After a minute or so, the pod will be in the Running state.
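As an optional check (a sketch that assumes the pod name and mount path used above), confirm that the RBD volume is mounted inside the container:
$ oc exec ceph-pod1 -- df -h /usr/share/busybox
The output should show a device mounted at /usr/share/busybox with roughly the 2Gi of capacity defined in the PV.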
22.3.7. Defining Group and Owner IDs (Optional)
When using block storage, such as Ceph RBD, the physical block storage is managed by the pod. The group ID defined in the pod becomes the group ID of both the Ceph RBD mount inside the container and the group ID of the actual storage itself. Thus, it is usually unnecessary to define a group ID in the pod specification. However, if a group ID is desired, it can be defined using fsGroup, as shown in the following pod definition fragment:
Example 22.9. Group ID Pod Definition
...
spec:
containers:
- name:
...
securityContext:
fsGroup: 7777
...
22.4. Complete Example Using GlusterFS
22.4.1. Overview
This topic provides an end-to-end example of how to use an existing Gluster cluster as an OpenShift Container Platform persistent store. It is assumed that a working Gluster cluster is already set up. If not, consult the Red Hat Gluster Storage Administration Guide.
Persistent Storage Using GlusterFS provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using GlusterFS as persistent storage.
For an end-to-end example of how to dynamically provision GlusterFS volumes, see Complete Example of Dynamic Provisioning Using GlusterFS. The persistent volume (PV) and endpoints are both created dynamically by GlusterFS.
All oc … commands are executed on the OpenShift Container Platform master host.
22.4.2. Installing the glusterfs-fuse Package
The glusterfs-fuse library must be installed on all schedulable OpenShift Container Platform nodes:
# yum install -y glusterfs-fuse
The OpenShift Container Platform all-in-one host is often not used to run pod workloads and, thus, is not included as a schedulable node.
22.4.3. Creating the Gluster Endpoints and Gluster Service for Persistence
The named endpoints define each node in the Gluster-trusted storage pool:
Example 22.10. GlusterFS Endpoint Definition
apiVersion: v1
kind: Endpoints
metadata:
name: gluster-cluster
subsets:
- addresses:
- ip: 192.168.122.21
ports:
- port: 1
protocol: TCP
- addresses:
- ip: 192.168.122.22
ports:
- port: 1
protocol: TCP
Save the endpoints definition to a file, for example gluster-endpoints.yaml, then create the endpoints object:
# oc create -f gluster-endpoints.yaml
endpoints "gluster-cluster" created
Verify that the endpoints were created:
# oc get endpoints gluster-cluster
NAME ENDPOINTS AGE
gluster-cluster 192.168.122.21:1,192.168.122.22:1 1m
To persist the Gluster endpoints, you also need to create a service.
Endpoints are name-spaced. Each project accessing the Gluster volume needs its own endpoints.
Example 22.11. GlusterFS Service Definition
apiVersion: v1
kind: Service
metadata:
name: gluster-cluster
spec:
ports:
- port: 1
Save the service definition to a file, for example gluster-service.yaml, then create the service:
# oc create -f gluster-service.yaml
service "gluster-cluster" created
Verify that the service was created:
# oc get service gluster-cluster
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gluster-cluster 10.0.0.130 <none> 1/TCP 9s
22.4.4. Creating the Persistent Volume
Next, before creating the PV object, define the persistent volume in OpenShift Container Platform:
Persistent Volume Object Definition Using GlusterFS
apiVersion: v1
kind: PersistentVolume
metadata:
name: gluster-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
glusterfs:
endpoints: gluster-cluster
path: /HadoopVol
readOnly: false
persistentVolumeReclaimPolicy: Retain
1. The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
2. The amount of storage allocated to this volume.
3. accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
4. This defines the volume type being used. In this case, the glusterfs plug-in is defined.
5. This references the endpoints named above.
6. This is the Gluster volume name, preceded by /.
7. The volume reclaim policy Retain indicates that the volume will be preserved after the pods accessing it terminate. For GlusterFS, the accepted values include Retain and Delete.
Save the PV definition to a file, for example gluster-pv.yaml, and create the persistent volume:
# oc create -f gluster-pv.yaml
persistentvolume "gluster-pv" created
Verify that the persistent volume was created:
# oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
gluster-pv <none> 1Gi RWX Available 37s
22.4.5. Creating the Persistent Volume Claim
A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.
Example 22.12. PVC Object Definition
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gluster-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
Save the PVC definition to a file, for example gluster-claim.yaml, and create the PVC:
# oc create -f gluster-claim.yaml
persistentvolumeclaim "gluster-claim" created
Verify the PVC was created and bound to the expected PV:
# oc get pvc
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
gluster-claim <none> Bound gluster-pv 1Gi RWX 24s
1. The claim was bound to the gluster-pv PV.
22.4.6. Defining GlusterFS Volume Access
You need access to a node in the Gluster-trusted storage pool. On this node, examine the glusterfs-fuse mount:
# ls -lZ /mnt/glusterfs/
drwxrwx---. yarn hadoop system_u:object_r:fusefs_t:s0 HadoopVol
# id yarn
uid=592(yarn) gid=590(hadoop) groups=590(hadoop)
To access the HadoopVol volume, the container must match the SELinux label and either run with a UID of 592 or with 590 in its supplemental groups. The recommended way to gain access to the volume is to match the Gluster mount's group, as is done in the pod definition below.
By default, SELinux does not allow writing from a pod to a remote Gluster server. To enable writing to GlusterFS volumes with SELinux enforcing on each node, run:
# setsebool -P virt_sandbox_use_fusefs on
The virt_sandbox_use_fusefs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.
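To check the current state of the boolean and confirm that the package providing it is installed (a quick verification sketch; the exact package name can vary by release), run:
# getsebool virt_sandbox_use_fusefs
virt_sandbox_use_fusefs --> on
# rpm -q docker-selinux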
22.4.7. Creating the Pod using NGINX Web Server image
A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Gluster volume for read-write access:
The NGINX image may need to run in privileged mode to create the mount and run properly. An easy way to accomplish this is to add your user to the privileged Security Context Constraint (SCC):
$ oc adm policy add-scc-to-user privileged myuser
Then, add privileged: true to the containers securityContext: section of the YAML file (as shown in the example below).
Managing Security Context Constraints provides additional information regarding SCCs.
Example 22.13. Pod Object Definition using NGINX image
apiVersion: v1
kind: Pod
metadata:
name: gluster-pod1
labels:
name: gluster-pod1
spec:
containers:
- name: gluster-pod1
image: nginx
ports:
- name: web
containerPort: 80
securityContext:
privileged: true
volumeMounts:
- name: gluster-vol1
mountPath: /usr/share/nginx/html
readOnly: false
securityContext:
supplementalGroups: [590]
volumes:
- name: gluster-vol1
persistentVolumeClaim:
claimName: gluster-claim
1. The name of this pod as displayed by oc get pod.
2. The image run by this pod. In this case, we are using a standard NGINX image.
3, 6. The name of the volume. This name must be the same in both the containers and volumes sections.
4. The mount path as seen in the container.
5. The supplementalGroups ID (Linux group) to be assigned at the pod level. As discussed above, this should match the POSIX permissions on the Gluster volume.
7. The PVC that is bound to the Gluster cluster.
Save the pod definition to a file, for example gluster-pod1.yaml, and create the pod:
# oc create -f gluster-pod1.yaml
pod "gluster-pod1" created
Verify the pod was created:
# oc get pod
NAME READY STATUS RESTARTS AGE
gluster-pod1 1/1 Running 0 31s
1. After a minute or so, the pod will be in the Running state.
More details are shown in the output of the oc describe pod command:
# oc describe pod gluster-pod1
Name: gluster-pod1
Namespace: default
Security Policy: privileged
Node: ose1.rhs/192.168.122.251
Start Time: Wed, 24 Aug 2016 12:37:45 -0400
Labels: name=gluster-pod1
Status: Running
IP: 172.17.0.2
Controllers: <none>
Containers:
gluster-pod1:
Container ID: docker://e67ed01729e1dc7369c5112d07531a27a7a02a7eb942f17d1c5fce32d8c31a2d
Image: nginx
Image ID: docker://sha256:4efb2fcdb1ab05fb03c9435234343c1cc65289eeb016be86193e88d3a5d84f6b
Port: 80/TCP
State: Running
Started: Wed, 24 Aug 2016 12:37:52 -0400
Ready: True
Restart Count: 0
Volume Mounts:
/usr/share/nginx/html/test from glustervol (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-1n70u (ro)
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
glustervol:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: gluster-claim
ReadOnly: false
default-token-1n70u:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-1n70u
QoS Tier: BestEffort
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
10s 10s 1 {default-scheduler } Normal Scheduled Successfully assigned gluster-pod1 to ose1.rhs
9s 9s 1 {kubelet ose1.rhs} spec.containers{gluster-pod1} Normal Pulling pulling image "nginx"
4s 4s 1 {kubelet ose1.rhs} spec.containers{gluster-pod1} Normal Pulled Successfully pulled image "nginx"
3s 3s 1 {kubelet ose1.rhs} spec.containers{gluster-pod1} Normal Created Created container with docker id e67ed01729e1
3s 3s 1 {kubelet ose1.rhs} spec.containers{gluster-pod1} Normal Started Started container with docker id e67ed01729e1
More internal information, including the SCC used to authorize the pod, the pod's user and group IDs, the SELinux label, and more, is shown in the output of the oc get pod <name> -o yaml command:
# oc get pod gluster-pod1 -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
openshift.io/scc: privileged
creationTimestamp: 2016-08-24T16:37:45Z
labels:
name: gluster-pod1
name: gluster-pod1
namespace: default
resourceVersion: "482"
selfLink: /api/v1/namespaces/default/pods/gluster-pod1
uid: 15afda77-6a19-11e6-aadb-525400f7256d
spec:
containers:
- image: nginx
imagePullPolicy: Always
name: gluster-pod1
ports:
- containerPort: 80
name: web
protocol: TCP
resources: {}
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /usr/share/nginx/html
name: glustervol
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-1n70u
readOnly: true
dnsPolicy: ClusterFirst
host: ose1.rhs
imagePullSecrets:
- name: default-dockercfg-20xg9
nodeName: ose1.rhs
restartPolicy: Always
securityContext:
supplementalGroups:
- 590
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
volumes:
- name: glustervol
persistentVolumeClaim:
claimName: gluster-claim
- name: default-token-1n70u
secret:
secretName: default-token-1n70u
status:
conditions:
- lastProbeTime: null
lastTransitionTime: 2016-08-24T16:37:45Z
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: 2016-08-24T16:37:53Z
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: 2016-08-24T16:37:45Z
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://e67ed01729e1dc7369c5112d07531a27a7a02a7eb942f17d1c5fce32d8c31a2d
image: nginx
imageID: docker://sha256:4efb2fcdb1ab05fb03c9435234343c1cc65289eeb016be86193e88d3a5d84f6b
lastState: {}
name: gluster-pod1
ready: true
restartCount: 0
state:
running:
startedAt: 2016-08-24T16:37:52Z
hostIP: 192.168.122.251
phase: Running
podIP: 172.17.0.2
startTime: 2016-08-24T16:37:45Z
22.5. Complete Example of Dynamic Provisioning Using GlusterFS
This example assumes a working OpenShift Container Platform installation that is functioning along with Heketi and GlusterFS.
All oc commands are executed on the OpenShift Container Platform master host.
22.5.1. Overview
This topic provides an end-to-end example of how to dynamically provision GlusterFS volumes. In this example, a simple NGINX HelloWorld application is deployed using the Red Hat Container Native Storage (CNS) solution. CNS hyper-converges GlusterFS storage by containerizing it into the OpenShift Container Platform cluster.
The Red Hat Gluster Storage Administration Guide can also provide additional information about GlusterFS.
To get started, follow the gluster-kubernetes quickstart guide for an easy Vagrant-based installation and deployment of a working OpenShift Container Platform cluster with Heketi and GlusterFS containers.
22.5.2. Verify the Environment and Gather Needed Information
At this point, there should be a working OpenShift Container Platform cluster deployed, and a working Heketi server with GlusterFS.
Verify and view the cluster environment, including nodes and pods:
$ oc get nodes,pods
NAME      STATUS    AGE
master    Ready     22h
node0     Ready     22h
node1     Ready     22h
node2     Ready     22h
NAME                               READY     STATUS    RESTARTS   AGE       IP               NODE
glusterfs-node0-2509304327-vpce1   1/1       Running   0          1d        192.168.10.100   node0
glusterfs-node1-3290690057-hhq92   1/1       Running   0          1d        192.168.10.101   node1
glusterfs-node2-4072075787-okzjv   1/1       Running   0          1d        192.168.10.102   node2
heketi-3017632314-yyngh            1/1       Running   0          1d        10.42.0.0        node0
If not already set in the environment, export the HEKETI_CLI_SERVER variable:
$ export HEKETI_CLI_SERVER=$(oc describe svc/heketi | grep "Endpoints:" | awk '{print "http://"$2}')
Identify the Heketi REST URL and server IP address:
$ echo $HEKETI_CLI_SERVER
http://10.42.0.0:8080
Identify the Gluster endpoints that are needed to pass in as a parameter into the storage class, which is used in a later step (heketi-storage-endpoints):
$ oc get endpoints
NAME                       ENDPOINTS                                             AGE
heketi                     10.42.0.0:8080                                        22h
heketi-storage-endpoints   192.168.10.100:1,192.168.10.101:1,192.168.10.102:1    22h
kubernetes                 192.168.10.90:6443                                    23h
1. The defined GlusterFS endpoints. In this example, they are called heketi-storage-endpoints.
By default, user_authorization is disabled. If enabled, you may need to find the rest user and rest user secret key. (This is not applicable for this example, as any values will work.)
22.5.3. Create a Storage Class for Your GlusterFS Dynamic Provisioner
Storage classes manage and enable persistent storage in OpenShift Container Platform. Below is an example of a StorageClass used to provision 5GB of on-demand storage for your HelloWorld application.
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
endpoint: "heketi-storage-endpoints"
resturl: "http://10.42.0.0:8080"
restuser: "joe"
restuserkey: "My Secret Life"
Create the Storage Class YAML file, save it, then submit it to OpenShift Container Platform:
$ oc create -f gluster-storage-class.yaml
storageclass "gluster-heketi" created
View the storage class:
$ oc get storageclass
NAME             TYPE
gluster-heketi   kubernetes.io/glusterfs
22.5.4. Create a PVC to Request Storage for Your Application
Next, create a persistent volume claim (PVC) requesting 5GB of storage. When the claim is created, the Dynamic Provisioning Framework and Heketi automatically provision a new GlusterFS volume and generate the persistent volume (PV) object:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gluster1
annotations:
volume.beta.kubernetes.io/storage-class: gluster-heketi
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
Create the PVC YAML file, save it, then submit it to OpenShift Container Platform:
$ oc create -f gluster-pvc.yaml
persistentvolumeclaim "gluster1" created
View the PVC:
$ oc get pvc
NAME       STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
gluster1   Bound     pvc-7d37c7bd-bb5b-11e6-b81e-525400d87180   5Gi        RWO           14h
Notice that the PVC is bound to a dynamically created volume.
View the persistent volume (PV):
$ oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM              REASON    AGE
pvc-7d37c7bd-bb5b-11e6-b81e-525400d87180   5Gi        RWO           Delete          Bound     default/gluster1             14h
22.5.5. Create a NGINX Pod That Uses the PVC
At this point, you have a dynamically created GlusterFS volume, bound to a PVC. Now, you can use this claim in a pod. Create a simple NGINX pod:
apiVersion: v1
kind: Pod
metadata:
name: nginx-pod
labels:
name: nginx-pod
spec:
containers:
- name: nginx-pod
image: gcr.io/google_containers/nginx-slim:0.8
ports:
- name: web
containerPort: 80
securityContext:
privileged: true
volumeMounts:
- name: gluster-vol1
mountPath: /usr/share/nginx/html
volumes:
- name: gluster-vol1
persistentVolumeClaim:
claimName: gluster1
1. The name of the PVC created in the previous step.
Create the Pod YAML file, save it, then submit it to OpenShift Container Platform:
$ oc create -f nginx-pod.yaml
pod "nginx-pod" created
View the pod:
$ oc get pods -o wide
NAME                               READY     STATUS    RESTARTS   AGE       IP               NODE
nginx-pod                          1/1       Running   0          9m        10.38.0.0        node1
glusterfs-node0-2509304327-vpce1   1/1       Running   0          1d        192.168.10.100   node0
glusterfs-node1-3290690057-hhq92   1/1       Running   0          1d        192.168.10.101   node1
glusterfs-node2-4072075787-okzjv   1/1       Running   0          1d        192.168.10.102   node2
heketi-3017632314-yyngh            1/1       Running   0          1d        10.42.0.0        node0
Note: This may take a few minutes, as the pod may need to download the image if it does not already exist.
oc exec into the container and create an index.html file in the mountPath of the pod definition:
$ oc exec -ti nginx-pod /bin/sh
$ cd /usr/share/nginx/html
$ echo 'Hello World from GlusterFS!!!' > index.html
$ ls
index.html
$ exit
Using the curl command from the master node, curl the URL of the pod:
$ curl http://10.38.0.0
Hello World from GlusterFS!!!
Check your Gluster pod to ensure that the index.html file was written. Choose any of the Gluster pods:
$ oc exec -ti glusterfs-node1-3290690057-hhq92 /bin/sh
$ mount | grep heketi
/dev/mapper/VolGroup00-LogVol00 on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_1e730a5462c352835055018e1874e578 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_1e730a5462c352835055018e1874e578 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
/dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_d8c06e606ff4cc29ccb9d018c73ee292 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
$ cd /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292/brick
$ ls
index.html
$ cat index.html
Hello World from GlusterFS!!!
22.6. Mounting Volumes on Privileged Pods
22.6.1. Overview
Persistent volumes can be mounted to pods with the privileged security context constraint (SCC) attached.
While this topic uses GlusterFS as a sample use-case for mounting volumes onto privileged pods, it can be adapted to use any supported storage plug-in.
22.6.2. Prerequisites
- An existing Gluster volume.
- glusterfs-fuse installed on all hosts.
Definitions for GlusterFS:
- Endpoints and services: gluster-endpoints-service.yaml and gluster-endpoints.yaml
- Persistent volumes: gluster-pv.yaml
- Persistent volume claims: gluster-pvc.yaml
- Privileged pods: gluster-S3-pod.yaml
- A user with the cluster-admin role binding. For this guide, that user is called admin.
22.6.3. Creating the Persistent Volume
Creating the PersistentVolume makes the storage accessible to users, regardless of projects.
As the admin, create the service, endpoint object, and persistent volume:
$ oc create -f gluster-endpoints-service.yaml
$ oc create -f gluster-endpoints.yaml
$ oc create -f gluster-pv.yaml
Verify that the objects were created:
$ oc get svc
NAME              CLUSTER_IP      EXTERNAL_IP   PORT(S)   SELECTOR   AGE
gluster-cluster   172.30.151.58   <none>        1/TCP     <none>     24s
$ oc get ep
NAME              ENDPOINTS                           AGE
gluster-cluster   192.168.59.102:1,192.168.59.103:1   2m
$ oc get pv
NAME                     LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
gluster-default-volume   <none>    2Gi        RWX           Available                       2d
22.6.4. Creating a Regular User
Adding a regular user to the privileged SCC (or to a group given access to the SCC) allows them to run privileged pods:
As the admin, add a user to the SCC:
$ oc adm policy add-scc-to-user privileged <username>
Log in as the regular user:
$ oc login -u <username> -p <password>
Then, create a new project:
$ oc new-project <project_name>
22.6.5. Creating the Persistent Volume Claim
As a regular user, create the PersistentVolumeClaim to access the volume:
$ oc create -f gluster-pvc.yaml -n <project_name>
Define your pod to access the claim:
Example 22.14. Pod Definition
apiVersion: v1
id: gluster-S3-pvc
kind: Pod
metadata:
  name: gluster-nginx-priv
spec:
  containers:
  - name: gluster-nginx-priv
    image: fedora/nginx
    volumeMounts:
    - mountPath: /mnt/gluster
      name: gluster-volume-claim
    securityContext:
      privileged: true
  volumes:
  - name: gluster-volume-claim
    persistentVolumeClaim:
      claimName: gluster-claim
Upon pod creation, the mount directory is created and the volume is attached to that mount point.
As the regular user, create a pod from the definition:
$ oc create -f gluster-S3-pod.yaml
Verify that the pod created successfully:
$ oc get pods
NAME             READY     STATUS    RESTARTS   AGE
gluster-S3-pod   1/1       Running   0          36m
It can take several minutes for the pod to create.
22.6.6. Verifying the Setup
22.6.6.1. Checking the Pod SCC
Export the pod configuration:
$ oc export pod <pod_name>
Examine the output. Check that openshift.io/scc has the value of privileged:
Example 22.15. Export Snippet
metadata:
  annotations:
    openshift.io/scc: privileged
22.6.6.2. Verifying the Mount
Access the pod and check that the volume is mounted:
$ oc rsh <pod_name>
[root@gluster-S3-pvc /]# mount
Examine the output for the Gluster volume:
Example 22.16. Volume Mount
192.168.59.102:gv0 on /mnt/gluster type fuse.gluster (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
22.7. Backing Docker Registry with GlusterFS Storage
22.7.1. Overview
This topic reviews how to attach a GlusterFS persistent volume to the Docker Registry.
It is assumed that the Docker registry service has already been started and the Gluster volume has been created.
22.7.2. Prerequisites
- The docker-registry was deployed without configuring storage.
- A Gluster volume exists and glusterfs-fuse is installed on schedulable nodes.
Definitions written for GlusterFS endpoints and service, persistent volume (PV), and persistent volume claim (PVC).
For this guide, these will be:
- gluster-endpoints-service.yaml
- gluster-endpoints.yaml
- gluster-pv.yaml
- gluster-pvc.yaml
- A user with the cluster-admin role binding. For this guide, that user is admin.
All oc commands are executed on the master node as the admin user.
22.7.3. Create the Gluster Persistent Volume
First, make the Gluster volume available to the registry.
$ oc create -f gluster-endpoints-service.yaml
$ oc create -f gluster-endpoints.yaml
$ oc create -f gluster-pv.yaml
$ oc create -f gluster-pvc.yaml
Check to make sure the PV and PVC were created and bound successfully. The expected output should resemble the following. Note that the PVC status is Bound, indicating that it has bound to the PV.
$ oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
gluster-pv <none> 1Gi RWX Available 37s
$ oc get pvc
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
gluster-claim <none> Bound gluster-pv 1Gi RWX 24s
If either the PVC or PV failed to create or the PVC failed to bind, refer back to the GlusterFS Persistent Storage guide. Do not proceed until they initialize and the PVC status is Bound.
22.7.4. Attach the PVC to the Docker Registry
Before moving forward, ensure that the docker-registry service is running.
$ oc get svc
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
docker-registry 172.30.167.194 <none> 5000/TCP docker-registry=default 18m
If either the docker-registry service or its associated pod is not running, refer back to the docker-registry setup instructions for troubleshooting before continuing.
Then, attach the PVC:
$ oc volume deploymentconfigs/docker-registry --add --name=registry-storage -t pvc \
--claim-name=gluster-claim --overwrite
Deploying a Docker Registry provides more information on using the Docker registry.
22.7.5. Known Issues
22.7.5.1. Pod Cannot Resolve the Volume Host
In non-production cases where the dnsmasq server is located on the same node as the OpenShift Container Platform master service, pods might not resolve to the host machines when mounting the volume, causing errors in the docker-registry-1-deploy pod. This can happen when dnsmasq.service fails to start because of a collision with OpenShift Container Platform DNS on port 53. To run the DNS server on the master host, some configurations need to be changed.
In /etc/dnsmasq.conf, add:
# Reverse DNS record for master
host-record=master.example.com,<master-IP>
# Wildcard DNS for OpenShift Applications - Points to Router
address=/apps.example.com/<master-IP>
# Forward .local queries to SkyDNS
server=/local/127.0.0.1#8053
# Forward reverse queries for service network to SkyDNS.
# This is for default OpenShift SDN - change as needed.
server=/17.30.172.in-addr.arpa/127.0.0.1#8053
With these settings, dnsmasq will pull from the /etc/hosts file on the master node.
Add the appropriate host names and IPs for all necessary hosts.
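For example, a minimal /etc/hosts on the master might contain entries such as the following (the host names and addresses are placeholders; substitute the values for your environment):
127.0.0.1       localhost localhost.localdomain
192.168.1.100   master.example.com master
192.168.1.101   node1.example.com node1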
In master-config.yaml, change bindAddress to:
dnsConfig:
bindAddress: 127.0.0.1:8053
When pods are created, they receive a copy of /etc/resolv.conf, which typically contains only the master DNS server so they can resolve external DNS requests. To enable internal DNS resolution, insert the dnsmasq server at the top of the server list. This way, dnsmasq will attempt to resolve requests internally first.
In /etc/resolv.conf on all scheduled nodes:
nameserver 192.168.1.100
nameserver 192.168.1.1
Once the configurations are changed, restart the OpenShift Container Platform master and dnsmasq services.
$ systemctl restart atomic-openshift-master
$ systemctl restart dnsmasq
22.8. Binding Persistent Volumes by Labels
22.8.1. Overview
This topic provides an end-to-end example for binding persistent volume claims (PVCs) to persistent volumes (PVs), by defining labels in the PV and matching selectors in the PVC. This feature is available for all storage options. It is assumed that an OpenShift Container Platform cluster contains persistent storage resources which are available for binding by PVCs.
A Note on Labels and Selectors
Labels are an OpenShift Container Platform feature that support user-defined tags (key-value pairs) as part of an object’s specification. Their primary purpose is to enable the arbitrary grouping of objects by defining identical labels among them. These labels can then be targeted by selectors to match all objects with specified label values. It is this functionality we will take advantage of to enable our PVC to bind to our PV. For a more in-depth look at labels, see Pods and Services.
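For instance, once the PV below is created with its storage-tier: gold label, the matching objects can be listed directly with a label selector (a quick illustration using the oc client):
$ oc get pv -l storage-tier=gold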
For this example, we will be using modified GlusterFS PV and PVC specifications. However, implementation of selectors and labels is generic across all storage options. See the relevant storage option for your volume provider to learn more about its unique configuration.
22.8.1.1. Assumptions
It is assumed that you have:
- An existing OpenShift Container Platform cluster with at least one master and one node
- At least one supported storage volume
- A user with cluster-admin privileges
22.8.2. Defining Specifications
These specifications are tailored to GlusterFS. Consult the relevant storage option for your volume provider to learn more about its unique configuration.
22.8.2.1. Persistent Volume with Labels
Example 22.17. glusterfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: gluster-volume
labels:
storage-tier: gold
aws-availability-zone: us-east-1
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteMany
glusterfs:
endpoints: glusterfs-cluster
path: myVol1
readOnly: false
persistentVolumeReclaimPolicy: Retain
1. Use labels to identify common attributes or characteristics shared among volumes. In this case, we defined the Gluster volume to have a custom attribute (key) named storage-tier with a value of gold assigned. A claim will be able to select a PV with storage-tier=gold to match this PV.
2. Endpoints define the Gluster trusted pool and are discussed below.
22.8.2.2. Persistent Volume Claim with Selectors
A claim with a selector stanza (see example below) attempts to match existing, unclaimed, and non-prebound PVs. When a PVC specifies a selector, the PV's capacity is ignored during matching; however, accessModes are still considered in the matching criteria.
It is important to note that a claim must match all of the key-value pairs included in its selector stanza. If no PV matches the claim, then the PVC will remain unbound (Pending). A PV can subsequently be created and the claim will automatically check for a label match.
Example 22.18. glusterfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gluster-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
selector:
matchLabels:
storage-tier: gold
aws-availability-zone: us-east-1
1. The selector stanza defines all labels necessary in a PV in order to match this claim.
22.8.2.3. Volume Endpoints
To attach the PV to the Gluster volume, endpoints should be configured before creating our objects.
Example 22.19. glusterfs-ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
name: glusterfs-cluster
subsets:
- addresses:
- ip: 192.168.122.221
ports:
- port: 1
- addresses:
- ip: 192.168.122.222
ports:
- port: 1
22.8.2.4. Deploy the PV, PVC, and Endpoints
For this example, run the oc commands as a cluster-admin privileged user. In a production environment, cluster clients might be expected to define and create the PVC.
# oc create -f glusterfs-ep.yaml
endpoints "glusterfs-cluster" created
# oc create -f glusterfs-pv.yaml
persistentvolume "gluster-volume" created
# oc create -f glusterfs-pvc.yaml
persistentvolumeclaim "gluster-claim" created
Lastly, confirm that the PV and PVC bound successfully.
# oc get pv,pvc
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
gluster-volume 2Gi RWX Bound gfs-trial/gluster-claim 7s
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
gluster-claim Bound gluster-volume 2Gi RWX 7s
PVCs are local to a project, whereas PVs are a cluster-wide, global resource. Developers and non-administrator users may not have access to see all (or any) of the available PVs.
22.9. Using Storage Classes for Dynamic Provisioning
22.9.1. Overview
In these examples, we walk through a few scenarios of various configurations of StorageClasses and dynamic provisioning using Google Compute Engine (GCE). These examples assume some familiarity with Kubernetes, GCE, and Persistent Disks, and that OpenShift Container Platform is installed and properly configured to use GCE.
22.9.2. Scenario 1: Basic Dynamic Provisioning with Two Types of StorageClasses
StorageClasses can be used to differentiate and delineate storage levels and usages. In this case, the cluster-admin or storage-admin sets up two distinct classes of storage in GCE.
- slow: Cheap, efficient, and optimized for sequential data operations (slower reading and writing)
- fast: Optimized for higher rates of random IOPS and sustained throughput (faster reading and writing)
By creating these StorageClasses, the cluster-admin or storage-admin allows users to create claims requesting a particular level or service of StorageClass.
Example 22.20. StorageClass Slow Object Definitions
kind: StorageClass apiVersion: storage.k8s.io/v1beta1 metadata: name: slow provisioner: kubernetes.io/gce-pd parameters: type: pd-standard zone: us-east1-d
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: slow
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard
zone: us-east1-d
1. Name of the StorageClass.
2. The provisioner plug-in to be used. This is a required field for StorageClasses.
3. PD type. This example uses pd-standard, which has a slightly lower cost, rate of sustained IOPS, and throughput versus pd-ssd, which carries more sustained IOPS and throughput.
4. The zone is required.
Example 22.21. StorageClass Fast Object Definition
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: fast
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
zone: us-east1-d
As a cluster-admin or storage-admin, save both definitions as YAML files, for example slow-gce.yaml and fast-gce.yaml. Then create the StorageClasses.
# oc create -f slow-gce.yaml
storageclass "slow" created
# oc create -f fast-gce.yaml
storageclass "fast" created
# oc get storageclass
NAME TYPE
fast kubernetes.io/gce-pd
slow kubernetes.io/gce-pd
cluster-admin or storage-admin users are responsible for relaying the correct StorageClass name to the correct users, groups, and projects.
As a regular user, create a new project:
# oc new-project rh-eng
Create the claim YAML definition and save it to a file (pvc-fast.yaml):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-engineering
annotations:
volume.beta.kubernetes.io/storage-class: fast
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
Add the claim with the oc create command:
# oc create -f pvc-fast.yaml
persistentvolumeclaim "pvc-engineering" created
Check to see if your claim is bound:
# oc get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc-engineering Bound pvc-e9b4fef7-8bf7-11e6-9962-42010af00004 10Gi RWX 2m
Since this claim was created and bound in the rh-eng project, it can be shared by any user in the same project.
As a cluster-admin or storage-admin user, view the recent dynamically provisioned Persistent Volume (PV).
# oc get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pvc-e9b4fef7-8bf7-11e6-9962-42010af00004 10Gi RWX Delete Bound rh-eng/pvc-engineering 5m
Notice the RECLAIMPOLICY is Delete by default for all dynamically provisioned volumes. This means the volume only lasts as long as the claim still exists in the system. If you delete the claim, the volume is also deleted and all data on the volume is lost.
Finally, check the GCE console. The new disk has been created and is ready for use.
kubernetes-dynamic-pvc-e9b4fef7-8bf7-11e6-9962-42010af00004 SSD persistent disk 10 GB us-east1-d
Pods can now reference the persistent volume claim and start using the volume.
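As an illustration only (this pod is not part of the original example; the pod name, container image, and mount path are arbitrary), a minimal pod that consumes the pvc-engineering claim could look like the following:
apiVersion: v1
kind: Pod
metadata:
  name: engineering-pod
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-engineering
The only required link to the dynamically provisioned storage is claimName, which must match the PVC created above.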
22.9.3. Scenario 2: How to enable Default StorageClass behavior for a Cluster
In this example, a cluster-admin or storage-admin enables a default storage class for all other users and projects that do not explicitly specify a StorageClass annotation in their claim. This is useful for a cluster-admin or storage-admin to provide easy management of a storage volume without having to set up or communicate specialized StorageClasses across the cluster.
This example builds upon Section 22.9.2, "Scenario 1: Basic Dynamic Provisioning with Two Types of StorageClasses". The cluster-admin or storage-admin will create another StorageClass for designation as the default StorageClass.
Example 22.22. Default StorageClass Object Definition
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: generic
annotations:
storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard
zone: us-east1-d
As a cluster-admin or storage-admin, save the definition to a YAML file (generic-gce.yaml), then create the StorageClass:
# oc create -f generic-gce.yaml
storageclass "generic" created
# oc get storageclass
NAME TYPE
generic kubernetes.io/gce-pd
fast kubernetes.io/gce-pd
slow kubernetes.io/gce-pd
As a regular user, create a new claim definition without any StorageClass annotation and save it to a file (generic-pvc.yaml).
Example 22.23. default Storage Claim Object Definition
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-engineering2
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
Create the claim and check that it is bound:
# oc create -f generic-pvc.yaml
persistentvolumeclaim "pvc-engineering2" created
# oc get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc-engineering Bound pvc-e9b4fef7-8bf7-11e6-9962-42010af00004 10Gi RWX 41m
pvc-engineering2 Bound pvc-a9f70544-8bfd-11e6-9962-42010af00004 5Gi RWX 7s
1. pvc-engineering2 is bound to a dynamically provisioned volume by default.
As a cluster-admin or storage-admin, view the Persistent Volumes defined so far:
# oc get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pvc-a9f70544-8bfd-11e6-9962-42010af00004 5Gi RWX Delete Bound rh-eng/pvc-engineering2 5m
pvc-ba4612ce-8b4d-11e6-9962-42010af00004 5Gi RWO Delete Bound mytest/gce-dyn-claim1 21h
pvc-e9b4fef7-8bf7-11e6-9962-42010af00004 10Gi RWX Delete Bound rh-eng/pvc-engineering 46m
1. This PV was bound to our default dynamic volume from the default StorageClass.
2. This PV was bound to our first PVC from Section 22.9.2, "Scenario 1: Basic Dynamic Provisioning with Two Types of StorageClasses" with our fast StorageClass.
Create a manually provisioned disk using GCE (not dynamically provisioned). Then create a Persistent Volume that connects to the new GCE disk (pv-manual-gce.yaml).
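As a sketch of the manual disk creation step (this assumes the gcloud CLI; the disk name mirrors the pdName placeholder used below, and a real GCE disk name must be lowercase), the disk could be created with:
$ gcloud compute disks create the-newly-created-gce-PD --size=35GB --zone=us-east1-d
The size and zone should match the capacity in the PV definition and the zone used elsewhere in this example.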
Example 22.24. Manual PV Object Definition
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-manual-gce
spec:
capacity:
storage: 35Gi
accessModes:
- ReadWriteMany
gcePersistentDisk:
readOnly: false
pdName: the-newly-created-gce-PD
fsType: ext4
Execute the object definition file:
# oc create -f pv-manual-gce.yaml
Now view the PVs again. Notice that a pv-manual-gce volume is Available.
# oc get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pv-manual-gce 35Gi RWX Retain Available 4s
pvc-a9f70544-8bfd-11e6-9962-42010af00004 5Gi RWX Delete Bound rh-eng/pvc-engineering2 12m
pvc-ba4612ce-8b4d-11e6-9962-42010af00004 5Gi RWO Delete Bound mytest/gce-dyn-claim1 21h
pvc-e9b4fef7-8bf7-11e6-9962-42010af00004 10Gi RWX Delete Bound rh-eng/pvc-engineering 53m
Now create another claim identical to the generic-pvc.yaml PVC definition, but change the name and do not set an annotation.
Example 22.25. Claim Object Definition
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-engineering3
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 15Gi
Because default StorageClass is enabled in this instance, the manually created PV does not satisfy the claim request. The user receives a new dynamically provisioned Persistent Volume.
# oc get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc-engineering Bound pvc-e9b4fef7-8bf7-11e6-9962-42010af00004 10Gi RWX 1h
pvc-engineering2 Bound pvc-a9f70544-8bfd-11e6-9962-42010af00004 5Gi RWX 19m
pvc-engineering3 Bound pvc-6fa8e73b-8c00-11e6-9962-42010af00004 15Gi RWX 6s
Since the default StorageClass is enabled on this system, the manually created PV would need to have been created in the default StorageClass in order to be bound by the above claim instead of a new dynamically provisioned volume.
To fix this, the cluster-admin or storage-admin user simply needs to create another GCE disk or, if necessary, delete the first manual PV, and use a PV object definition that assigns a StorageClass annotation (pv-manual-gce2.yaml):
Example 22.26. Manual PV Spec with default StorageClass annotation
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-manual-gce2
annotations:
volume.beta.kubernetes.io/storage-class: generic
spec:
capacity:
storage: 35Gi
accessModes:
- ReadWriteMany
gcePersistentDisk:
readOnly: false
pdName: the-newly-created-gce-PD
fsType: ext4
1. The annotation for the previously created generic StorageClass.
Execute the object definition file:
# oc create -f pv-manual-gce2.yaml
List the PVs:
# oc get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pv-manual-gce 35Gi RWX Retain Available 4s
pv-manual-gce2 35Gi RWX Retain Bound rh-eng/pvc-engineering3 4s
pvc-a9f70544-8bfd-11e6-9962-42010af00004 5Gi RWX Delete Bound rh-eng/pvc-engineering2 12m
pvc-ba4612ce-8b4d-11e6-9962-42010af00004 5Gi RWO Delete Bound mytest/gce-dyn-claim1 21h
pvc-e9b4fef7-8bf7-11e6-9962-42010af00004 10Gi RWX Delete Bound rh-eng/pvc-engineering 53m
Notice that all dynamically provisioned volumes by default have a RECLAIMPOLICY of Delete. Once the PVC dynamically bound to the PV is deleted, the GCE volume is deleted and all data is lost. However, the manually created PV has a default RECLAIMPOLICY of Retain.
22.10. Using Storage Classes for Existing Legacy Storage
22.10.1. Overview
In this example, a legacy data volume exists and a cluster-admin or storage-admin needs to make it available for consumption in a particular project. Using StorageClasses decreases the likelihood of other users and projects gaining access to this volume from a claim because the claim would have to have an exact matching value for the StorageClass annotation. This example also disables dynamic provisioning. This example assumes:
- Some familiarity with OpenShift Container Platform, GCE, and Persistent Disks
- OpenShift Container Platform is properly configured to use GCE.
22.10.1.1. Scenario 1: Link StorageClass to existing Persistent Volume with Legacy Data
As a cluster-admin or storage-admin, define and create the StorageClass for historical financial data.
Example 22.27. StorageClass finance-history Object Definitions
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: finance-history
provisioner: no-provisioning
parameters:
Save the definitions to a YAML file (finance-history-storageclass.yaml) and create the StorageClass.
# oc create -f finance-history-storageclass.yaml
storageclass "finance-history" created
# oc get storageclass
NAME TYPE
finance-history no-provisioning
cluster-admin or storage-admin users are responsible for relaying the correct StorageClass name to the correct users, groups, and projects.
The StorageClass exists. A cluster-admin or storage-admin can create the Persistent Volume (PV) for use with the StorageClass. Create a manually provisioned disk using GCE (not dynamically provisioned) and a Persistent Volume that connects to the new GCE disk (gce-pv.yaml).
Example 22.28. Finance History PV Object
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-finance-history
annotations:
volume.beta.kubernetes.io/storage-class: finance-history
spec:
capacity:
storage: 35Gi
accessModes:
- ReadWriteMany
gcePersistentDisk:
readOnly: false
pdName: the-existing-PD-volume-name-that-contains-the-valuable-data
fsType: ext4
As a cluster-admin or storage-admin, create and view the PV.
# oc create -f gce-pv.yaml
persistentvolume "pv-finance-history" created
# oc get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pv-finance-history 35Gi RWX Retain Available 2d
Notice that pv-finance-history is Available and ready for consumption.
As a user, create a Persistent Volume Claim (PVC) as a YAML file and specify the correct StorageClass annotation:
Example 22.29. Claim for finance-history Object Definition
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-finance-history
annotations:
volume.beta.kubernetes.io/storage-class: finance-history
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 20Gi
1. The StorageClass annotation, which must match exactly or the claim will go unbound until it is deleted or another StorageClass is created that matches the annotation.
Create and view the PVC and PV to see if it is bound.
# oc create -f pvc-finance-history.yaml
persistentvolumeclaim "pvc-finance-history" created
# oc get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc-finance-history Bound pv-finance-history 35Gi RWX 9m
# oc get pv (cluster/storage-admin)
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
pv-finance-history 35Gi RWX Retain Bound default/pvc-finance-history 5m
You can use StorageClasses in the same cluster for both legacy data (no dynamic provisioning) and with dynamic provisioning.