Chapter 28. Persistent Storage Examples
28.1. Overview
The following sections provide detailed, comprehensive instructions on setting up and configuring common storage use cases. These examples cover both the administration of persistent volumes and their security, and how to claim against the volumes as a user of the system.
- Sharing an NFS PV Across Two Pods
- Ceph-RBD Block Storage Volume
- Shared Storage Using a GlusterFS Volume
- Dynamic Provisioning Storage Using GlusterFS
- Mounting a PV to Privileged Pods
- Backing Container Image Registry with GlusterFS Storage
- Binding Persistent Volumes by Labels
- Using StorageClasses for Dynamic Provisioning
- Using StorageClasses for Existing Legacy Storage
- Configuring Azure Blob Storage for Integrated Container Image Registry
28.3. Complete Example Using Ceph RBD
28.3.1. Overview
This topic provides an end-to-end example of using an existing Ceph cluster as an OpenShift Container Platform persistent store. It is assumed that a working Ceph cluster is already set up. If not, consult the Overview of Red Hat Ceph Storage.
Persistent Storage Using Ceph Rados Block Device provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using Ceph RBD as persistent storage.
All oc commands are executed on the OpenShift Container Platform master host.
28.3.2. Installing the ceph-common Package
The ceph-common library must be installed on all schedulable OpenShift Container Platform nodes:
The OpenShift Container Platform all-in-one host is not often used to run pod workloads and, thus, is not included as a schedulable node.
# yum install -y ceph-common
28.3.3. Creating the Ceph Secret
The ceph auth get-key command is run on a Ceph MON node to display the key value for the client.admin user. The key must be base64-encoded before it is placed in the secret definition.
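For example, the key can be retrieved and encoded in one step on a MON node. This is a minimal sketch and assumes the client.admin user exists on the cluster; the output becomes the value of the secret's data.key field:

$ ceph auth get-key client.admin | base64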
Example 28.5. Ceph Secret Definition
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ== 1
type: kubernetes.io/rbd 2
Save the secret definition to a file, for example ceph-secret.yaml, then create the secret:
$ oc create -f ceph-secret.yaml secret "ceph-secret" created
Verify that the secret was created:
# oc get secret ceph-secret NAME TYPE DATA AGE ceph-secret kubernetes.io/rbd 1 23d
28.3.4. Creating the Persistent Volume
Next, before creating the PV object in OpenShift Container Platform, define the persistent volume file:
Example 28.6. Persistent Volume Object Definition Using Ceph RBD
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv 1
spec:
  capacity:
    storage: 2Gi 2
  accessModes:
    - ReadWriteOnce 3
  rbd: 4
    monitors: 5
      - 192.168.122.133:6789
    pool: rbd
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret 6
    fsType: ext4 7
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
- 1
- The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
- 2
- The amount of storage allocated to this volume.
- 3
- accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control. All block storage is defined to be single user (non-shared storage).
- 4
- This defines the volume type being used. In this case, the rbd plug-in is defined.
- 5
- This is an array of Ceph monitor IP addresses and ports.
- 6
- This is the Ceph secret, defined above. It is used to create a secure connection from OpenShift Container Platform to the Ceph server.
- 7
- This is the file system type mounted on the Ceph RBD block device.
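The ceph-image referenced by the PV must already exist in the rbd pool. If it does not, it can be created from any host that has ceph-common installed and admin credentials. This is a sketch, sized at 2048 MB to match the 2Gi PV:

$ rbd create ceph-image --size 2048 --pool rbd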
Save the PV definition to a file, for example ceph-pv.yaml, and create the persistent volume:
# oc create -f ceph-pv.yaml persistentvolume "ceph-pv" created
Verify that the persistent volume was created:
# oc get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE ceph-pv <none> 2147483648 RWO Available 2s
28.3.5. Creating the Persistent Volume Claim
A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.
Example 28.7. PVC Object Definition
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes: 1
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi 2
Save the PVC definition to a file, for example ceph-claim.yaml, and create the PVC:
# oc create -f ceph-claim.yaml
persistentvolumeclaim "ceph-claim" created

Verify that the PVC was created and bound to the expected PV:

# oc get pvc
NAME         LABELS    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
ceph-claim   <none>    Bound     ceph-pv   2Gi        RWO           21s 1
- 1
- The claim was bound to the ceph-pv PV.
28.3.6. Creating the Pod
A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Ceph RBD volume for read-write access:
Example 28.8. Pod Object Definition
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1 1
spec:
  containers:
  - name: ceph-busybox
    image: busybox 2
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-vol1 3
      mountPath: /usr/share/busybox 4
      readOnly: false
  volumes:
  - name: ceph-vol1 5
    persistentVolumeClaim:
      claimName: ceph-claim 6
- 1
- The name of this pod as displayed by oc get pod.
- 2
- The image run by this pod. In this case, we are telling busybox to sleep.
- 3 5
- The name of the volume. This name must be the same in both the containers and volumes sections.
- 4
- The mount path as seen in the container.
- 6
- The PVC that is bound to the Ceph RBD cluster.
Save the pod definition to a file, for example ceph-pod1.yaml, and create the pod:
# oc create -f ceph-pod1.yaml
pod "ceph-pod1" created

Verify that the pod was created:

# oc get pod
NAME        READY     STATUS    RESTARTS   AGE
ceph-pod1   1/1       Running   0          2m 1
- 1
- After a minute or so, the pod will be in the Running state.
28.3.7. Defining Group and Owner IDs (Optional)
When using block storage, such as Ceph RBD, the physical block storage is managed by the pod. The group ID defined in the pod becomes the group ID of both the Ceph RBD mount inside the container and the group ID of the actual storage itself. Thus, it is usually unnecessary to define a group ID in the pod specification. However, if a group ID is desired, it can be defined using fsGroup, as shown in the following pod definition fragment:
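A minimal sketch of such a fragment follows; the fsGroup value of 7777 is illustrative only:

spec:
  containers:
    - name: ...
  securityContext:
    fsGroup: 7777

The securityContext must be defined at the pod level rather than under a specific container, so that the group ID applies to all containers in the pod.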
28.3.8. Setting ceph-user-secret as Default for Projects
If you would like to make the persistent storage available to every project, you must modify the default project template. Read more on modifying the default project template. Adding this to your default project template grants access to the Ceph cluster to every user who is able to create a project.
Default Project Example
...
apiVersion: v1
kind: Template
metadata:
creationTimestamp: null
name: project-request
objects:
- apiVersion: v1
kind: Project
metadata:
annotations:
openshift.io/description: ${PROJECT_DESCRIPTION}
openshift.io/display-name: ${PROJECT_DISPLAYNAME}
openshift.io/requester: ${PROJECT_REQUESTING_USER}
creationTimestamp: null
name: ${PROJECT_NAME}
spec: {}
status: {}
- apiVersion: v1
kind: Secret
metadata:
name: ceph-user-secret
data:
key: yoursupersecretbase64keygoeshere 1
  type: kubernetes.io/rbd
...
- 1
- Place your Ceph user key here in base64 format.
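One way to apply the change, sketched here on the assumption that the cluster uses the standard project request template mechanism (file names are illustrative):

$ oc adm create-bootstrap-project-template -o yaml > template.yaml
# Edit template.yaml and add the ceph-user-secret object shown above under objects:, then:
$ oc create -f template.yaml -n default

Then point projectConfig.projectRequestTemplate at default/project-request in the master configuration file and restart the master services so that new projects are created from the modified template.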
28.4. Using Ceph RBD for dynamic provisioning
28.4.1. Overview
This topic provides a complete example of using an existing Ceph cluster for OpenShift Container Platform persistent storage. It is assumed that a working Ceph cluster is already set up. If not, consult the Overview of Red Hat Ceph Storage.
Persistent Storage Using Ceph Rados Block Device provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and how to use Ceph Rados Block Device (RBD) as persistent storage.
- Run all oc commands on the OpenShift Container Platform master host.
- The OpenShift Container Platform all-in-one host is not often used to run pod workloads and, thus, is not included as a schedulable node.
28.4.2. Creating a pool for dynamic volumes
Install the latest ceph-common package:
yum install -y ceph-common
Note: The ceph-common library must be installed on all schedulable OpenShift Container Platform nodes.

From an administrator or MON node, create a new pool for dynamic volumes, for example:
$ ceph osd pool create kube 1024
$ ceph auth get-or-create client.kube mon 'allow r, allow command "osd blacklist"' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
Note: Using the default pool of RBD is an option, but not recommended.
28.4.3. Using an existing Ceph cluster for dynamic persistent storage
To use an existing Ceph cluster for dynamic persistent storage:
Generate the client.admin base64-encoded key:
$ ceph auth get client.admin
Ceph secret definition example
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
data:
  key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ== 1
type: kubernetes.io/rbd 2
Create the Ceph secret for the client.admin:
$ oc create -f ceph-secret.yaml secret "ceph-secret" created
Verify that the secret was created:
$ oc get secret ceph-secret NAME TYPE DATA AGE ceph-secret kubernetes.io/rbd 1 5d
Create the storage class:
$ oc create -f ceph-storageclass.yaml storageclass "dynamic" created
Ceph storage class example
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: dynamic
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.1.11:6789,192.168.1.12:6789,192.168.1.13:6789 1
  adminId: admin 2
  adminSecretName: ceph-secret 3
  adminSecretNamespace: kube-system 4
  pool: kube 5
  userId: kube 6
  userSecretName: ceph-user-secret 7
- 1
- A comma-delimited list of IP addresses of the Ceph monitors. This value is required.
- 2
- The Ceph client ID that is capable of creating images in the pool. The default is admin.
- 3
- The secret name for adminId. This value is required. The secret that you provide must have the type kubernetes.io/rbd.
- 4
- The namespace for adminSecretName. The default is default.
- 5
- The Ceph RBD pool. The default is rbd, but this value is not recommended.
- 6
- The Ceph client ID used to map the Ceph RBD image. The default is the same as adminId.
- 7
- The name of the Ceph secret for userId to map the Ceph RBD image. It must exist in the same namespace as the PVCs. Unless you set the Ceph secret as the default in new projects, you must provide this parameter value.
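Unless ceph-user-secret is made the default for new projects (see Section 28.4.4), each project that requests volumes from this storage class needs a secret with the userSecretName name. A minimal sketch, assuming the client.kube user created earlier; the file and project names are illustrative and the key placeholder must be replaced with the real base64-encoded key:

$ ceph auth get-key client.kube | base64

apiVersion: v1
kind: Secret
metadata:
  name: ceph-user-secret
data:
  key: <base64-encoded client.kube key>
type: kubernetes.io/rbd

$ oc create -f ceph-user-secret.yaml -n <project_name>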
Verify that the storage class was created:
$ oc get storageclasses NAME TYPE dynamic (default) kubernetes.io/rbd
Create the PVC object definition:
PVC object definition example
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim-dynamic
spec:
  accessModes: 1
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi 2
Create the PVC:
$ oc create -f ceph-pvc.yaml persistentvolumeclaim "ceph-claim-dynamic" created
Verify that the PVC was created and bound to the expected PV:
$ oc get pvc
NAME                 STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
ceph-claim-dynamic   Bound     pvc-f548d663-3cac-11e7-9937-0024e8650c7a   2Gi        RWO           1m
Create the pod object definition:
Pod object definition example
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1 1
spec:
  containers:
  - name: ceph-busybox
    image: busybox 2
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-vol1 3
      mountPath: /usr/share/busybox 4
      readOnly: false
  volumes:
  - name: ceph-vol1
    persistentVolumeClaim:
      claimName: ceph-claim-dynamic 5
- 1
- The name of this pod as displayed by oc get pod.
- 2
- The image run by this pod. In this case, busybox is set to sleep.
- 3
- The name of the volume. This name must be the same in both the containers and volumes sections.
- 4
- The mount path in the container.
- 5
- The PVC that is bound to the Ceph RBD cluster.
Create the pod:
$ oc create -f ceph-pod1.yaml pod "ceph-pod1" created
Verify that the pod was created:
$ oc get pod NAME READY STATUS RESTARTS AGE ceph-pod1 1/1 Running 0 2m
After a minute or so, the pod status changes to Running.
28.4.4. Setting ceph-user-secret as the default for projects
To make persistent storage available to every project, you must modify the default project template. Adding this to your default project template grants access to the Ceph cluster to every user who is able to create a project. See modifying the default project template for more information.
Default project example
...
apiVersion: v1
kind: Template
metadata:
creationTimestamp: null
name: project-request
objects:
- apiVersion: v1
kind: Project
metadata:
annotations:
openshift.io/description: ${PROJECT_DESCRIPTION}
openshift.io/display-name: ${PROJECT_DISPLAYNAME}
openshift.io/requester: ${PROJECT_REQUESTING_USER}
creationTimestamp: null
name: ${PROJECT_NAME}
spec: {}
status: {}
- apiVersion: v1
kind: Secret
metadata:
name: ceph-user-secret
data:
key: QVFCbEV4OVpmaGJtQ0JBQW55d2Z0NHZtcS96cE42SW1JVUQvekE9PQ== 1
  type: kubernetes.io/rbd
...
- 1
- Place your Ceph user key here in base64 format.
28.5. Complete Example Using GlusterFS
28.5.1. Overview
This topic provides an end-to-end example of how to use an existing converged mode, independent mode, or standalone Red Hat Gluster Storage cluster as persistent storage for OpenShift Container Platform. It is assumed that a working Red Hat Gluster Storage cluster is already set up. For help installing converged mode or independent mode, see Persistent Storage Using Red Hat Gluster Storage. For standalone Red Hat Gluster Storage, consult the Red Hat Gluster Storage Administration Guide.
For an end-to-end example of how to dynamically provision GlusterFS volumes, see Complete Example Using GlusterFS for Dynamic Provisioning.
All oc commands are executed on the OpenShift Container Platform master host.
28.5.2. Prerequisites
To access GlusterFS volumes, the mount.glusterfs command must be available on all schedulable nodes. For RPM-based systems, the glusterfs-fuse package must be installed:
# yum install glusterfs-fuse
This package comes installed on every RHEL system. However, it is recommended to update to the latest available version from Red Hat Gluster Storage if your servers use x86_64 architecture. To do this, the following RPM repository must be enabled:
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms
If glusterfs-fuse is already installed on the nodes, ensure that the latest version is installed:
# yum update glusterfs-fuse
By default, SELinux does not allow writing from a pod to a remote Red Hat Gluster Storage server. To enable writing to Red Hat Gluster Storage volumes with SELinux on, run the following on each node running GlusterFS:
$ sudo setsebool -P virt_sandbox_use_fusefs on 1
$ sudo setsebool -P virt_use_fusefs on
- 1
- The -P option makes the boolean persistent between reboots.
The virt_sandbox_use_fusefs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.
If you use Atomic Host, the SELinux booleans are cleared when you upgrade Atomic Host, so you must set these boolean values again after each upgrade.
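To confirm the booleans are set, for example after an Atomic Host upgrade, you can check them with getsebool; both should report on:

$ getsebool virt_sandbox_use_fusefs virt_use_fusefs
virt_sandbox_use_fusefs --> on
virt_use_fusefs --> on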
28.5.3. Static Provisioning
- To enable static provisioning, first create a GlusterFS volume. See the Red Hat Gluster Storage Administration Guide for information on how to do this using the gluster command-line interface, or the heketi project site for information on how to do this using heketi-cli. For this example, the volume will be named myVol1.
- Define the following Service and Endpoints in gluster-endpoints.yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster 1
spec:
  ports:
  - port: 1
---
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster 2
subsets:
  - addresses:
      - ip: 192.168.122.221 3
    ports:
      - port: 1 4
  - addresses:
      - ip: 192.168.122.222 5
    ports:
      - port: 1 6
  - addresses:
      - ip: 192.168.122.223 7
    ports:
      - port: 1 8
From the OpenShift Container Platform master host, create the Service and Endpoints:
$ oc create -f gluster-endpoints.yaml service "glusterfs-cluster" created endpoints "glusterfs-cluster" created
Verify that the Service and Endpoints were created:
$ oc get services NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE glusterfs-cluster 172.30.205.34 <none> 1/TCP <none> 44s $ oc get endpoints NAME ENDPOINTS AGE docker-registry 10.1.0.3:5000 4h glusterfs-cluster 192.168.122.221:1,192.168.122.222:1,192.168.122.223:1 11s kubernetes 172.16.35.3:8443 4d
Note: Endpoints are unique per project. Each project accessing the GlusterFS volume needs its own Endpoints.
In order to access the volume, the container must run with either a user ID (UID) or group ID (GID) that has access to the file system on the volume. This information can be discovered in the following manner:
$ mkdir -p /mnt/glusterfs/myVol1
$ mount -t glusterfs 192.168.122.221:/myVol1 /mnt/glusterfs/myVol1
$ ls -lnZ /mnt/glusterfs/
drwxrwx---. 592 590 system_u:object_r:fusefs_t:s0    myVol1 1 2
- 1
- The UID is 592.
- 2
- The GID is 590.
Define the following PersistentVolume (PV) in gluster-pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-default-volume 1
  annotations:
    pv.beta.kubernetes.io/gid: "590" 2
spec:
  capacity:
    storage: 2Gi 3
  accessModes: 4
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster 5
    path: myVol1 6
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
- 1
- The name of the volume.
- 2
- The GID on the root of the GlusterFS volume.
- 3
- The amount of storage allocated to this volume.
- 4
- accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
- 5
- The Endpoints resource previously created.
- 6
- The GlusterFS volume that will be accessed.
From the OpenShift Container Platform master host, create the PV:
$ oc create -f gluster-pv.yaml
Verify that the PV was created:
$ oc get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE gluster-default-volume <none> 2147483648 RWX Available 2s
Create a PersistentVolumeClaim (PVC) that will bind to the new PV in gluster-claim.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim 1
spec:
  accessModes:
    - ReadWriteMany 2
  resources:
    requests:
      storage: 1Gi 3
From the OpenShift Container Platform master host, create the PVC:
$ oc create -f gluster-claim.yaml
Verify that the PV and PVC are bound:
$ oc get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE gluster-pv <none> 1Gi RWX Available gluster-claim 37s $ oc get pvc NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE gluster-claim <none> Bound gluster-pv 1Gi RWX 24s
PVCs are unique per project. Each project accessing the GlusterFS volume needs its own PVC. PVs are not bound to a single project, so PVCs across multiple projects may refer to the same PV.
28.5.4. Using the Storage
At this point, you have a GlusterFS volume bound to a PVC. You can now use this PVC in a pod.
Create the pod object definition:
apiVersion: v1
kind: Pod
metadata:
  name: hello-openshift-pod
  labels:
    name: hello-openshift-pod
spec:
  containers:
  - name: hello-openshift-pod
    image: openshift/hello-openshift
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: gluster-vol1
      mountPath: /usr/share/nginx/html
      readOnly: false
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster-claim 1
- 1
- The name of the PVC created in the previous step.
From the OpenShift Container Platform master host, create the pod:
# oc create -f hello-openshift-pod.yaml pod "hello-openshift-pod" created
View the pod. Give it a few minutes, as it might need to download the image if it does not already exist:
# oc get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE hello-openshift-pod 1/1 Running 0 9m 10.38.0.0 node1
oc exec into the container and create an index.html file in the mountPath of the pod definition:

$ oc exec -ti hello-openshift-pod /bin/sh
$ cd /usr/share/nginx/html
$ echo 'Hello OpenShift!!!' > index.html
$ ls
index.html
$ exit
Now curl the URL of the pod:

# curl http://10.38.0.0
Hello OpenShift!!!
Delete the pod, recreate it, and wait for it to come up:
# oc delete pod hello-openshift-pod pod "hello-openshift-pod" deleted # oc create -f hello-openshift-pod.yaml pod "hello-openshift-pod" created # oc get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE hello-openshift-pod 1/1 Running 0 9m 10.37.0.0 node1
Now curl the pod again; it should still have the same data as before. Note that its IP address may have changed:

# curl http://10.37.0.0
Hello OpenShift!!!
Check that the index.html file was written to GlusterFS storage by doing the following on any of the nodes:
$ mount | grep heketi /dev/mapper/VolGroup00-LogVol00 on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota) /dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_1e730a5462c352835055018e1874e578 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_1e730a5462c352835055018e1874e578 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota) /dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_d8c06e606ff4cc29ccb9d018c73ee292 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota) $ cd /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292/brick $ ls index.html $ cat index.html Hello OpenShift!!!
28.6. Complete Example Using GlusterFS for Dynamic Provisioning
28.6.1. Overview
This topic provides an end-to-end example of how to use an existing converged mode, independent mode, or standalone Red Hat Gluster Storage cluster as dynamic persistent storage for OpenShift Container Platform. It is assumed that a working Red Hat Gluster Storage cluster is already set up. For help installing converged mode or independent mode, see Persistent Storage Using Red Hat Gluster Storage. For standalone Red Hat Gluster Storage, consult the Red Hat Gluster Storage Administration Guide.
All oc commands are executed on the OpenShift Container Platform master host.
28.6.2. Prerequisites
To access GlusterFS volumes, the mount.glusterfs command must be available on all schedulable nodes. For RPM-based systems, the glusterfs-fuse package must be installed:
# yum install glusterfs-fuse
This package comes installed on every RHEL system. However, it is recommended to update to the latest available version from Red Hat Gluster Storage if your servers use x86_64 architecture. To do this, the following RPM repository must be enabled:
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms
If glusterfs-fuse is already installed on the nodes, ensure that the latest version is installed:
# yum update glusterfs-fuse
By default, SELinux does not allow writing from a pod to a remote Red Hat Gluster Storage server. To enable writing to Red Hat Gluster Storage volumes with SELinux on, run the following on each node running GlusterFS:
$ sudo setsebool -P virt_sandbox_use_fusefs on 1
$ sudo setsebool -P virt_use_fusefs on
- 1
- The -P option makes the boolean persistent between reboots.
The virt_sandbox_use_fusefs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.
If you use Atomic Host, the SELinux booleans are cleared when you upgrade Atomic Host, so you must set these boolean values again after each upgrade.
28.6.3. Dynamic Provisioning
To enable dynamic provisioning, first create a StorageClass object definition. The definition below is based on the minimum requirements needed for this example to work with OpenShift Container Platform. See Dynamic Provisioning and Creating Storage Classes for additional parameters and specification definitions.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.42.0.0:8080" 1
  restauthenabled: "false" 2
- 1
- The heketi server URL.
- 2
- Since authentication is not enabled in this example, set to false.
From the OpenShift Container Platform master host, create the StorageClass:
# oc create -f gluster-storage-class.yaml storageclass "glusterfs" created
Create a PVC using the newly-created StorageClass. For example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 30Gi
  storageClassName: glusterfs
From the OpenShift Container Platform master host, create the PVC:
# oc create -f glusterfs-dyn-pvc.yaml persistentvolumeclaim "gluster1" created
View the PVC to see that the volume was dynamically created and bound to the PVC:
# oc get pvc NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE gluster1 Bound pvc-78852230-d8e2-11e6-a3fa-0800279cf26f 30Gi RWX glusterfs 42s
28.6.4. Using the Storage
At this point, you have a dynamically created GlusterFS volume bound to a PVC. You can now utilize this PVC in a pod.
Create the pod object definition:
apiVersion: v1
kind: Pod
metadata:
  name: hello-openshift-pod
  labels:
    name: hello-openshift-pod
spec:
  containers:
  - name: hello-openshift-pod
    image: openshift/hello-openshift
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: gluster-vol1
      mountPath: /usr/share/nginx/html
      readOnly: false
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster1 1
- 1
- The name of the PVC created in the previous step.
From the OpenShift Container Platform master host, create the pod:
# oc create -f hello-openshift-pod.yaml pod "hello-openshift-pod" created
View the pod. Give it a few minutes, as it might need to download the image if it does not already exist:
# oc get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE hello-openshift-pod 1/1 Running 0 9m 10.38.0.0 node1
oc exec into the container and create an index.html file in the mountPath of the pod definition:

$ oc exec -ti hello-openshift-pod /bin/sh
$ cd /usr/share/nginx/html
$ echo 'Hello OpenShift!!!' > index.html
$ ls
index.html
$ exit
Now curl the URL of the pod:

# curl http://10.38.0.0
Hello OpenShift!!!
Delete the pod, recreate it, and wait for it to come up:
# oc delete pod hello-openshift-pod pod "hello-openshift-pod" deleted # oc create -f hello-openshift-pod.yaml pod "hello-openshift-pod" created # oc get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE hello-openshift-pod 1/1 Running 0 9m 10.37.0.0 node1
Now curl the pod again; it should still have the same data as before. Note that its IP address may have changed:

# curl http://10.37.0.0
Hello OpenShift!!!
Check that the index.html file was written to GlusterFS storage by doing the following on any of the nodes:
$ mount | grep heketi /dev/mapper/VolGroup00-LogVol00 on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota) /dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_1e730a5462c352835055018e1874e578 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_1e730a5462c352835055018e1874e578 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota) /dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_d8c06e606ff4cc29ccb9d018c73ee292 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota) $ cd /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292/brick $ ls index.html $ cat index.html Hello OpenShift!!!
28.7. Mounting Volumes on Privileged Pods
28.7.1. Overview
Persistent volumes can be mounted to pods with the privileged security context constraint (SCC) attached.
While this topic uses GlusterFS as a sample use-case for mounting volumes onto privileged pods, it can be adapted to use any supported storage plug-in.
28.7.2. Prerequisites
- An existing Gluster volume.
- glusterfs-fuse installed on all hosts.
Definitions for GlusterFS:
- Endpoints and services: gluster-endpoints-service.yaml and gluster-endpoints.yaml
- Persistent volumes: gluster-pv.yaml
- Persistent volume claims: gluster-pvc.yaml
- Privileged pods: gluster-S3-pod.yaml
- A user with the cluster-admin role binding. For this guide, that user is called admin.
28.7.3. Creating the Persistent Volume
Creating the PersistentVolume makes the storage accessible to users, regardless of projects.
As the admin, create the service, endpoint object, and persistent volume:
$ oc create -f gluster-endpoints-service.yaml $ oc create -f gluster-endpoints.yaml $ oc create -f gluster-pv.yaml
Verify that the objects were created:
$ oc get svc NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE gluster-cluster 172.30.151.58 <none> 1/TCP <none> 24s
$ oc get ep NAME ENDPOINTS AGE gluster-cluster 192.168.59.102:1,192.168.59.103:1 2m
$ oc get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE gluster-default-volume <none> 2Gi RWX Available 2d
28.7.4. Creating a Regular User
Adding a regular user to the privileged SCC (or to a group given access to the SCC) allows them to run privileged pods:
As the admin, add a user to the SCC:
$ oc adm policy add-scc-to-user privileged <username>
Log in as the regular user:
$ oc login -u <username> -p <password>
Then, create a new project:
$ oc new-project <project_name>
28.7.5. Creating the Persistent Volume Claim
As a regular user, create the PersistentVolumeClaim to access the volume:
$ oc create -f gluster-pvc.yaml -n <project_name>
Define your pod to access the claim:
Example 28.10. Pod Definition
apiVersion: v1
id: gluster-S3-pvc
kind: Pod
metadata:
  name: gluster-nginx-priv
spec:
  containers:
    - name: gluster-nginx-priv
      image: fedora/nginx
      volumeMounts:
        - mountPath: /mnt/gluster 1
          name: gluster-volume-claim
      securityContext:
        privileged: true
  volumes:
    - name: gluster-volume-claim
      persistentVolumeClaim:
        claimName: gluster-claim 2
- 1
- The path inside the pod where the Gluster volume is mounted.
- 2
- The name of the claim created earlier.
Upon pod creation, the mount directory is created and the volume is attached to that mount point.
As regular user, create a pod from the definition:
$ oc create -f gluster-S3-pod.yaml
Verify that the pod created successfully:
$ oc get pods NAME READY STATUS RESTARTS AGE gluster-S3-pod 1/1 Running 0 36m
It can take several minutes for the pod to create.
28.7.6. Verifying the Setup
28.7.6.1. Checking the Pod SCC
Export the pod configuration:
$ oc get -o yaml --export pod <pod_name>
Examine the output. Check that openshift.io/scc has the value of privileged:

Example 28.11. Export Snippet

metadata:
  annotations:
    openshift.io/scc: privileged
28.7.6.2. Verifying the Mount
Access the pod and check that the volume is mounted:
$ oc rsh <pod_name> [root@gluster-S3-pvc /]# mount
Examine the output for the Gluster volume:
Example 28.12. Volume Mount
192.168.59.102:gv0 on /mnt/gluster type fuse.gluster (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
28.8. Mount Propagation
28.8.1. Overview
Mount propagation allows for sharing volumes mounted by a container to other containers in the same pod, or even to other pods on the same node.
28.8.2. Values
Mount propagation of a volume is controlled by the mountPropagation field in Container.volumeMounts. Its values are:

- None - This volume mount does not receive any subsequent mounts that are mounted to this volume or any of its subdirectories by the host. In similar fashion, no mounts created by the container are visible on the host. This is the default mode, and is equal to private mount propagation in Linux kernels.
- HostToContainer - This volume mount receives all subsequent mounts that are mounted to this volume or any of its subdirectories. In other words, if the host mounts anything inside the volume mount, the container sees it mounted there. This mode is equal to rslave mount propagation in Linux kernels.
- Bidirectional - This volume mount behaves the same as the HostToContainer mount. In addition, all volume mounts created by the container are propagated back to the host and to all containers of all pods that use the same volume. A typical use case for this mode is a pod with a FlexVolume or CSI driver, or a pod that needs to mount something on the host using a hostPath volume. This mode is equal to rshared mount propagation in Linux kernels.

Bidirectional mount propagation can be dangerous. It can damage the host operating system and therefore it is allowed only in privileged containers. Familiarity with Linux kernel behavior is strongly recommended. In addition, any volume mounts created by containers in pods must be destroyed, or unmounted, by the containers on termination.
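For example, a container that should see mounts the host creates under a hostPath volume could declare the mount as follows. This is a minimal sketch; the pod name, image, and paths are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: propagation-example
spec:
  containers:
  - name: example
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-mnt
      mountPath: /mnt/host
      # Mounts made by the host under /mnt become visible inside the container.
      mountPropagation: HostToContainer
  volumes:
  - name: host-mnt
    hostPath:
      path: /mnt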
28.8.3. Configuration
Before mount propagation can work properly on some deployments, such as CoreOS, Red Hat Enterprise Linux/CentOS, or Ubuntu, the mount share must be configured correctly in Docker.
Procedure
Edit your Docker’s systemd service file. Set MountFlags as follows:

MountFlags=shared

Alternatively, remove MountFlags=slave, if present.

Restart the Docker daemon:

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
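Rather than editing the unit file in place, the same setting can also be applied with a systemd drop-in, which survives package updates. A sketch; the drop-in file name is arbitrary:

# /etc/systemd/system/docker.service.d/99-mount-flags.conf
[Service]
MountFlags=shared

After creating the drop-in, run the daemon-reload and restart commands shown above.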
28.9. Switching an Integrated OpenShift Container Registry to GlusterFS
28.9.1. Overview
This topic reviews how to attach a GlusterFS volume to an integrated OpenShift Container Registry. This can be done with any of converged mode, independent mode, or standalone Red Hat Gluster Storage. It is assumed that the registry has already been started and a volume has been created.
28.9.2. Prerequisites
- An existing registry deployed without configuring storage.
- An existing GlusterFS volume.
- glusterfs-fuse installed on all schedulable nodes.
- A user with the cluster-admin role binding. For this guide, that user is admin.

All oc commands are executed on the master node as the admin user.
28.9.3. Manually Provision the GlusterFS PersistentVolumeClaim
- To enable static provisioning, first create a GlusterFS volume. See the Red Hat Gluster Storage Administration Guide for information on how to do this using the gluster command-line interface, or the heketi project site for information on how to do this using heketi-cli. For this example, the volume will be named myVol1.
- Define the following Service and Endpoints in gluster-endpoints.yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster 1
spec:
  ports:
  - port: 1
---
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster 2
subsets:
  - addresses:
      - ip: 192.168.122.221 3
    ports:
      - port: 1 4
  - addresses:
      - ip: 192.168.122.222 5
    ports:
      - port: 1 6
  - addresses:
      - ip: 192.168.122.223 7
    ports:
      - port: 1 8
From the OpenShift Container Platform master host, create the Service and Endpoints:
$ oc create -f gluster-endpoints.yaml service "glusterfs-cluster" created endpoints "glusterfs-cluster" created
Verify that the Service and Endpoints were created:
$ oc get services NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE glusterfs-cluster 172.30.205.34 <none> 1/TCP <none> 44s $ oc get endpoints NAME ENDPOINTS AGE docker-registry 10.1.0.3:5000 4h glusterfs-cluster 192.168.122.221:1,192.168.122.222:1,192.168.122.223:1 11s kubernetes 172.16.35.3:8443 4d
Note: Endpoints are unique per project. Each project accessing the GlusterFS volume needs its own Endpoints.
In order to access the volume, the container must run with either a user ID (UID) or group ID (GID) that has access to the file system on the volume. This information can be discovered in the following manner:
$ mkdir -p /mnt/glusterfs/myVol1
$ mount -t glusterfs 192.168.122.221:/myVol1 /mnt/glusterfs/myVol1
$ ls -lnZ /mnt/glusterfs/
drwxrwx---. 592 590 system_u:object_r:fusefs_t:s0    myVol1 1 2
- 1
- The UID is 592.
- 2
- The GID is 590.
Define the following PersistentVolume (PV) in gluster-pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-default-volume 1
  annotations:
    pv.beta.kubernetes.io/gid: "590" 2
spec:
  capacity:
    storage: 2Gi 3
  accessModes: 4
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster 5
    path: myVol1 6
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
- 1
- The name of the volume.
- 2
- The GID on the root of the GlusterFS volume.
- 3
- The amount of storage allocated to this volume.
- 4
- accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
- 5
- The Endpoints resource previously created.
- 6
- The GlusterFS volume that will be accessed.
From the OpenShift Container Platform master host, create the PV:
$ oc create -f gluster-pv.yaml
Verify that the PV was created:
$ oc get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE gluster-default-volume <none> 2147483648 RWX Available 2s
Create a PersistentVolumeClaim (PVC) that will bind to the new PV in gluster-claim.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim 1
spec:
  accessModes:
    - ReadWriteMany 2
  resources:
    requests:
      storage: 1Gi 3
From the OpenShift Container Platform master host, create the PVC:
$ oc create -f gluster-claim.yaml
Verify that the PV and PVC are bound:
$ oc get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE gluster-pv <none> 1Gi RWX Available gluster-claim 37s $ oc get pvc NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE gluster-claim <none> Bound gluster-pv 1Gi RWX 24s
PVCs are unique per project. Each project accessing the GlusterFS volume needs its own PVC. PVs are not bound to a single project, so PVCs across multiple projects may refer to the same PV.
28.9.4. Attach the PersistentVolumeClaim to the Registry
Before moving forward, ensure that the docker-registry service is running.
$ oc get svc NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE docker-registry 172.30.167.194 <none> 5000/TCP docker-registry=default 18m
If either the docker-registry service or its associated pod is not running, refer back to the registry setup instructions for troubleshooting before continuing.
Then, attach the PVC:
$ oc set volume deploymentconfigs/docker-registry --add --name=registry-storage -t pvc \
    --claim-name=gluster-claim --overwrite
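To confirm the swap, running oc set volume with no --add or --remove action lists the volumes on the deployment configuration; the registry-storage volume should now be backed by pvc/gluster-claim. A sketch:

$ oc set volume deploymentconfigs/docker-registry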
Setting up the Registry provides more information on using an OpenShift Container Registry.
28.10. Binding Persistent Volumes by Labels
28.10.1. Overview
This topic provides an end-to-end example for binding persistent volume claims (PVCs) to persistent volumes (PVs) by defining labels in the PV and matching selectors in the PVC. This feature is available for all storage options. It is assumed that an OpenShift Container Platform cluster contains persistent storage resources which are available for binding by PVCs.
A Note on Labels and Selectors
Labels are an OpenShift Container Platform feature that support user-defined tags (key-value pairs) as part of an object’s specification. Their primary purpose is to enable the arbitrary grouping of objects by defining identical labels among them. These labels can then be targeted by selectors to match all objects with specified label values. It is this functionality we will take advantage of to enable our PVC to bind to our PV. For a more in-depth look at labels, see Pods and Services.
For this example, we will be using modified GlusterFS PV and PVC specifications. However, implementation of selectors and labels is generic across all storage options. See the relevant storage option for your volume provider to learn more about its unique configuration.
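As a quick illustration of selectors at work, once labeled PVs exist a cluster-admin can list them by label; the values below match the PV defined in the next section:

# oc get pv -l storage-tier=gold,aws-availability-zone=us-east-1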
28.10.1.1. Assumptions
It is assumed that you have:
- An existing OpenShift Container Platform cluster with at least one master and one node
- At least one supported storage volume
- A user with cluster-admin privileges
28.10.2. Defining Specifications
These specifications are tailored to GlusterFS. Consult the relevant storage option for your volume provider to learn more about its unique configuration.
28.10.2.1. Persistent Volume with Labels
Example 28.13. glusterfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-volume
  labels: 1
    storage-tier: gold
    aws-availability-zone: us-east-1
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster 2
    path: myVol1
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
- 1
- Use labels to identify common attributes or characteristics shared among volumes. In this case, we defined the Gluster volume to have a custom attribute (key) named storage-tier with a value of gold assigned. A claim will be able to select a PV with storage-tier=gold to match this PV.
- 2
- Endpoints define the Gluster trusted pool and are discussed below.
28.10.2.2. Persistent Volume Claim with Selectors
A claim with a selector stanza (see example below) attempts to match existing, unclaimed, and non-prebound PVs. The existence of a PVC selector ignores a PV’s capacity. However, accessModes are still considered in the matching criteria.
It is important to note that a claim must match all of the key-value pairs included in its selector stanza. If no PV matches the claim, then the PVC will remain unbound (Pending). A PV can subsequently be created and the claim will automatically check for a label match.
Example 28.14. glusterfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: gluster-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
selector: 1
matchLabels:
storage-tier: gold
aws-availability-zone: us-east-1
- 1
- The selector stanza defines all labels necessary in a PV in order to match this claim.
28.10.2.3. Volume Endpoints
To attach the PV to the Gluster volume, endpoints should be configured before creating our objects.
Example 28.15. glusterfs-ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 192.168.122.221
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.122.222
    ports:
      - port: 1
28.10.2.4. Deploy the PV, PVC, and Endpoints
For this example, run the oc commands as a cluster-admin privileged user. In a production environment, cluster clients might be expected to define and create the PVC.
# oc create -f glusterfs-ep.yaml
endpoints "glusterfs-cluster" created

# oc create -f glusterfs-pv.yaml
persistentvolume "gluster-volume" created

# oc create -f glusterfs-pvc.yaml
persistentvolumeclaim "gluster-claim" created
Lastly, confirm that the PV and PVC bound successfully.
# oc get pv,pvc NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE gluster-volume 2Gi RWX Bound gfs-trial/gluster-claim 7s NAME STATUS VOLUME CAPACITY ACCESSMODES AGE gluster-claim Bound gluster-volume 2Gi RWX 7s
PVCs are local to a project, whereas PVs are a cluster-wide, global resource. Developers and non-administrator users may not have access to see all (or any) of the available PVs.
28.11. Using Storage Classes for Dynamic Provisioning
28.11.1. Overview
In these examples, we walk through a few scenarios of various configurations of StorageClasses and Dynamic Provisioning using Google Cloud Platform Compute Engine (GCE). These examples assume some familiarity with Kubernetes, GCE, and Persistent Disks, and that OpenShift Container Platform is installed and properly configured to use GCE.
28.11.2. Scenario 1: Basic Dynamic Provisioning with Two Types of StorageClasses
StorageClasses can be used to differentiate and delineate storage levels and usages. In this case, the cluster-admin or storage-admin sets up two distinct classes of storage in GCE.

- slow: Cheap, efficient, and optimized for sequential data operations (slower reading and writing)
- fast: Optimized for higher rates of random IOPS and sustained throughput (faster reading and writing)

By creating these StorageClasses, the cluster-admin or storage-admin allows users to create claims requesting a particular level of service by StorageClass name.

Example 28.16. StorageClass Slow Object Definition

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow 1
provisioner: kubernetes.io/gce-pd 2
parameters:
  type: pd-standard 3
  zone: us-east1-d 4
- 1
- Name of the StorageClass.
- 2
- The provisioner plug-in to be used. This is a required field for StorageClasses.
- 3
- PD type. This example uses pd-standard, which has a slightly lower cost, rate of sustained IOPS, and throughput versus pd-ssd, which carries more sustained IOPS and throughput.
- 4
- The zone is required.

Example 28.17. StorageClass Fast Object Definition

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zone: us-east1-d

As a cluster-admin or storage-admin, save both definitions as YAML files, for example slow-gce.yaml and fast-gce.yaml. Then create the StorageClasses:

# oc create -f slow-gce.yaml
storageclass "slow" created

# oc create -f fast-gce.yaml
storageclass "fast" created

# oc get storageclass
NAME   TYPE
fast   kubernetes.io/gce-pd
slow   kubernetes.io/gce-pd

cluster-admin or storage-admin users are responsible for relaying the correct StorageClass name to the correct users, groups, and projects.

As a regular user, create a new project:

# oc new-project rh-eng

Create the claim YAML definition and save it to a file (pvc-fast.yaml):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-engineering
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: fast

Add the claim with the oc create command:

# oc create -f pvc-fast.yaml
persistentvolumeclaim "pvc-engineering" created

Check to see if your claim is bound:

# oc get pvc
NAME              STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
pvc-engineering   Bound     pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           2m

Since this claim was created and bound in the rh-eng project, it can be shared by any user in the same project.

As a cluster-admin or storage-admin user, view the recently provisioned Persistent Volume (PV):

# oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                    REASON    AGE
pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           Delete          Bound     rh-eng/pvc-engineering             5m

Notice the RECLAIMPOLICY is Delete by default for all dynamically provisioned volumes. This means the volume only lasts as long as the claim still exists in the system. If you delete the claim, the volume is also deleted and all data on the volume is lost.

Finally, check the GCE console. The new disk has been created and is ready for use.

kubernetes-dynamic-pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   SSD persistent disk   10 GB   us-east1-d

Pods can now reference the persistent volume claim and start using the volume.
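A minimal pod sketch that consumes the claim might look like the following; the pod name, image, and mount path are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-engineering-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-engineering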
28.11.3. Scenario 2: How to enable Default StorageClass behavior for a Cluster
In this example, a cluster-admin or storage-admin enables a default StorageClass for all other users and projects that do not explicitly specify a StorageClass in their claim. This is useful for a cluster-admin or storage-admin to provide easy management of a storage volume without having to set up or communicate specialized StorageClasses across the cluster.

This example builds upon Section 28.11.2, “Scenario 1: Basic Dynamic Provisioning with Two Types of StorageClasses”. The cluster-admin or storage-admin will create another StorageClass for designation as the default StorageClass.
Example 28.18. Default StorageClass Object Definition
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: generic 1
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" 2
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zone: us-east1-d

As a cluster-admin or storage-admin, save the definition to a YAML file (generic-gce.yaml), then create the StorageClass:

# oc create -f generic-gce.yaml
storageclass "generic" created

# oc get storageclass
NAME      TYPE
generic   kubernetes.io/gce-pd
fast      kubernetes.io/gce-pd
slow      kubernetes.io/gce-pd

As a regular user, create a new claim definition without any StorageClass requirement and save it to a file (generic-pvc.yaml).
Example 28.19. default Storage Claim Object Definition
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-engineering2
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

Execute it and check that the claim is bound:

# oc create -f generic-pvc.yaml
persistentvolumeclaim "pvc-engineering2" created

# oc get pvc
NAME               STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
pvc-engineering    Bound     pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           41m
pvc-engineering2   Bound     pvc-a9f70544-8bfd-11e6-9962-42010af00004   5Gi        RWX           7s 1
- 1
- pvc-engineering2 is bound to a dynamically provisioned volume by default.

As a cluster-admin or storage-admin, view the Persistent Volumes defined so far:

# oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                     REASON    AGE
pvc-a9f70544-8bfd-11e6-9962-42010af00004   5Gi        RWX           Delete          Bound     rh-eng/pvc-engineering2             5m 1
pvc-ba4612ce-8b4d-11e6-9962-42010af00004   5Gi        RWO           Delete          Bound     mytest/gce-dyn-claim1               21h
pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           Delete          Bound     rh-eng/pvc-engineering              46m 2
- 1
- This PV was bound to our default dynamic volume from the default StorageClass.
- 2
- This PV was bound to our first PVC from Section 28.11.2, “Scenario 1: Basic Dynamic Provisioning with Two Types of StorageClasses” with our fast StorageClass.

Create a manually provisioned disk using GCE (not dynamically provisioned). Then create a Persistent Volume that connects to the new GCE disk (pv-manual-gce.yaml).
Example 28.20. Manual PV Object Definition
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-manual-gce
spec:
  capacity:
    storage: 35Gi
  accessModes:
    - ReadWriteMany
  gcePersistentDisk:
    readOnly: false
    pdName: the-newly-created-gce-PD
    fsType: ext4
Execute the object definition file:

# oc create -f pv-manual-gce.yaml

Now view the PVs again. Notice that a pv-manual-gce volume is Available:

# oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                     REASON    AGE
pv-manual-gce                              35Gi       RWX           Retain          Available                                       4s
pvc-a9f70544-8bfd-11e6-9962-42010af00004   5Gi        RWX           Delete          Bound       rh-eng/pvc-engineering2             12m
pvc-ba4612ce-8b4d-11e6-9962-42010af00004   5Gi        RWO           Delete          Bound       mytest/gce-dyn-claim1               21h
pvc-e9b4fef7-8bf7-11e6-9962-42010af00004   10Gi       RWX           Delete          Bound       rh-eng/pvc-engineering              53m

Now create another claim identical to the generic-pvc.yaml PVC definition, but change the name and do not set a storage class name.
Example 28.21. Claim Object Definition
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-engineering3
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 15Gi
Because default StorageClass is enabled in this instance, the manually created PV does not satisfy the claim request. The user receives a new dynamically provisioned Persistent Volume.
# oc get pvc NAME STATUS VOLUME CAPACITY ACCESSMODES AGE pvc-engineering Bound pvc-e9b4fef7-8bf7-11e6-9962-42010af00004 10Gi RWX 1h pvc-engineering2 Bound pvc-a9f70544-8bfd-11e6-9962-42010af00004 5Gi RWX 19m pvc-engineering3 Bound pvc-6fa8e73b-8c00-11e6-9962-42010af00004 15Gi RWX 6s
Since the default StorageClass is enabled on this system, you would need to create the PV in the default StorageClass for the manually created Persistent Volume to get bound to the above claim instead of having a new dynamically provisioned volume bound to the claim.
To fix this, the cluster-admin or storage-admin user simply needs to create another GCE disk or delete the first manual PV and use a PV object definition that assigns a StorageClass name (pv-manual-gce2.yaml), if necessary:
Example 28.22. Manual PV Spec with default StorageClass name
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-manual-gce2
spec:
capacity:
storage: 35Gi
accessModes:
- ReadWriteMany
gcePersistentDisk:
readOnly: false
pdName: the-newly-created-gce-PD
fsType: ext4
storageClassName: generic 1
- 1
- The name of the previously created generic StorageClass.
Execute the object definition file:
# oc create -f pv-manual-gce2.yaml
List the PVs:
# oc get pv NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE pv-manual-gce 35Gi RWX Retain Available 4s 1 pv-manual-gce2 35Gi RWX Retain Bound rh-eng/pvc-engineering3 4s 2 pvc-a9f70544-8bfd-11e6-9962-42010af00004 5Gi RWX Delete Bound rh-eng/pvc-engineering2 12m pvc-ba4612ce-8b4d-11e6-9962-42010af00004 5Gi RWO Delete Bound mytest/gce-dyn-claim1 21h pvc-e9b4fef7-8bf7-11e6-9962-42010af00004 10Gi RWX Delete Bound rh-eng/pvc-engineering 53m
Notice that all dynamically provisioned volumes by default have a RECLAIMPOLICY of Delete. Once the PVC dynamically bound to the PV is deleted, the GCE volume is deleted and all data is lost. However, the manually created PV has a default RECLAIMPOLICY of Retain.
28.12. Using Storage Classes for Existing Legacy Storage
28.12.1. Overview
In this example, a legacy data volume exists and a cluster-admin or storage-admin needs to make it available for consumption in a particular project. Using StorageClasses decreases the likelihood of other users and projects gaining access to this volume from a claim, because the claim would have to have an exact matching value for the StorageClass name. This example also disables dynamic provisioning. This example assumes:

- Some familiarity with OpenShift Container Platform, GCE, and Persistent Disks
- OpenShift Container Platform is properly configured to use GCE.
28.12.1.1. Scenario 1: Link StorageClass to existing Persistent Volume with Legacy Data
As a cluster-admin or storage-admin, define and create the StorageClass for historical financial data.
Example 28.23. StorageClass finance-history Object Definitions
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: finance-history 1
provisioner: no-provisioning 2
parameters: 3

Save the definitions to a YAML file (finance-history-storageclass.yaml) and create the StorageClass.

# oc create -f finance-history-storageclass.yaml
storageclass "finance-history" created

# oc get storageclass
NAME              TYPE
finance-history   no-provisioning

cluster-admin or storage-admin users are responsible for relaying the correct StorageClass name to the correct users, groups, and projects.

The StorageClass exists. A cluster-admin or storage-admin can create the Persistent Volume (PV) for use with the StorageClass. Create a manually provisioned disk using GCE (not dynamically provisioned) and a Persistent Volume that connects to the new GCE disk (gce-pv.yaml).
Example 28.24. Finance History PV Object
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-finance-history
spec:
  capacity:
    storage: 35Gi
  accessModes:
    - ReadWriteMany
  gcePersistentDisk:
    readOnly: false
    pdName: the-existing-PD-volume-name-that-contains-the-valuable-data 1
    fsType: ext4
  storageClassName: finance-history 2

As a cluster-admin or storage-admin, create and view the PV.

# oc create -f gce-pv.yaml
persistentvolume "pv-finance-history" created

# oc get pv
NAME                 CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
pv-finance-history   35Gi       RWX           Retain          Available                       2d

Notice that you have a pv-finance-history volume Available and ready for consumption.

As a user, create a Persistent Volume Claim (PVC) as a YAML file and specify the correct StorageClass name:
Example 28.25. Claim for finance-history Object Definition
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-finance-history
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 20Gi
storageClassName: finance-history 1
- 1
- The StorageClass name, which must match exactly; otherwise, the claim remains unbound until it is deleted or a StorageClass that matches the name is created.
Create and view the PVC and PV to see if it is bound.
# oc create -f pvc-finance-history.yaml
persistentvolumeclaim "pvc-finance-history" created

# oc get pvc
NAME                  STATUS    VOLUME               CAPACITY   ACCESSMODES   AGE
pvc-finance-history   Bound     pv-finance-history   35Gi       RWX           9m

# oc get pv  (cluster/storage-admin)
NAME                 CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                         REASON    AGE
pv-finance-history   35Gi       RWX           Retain          Bound     default/pvc-finance-history             5m
You can use StorageClasses in the same cluster for both legacy data (no dynamic provisioning) and with dynamic provisioning.
28.13. Configuring Azure Blob Storage for Integrated Container Image Registry
28.13.1. Overview
This topic reviews how to configure Microsoft Azure Blob Storage for OpenShift integrated container image registry.
28.13.2. Before You Begin
- Create a storage container using Microsoft Azure Portal, Microsoft Azure CLI, or Microsoft Azure Storage Explorer. Keep a note of the storage account name, storage account key and container name.
- Deploy the integrated container image registry if it is not deployed.
28.13.3. Overriding Registry Configuration
To create a new registry pod and replace the old pod automatically:
Create a new registry configuration file called registryconfig.yaml and add the following information:
version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    blobdescriptor: inmemory
  delete:
    enabled: true
  azure: 1
    accountname: azureblobacc
    accountkey: azureblobacckey
    container: azureblobname
    realm: core.windows.net 2
auth:
  openshift:
    realm: openshift
middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
      options:
        acceptschema2: false
        pullthrough: true
        enforcequota: false
        projectcachettl: 1m
        blobrepositorycachettl: 10m
  storage:
    - name: openshift
- 1
- Replace the accountname, accountkey, and container values with your storage account name, storage account key, and container name.
- 2
- core.windows.net is the realm for the Azure public cloud; if you use an Azure regional cloud, set the realm accordingly.
Create a new registry configuration:
$ oc create secret generic registry-config --from-file=config.yaml=registryconfig.yaml
Add the secret:
$ oc set volume dc/docker-registry --add --type=secret \
    --secret-name=registry-config -m /etc/docker/registry/
Set the REGISTRY_CONFIGURATION_PATH environment variable:

$ oc set env dc/docker-registry \
    REGISTRY_CONFIGURATION_PATH=/etc/docker/registry/config.yaml
If you already created a registry configuration:
Delete the secret:
$ oc delete secret registry-config
Create a new registry configuration:
$ oc create secret generic registry-config --from-file=config.yaml=registryconfig.yaml
Update the configuration by starting a new rollout:
$ oc rollout latest docker-registry
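In either case, you can watch the new registry deployment roll out and confirm the pods are running. A sketch; the label selector assumes the default docker-registry=default label shown earlier:

$ oc rollout status dc/docker-registry
$ oc get pods -l docker-registry=default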