Chapter 21. Persistent Storage Examples
21.1. Overview
The following sections provide detailed, comprehensive instructions on setting up and configuring common storage use cases. These examples cover both the administration of persistent volumes and their security, and how to claim against the volumes as a user of the system.
21.3. Complete Example Using Ceph RBD
21.3.1. Overview
This topic provides an end-to-end example of using an existing Ceph cluster as an OpenShift Container Platform persistent store. It is assumed that a working Ceph cluster is already set up. If not, consult the Overview of Red Hat Ceph Storage.
Persistent Storage Using Ceph Rados Block Device provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using Ceph RBD as persistent storage.
All oc … commands are executed on the OpenShift Container Platform master host.
21.3.2. Installing the ceph-common Package
The ceph-common library must be installed on all schedulable OpenShift Container Platform nodes:
The OpenShift Container Platform all-in-one host is not often used to run pod workloads and, thus, is not included as a schedulable node.
# yum install -y ceph-common
21.3.3. Creating the Ceph Secret
The ceph auth get-key command is run on a Ceph MON node to display the key value for the client.admin user:
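# ceph auth get-key client.admin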
Example 21.5. Ceph Secret Definition
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ==
- 1
- This base64 key is generated on one of the Ceph MON nodes using the ceph auth get-key client.admin | base64 command, then copied and pasted as the secret key's value.
Save the secret definition to a file, for example ceph-secret.yaml, then create the secret:
$ oc create -f ceph-secret.yaml
secret "ceph-secret" created
Verify that the secret was created:
# oc get secret ceph-secret
NAME TYPE DATA AGE
ceph-secret Opaque 1 23d
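If needed, the stored key can be decoded to confirm it matches the Ceph admin key (a quick check; assumes the base64 utility from coreutils):
# oc get secret ceph-secret -o jsonpath='{.data.key}' | base64 --decode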
21.3.4. Creating the Persistent Volume
Next, before creating the PV object in OpenShift Container Platform, define the persistent volume in an object definition file:
Example 21.6. Persistent Volume Object Definition Using Ceph RBD
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 192.168.122.133:6789
    pool: rbd
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
- 1
- The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
- 2
- The amount of storage allocated to this volume.
- 3
- accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control. All block storage is defined to be single user (non-shared storage).
- 4
- This defines the volume type being used. In this case, the rbd plug-in is defined.
- 5
- This is an array of Ceph monitor IP addresses and ports.
- 6
- This is the Ceph secret, defined above. It is used to create a secure connection from OpenShift Container Platform to the Ceph server.
- 7
- This is the file system type mounted on the Ceph RBD block device.
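The PV above references an RBD image named ceph-image in the rbd pool. If the image does not already exist, it can be created on the Ceph cluster beforehand (a sketch; the size is in MB and should match the PV capacity):
# rbd create ceph-image --size 2048 --pool rbd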
Save the PV definition to a file, for example ceph-pv.yaml, and create the persistent volume:
# oc create -f ceph-pv.yaml
persistentvolume "ceph-pv" created
Verify that the persistent volume was created:
# oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
ceph-pv <none> 2147483648 RWO Available 2s
21.3.5. Creating the Persistent Volume Claim
A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.
Example 21.7. PVC Object Definition
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Save the PVC definition to a file, for example ceph-claim.yaml, and create the PVC:
# oc create -f ceph-claim.yaml
persistentvolumeclaim "ceph-claim" created
Verify that the PVC was created and bound to the expected PV:
# oc get pvc
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
ceph-claim   <none>   Bound    ceph-pv   2Gi        RWO           21s
- 1
- The claim was bound to the ceph-pv PV.
21.3.6. Creating the Pod
A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Ceph RBD volume for read-write access:
Example 21.8. Pod Object Definition
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  containers:
    - name: ceph-busybox
      image: busybox
      command: ["sleep", "60000"]
      volumeMounts:
        - name: ceph-vol1
          mountPath: /usr/share/busybox
          readOnly: false
  volumes:
    - name: ceph-vol1
      persistentVolumeClaim:
        claimName: ceph-claim
- 1
- The name of this pod as displayed by oc get pod.
- 2
- The image run by this pod. In this case, we are telling busybox to sleep.
- 3 5
- The name of the volume. This name must be the same in both the containers and volumes sections.
- 4
- The mount path as seen in the container.
- 6
- The PVC that is bound to the Ceph RBD cluster.
Save the pod definition to a file, for example ceph-pod1.yaml, and create the pod:
# oc create -f ceph-pod1.yaml
pod "ceph-pod1" created
Verify that the pod was created:
# oc get pod
NAME READY STATUS RESTARTS AGE
ceph-pod1 1/1 Running 0 2m
- 1
- After a minute or so, the pod will be in the Running state.
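To confirm that the RBD volume is mounted inside the container, check the mount point from the pod (the path matches the volumeMounts entry in the pod definition above):
# oc exec ceph-pod1 -- df /usr/share/busybox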
21.3.7. Defining Group and Owner IDs (Optional)
When using block storage, such as Ceph RBD, the physical block storage is managed by the pod. The group ID defined in the pod becomes the group ID of both the Ceph RBD mount inside the container, and the group ID of the actual storage itself. Thus, it is usually unnecessary to define a group ID in the pod specification. However, if a group ID is desired, it can be defined using fsGroup, as shown in the following pod definition fragment:
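spec:
  containers:
    - name: ...
  securityContext:
    fsGroup: 7777
Here, 7777 is an example group ID. fsGroup is a pod-level attribute: all containers in the pod share the same fsGroup ID, and it becomes the group ID of the volume mount.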
21.4. Complete Example Using GlusterFS
21.4.1. Overview
This topic provides an end-to-end example of how to use an existing Gluster cluster as an OpenShift Container Platform persistent store. It is assumed that a working Gluster cluster is already set up. If not, consult the Red Hat Gluster Storage Administration Guide.
Persistent Storage Using GlusterFS provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using GlusterFS as persistent storage.
All oc … commands are executed on the OpenShift Container Platform master host.
21.4.2. Installing the glusterfs-fuse Package
The glusterfs-fuse library must be installed on all schedulable OpenShift Container Platform nodes:
# yum install -y glusterfs-fuse
The OpenShift Container Platform all-in-one host is often not used to run pod workloads and, thus, is not included as a schedulable node.
21.4.3. Creating the Gluster Endpoints and Gluster Service for Persistence
The named endpoints define each node in the Gluster-trusted storage pool:
Example 21.10. GlusterFS Endpoint Definition
apiVersion: v1
kind: Endpoints
metadata:
  name: gluster-cluster
subsets:
  - addresses:
      - ip: 192.168.122.21
    ports:
      - port: 1
        protocol: TCP
  - addresses:
      - ip: 192.168.122.22
    ports:
      - port: 1
        protocol: TCP
Save the endpoints definition to a file, for example gluster-endpoints.yaml, then create the endpoints object:
# oc create -f gluster-endpoints.yaml
endpoints "gluster-cluster" created
Verify that the endpoints were created:
# oc get endpoints gluster-cluster
NAME ENDPOINTS AGE
gluster-cluster 192.168.122.21:1,192.168.122.22:1 1m
To persist the Gluster endpoints, you also need to create a service.
Endpoints are name-spaced. Each project accessing the Gluster volume needs its own endpoints.
Example 21.11. GlusterFS Service Definition
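A minimal service definition follows; the service name must match the endpoints name for the endpoints to persist, and the port is arbitrary but must be defined (here it mirrors the port used in the endpoints object):
apiVersion: v1
kind: Service
metadata:
  name: gluster-cluster
spec:
  ports:
  - port: 1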
Save the service definition to a file, for example gluster-service.yaml, then create the service:
# oc create -f gluster-service.yaml
service "gluster-cluster" created
Verify that the service was created:
# oc get service gluster-cluster
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gluster-cluster 10.0.0.130 <none> 1/TCP 9s
21.4.4. Creating the Persistent Volume
Next, before creating the PV object, define the persistent volume in OpenShift Container Platform:
Persistent Volume Object Definition Using GlusterFS
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: /HadoopVol
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
- 1
- The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
- 2
- The amount of storage allocated to this volume.
- 3
- accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
- 4
- This defines the volume type being used. In this case, the glusterfs plug-in is defined.
- 5
- This references the endpoints named above.
- 6
- This is the Gluster volume name, preceded by /.
- 7
- The volume reclaim policy Retain indicates that the volume will be preserved after the pods accessing it terminate. For GlusterFS, the accepted values include Retain and Delete.
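The path /HadoopVol must correspond to an existing Gluster volume named HadoopVol. This can be confirmed from any node in the trusted storage pool:
# gluster volume info HadoopVol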
Save the PV definition to a file, for example gluster-pv.yaml, and create the persistent volume:
# oc create -f gluster-pv.yaml
persistentvolume "gluster-pv" created
Verify that the persistent volume was created:
# oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
gluster-pv <none> 1Gi RWX Available 37s
21.4.5. Creating the Persistent Volume Claim
A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.
Example 21.12. PVC Object Definition
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Save the PVC definition to a file, for example gluster-claim.yaml, and create the PVC:
# oc create -f gluster-claim.yaml
persistentvolumeclaim "gluster-claim" created
Verify the PVC was created and bound to the expected PV:
# oc get pvc
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
gluster-claim <none> Bound gluster-pv 1Gi RWX 24s
- 1
- The claim was bound to the gluster-pv PV.
21.4.6. Defining GlusterFS Volume Access
Access to a node in the Gluster trusted storage pool is necessary. On this node, examine the glusterfs-fuse mount:
# ls -lZ /mnt/glusterfs/
drwxrwx---. yarn hadoop system_u:object_r:fusefs_t:s0 HadoopVol
# id yarn
uid=592(yarn) gid=590(hadoop) groups=590(hadoop)
In order to access the HadoopVol volume, the container must match the SELinux label, and either run with a UID of 592 or include 590 in its supplemental groups. It is recommended to gain access to the volume by matching the Gluster mount's group, as defined in the pod definition below.
By default, SELinux does not allow writing from a pod to a remote Gluster server. To enable writing to GlusterFS volumes with SELinux enforcing on each node, run:
# setsebool -P virt_sandbox_use_fusefs on
The virt_sandbox_use_fusefs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.
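To confirm the boolean's current state, run getsebool:
# getsebool virt_sandbox_use_fusefs
virt_sandbox_use_fusefs --> on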
21.4.7. Creating the Pod Using the NGINX Web Server Image
A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Gluster volume for read-write access:
The NGINX image may need to run in privileged mode to create the mount and run properly. An easy way to accomplish this is to add your user to the privileged Security Context Constraint (SCC):
$ oadm policy add-scc-to-user privileged myuser
Then, add privileged: true to the container's securityContext: section of the YAML file (as seen in the example below).
Managing Security Context Constraints provides additional information regarding SCCs.
Example 21.13. Pod Object Definition using NGINX image
apiVersion: v1
kind: Pod
metadata:
  name: gluster-pod1
  labels:
    name: gluster-pod1
spec:
  containers:
    - name: gluster-pod1
      image: nginx
      ports:
        - name: web
          containerPort: 80
      securityContext:
        privileged: true
      volumeMounts:
        - name: gluster-vol1
          mountPath: /usr/share/nginx/html
          readOnly: false
  securityContext:
    supplementalGroups: [590]
  volumes:
    - name: gluster-vol1
      persistentVolumeClaim:
        claimName: gluster-claim
- 1
- The name of this pod as displayed by oc get pod.
- 2
- The image run by this pod. In this case, we are using a standard NGINX image.
- 3 6
- The name of the volume. This name must be the same in both the containers and volumes sections.
- 4
- The mount path as seen in the container.
- 5
- The supplemental group ID (Linux group) to be assigned at the pod level. As discussed above, this should match the POSIX permissions on the Gluster volume.
- 7
- The PVC that is bound to the Gluster cluster.
Save the pod definition to a file, for example gluster-pod1.yaml, and create the pod:
# oc create -f gluster-pod1.yaml
pod "gluster-pod1" created
Verify the pod was created:
# oc get pod
NAME READY STATUS RESTARTS AGE
gluster-pod1 1/1 Running 0 31s
- 1
- After a minute or so, the pod will be in the Running state.
More details are shown in the oc describe pod command:
# oc describe pod gluster-pod1
Name: gluster-pod1
Namespace: default
Security Policy: privileged
Node: ose1.rhs/192.168.122.251
Start Time: Wed, 24 Aug 2016 12:37:45 -0400
Labels: name=gluster-pod1
Status: Running
IP: 172.17.0.2
Controllers: <none>
Containers:
gluster-pod1:
Container ID: docker://e67ed01729e1dc7369c5112d07531a27a7a02a7eb942f17d1c5fce32d8c31a2d
Image: nginx
Image ID: docker://sha256:4efb2fcdb1ab05fb03c9435234343c1cc65289eeb016be86193e88d3a5d84f6b
Port: 80/TCP
State: Running
Started: Wed, 24 Aug 2016 12:37:52 -0400
Ready: True
Restart Count: 0
Volume Mounts:
/usr/share/nginx/html/test from glustervol (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-1n70u (ro)
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
glustervol:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: gluster-claim
ReadOnly: false
default-token-1n70u:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-1n70u
QoS Tier: BestEffort
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
10s 10s 1 {default-scheduler } Normal Scheduled Successfully assigned gluster-pod1 to ose1.rhs
9s 9s 1 {kubelet ose1.rhs} spec.containers{gluster-pod1} Normal Pulling pulling image "nginx"
4s 4s 1 {kubelet ose1.rhs} spec.containers{gluster-pod1} Normal Pulled Successfully pulled image "nginx"
3s 3s 1 {kubelet ose1.rhs} spec.containers{gluster-pod1} Normal Created Created container with docker id e67ed01729e1
3s 3s 1 {kubelet ose1.rhs} spec.containers{gluster-pod1} Normal Started Started container with docker id e67ed01729e1
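With the pod IP shown above (172.17.0.2), NGINX can be tested from the node to confirm it serves content from the Gluster volume (assuming an index.html file was placed on HadoopVol):
# curl http://172.17.0.2/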
There is more internal information, including the SCC used to authorize the pod, the pod’s user and group IDs, the SELinux label, and more shown in the oc get pod <name> -o yaml command:
# oc get pod gluster-pod1 -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: privileged
  creationTimestamp: 2016-08-24T16:37:45Z
  labels:
    name: gluster-pod1
  name: gluster-pod1
  namespace: default
  resourceVersion: "482"
  selfLink: /api/v1/namespaces/default/pods/gluster-pod1
  uid: 15afda77-6a19-11e6-aadb-525400f7256d
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: gluster-pod1
    ports:
    - containerPort: 80
      name: web
      protocol: TCP
    resources: {}
    securityContext:
      privileged: true
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: glustervol
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-1n70u
      readOnly: true
  dnsPolicy: ClusterFirst
  host: ose1.rhs
  imagePullSecrets:
  - name: default-dockercfg-20xg9
  nodeName: ose1.rhs
  restartPolicy: Always
  securityContext:
    supplementalGroups:
    - 590
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: glustervol
    persistentVolumeClaim:
      claimName: gluster-claim
  - name: default-token-1n70u
    secret:
      secretName: default-token-1n70u
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2016-08-24T16:37:45Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2016-08-24T16:37:53Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2016-08-24T16:37:45Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://e67ed01729e1dc7369c5112d07531a27a7a02a7eb942f17d1c5fce32d8c31a2d
    image: nginx
    imageID: docker://sha256:4efb2fcdb1ab05fb03c9435234343c1cc65289eeb016be86193e88d3a5d84f6b
    lastState: {}
    name: gluster-pod1
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2016-08-24T16:37:52Z
  hostIP: 192.168.122.251
  phase: Running
  podIP: 172.17.0.2
  startTime: 2016-08-24T16:37:45Z
21.5. Backing Docker Registry with GlusterFS Storage
21.5.1. Overview
This topic reviews how to attach a GlusterFS persistent volume to the Docker Registry.
It is assumed that the Docker registry service has already been started and the Gluster volume has been created.
21.5.2. Prerequisites
- The docker-registry was deployed without configuring storage.
- A Gluster volume exists and glusterfs-fuse is installed on schedulable nodes.
- Definitions written for GlusterFS endpoints and service, persistent volume (PV), and persistent volume claim (PVC). For this guide, these will be:
  - gluster-endpoints-service.yaml
  - gluster-endpoints.yaml
  - gluster-pv.yaml
  - gluster-pvc.yaml
- A user with the cluster-admin role binding. For this guide, that user is admin.
All oc commands are executed on the master node as the admin user.
21.5.3. Create the Gluster Persistent Volume
First, make the Gluster volume available to the registry.
$ oc create -f gluster-endpoints-service.yaml
$ oc create -f gluster-endpoints.yaml
$ oc create -f gluster-pv.yaml
$ oc create -f gluster-pvc.yaml
Check to make sure the PV and PVC were created and bound successfully. The expected output should resemble the following. Note that the PVC status is Bound, indicating that it has bound to the PV.
$ oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
gluster-pv   <none>   1Gi        RWX           Bound    default/gluster-claim          37s
$ oc get pvc
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
gluster-claim <none> Bound gluster-pv 1Gi RWX 24s
If either the PVC or PV failed to create or the PVC failed to bind, refer back to the GlusterFS Persistent Storage guide. Do not proceed until they initialize and the PVC status is Bound.
21.5.4. Attach the PVC to the Docker Registry
Before moving forward, ensure that the docker-registry service is running.
$ oc get svc
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
docker-registry 172.30.167.194 <none> 5000/TCP docker-registry=default 18m
If either the docker-registry service or its associated pod is not running, refer back to the docker-registry setup instructions for troubleshooting before continuing.
Then, attach the PVC:
$ oc volume deploymentconfigs/docker-registry --add --name=registry-storage -t pvc \
--claim-name=gluster-claim --overwrite
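To confirm that the registry-storage volume now references the gluster-claim PVC, inspect the Volumes section of the deployment configuration:
$ oc describe dc docker-registry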
Deploying a Docker Registry provides more information on using the Docker registry.
21.5.5. Known Issues
21.5.5.1. Pod Cannot Resolve the Volume Host
In non-production cases where the dnsmasq server is located on the same node as the OpenShift Container Platform master service, pods might not resolve to the host machines when mounting the volume, causing errors in the docker-registry-1-deploy pod. This can happen when dnsmasq.service fails to start because of a collision with OpenShift Container Platform DNS on port 53. To run the DNS server on the master host, some configuration changes are needed.
In /etc/dnsmasq.conf, add:
# Reverse DNS record for master
host-record=master.example.com,<master-IP>
# Wildcard DNS for OpenShift Applications - Points to Router
address=/apps.example.com/<master-IP>
# Forward .local queries to SkyDNS
server=/local/127.0.0.1#8053
# Forward reverse queries for service network to SkyDNS.
# This is for default OpenShift SDN - change as needed.
server=/17.30.172.in-addr.arpa/127.0.0.1#8053
With these settings, dnsmasq will pull from the /etc/hosts file on the master node.
Add the appropriate host names and IPs for all necessary hosts.
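For example (hypothetical host names and addresses):
192.168.1.100 master.example.com
192.168.1.101 node1.example.com
192.168.1.102 node2.example.com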
In master-config.yaml, change bindAddress to:
dnsConfig:
  bindAddress: 127.0.0.1:8053
When pods are created, they receive a copy of /etc/resolv.conf, which typically contains only the master DNS server so they can resolve external DNS requests. To enable internal DNS resolution, insert the dnsmasq server at the top of the server list. This way, dnsmasq will attempt to resolve requests internally first.
In /etc/resolv.conf on all schedulable nodes (here, 192.168.1.100 is the dnsmasq server on the master host):
nameserver 192.168.1.100
nameserver 192.168.1.1
Once the configurations are changed, restart the OpenShift Container Platform master and dnsmasq services.
$ systemctl restart atomic-openshift-master
$ systemctl restart dnsmasq
21.6. Binding Persistent Volumes by Labels
21.6.1. Overview
This topic provides an end-to-end example for binding persistent volume claims (PVCs) to persistent volumes (PVs), by defining labels in the PV and matching selectors in the PVC. This feature is available for all storage options. It is assumed that an OpenShift Container Platform cluster contains persistent storage resources which are available for binding by PVCs.
A Note on Labels and Selectors
Labels are an OpenShift Container Platform feature that supports user-defined tags (key-value pairs) as part of an object's specification. Their primary purpose is to enable the arbitrary grouping of objects by defining identical labels among them. These labels can then be targeted by selectors to match all objects with specified label values. It is this functionality we will take advantage of to enable our PVC to bind to our PV. For a more in-depth look at labels, see Pods and Services.
For this example, we will be using modified GlusterFS PV and PVC specifications. However, implementation of selectors and labels is generic across all storage options. See the relevant storage option for your volume provider to learn more about its unique configuration.
21.6.1.1. Assumptions
It is assumed that you have:
- An existing OpenShift Container Platform cluster with at least one master and one node
- At least one supported storage volume
- A user with cluster-admin privileges
21.6.2. Defining Specifications
These specifications are tailored to GlusterFS. Consult the relevant storage option for your volume provider to learn more about its unique configuration.
21.6.2.1. Persistent Volume with Labels
Example 21.14. glusterfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-volume
  labels:
    storage-tier: gold
    aws-availability-zone: us-east-1
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: myVol1
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
- 1
- Use labels to identify common attributes or characteristics shared among volumes. In this case, we defined the Gluster volume to have a custom attribute (key) named storage-tier with a value of gold assigned. A claim will be able to select a PV with storage-tier=gold to match this PV.
- 2
- Endpoints define the Gluster trusted pool and are discussed below.
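Labels can also be added to an existing PV after creation with oc label. For example, if the PV had been created without labels:
# oc label pv gluster-volume storage-tier=gold aws-availability-zone=us-east-1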
21.6.2.2. Persistent Volume Claim with Selectors
A claim with a selector stanza (see example below) attempts to match existing, unclaimed, and non-prebound PVs. When a PVC specifies a selector, the PV's capacity is ignored during matching; however, accessModes are still considered in the matching criteria.
It is important to note that a claim must match all of the key-value pairs included in its selector stanza. If no PV matches the claim, then the PVC will remain unbound (Pending). A PV can subsequently be created and the claim will automatically check for a label match.
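A claim's state can be checked with oc get pvc. Sample output while no matching PV exists yet (the claim reports Pending with no volume):
# oc get pvc
NAME            STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
gluster-claim   Pending                                      12s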
Example 21.15. glusterfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      storage-tier: gold
      aws-availability-zone: us-east-1
- 1
- The selector stanza defines all labels necessary in a PV in order to match this claim.
21.6.2.3. Volume Endpoints
To attach the PV to the Gluster volume, endpoints should be configured before creating our objects.
Example 21.16. glusterfs-ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 192.168.122.221
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.122.222
    ports:
      - port: 1
21.6.2.4. Deploy the PV, PVC, and Endpoints
For this example, run the oc commands as a cluster-admin privileged user. In a production environment, cluster clients might be expected to define and create the PVC.
# oc create -f glusterfs-ep.yaml
endpoints "glusterfs-cluster" created
# oc create -f glusterfs-pv.yaml
persistentvolume "gluster-volume" created
# oc create -f glusterfs-pvc.yaml
persistentvolumeclaim "gluster-claim" created
Lastly, confirm that the PV and PVC bound successfully.
# oc get pv,pvc
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
gluster-volume 2Gi RWX Bound gfs-trial/gluster-claim 7s
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
gluster-claim Bound gluster-volume 2Gi RWX 7s
PVCs are local to a project, whereas PVs are a cluster-wide, global resource. Developers and non-administrator users may not have access to see all (or any) of the available PVs.