Chapter 2. Get Started Provisioning Storage in Kubernetes
2.1. Overview
Procedures and software described in this chapter for manually configuring and using Kubernetes are deprecated and, therefore, no longer supported. For information on which software and documentation are impacted, see the Red Hat Enterprise Linux Atomic Host Release Notes. For information on Red Hat’s officially supported Kubernetes-based products, refer to Red Hat OpenShift Container Platform, OpenShift Online, OpenShift Dedicated, OpenShift.io, Container Development Kit or Development Suite.
This section explains how to provision storage in Kubernetes.
Before undertaking the exercises in this topic, you must have a working Kubernetes configuration in place. Follow the instructions in Get Started Orchestrating Containers with Kubernetes to manually configure Kubernetes.
While you can use the procedure for orchestrating Kubernetes to test a manual configuration of Kubernetes, you should not use that configuration for production purposes. For a Kubernetes configuration that is supported by Red Hat, you must use OpenShift (which is available in various online and installable forms).
2.2. Kubernetes Persistent Volumes
This section provides an overview of Kubernetes Persistent Volumes. The example below explains how to use the nginx web server to serve content from a persistent volume. This section assumes that you understand the basics of Kubernetes and that you have a Kubernetes cluster up and running.
A Persistent Volume (PV) in Kubernetes represents a real piece of underlying storage capacity in the infrastructure. Before using Kubernetes to mount anything, you must first create the storage that you plan to mount. Cluster administrators must create their GCE disks and export their NFS shares in order for Kubernetes to mount them.
Persistent volumes are intended for "network volumes" like GCE Persistent Disks, NFS shares, and AWS ElasticBlockStore volumes. HostPath was included for ease of development and testing. You’ll create a local HostPath for this example.
In order for HostPath to work, you must run a single-node cluster. Kubernetes does not support local storage on the host at this time, and there is no guarantee that your pod will land on the node where the HostPath resides.
// this will be nginx's webroot
$ mkdir /tmp/data01
$ echo 'I love Kubernetes storage!' > /tmp/data01/index.html
Define persistent volumes in a YAML file.
$ mkdir -p ~/examples/persistent-volumes/volumes/
$ vi ~/examples/persistent-volumes/volumes/local-01.yaml
Create the following content in the local-01.yaml file:
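The file contents referenced here are missing from this copy. A HostPath persistent volume matching the /tmp/data01 directory created above would look something like the following sketch; the volume name and capacity are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001                      # illustrative name
spec:
  capacity:
    storage: 1Gi                    # illustrative size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /tmp/data01               # the directory created above
```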
Create persistent volumes by posting them to the API server.
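The commands for this step are missing from this copy. Posting the definition with kubectl would look like the following; the reported volume name, capacity, and output columns depend on the definition in local-01.yaml and your Kubernetes version, so treat them as illustrative:

```shell
$ kubectl create -f ~/examples/persistent-volumes/volumes/local-01.yaml
persistentvolume "pv0001" created

$ kubectl get pv
NAME      CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON
pv0001    1Gi        RWO           Available
```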
2.2.1. Requesting storage
Users of Kubernetes request persistent storage for their pods. Users do not need to know the nature of the underlying provisioning; they only need to know that they can rely on their claims to storage and that they can manage that storage’s lifecycle independently of the many pods that may use it.
Claims must be created in the same namespace as the pods that use them.
Create a YAML file defining the storage claim.
$ mkdir -p ~/examples/persistent-volumes/claims/
$ vi ~/examples/persistent-volumes/claims/claim-01.yaml
Add the following content to the claim-01.yaml file:
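The claim definition is missing from this copy. A minimal claim matching the name reported by the output below ("myclaim-1") would look something like this; the requested size is illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim-1                   # name referenced by kubectl output below
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                  # illustrative size
```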
Create the claim.
$ kubectl create -f ~/examples/persistent-volumes/claims/claim-01.yaml
persistentvolumeclaim "myclaim-1" created
A background process will attempt to match this claim to a volume. The state of your claim will eventually look something like this:
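The claim status output is missing from this copy. Once the claim is matched to a volume, kubectl would eventually report it as Bound, along these lines (volume name, capacity, and column layout are illustrative and vary by Kubernetes version):

```shell
$ kubectl get pvc
NAME        STATUS    VOLUME    CAPACITY   ACCESSMODES
myclaim-1   Bound     pv0001    1Gi        RWO
```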
2.2.2. Using your claim as a volume
Claims are used as volumes in pods. Kubernetes uses the claim to look up its bound PV. The PV is then exposed to the pod.
Start by creating a pod.yaml file.
$ mkdir -p ~/examples/persistent-volumes/simpletest/
$ vi ~/examples/persistent-volumes/simpletest/pod.yaml
Add the following content to the pod.yaml file:
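The pod definition is missing from this copy. A pod that mounts the claim "myclaim-1" into an nginx container would look something like this sketch; the pod, container, and volume names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod                       # illustrative name
spec:
  containers:
    - name: myfrontend
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: mypd
          mountPath: /usr/share/nginx/html   # nginx webroot
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim-1        # the claim created earlier
```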
Use pod.yaml to create the pod and bind the claim, then check that everything was created properly.
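The commands for this step are missing from this copy. They would look like the following; the pod name "mypod" is an assumption carried over from the pod definition and is illustrative:

```shell
$ kubectl create -f ~/examples/persistent-volumes/simpletest/pod.yaml
pod "mypod" created

$ kubectl get pods
$ kubectl describe pod mypod
```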
Page through the kubectl describe content until you see the IP address for the pod. Use that IP address in the next steps.
2.2.3. Check the service
Query the service using the curl command, with the IP address and port number, to make sure the service is running. In this example, the address is 172.17.0.2. If you get a "forbidden" error, disable SELinux using the setenforce 0 command.
# curl 172.17.0.2:80
I love Kubernetes storage!
If you see the output shown above, you have successfully created a working persistent volume, a claim, and a pod that uses that claim.
2.3. Volumes
Kubernetes abstracts various storage facilities as "volumes".
Volumes are defined in the volumes section of a pod’s definition. The source of the data in the volumes is either:
- a remote NFS share,
- an iSCSI target,
- an empty directory, or
- a local directory on the host.
It is possible to define multiple volumes in the volumes section of the pod’s definition. Each volume must have a unique name (within the context of the pod) that is used during the mounting procedure as a unique identifier within the pod.
These volumes, once defined, can be mounted into containers that are defined in the containers section of the pod’s definition. Each container can mount several volumes; on the other hand, a single volume can be mounted into several containers. The volumeMounts section of the container definition specifies where the volume should be mounted.
2.3.1. Example
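The example itself is missing from this copy. Based on the description above, a sketch of a pod that defines two volumes and mounts both into a single container might look like this; all names and paths are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-example              # illustrative name
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: html-root           # host directory, survives container restarts
          mountPath: /usr/share/nginx/html
        - name: scratch             # empty directory, created with the pod
          mountPath: /tmp/scratch
  volumes:
    - name: html-root               # unique within the pod
      hostPath:
        path: /srv/html
    - name: scratch
      emptyDir: {}
```

Note that each volume name appears twice: once in the volumes section where the source is defined, and once in the container's volumeMounts section where the mount point is specified.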
2.4. Kubernetes and SELinux Permissions
Kubernetes, in order to function properly, must have access to a directory that is shared between the host and the container. SELinux, by default, blocks Kubernetes from having access to that shared directory. Usually this is a good idea: no one wants a compromised container to access the host and cause damage. In this situation, though, we want the directory to be shared between the host and the pod without SELinux intervening to prevent the share.
Here’s an example. If we want to share the directory /srv/my-data from the Atomic Host to a pod, we must explicitly relabel /srv/my-data with the SELinux label svirt_sandbox_file_t. The presence of this label on this directory (which is on the host) causes SELinux to permit the container to read and write to the directory. Here’s the command that attaches the svirt_sandbox_file_t label to the /srv/my-data directory:
$ chcon -R -t svirt_sandbox_file_t /srv/my-data
The following example steps you through the procedure:
Define this container, which uses /srv/my-data from the host as the HTML root:
Run the following commands on the container host to confirm that SELinux denies the nginx container read access to /srv/my-data (note the failed curl command):
Apply the label svirt_sandbox_file_t to the directory /srv/my-data:
$ chcon -R -t svirt_sandbox_file_t /srv/my-data
Use curl to access the container and to confirm that the label has taken effect:
$ curl <IP address of the container>
Hello world
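The container definition for this example is missing from this copy. A pod serving /srv/my-data from the host as the nginx HTML root would fit the procedure; the pod and volume names here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-selinux               # illustrative name
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: my-data
          mountPath: /usr/share/nginx/html   # nginx HTML root
  volumes:
    - name: my-data
      hostPath:
        path: /srv/my-data          # the host directory to be relabeled
```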
If the curl command returned "Hello world", the SELinux label has been properly applied.
2.5. NFS
In order to test this scenario, you must already have prepared NFS shares. In this example, you will mount the NFS shares into a pod.
The following example mounts the NFS share into /usr/share/nginx/html/ and runs the nginx webserver.
Create a file named nfs-web.yaml.
Start the pod. The following command tells Kubernetes to mount 192.168.100.1:/www into /usr/share/nginx/html/ inside the nginx container and run it:
$ kubectl create -f nfs-web.yaml
Confirm that the webserver receives data from the NFS share:
$ curl 172.17.0.6
Hello from NFS
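The contents of nfs-web.yaml are missing from this copy. Given the server and export path named above (192.168.100.1:/www), the definition would look something like this; the pod and volume names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-web                     # illustrative name
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - name: nfs
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nfs
      nfs:
        server: 192.168.100.1       # NFS server from the text above
        path: /www                  # exported share
```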
Mount options in Kubernetes
Kubernetes 1.6 includes the ability to add mount options to certain volume types. These include: GCEPersistentDisk, AWSElasticBlockStore, AzureFile, AzureDisk, NFS, iSCSI, RBD (Ceph Block Device), CephFS, Cinder (OpenStack block storage), Glusterfs, VsphereVolume, Quobyte Volumes, and VMware Photon. You can add mount options by setting annotations on PersistentVolume objects. For example:
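The example referenced here is missing from this copy. In Kubernetes 1.6, mount options were set through the volume.beta.kubernetes.io/mount-options annotation; a sketch for an NFS persistent volume follows, with the volume name, size, server, and options all illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs                      # illustrative name
  annotations:
    # comma-separated mount options, applied when the volume is mounted
    volume.beta.kubernetes.io/mount-options: "hard,nfsvers=4.1"
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.100.1
    path: /www
```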
Prior to Kubernetes 1.6, the ability to add mount options was not supported. For details, see Kubernetes Persistent Volumes.
Troubleshooting
403 Forbidden error: if you receive a "403 Forbidden" response from the webserver, make sure that SELinux allows Docker containers to read data over NFS by running the following command:
$ setsebool -P virt_use_nfs 1
2.6. iSCSI
To use iSCSI storage, make sure that the iSCSI target is properly configured. Then, make sure that all Kubernetes nodes have sufficient privileges to attach a LUN from the iSCSI target.
Create a file named iscsi-web.yaml, containing the pod definition.
Create the pod. With the following command, Kubernetes logs into the iSCSI target, attaches LUN 0 (typically as /dev/sdXYZ), mounts the specified filesystem (ext4 in this example) to /usr/share/nginx/html/ inside the nginx container, and runs it:
$ kubectl create -f iscsi-web.yaml
Check that the web server uses data from the iSCSI volume:
$ curl 172.17.0.6
Hello from iSCSI
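The contents of iscsi-web.yaml are missing from this copy. A definition matching the description above (LUN 0, ext4) would look something like this; the pod name, target portal address, and IQN are illustrative and must match your iSCSI target:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: iscsi-web                   # illustrative name
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - name: iscsi-volume
          mountPath: /usr/share/nginx/html
  volumes:
    - name: iscsi-volume
      iscsi:
        targetPortal: 192.168.100.98:3260    # illustrative portal
        iqn: iqn.2016-01.com.example:storage.target00  # illustrative IQN
        lun: 0                       # LUN 0, as described above
        fsType: ext4                 # filesystem named in the text
        readOnly: false
```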
2.7. Google Compute Engine
If you are running your cluster on Google Compute Engine, you can use a Google Compute Engine Persistent Disk (GCE PD) as your persistent storage source. In the following example, you will create a pod that serves HTML content from a GCE PD.
If you have the GCE SDK set up, create a persistent disk using the following command. (Otherwise you can create the disk through the GCE web interface. If you want to set up the GCE SDK follow the instructions here.)
$ gcloud compute disks create --size=250GB {Persistent Disk Name}
Create a file named gce-pd-web.yaml.
Create the pod. Kubernetes will create the pod and attach the disk, but it will not format and mount it. (This is due to a bug that will be fixed in future versions of Kubernetes. The steps that follow work around this issue.)
$ kubectl create -f gce-pd-web.yaml
Format and mount the persistent disk. The disk will be attached to the virtual machine, and a device will appear under /dev/disk/by-id/ with the name scsi-0Google_PersistentDisk_{Persistent Disk Name}. If this disk is already formatted and contains data, proceed to the next step. Otherwise, run the following command as root to format it:
# mkfs.ext4 /dev/disk/by-id/scsi-0Google_PersistentDisk_{Persistent Disk Name}
When the disk is formatted, mount it in the location expected by Kubernetes. Run the following commands as root:
# mkdir -p /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/{Persistent Disk Name} && mount /dev/disk/by-id/scsi-0Google_PersistentDisk_{Persistent Disk Name} /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/{Persistent Disk Name}
[NOTE] The mkdir command and the mount command must be run in quick succession as above, because Kubernetes cleanup will remove the directory if it sees nothing mounted there.
Now that the disk is mounted, it must be given the correct SELinux context. As root, run the following:
# chcon -R -t svirt_sandbox_file_t /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/{Persistent Disk Name}
Create some data for your web server to serve:
# echo "Hello world" > /var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts/{Persistent Disk Name}/index.html
You should now be able to get HTML content from the pod:
$ curl {IP address of the container}
Hello World!
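The contents of gce-pd-web.yaml are missing from this copy. A pod using a GCE Persistent Disk as the nginx webroot would look something like this; the pod and volume names are illustrative, and {Persistent Disk Name} must match the disk created with gcloud:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gce-pd-web                  # illustrative name
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - name: html-pd
          mountPath: /usr/share/nginx/html
  volumes:
    - name: html-pd
      gcePersistentDisk:
        pdName: "{Persistent Disk Name}"   # disk created with gcloud above
        fsType: ext4                       # filesystem created with mkfs.ext4
```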