OpenShift Container Storage is now OpenShift Data Foundation starting with version 4.9.
Chapter 5. Installing zone aware sample application
Deploy a zone aware sample application to validate whether an OpenShift Data Foundation Metro-DR setup is configured correctly.
With latency between the data zones, you can expect performance degradation compared to an OpenShift cluster with low latency between nodes and zones (for example, all nodes in the same location). How much performance degrades depends on the latency between the zones and on how the application uses the storage (for example, heavy write traffic). Ensure that you test your critical applications with the Metro-DR cluster configuration to verify sufficient application performance for the required service levels.
5.1. Install Zone Aware Sample Application
A ReadWriteMany (RWX) Persistent Volume Claim (PVC) is created using the ocs-storagecluster-cephfs storage class. Multiple pods use the newly created RWX PVC at the same time. The application used is called File Uploader.
The following demonstration shows how an application can be spread across topology zones so that it is still available in the event of a site outage. This is possible because the application pods share the same RWX volume for storing files. It works for persistent data access as well, because Red Hat OpenShift Data Foundation is configured as a Metro-DR stretched cluster with zone awareness and high availability.
Create a new project:

$ oc new-project my-shared-storage

Deploy the example PHP application called file-uploader:

$ oc new-app openshift/php:7.3-ubi8~https://github.com/christianh814/openshift-php-upload-demo --name=file-uploader

View the build log and wait until the application is deployed:

$ oc logs -f bc/file-uploader -n my-shared-storage

The command prompt returns out of the tail mode once you see Push successful.

Note: The new-app command deploys the application directly from the Git repository and does not use an OpenShift template, so the OpenShift route resource is not created by default. You need to create the route manually.
Scaling the application

Scale the application to four replicas and expose its service to make the application zone aware and available:

$ oc expose svc/file-uploader -n my-shared-storage

$ oc scale --replicas=4 deploy/file-uploader -n my-shared-storage

$ oc get pods -o wide -n my-shared-storage

You should have four file-uploader pods in a few minutes. Repeat the get pods command until all four file-uploader pods are in the Running status.

Create a PVC and attach it to the application:

$ oc set volume deploy/file-uploader --add --name=my-shared-storage \
    -t pvc --claim-mode=ReadWriteMany --claim-size=10Gi \
    --claim-name=my-shared-storage --claim-class=ocs-storagecluster-cephfs \
    --mount-path=/opt/app-root/src/uploaded \
    -n my-shared-storage

This command:
- Creates a PVC.
- Updates the application deployment to include a volume definition.
- Updates the application deployment to attach a volume mount into the specified mount-path.
- Creates a new deployment with the four application pods.
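In effect, oc set volume adds a volume definition and a matching volume mount to the Deployment. The stanzas it writes look roughly like the following sketch (shown relative to spec.template.spec; this is an illustration based on the command above, not the full Deployment):

```yaml
# Sketch of the stanzas added under spec.template.spec by `oc set volume`.
# Names and paths come from the command above.
      containers:
        - name: file-uploader
          volumeMounts:
            - name: my-shared-storage
              mountPath: /opt/app-root/src/uploaded
      volumes:
        - name: my-shared-storage
          persistentVolumeClaim:
            claimName: my-shared-storage
```

Because the volume references a PVC by name, all four pod replicas mount the same claim, which is what makes the shared RWX access possible.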
Check the result of adding the volume:

$ oc get pvc -n my-shared-storage

Example Output:

NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
my-shared-storage   Bound    pvc-5402cc8a-e874-4d7e-af76-1eb05bd2e7c7   10Gi       RWX            ocs-storagecluster-cephfs   52s

Notice that ACCESS MODES is set to RWX. All four file-uploader pods use the same RWX volume. Without this access mode, OpenShift does not reliably attempt to attach multiple pods to the same Persistent Volume (PV). If you attempt to scale up a deployment that uses a ReadWriteOnce (RWO) PV, the pods may become colocated on the same node.
5.2. Modify Deployment to be Zone Aware
Currently, the file-uploader Deployment is not zone aware and can schedule all the pods in the same zone. In this case, if there is a site outage then the application is unavailable. For more information, see Controlling pod placement by using pod topology spread constraints.
Add the pod placement rule in the application deployment configuration to make the application zone aware.
Run the following command, and review the output:

$ oc get deployment file-uploader -o yaml -n my-shared-storage | less

Edit the deployment to use the topology zone labels:

$ oc edit deployment file-uploader -n my-shared-storage

Add the new lines between the Start and End markers shown in the output of the previous step.

Example output:
deployment.apps/file-uploader edited
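The exact lines to add are not reproduced above. As a sketch only (label names and values are assumptions based on this deployment; adjust them to your environment), a pod topology spread constraint that spreads the pods across zones and then across nodes looks like this, placed under spec.template.spec of the Deployment:

```yaml
# Sketch of pod topology spread constraints for zone awareness.
# The app label value is an assumption; match the labels your pods carry.
      topologySpreadConstraints:
        - labelSelector:
            matchLabels:
              app: file-uploader
          maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
        - labelSelector:
            matchLabels:
              app: file-uploader
          maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
```

The first constraint forces an even spread across zones (hard requirement), while the second prefers spreading across individual nodes within a zone (soft requirement).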
Scale down the deployment to zero pods and then back to four pods. This is needed because the deployment changed in terms of pod placement.

- Scaling down to zero pods:

$ oc scale deployment file-uploader --replicas=0 -n my-shared-storage

Example output:

deployment.apps/file-uploader scaled

- Scaling up to four pods:

$ oc scale deployment file-uploader --replicas=4 -n my-shared-storage

Example output:

deployment.apps/file-uploader scaled
Verify that the four pods are spread across the four nodes in the datacenter1 and datacenter2 zones:

$ oc get pods -o wide -n my-shared-storage | egrep '^file-uploader' | grep -v build | awk '{print $7}' | sort | uniq -c

Example output:

   1 perf1-mz8bt-worker-d2hdm
   1 perf1-mz8bt-worker-k68rv
   1 perf1-mz8bt-worker-ntkp8
   1 perf1-mz8bt-worker-qpwsr

Search for the zone labels used:

$ oc get nodes -L topology.kubernetes.io/zone | grep datacenter | grep -v master

Example output:

perf1-mz8bt-worker-d2hdm   Ready   worker   35d   v1.20.0+5fbfd19   datacenter1
perf1-mz8bt-worker-k68rv   Ready   worker   35d   v1.20.0+5fbfd19   datacenter1
perf1-mz8bt-worker-ntkp8   Ready   worker   35d   v1.20.0+5fbfd19   datacenter2
perf1-mz8bt-worker-qpwsr   Ready   worker   35d   v1.20.0+5fbfd19   datacenter2

Use the file-uploader web application in your browser to upload new files.
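The sort | uniq -c pipeline in the verification command counts pods per node. A minimal local sketch shows how it works; the sample names below stand in for the NODE column ($7) of oc get pods -o wide and are placeholders, not real nodes:

```shell
# sort groups identical node names together, and uniq -c prefixes each
# distinct name with the number of times it appeared. Two pods on
# worker-a here would indicate colocation rather than an even spread.
printf '%s\n' worker-a worker-b worker-a worker-c | sort | uniq -c
```

In a correctly spread deployment, every node name should appear with a count of 1.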
Find the route that is created:

$ oc get route file-uploader -n my-shared-storage -o jsonpath --template="http://{.spec.host}{'\n'}"

Example Output:

http://file-uploader-my-shared-storage.apps.cluster-ocs4-abdf.ocs4-abdf.sandbox744.opentlc.com

Point your browser to the web application using the route from the previous step.

The web application lists all the uploaded files and offers the ability to upload new ones as well as download the existing data. Right now, there is nothing.
Select an arbitrary file from your local machine and upload it to the application:

- Click Choose file to select an arbitrary file.
- Click Upload.

Figure 5.1. A simple PHP-based file upload tool

- Click List uploaded files to see the list of all currently uploaded files.
Note: The OpenShift Container Platform image registry, ingress routing, and monitoring services are not zone aware.