
Chapter 5. Installing zone aware sample application


Deploy a zone aware sample application to validate whether an OpenShift Data Foundation Metro-DR setup is configured correctly.

Important

With latency between the data zones, you can expect to see performance degradation compared to an OpenShift cluster with low latency between nodes and zones (for example, all nodes in the same location). How much performance degrades depends on the latency between the zones and on how the application uses the storage (for example, heavy write traffic). Ensure that you test critical applications with the Metro-DR cluster configuration so that application performance is sufficient for the required service levels.

5.1. Install Zone Aware Sample Application

A ReadWriteMany (RWX) Persistent Volume Claim (PVC) is created using the ocs-storagecluster-cephfs storage class. Multiple pods use the newly created RWX PVC at the same time. The application used is called File Uploader.

The following steps demonstrate how an application can be spread across topology zones so that it is still available in the event of a site outage:

Note

This demonstration is possible since this application shares the same RWX volume for storing files. It works for persistent data access as well because Red Hat OpenShift Data Foundation is configured as a Metro-DR stretched cluster with zone awareness and high availability.
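
For reference, the RWX claim that the procedure below creates with the oc set volume command corresponds roughly to the following manifest; the name, size, and storage class match the values used later in this chapter:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-shared-storage
      namespace: my-shared-storage
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
      storageClassName: ocs-storagecluster-cephfs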

  1. Create a new project.

    $ oc new-project my-shared-storage
  2. Deploy the example PHP application called file-uploader.

    $ oc new-app openshift/php:7.3-ubi8~https://github.com/christianh814/openshift-php-upload-demo --name=file-uploader

    Example Output:

    Found image 4f2dcc0 (9 days old) in image stream "openshift/php" under tag "7.2-ubi8" for "openshift/php:7.2-ubi8"
    
    Apache 2.4 with PHP 7.2
    -----------------------
    PHP 7.2 available as container is a base platform for building and running various PHP 7.2 applications and frameworks. PHP is an HTML-embedded scripting language. PHP attempts to make it easy for developers to write dynamically generated web pages. PHP also offers built-in database integration for several commercial and non-commercial database management systems, so writing a database-enabled webpage with PHP is fairly simple. The most common
    use of PHP coding is probably as a replacement for CGI scripts.
    
    Tags: builder, php, php72, php-72
    
    * A source build using source code from https://github.com/christianh814/openshift-php-upload-demo will be created
    * The resulting image will be pushed to image stream tag "file-uploader:latest"
    * Use 'oc start-build' to trigger a new build
    
    --> Creating resources ...
        imagestream.image.openshift.io "file-uploader" created
        buildconfig.build.openshift.io "file-uploader" created
        deployment.apps "file-uploader" created
        service "file-uploader" created
    --> Success
        Build scheduled, use 'oc logs -f buildconfig/file-uploader' to track its progress.
    
        Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
         'oc expose service/file-uploader'
    
        Run 'oc status' to view your app.
  3. View the build log and wait until the application is deployed.

    $ oc logs -f bc/file-uploader -n my-shared-storage

    Example Output:

    Cloning "https://github.com/christianh814/openshift-php-upload-demo" ...
    
        [...]
    Generating dockerfile with builder image image-registry.openshift-image-registry.svc:5000/openshift/php@sha256:d97466f33999951739a76bce922ab17088885db610c0e05b593844b41d5494ea
    STEP 1: FROM image-registry.openshift-image-registry.svc:5000/openshift/php@sha256:d97466f33999951739a76bce922ab17088885db610c0e05b593844b41d5494ea
    STEP 2: LABEL "io.openshift.build.commit.author"="Christian Hernandez <christian.hernandez@yahoo.com>"       "io.openshift.build.commit.date"="Sun Oct 1 17:15:09 2017 -0700"       "io.openshift.build.commit.id"="288eda3dff43b02f7f7b6b6b6f93396ffdf34cb2"       "io.openshift.build.commit.ref"="master"       "io.openshift.build.commit.message"="trying to modularize"       "io.openshift.build.source-location"="https://github.com/christianh814/openshift-php-upload-demo"       "io.openshift.build.image"="image-registry.openshift-image-registry.svc:5000/openshift/php@sha256:d97466f33999951739a76bce922ab17088885db610c0e05b593844b41d5494ea"
    STEP 3: ENV OPENSHIFT_BUILD_NAME="file-uploader-1"     OPENSHIFT_BUILD_NAMESPACE="my-shared-storage"     OPENSHIFT_BUILD_SOURCE="https://github.com/christianh814/openshift-php-upload-demo"     OPENSHIFT_BUILD_COMMIT="288eda3dff43b02f7f7b6b6b6f93396ffdf34cb2"
    STEP 4: USER root
    STEP 5: COPY upload/src /tmp/src
    STEP 6: RUN chown -R 1001:0 /tmp/src
    STEP 7: USER 1001
    STEP 8: RUN /usr/libexec/s2i/assemble
    ---> Installing application source...
    => sourcing 20-copy-config.sh ...
    ---> 17:24:39     Processing additional arbitrary httpd configuration provided by s2i ...
    => sourcing 00-documentroot.conf ...
    => sourcing 50-mpm-tuning.conf ...
    => sourcing 40-ssl-certs.sh ...
    STEP 9: CMD /usr/libexec/s2i/run
    STEP 10: COMMIT temp.builder.openshift.io/my-shared-storage/file-uploader-1:3b83e447
    Getting image source signatures
    
    [...]

    The oc logs -f command exits and the command prompt returns once you see Push successful.

    Note

    The new-app command deploys the application directly from the git repository and does not use the OpenShift template, so the OpenShift route resource is not created by default. You need to create the route manually.
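
    If you stop following the log before the build finishes, you can check the build status at any time, for example:

    $ oc get builds -n my-shared-storage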

Scaling the application

  1. Scale the application to four replicas and expose its services to make the application zone aware and available.

    $ oc expose svc/file-uploader -n my-shared-storage
    $ oc scale --replicas=4 deploy/file-uploader -n my-shared-storage
    $ oc get pods -o wide -n my-shared-storage

    You should have four file-uploader pods in a few minutes. Repeat the oc get pods command until all four file-uploader pods are in the Running status.
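
    Instead of repeatedly polling with oc get pods, you can also wait for the rollout to complete, for example:

    $ oc rollout status deploy/file-uploader -n my-shared-storage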

  2. Create a PVC and attach it to the application.

    $ oc set volume deploy/file-uploader --add --name=my-shared-storage \
    -t pvc --claim-mode=ReadWriteMany --claim-size=10Gi \
    --claim-name=my-shared-storage --claim-class=ocs-storagecluster-cephfs \
    --mount-path=/opt/app-root/src/uploaded \
    -n my-shared-storage

    This command:

    • Creates a PVC.
    • Updates the application deployment to include a volume definition.
    • Updates the application deployment to mount the volume at the specified mount path.
    • Triggers a new rollout of the deployment with the four application pods.
  3. Check the result of adding the volume.

    $ oc get pvc -n my-shared-storage

    Example Output:

    NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
    my-shared-storage   Bound    pvc-5402cc8a-e874-4d7e-af76-1eb05bd2e7c7   10Gi       RWX            ocs-storagecluster-cephfs   52s

    Notice the ACCESS MODE is set to RWX.

    All four file-uploader pods use the same RWX volume. Without this access mode, OpenShift does not reliably attach multiple pods to the same Persistent Volume (PV). If you attempt to scale up a deployment that uses a ReadWriteOnce (RWO) PV, the pods might become colocated on the same node.
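
    To see how the deployment now references the claim, you can list the volumes attached to it, for example:

    $ oc set volume deploy/file-uploader -n my-shared-storage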

5.2. Modify Deployment to be Zone Aware

Currently, the file-uploader Deployment is not zone aware and can schedule all of its pods in the same zone. In that case, if there is a site outage, the application becomes unavailable. For more information, see Controlling pod placement by using pod topology spread constraints.

  1. Add the pod placement rule in the application deployment configuration to make the application zone aware.

    1. Run the following command, and review the output:

      $ oc get deployment file-uploader -o yaml -n my-shared-storage | less

      Example Output:

      [...]
      spec:
        progressDeadlineSeconds: 600
        replicas: 4
        revisionHistoryLimit: 10
        selector:
          matchLabels:
            deployment: file-uploader
        strategy:
          rollingUpdate:
            maxSurge: 25%
            maxUnavailable: 25%
          type: RollingUpdate
        template:
          metadata:
            annotations:
              openshift.io/generated-by: OpenShiftNewApp
            creationTimestamp: null
            labels:
              deployment: file-uploader
          spec: # <-- Start inserted lines after here
            containers: # <-- End inserted lines before here
            - image: image-registry.openshift-image-registry.svc:5000/my-shared-storage/file-uploader@sha256:a458ea62f990e431ad7d5f84c89e2fa27bdebdd5e29c5418c70c56eb81f0a26b
              imagePullPolicy: IfNotPresent
              name: file-uploader
      [...]
    2. Edit the deployment to use the topology zone labels.

      $ oc edit deployment file-uploader -n my-shared-storage

      Add the following new lines between the Start and End markers (shown in the output of the previous step):

      [...]
          spec:
            topologySpreadConstraints:
              - labelSelector:
                  matchLabels:
                    deployment: file-uploader
                maxSkew: 1
                topologyKey: topology.kubernetes.io/zone
                whenUnsatisfiable: DoNotSchedule
              - labelSelector:
                  matchLabels:
                    deployment: file-uploader
                maxSkew: 1
                topologyKey: kubernetes.io/hostname
                whenUnsatisfiable: ScheduleAnyway
            nodeSelector:
              node-role.kubernetes.io/worker: ""
            containers:
      [...]

      Example output:

      deployment.apps/file-uploader edited
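
      To confirm that the constraints were saved to the deployment, you can, for example, print the relevant part of its spec:

      $ oc get deployment file-uploader -n my-shared-storage -o yaml | grep -A 12 topologySpreadConstraints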
  2. Scale the deployment down to zero pods and then back up to four pods. This is needed because the pod placement rules of the deployment changed.

    Scaling down to zero pods
    $ oc scale deployment file-uploader --replicas=0 -n my-shared-storage

    Example output:

    deployment.apps/file-uploader scaled
    Scaling up to four pods
    $ oc scale deployment file-uploader --replicas=4 -n my-shared-storage

    Example output:

    deployment.apps/file-uploader scaled
  3. Verify that the four pods are spread across the four nodes in datacenter1 and datacenter2 zones.

    $ oc get pods -o wide -n my-shared-storage | egrep '^file-uploader'| grep -v build | awk '{print $7}' | sort | uniq -c

    Example output:

       1 perf1-mz8bt-worker-d2hdm
       1 perf1-mz8bt-worker-k68rv
       1 perf1-mz8bt-worker-ntkp8
       1 perf1-mz8bt-worker-qpwsr

    Search for the zone labels used.

    $ oc get nodes -L topology.kubernetes.io/zone | grep datacenter | grep -v master

    Example output:

    perf1-mz8bt-worker-d2hdm   Ready    worker   35d   v1.20.0+5fbfd19   datacenter1
    perf1-mz8bt-worker-k68rv   Ready    worker   35d   v1.20.0+5fbfd19   datacenter1
    perf1-mz8bt-worker-ntkp8   Ready    worker   35d   v1.20.0+5fbfd19   datacenter2
    perf1-mz8bt-worker-qpwsr   Ready    worker   35d   v1.20.0+5fbfd19   datacenter2
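
    To map each file-uploader pod to its zone in a single pass, you can, for example, look up the zone label of the node that runs each pod (this assumes the pods carry the deployment=file-uploader label shown earlier):

    $ for node in $(oc get pods -n my-shared-storage -l deployment=file-uploader -o jsonpath='{.items[*].spec.nodeName}'); do oc get node "$node" -L topology.kubernetes.io/zone --no-headers; done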
  4. Use your browser to upload new files through the file-uploader web application.

    1. Find the route that is created.

      $ oc get route file-uploader -n my-shared-storage -o jsonpath --template="http://{.spec.host}{'\n'}"

      Example Output:

      http://file-uploader-my-shared-storage.apps.cluster-ocs4-abdf.ocs4-abdf.sandbox744.opentlc.com
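
      Optionally, you can confirm from the command line that the route responds before opening it in a browser, for example with curl (an HTTP 200 status code indicates the application is reachable):

      $ curl -s -o /dev/null -w "%{http_code}\n" "http://$(oc get route file-uploader -n my-shared-storage -o jsonpath='{.spec.host}')"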
    2. Point your browser to the web application using the route in the previous step.

      The web application lists all previously uploaded files and offers the ability to upload new ones as well as download the existing data. Right now, there is nothing.

    3. Select an arbitrary file from your local machine and upload it to the application.

      1. Click Choose file to select an arbitrary file.
      2. Click Upload.

        Figure 5.1. A simple PHP-based file upload tool

    4. Click List uploaded files to see the list of all currently uploaded files.
Note

The OpenShift Container Platform image registry, ingress routing, and monitoring services are not zone aware.
