
Chapter 16. Deploying an application


16.1. Tutorial: Deploying an application

16.1.1. Introduction

After successfully provisioning your cluster, you can deploy an application on it. This application allows you to become more familiar with some of the features of Red Hat OpenShift Service on AWS (ROSA) and Kubernetes.

16.1.1.1. Lab overview

In this lab, you will complete the following set of tasks designed to help you understand the concepts of deploying and operating container-based applications:

  • Deploy a Node.js based app by using S2I and Kubernetes Deployment objects.
  • Set up a continuous delivery (CD) pipeline to automatically push source code changes.
  • Explore logging.
  • Experience self healing of applications.
  • Explore configuration management through configmaps, secrets, and environment variables.
  • Use persistent storage to share data across pod restarts.
  • Explore networking within Kubernetes and applications.
  • Familiarize yourself with ROSA and Kubernetes functionality.
  • Automatically scale pods based on loads from the Horizontal Pod Autoscaler.
  • Use AWS Controllers for Kubernetes (ACK) to deploy and use an S3 bucket.

This lab uses either the ROSA CLI or ROSA web user interface (UI).

16.2. Tutorial: Deploying an application

16.2.1. Prerequisites

  1. A provisioned ROSA cluster

    This lab assumes you have access to a successfully provisioned ROSA cluster. If you have not yet created a ROSA cluster, see the Red Hat OpenShift Service on AWS quick start guide for more information.

  2. The OpenShift Command Line Interface (CLI)

    For more information, see Getting started with the OpenShift CLI.

  3. A GitHub Account

    Use your existing GitHub account or register at https://github.com/signup.

16.3. Tutorial: Deploying an application

16.3.1. Lab overview

16.3.1.1. Lab resources

  • Source code for the OSToy application
  • OSToy front-end container image
  • OSToy microservice container image
  • Deployment Definition YAML files:

    ostoy-frontend-deployment.yaml

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ostoy-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ostoy-frontend
      labels:
        app: ostoy
    spec:
      selector:
        matchLabels:
          app: ostoy-frontend
      strategy:
        type: Recreate
      replicas: 1
      template:
        metadata:
          labels:
            app: ostoy-frontend
        spec:
          # Uncomment to use with ACK portion of the workshop
          # If you chose a different service account name please replace it.
          # serviceAccount: ostoy-sa
          containers:
          - name: ostoy-frontend
            securityContext:
              allowPrivilegeEscalation: false
              runAsNonRoot: true
              seccompProfile:
                type: RuntimeDefault
              capabilities:
                drop:
                - ALL
            image: quay.io/ostoylab/ostoy-frontend:1.6.0
            imagePullPolicy: IfNotPresent
            ports:
            - name: ostoy-port
              containerPort: 8080
            resources:
              requests:
                memory: "256Mi"
                cpu: "100m"
              limits:
                memory: "512Mi"
                cpu: "200m"
            volumeMounts:
            - name: configvol
              mountPath: /var/config
            - name: secretvol
              mountPath: /var/secret
            - name: datavol
              mountPath: /var/demo_files
            livenessProbe:
              httpGet:
                path: /health
                port: 8080
              initialDelaySeconds: 10
              periodSeconds: 5
            env:
            - name: ENV_TOY_SECRET
              valueFrom:
                secretKeyRef:
                  name: ostoy-secret-env
                  key: ENV_TOY_SECRET
            - name: MICROSERVICE_NAME
              value: OSTOY_MICROSERVICE_SVC
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumes:
            - name: configvol
              configMap:
                name: ostoy-configmap-files
            - name: secretvol
              secret:
                defaultMode: 420
                secretName: ostoy-secret
            - name: datavol
              persistentVolumeClaim:
                claimName: ostoy-pvc
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: ostoy-frontend-svc
      labels:
        app: ostoy-frontend
    spec:
      type: ClusterIP
      ports:
        - port: 8080
          targetPort: ostoy-port
          protocol: TCP
          name: ostoy
      selector:
        app: ostoy-frontend
    ---
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: ostoy-route
    spec:
      to:
        kind: Service
        name: ostoy-frontend-svc
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: ostoy-secret-env
    type: Opaque
    data:
      ENV_TOY_SECRET: VGhpcyBpcyBhIHRlc3Q=
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: ostoy-configmap-files
    data:
      config.json:  '{ "default": "123" }'
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: ostoy-secret
    data:
      secret.txt: VVNFUk5BTUU9bXlfdXNlcgpQQVNTV09SRD1AT3RCbCVYQXAhIzYzMlk1RndDQE1UUWsKU01UUD1sb2NhbGhvc3QKU01UUF9QT1JUPTI1
    type: Opaque

    ostoy-microservice-deployment.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ostoy-microservice
      labels:
        app: ostoy
    spec:
      selector:
        matchLabels:
          app: ostoy-microservice
      replicas: 1
      template:
        metadata:
          labels:
            app: ostoy-microservice
        spec:
          containers:
          - name: ostoy-microservice
            securityContext:
              allowPrivilegeEscalation: false
              runAsNonRoot: true
              seccompProfile:
                type: RuntimeDefault
              capabilities:
                drop:
                - ALL
            image: quay.io/ostoylab/ostoy-microservice:1.5.0
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 8080
              protocol: TCP
            resources:
              requests:
                memory: "128Mi"
                cpu: "50m"
              limits:
                memory: "256Mi"
                cpu: "100m"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: ostoy-microservice-svc
      labels:
        app: ostoy-microservice
    spec:
      type: ClusterIP
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        app: ostoy-microservice

  • S3 bucket manifest for ACK S3

    s3-bucket.yaml

    apiVersion: s3.services.k8s.aws/v1alpha1
    kind: Bucket
    metadata:
      name: ostoy-bucket
      namespace: ostoy
    spec:
      name: ostoy-bucket

Note

To simplify deployment of the OSToy application, all of the objects required in the above deployment manifests are grouped together. For a typical enterprise deployment, a separate manifest file for each Kubernetes object is recommended.
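The values in the Secret objects above are base64-encoded, not encrypted. If you want to inspect a value locally, a quick decode with a POSIX shell and the `base64` utility looks like this (the encoded string is the `ENV_TOY_SECRET` value from the manifest above):

```shell
# Decode the base64-encoded ENV_TOY_SECRET value from the ostoy-secret-env manifest.
# Note: base64 is an encoding, not encryption; anyone with read access to the
# Secret object can recover the plain text this way.
SECRET=$(echo "VGhpcyBpcyBhIHRlc3Q=" | base64 -d)
echo "$SECRET"  # prints: This is a test
```

On a live cluster, `oc get secret ostoy-secret-env -o jsonpath='{.data.ENV_TOY_SECRET}' | base64 -d` retrieves and decodes the same value directly.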

16.3.1.2. About the OSToy application

OSToy is a simple Node.js application that you will deploy to a ROSA cluster to help explore the functionality of Kubernetes. This application has a user interface where you can:

  • Write messages to the log (stdout / stderr).
  • Intentionally crash the application to view self-healing.
  • Toggle a liveness probe and monitor OpenShift behavior.
  • Read config maps, secrets, and env variables.
  • If connected to shared storage, read and write files.
  • Check network connectivity, intra-cluster DNS, and intra-communication with the included microservice.
  • Increase the load to view automatic scaling of the pods to handle the load using the Horizontal Pod Autoscaler.
  • Optional: Connect to an AWS S3 bucket to read and write objects.

16.3.1.3. OSToy Application Diagram

OSToy architecture diagram

16.3.1.4. Understanding the OSToy UI

Preview of the OSToy homepage
  1. Shows the name of the pod that served the page to your browser.
  2. Home: The main page of the application, where you can perform the functions explored in this lab.
  3. Persistent Storage: Allows you to write data to the persistent volume bound to this application.
  4. Config Maps: Shows the contents of configmaps available to the application and the key:value pairs.
  5. Secrets: Shows the contents of secrets available to the application and the key:value pairs.
  6. ENV Variables: Shows the environment variables available to the application.
  7. Networking: Tools to illustrate networking within the application.
  8. Pod Auto Scaling: Tool to increase the load of the pods and test the HPA.
  9. ACK S3: Optional: Integrate with AWS S3 to read and write objects to a bucket.

    Note

    To see the "ACK S3" section of OSToy, you must complete the ACK section of this workshop. If you decide not to complete that section, the OSToy application will still function.

  10. About: Displays more information about the application.

16.4. Tutorial: Deploying an application

16.4.1. Deploying the OSToy application with Kubernetes

You can deploy the OSToy application by creating and storing the images for the front-end and back-end microservice containers in an image repository. You can then create Kubernetes deployments to deploy the application.

16.4.1.1. Retrieving the login command

  1. If you are not logged in to the CLI, access your cluster with the web console.
  2. Click the dropdown arrow next to your login name in the upper right, and select Copy Login Command.

    CLI login screen

    A new tab opens.

  3. Select your authentication method.
  4. Click Display Token.
  5. Copy the command under Log in with this token.
  6. From your terminal, paste and run the copied command. If the login is successful, you will see the following confirmation message:

    $ oc login --token=<your_token> --server=https://api.osd4-demo.abc1.p1.openshiftapps.com:6443
    Logged into "https://api.myrosacluster.abcd.p1.openshiftapps.com:6443" as "rosa-user" using the token provided.
    
    You don't have any projects. You can try to create a new project, by running
    
    oc new-project <project name>

16.4.1.2. Creating a new project

16.4.1.2.1. Using the CLI
  1. Create a new project named ostoy in your cluster by running the following command:

    $ oc new-project ostoy

    Example output

    Now using project "ostoy" on server "https://api.myrosacluster.abcd.p1.openshiftapps.com:6443".

  2. Optional: Alternatively, create a unique project name by running the following command:

    $ oc new-project ostoy-$(uuidgen | cut -d - -f 2 | tr '[:upper:]' '[:lower:]')
16.4.1.2.2. Using the web console
  1. From the web console, click Home → Projects.
  2. On the Projects page, click Create Project.

    The project creation screen

16.4.1.3. Deploying the back-end microservice

The microservice serves internal web requests and returns a JSON object containing the current hostname and a randomly generated color string.

  • Deploy the microservice by running the following command from your terminal:

    $ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-microservice-deployment.yaml

    Example output

    $ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-microservice-deployment.yaml
    deployment.apps/ostoy-microservice created
    service/ostoy-microservice-svc created

16.4.1.4. Deploying the front-end service

The front-end deployment uses the Node.js front-end for the application and additional Kubernetes objects.

The ostoy-frontend-deployment.yaml file shows that the front-end deployment defines the following objects:

  • Persistent volume claim
  • Deployment object
  • Service
  • Route
  • Configmaps
  • Secrets

    • Deploy the application front-end and create all of the objects by entering the following command:

      $ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-frontend-deployment.yaml

      Example output

      persistentvolumeclaim/ostoy-pvc created
      deployment.apps/ostoy-frontend created
      service/ostoy-frontend-svc created
      route.route.openshift.io/ostoy-route created
      configmap/ostoy-configmap-env created
      secret/ostoy-secret-env created
      configmap/ostoy-configmap-files created
      secret/ostoy-secret created

      You should see all objects created successfully.

16.4.1.5. Getting the route

You must get the route to access the application.

  • Get the route to your application by running the following command:

    $ oc get route

    Example output

    NAME          HOST/PORT                                                 PATH   SERVICES             PORT    TERMINATION   WILDCARD
    ostoy-route   ostoy-route-ostoy.apps.<your-rosa-cluster>.abcd.p1.openshiftapps.com          ostoy-frontend-svc   <all>                 None
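The route host is the second column of that output. As a sketch (parsing sample text shaped like the example above, with an illustrative hostname; on a live cluster you would pipe the output of `oc get route` into the same `awk`), the host can be extracted and turned into a URL:

```shell
# Sample of the `oc get route` output shown above (the hostname is illustrative).
ROUTE_OUTPUT='NAME          HOST/PORT                                                  PATH   SERVICES             PORT    TERMINATION   WILDCARD
ostoy-route   ostoy-route-ostoy.apps.example.abcd.p1.openshiftapps.com          ostoy-frontend-svc   <all>                 None'

# The host is the second whitespace-separated field of the data row (row 2).
HOST=$(printf '%s\n' "$ROUTE_OUTPUT" | awk 'NR==2 {print $2}')
echo "http://$HOST"
```

Alternatively, `oc get route ostoy-route -o jsonpath='{.spec.host}'` returns the host directly without any text parsing.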

16.4.1.6. Viewing the application

  1. Copy the ostoy-route-ostoy.apps.<your-rosa-cluster>.abcd.p1.openshiftapps.com URL output from the previous step.
  2. Paste the copied URL into your web browser and press Enter. You should see the homepage of your application. If the page does not load, make sure you use http and not https.

    OStoy application homepage

16.5. Tutorial: Networking

This tutorial shows how the OSToy app uses intra-cluster networking to separate functions by using microservices and visualize the scaling of pods.

OSToy Diagram

The diagram shows there are at least two separate pods, each with its own service.

One pod functions as the front-end web application, with a service and a publicly accessible route. The other pod functions as the back-end microservice, with a service object so that the front-end pod can communicate with the microservice. If more than one microservice pod is running, this communication is spread across those pods. Because the microservice has no route, it is not accessible from outside the cluster, or from other namespaces or projects if network policies are configured. The sole purpose of this microservice is to serve internal web requests and return a JSON object containing the current hostname, which is the pod’s name, and a randomly generated color string. This color string is used to display a box of that color in the tile titled "Intra-cluster Communication".

For more information about the networking limitations, see About network policy.

16.5.1. Intra-cluster networking

You can view your networking configurations in your OSToy application.

Procedure

  1. In the OSToy application, click Networking in the left menu.
  2. Review the networking configuration. The right tile, titled "Hostname Lookup", illustrates how the service name created for a pod resolves to an internal ClusterIP address.

    OSToy Networking page
  3. In the "Hostname Lookup" tile, enter the microservice's service name in the format <service_name>.<namespace>.svc.cluster.local. You can find this service name in the service definition of ostoy-microservice.yaml by running the following command:

    $ oc get service <name_of_service> -o yaml

    Example output

    apiVersion: v1
    kind: Service
    metadata:
      name: ostoy-microservice-svc
      labels:
        app: ostoy-microservice
    spec:
      type: ClusterIP
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        app: ostoy-microservice

    In this example, the full hostname is ostoy-microservice-svc.ostoy.svc.cluster.local.

  4. You see an IP address returned. In this example it is 172.30.165.246. This is the intra-cluster IP address, which is only accessible from within the cluster.

    OSToy DNS
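The fully qualified service name follows a fixed pattern, so it can be assembled from the service name and namespace. A minimal sketch, assuming the ostoy namespace used in this lab:

```shell
# Kubernetes cluster DNS names for services follow the pattern:
#   <service_name>.<namespace>.svc.cluster.local
SERVICE=ostoy-microservice-svc
NAMESPACE=ostoy
FQDN="${SERVICE}.${NAMESPACE}.svc.cluster.local"
echo "$FQDN"  # prints: ostoy-microservice-svc.ostoy.svc.cluster.local
```

This is the same name format the "Hostname Lookup" tile expects; from inside any pod in the cluster, that name resolves to the service's ClusterIP.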

16.6. Tutorial: Persistent volumes for cluster storage

Red Hat OpenShift Service on AWS (ROSA) (classic architecture) and Red Hat OpenShift Service on AWS (ROSA) support storing persistent volumes with either Amazon Web Services (AWS) Elastic Block Store (EBS) or AWS Elastic File System (EFS).

16.6.1. Using persistent volumes

Use the following procedures to create a file, store it on a persistent volume in your cluster, and confirm that it still exists after pod failure and re-creation.

16.6.1.1. Viewing a persistent volume claim

  1. Navigate to the cluster’s OpenShift web console.
  2. Click Storage in the left menu, then click PersistentVolumeClaims to see a list of all the persistent volume claims.
  3. Click a persistent volume claim to see the size, access mode, storage class, and other additional claim details.

    Note

    The access mode is ReadWriteOnce (RWO). This means that the volume can be mounted to only one node at a time, and the pods on that node can read from and write to the volume.

16.6.1.2. Storing your file

  1. In the OSToy app console, click Persistent Storage in the left menu.
  2. In the Filename box, enter a file name with a .txt extension, for example test-pv.txt.
  3. In the File contents box, enter a sentence of text, for example OpenShift is the greatest thing since sliced bread!.
  4. Click Create file.

    cloud experts storage ostoy createfile
  5. Scroll to Existing files on the OSToy app console.
  6. Click the file you created to see the file name and contents.

    cloud experts storage ostoy viewfile

16.6.1.3. Crashing the pod

  1. On the OSToy app console, click Home in the left menu.
  2. Click Crash pod.

16.6.1.4. Confirming persistent storage

  1. Wait for the pod to re-create.
  2. On the OSToy app console, click Persistent Storage in the left menu.
  3. Find the file you created, and open it to view and confirm the contents.

    cloud experts storage ostoy existingfile

Verification

The deployment YAML file shows that the persistent volume claim is mounted at the /var/demo_files directory.

  1. Retrieve the name of your front-end pod by running the following command:

    $ oc get pods
  2. Start a remote shell session in your container by running the following command:

    $ oc rsh <pod_name>
  3. Go to the directory by running the following command:

    $ cd /var/demo_files
  4. Optional: See all the files you created by running the following command:

    $ ls
  5. Open the file to view the contents by running the following command:

    $ cat test-pv.txt
  6. Verify that the output is the text you entered in the OSToy app console.

    Example terminal

    $ oc get pods
    NAME                                  READY     STATUS    RESTARTS   AGE
    ostoy-frontend-5fc8d486dc-wsw24       1/1       Running   0          18m
    ostoy-microservice-6cf764974f-hx4qm   1/1       Running   0          18m
    
    $ oc rsh ostoy-frontend-5fc8d486dc-wsw24
    
    $ cd /var/demo_files/
    
    $ ls
    lost+found   test-pv.txt
    
    $ cat test-pv.txt
    OpenShift is the greatest thing since sliced bread!

16.6.1.5. Ending the session

  • Type exit in your terminal to quit the session and return to the CLI.

16.6.2. Additional resources
