
Chapter 18. Deploying an application


18.1. Tutorial: Deploying an application

18.1.1. Introduction

After successfully provisioning your cluster, you can deploy an application on it. This application allows you to become more familiar with some of the features of Red Hat OpenShift Service on AWS (ROSA) and Kubernetes.

18.1.1.1. Lab overview

In this lab, you will complete the following set of tasks designed to help you understand the concepts of deploying and operating container-based applications:

  • Deploy a Node.js based app by using S2I and Kubernetes Deployment objects.
  • Set up a continuous delivery (CD) pipeline to automatically deploy pushed source code changes.
  • Explore logging.
  • Experience self-healing of applications.
  • Explore configuration management through configmaps, secrets, and environment variables.
  • Use persistent storage to share data across pod restarts.
  • Explore networking within Kubernetes and applications.
  • Familiarize yourself with ROSA and Kubernetes functionality.
  • Automatically scale pods based on load by using the Horizontal Pod Autoscaler.
  • Use AWS Controllers for Kubernetes (ACK) to deploy and use an S3 bucket.

This lab uses either the ROSA CLI or ROSA web user interface (UI).

18.2. Tutorial: Deploying an application

18.2.1. Prerequisites

  1. A provisioned ROSA cluster

    This lab assumes you have access to a successfully provisioned ROSA cluster. If you have not yet created a ROSA cluster, see the ROSA quick start guide for more information.

  2. The OpenShift Command Line Interface (CLI)

    For more information, see Getting started with the OpenShift CLI.

  3. A GitHub Account

    Use your existing GitHub account or register at https://github.com/signup.

18.2.1.1. Understanding AWS account association

Before you can use Red Hat OpenShift Cluster Manager on the Red Hat Hybrid Cloud Console to create Red Hat OpenShift Service on AWS (ROSA) clusters that use the AWS Security Token Service (STS), you must associate your AWS account with your Red Hat organization. You can associate your account by creating and linking the following IAM roles.

OpenShift Cluster Manager role

Create an OpenShift Cluster Manager IAM role and link it to your Red Hat organization.

You can apply basic or administrative permissions to the OpenShift Cluster Manager role. The basic permissions enable cluster maintenance using OpenShift Cluster Manager. The administrative permissions enable automatic deployment of the cluster-specific Operator roles and the OpenID Connect (OIDC) provider using OpenShift Cluster Manager.
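
A minimal sketch of the two variants follows; run only one of them. The default command creates the role with basic permissions, and the --admin flag applies the administrative permissions instead:

    $ rosa create ocm-role          # basic permissions
    $ rosa create ocm-role --admin  # administrative permissions

Choose the administrative variant only if you want OpenShift Cluster Manager to deploy the cluster-specific Operator roles and OIDC provider automatically.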

User role

Create a user IAM role and link it to your Red Hat user account. The Red Hat user account must exist in the Red Hat organization that is linked to your OpenShift Cluster Manager role.

The user role is used by Red Hat to verify your AWS identity when you use the OpenShift Cluster Manager Hybrid Cloud Console to install a cluster and the required STS resources.

Associating your AWS account with your Red Hat organization

Before using Red Hat OpenShift Cluster Manager on the Red Hat Hybrid Cloud Console to create Red Hat OpenShift Service on AWS (ROSA) clusters that use the AWS Security Token Service (STS), create an OpenShift Cluster Manager IAM role and link it to your Red Hat organization. Then, create a user IAM role and link it to your Red Hat user account in the same Red Hat organization.

Procedure

  1. Create an OpenShift Cluster Manager role and link it to your Red Hat organization:

    Note

    To enable automatic deployment of the cluster-specific Operator roles and the OpenID Connect (OIDC) provider using the OpenShift Cluster Manager Hybrid Cloud Console, you must apply the administrative privileges to the role by choosing the Admin OCM role command in the Accounts and roles step of creating a ROSA cluster. For more information about the basic and administrative privileges for the OpenShift Cluster Manager role, see Understanding AWS account association.

    Note

    If you choose the Basic OCM role command in the Accounts and roles step of creating a ROSA cluster in the OpenShift Cluster Manager Hybrid Cloud Console, you must deploy a ROSA cluster using manual mode. You will be prompted to configure the cluster-specific Operator roles and the OpenID Connect (OIDC) provider in a later step.

    $ rosa create ocm-role

    Select the default values at the prompts to quickly create and link the role.

  2. Create a user role and link it to your Red Hat user account:

    $ rosa create user-role

    Select the default values at the prompts to quickly create and link the role.

    Note

    The Red Hat user account must exist in the Red Hat organization that is linked to your OpenShift Cluster Manager role.
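
After both roles are created and linked, you can confirm the association from the ROSA CLI. This check is a quick sketch rather than part of the official procedure:

    $ rosa whoami

The output lists your AWS account ID alongside your Red Hat (OCM) account and organization, which confirms the linkage that OpenShift Cluster Manager uses.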

18.3. Tutorial: Deploying an application

18.3.1. Lab overview

18.3.1.1. Lab resources

  • Source code for the OSToy application
  • OSToy front-end container image
  • OSToy microservice container image
  • Deployment Definition YAML files:

    ostoy-frontend-deployment.yaml

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ostoy-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ostoy-frontend
      labels:
        app: ostoy
    spec:
      selector:
        matchLabels:
          app: ostoy-frontend
      strategy:
        type: Recreate
      replicas: 1
      template:
        metadata:
          labels:
            app: ostoy-frontend
        spec:
          # Uncomment to use with ACK portion of the workshop
          # If you chose a different service account name please replace it.
          # serviceAccount: ostoy-sa
          containers:
          - name: ostoy-frontend
            securityContext:
              allowPrivilegeEscalation: false
              runAsNonRoot: true
              seccompProfile:
                type: RuntimeDefault
              capabilities:
                drop:
                - ALL
            image: quay.io/ostoylab/ostoy-frontend:1.6.0
            imagePullPolicy: IfNotPresent
            ports:
            - name: ostoy-port
              containerPort: 8080
            resources:
              requests:
                memory: "256Mi"
                cpu: "100m"
              limits:
                memory: "512Mi"
                cpu: "200m"
            volumeMounts:
            - name: configvol
              mountPath: /var/config
            - name: secretvol
              mountPath: /var/secret
            - name: datavol
              mountPath: /var/demo_files
            livenessProbe:
              httpGet:
                path: /health
                port: 8080
              initialDelaySeconds: 10
              periodSeconds: 5
            env:
            - name: ENV_TOY_SECRET
              valueFrom:
                secretKeyRef:
                  name: ostoy-secret-env
                  key: ENV_TOY_SECRET
            - name: MICROSERVICE_NAME
              value: OSTOY_MICROSERVICE_SVC
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumes:
            - name: configvol
              configMap:
                name: ostoy-configmap-files
            - name: secretvol
              secret:
                defaultMode: 420
                secretName: ostoy-secret
            - name: datavol
              persistentVolumeClaim:
                claimName: ostoy-pvc
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: ostoy-frontend-svc
      labels:
        app: ostoy-frontend
    spec:
      type: ClusterIP
      ports:
        - port: 8080
          targetPort: ostoy-port
          protocol: TCP
          name: ostoy
      selector:
        app: ostoy-frontend
    ---
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: ostoy-route
    spec:
      to:
        kind: Service
        name: ostoy-frontend-svc
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: ostoy-secret-env
    type: Opaque
    data:
      ENV_TOY_SECRET: VGhpcyBpcyBhIHRlc3Q=
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: ostoy-configmap-files
    data:
      config.json:  '{ "default": "123" }'
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: ostoy-secret
    data:
      secret.txt: VVNFUk5BTUU9bXlfdXNlcgpQQVNTV09SRD1AT3RCbCVYQXAhIzYzMlk1RndDQE1UUWsKU01UUD1sb2NhbGhvc3QKU01UUF9QT1JUPTI1
    type: Opaque

    ostoy-microservice-deployment.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ostoy-microservice
      labels:
        app: ostoy
    spec:
      selector:
        matchLabels:
          app: ostoy-microservice
      replicas: 1
      template:
        metadata:
          labels:
            app: ostoy-microservice
        spec:
          containers:
          - name: ostoy-microservice
            securityContext:
              allowPrivilegeEscalation: false
              runAsNonRoot: true
              seccompProfile:
                type: RuntimeDefault
              capabilities:
                drop:
                - ALL
            image: quay.io/ostoylab/ostoy-microservice:1.5.0
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 8080
              protocol: TCP
            resources:
              requests:
                memory: "128Mi"
                cpu: "50m"
              limits:
                memory: "256Mi"
                cpu: "100m"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: ostoy-microservice-svc
      labels:
        app: ostoy-microservice
    spec:
      type: ClusterIP
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        app: ostoy-microservice

  • S3 bucket manifest for ACK S3

    s3-bucket.yaml

    apiVersion: s3.services.k8s.aws/v1alpha1
    kind: Bucket
    metadata:
      name: ostoy-bucket
      namespace: ostoy
    spec:
      name: ostoy-bucket

Note

To simplify deployment of the OSToy application, all of the objects required in the above deployment manifests are grouped together. For a typical enterprise deployment, a separate manifest file for each Kubernetes object is recommended.

18.3.1.2. About the OSToy application

OSToy is a simple Node.js application that you will deploy to a ROSA cluster to help explore the functionality of Kubernetes. This application has a user interface where you can:

  • Write messages to the log (stdout / stderr).
  • Intentionally crash the application to view self-healing.
  • Toggle a liveness probe and monitor OpenShift behavior.
  • Read config maps, secrets, and env variables.
  • If connected to shared storage, read and write files.
  • Check network connectivity, intra-cluster DNS, and communication with the included microservice.
  • Increase the load to view automatic scaling of the pods to handle the load using the Horizontal Pod Autoscaler.
  • Optional: Connect to an AWS S3 bucket to read and write objects.

18.3.1.3. OSToy Application Diagram

OSToy architecture diagram

18.3.1.4. Understanding the OSToy UI

Preview of the OSToy homepage
  1. Shows the pod name that served your browser the page.
  2. Home: The main page of the application, where you can perform many of the functions explored in this tutorial.
  3. Persistent Storage: Allows you to write data to the persistent volume bound to this application.
  4. Config Maps: Shows the contents of configmaps available to the application and the key:value pairs.
  5. Secrets: Shows the contents of secrets available to the application and the key:value pairs.
  6. ENV Variables: Shows the environment variables available to the application.
  7. Networking: Tools to illustrate networking within the application.
  8. Pod Auto Scaling: Tool to increase the load of the pods and test the HPA.
  9. ACK S3: Optional: Integrate with AWS S3 to read and write objects to a bucket.

    Note

    To see the "ACK S3" section of OSToy, you must complete the ACK section of this workshop. If you decide not to complete that section, the OSToy application will still function.

  10. About: Displays more information about the application.

18.4. Tutorial: Deploying an application

18.4.1. Deploying the OSToy application with Kubernetes

You can deploy the OSToy application by creating and storing the images for the front-end and back-end microservice containers in an image repository. You can then create Kubernetes deployments to deploy the application.

18.4.1.1. Retrieving the login command

  1. If you are not logged in to the CLI, access your cluster with the web console.
  2. Click the dropdown arrow next to your login name in the upper right, and select Copy Login Command.

    CLI login screen

    A new tab opens.

  3. Select your authentication method.
  4. Click Display Token.
  5. Copy the command under Log in with this token.
  6. From your terminal, paste and run the copied command. If the login is successful, you will see the following confirmation message:

    $ oc login --token=<your_token> --server=https://api.osd4-demo.abc1.p1.openshiftapps.com:6443
    Logged into "https://api.myrosacluster.abcd.p1.openshiftapps.com:6443" as "rosa-user" using the token provided.
    
    You don't have any projects. You can try to create a new project, by running
    
    oc new-project <project name>
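
    If you want extra confirmation that the CLI is pointing at the expected cluster and user, the following quick checks work (a sketch, not part of the official procedure):

    $ oc whoami
    $ oc whoami --show-server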

18.4.1.2. Creating a new project

18.4.1.2.1. Using the CLI
  1. Create a new project named ostoy in your cluster by running the following command:

    $ oc new-project ostoy

    Example output

    Now using project "ostoy" on server "https://api.myrosacluster.abcd.p1.openshiftapps.com:6443".

  2. Optional: Alternatively, create a unique project name by running the following command:

    $ oc new-project ostoy-$(uuidgen | cut -d - -f 2 | tr '[:upper:]' '[:lower:]')
18.4.1.2.2. Using the web console
  1. From the web console, click Home > Projects.
  2. On the Projects page, click Create Project.

    The project creation screen

18.4.1.3. Deploying the back-end microservice

The microservice serves internal web requests and returns a JSON object containing the current hostname and a randomly generated color string.

  • Deploy the microservice by running the following command from your terminal:

    $ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-microservice-deployment.yaml

    Example output

    $ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-microservice-deployment.yaml
    deployment.apps/ostoy-microservice created
    service/ostoy-microservice-svc created
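
    To confirm that the microservice pod is running and its service was created, a quick sketch using the labels and names from the manifest above:

    $ oc get pods -l app=ostoy-microservice
    $ oc get service ostoy-microservice-svc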

18.4.1.4. Deploying the front-end service

The front-end deployment uses the Node.js front-end for the application and additional Kubernetes objects.

The ostoy-frontend-deployment.yaml file shows that the front-end deployment defines the following features:

  • Persistent volume claim
  • Deployment object
  • Service
  • Route
  • Configmaps
  • Secrets

    • Deploy the application front-end and create all of the objects by entering the following command:

      $ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-frontend-deployment.yaml

      Example output

      persistentvolumeclaim/ostoy-pvc created
      deployment.apps/ostoy-frontend created
      service/ostoy-frontend-svc created
      route.route.openshift.io/ostoy-route created
      configmap/ostoy-configmap-env created
      secret/ostoy-secret-env created
      configmap/ostoy-configmap-files created
      secret/ostoy-secret created

      You should see all objects created successfully.
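
      Optionally, you can list the main objects in one command to double-check (a sketch; adjust the namespace if you chose a unique project name):

      $ oc get deployment,svc,route,pvc,configmap,secret -n ostoy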

18.4.1.5. Getting the route

You must get the route to access the application.

  • Get the route to your application by running the following command:

    $ oc get route

    Example output

    NAME          HOST/PORT                                                 PATH   SERVICES             PORT    TERMINATION   WILDCARD
    ostoy-route   ostoy-route-ostoy.apps.<your-rosa-cluster>.abcd.p1.openshiftapps.com          ostoy-frontend-svc   <all>                 None
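
    If you prefer to build the full URL in one step, the following sketch prints it directly (it assumes the route is named ostoy-route, as in the manifest above):

    $ echo "http://$(oc get route ostoy-route -o jsonpath='{.spec.host}')"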

18.4.1.6. Viewing the application

  1. Copy the ostoy-route-ostoy.apps.<your-rosa-cluster>.abcd.p1.openshiftapps.com URL output from the previous step.
  2. Paste the copied URL into your web browser and press enter. You should see the homepage of your application. If the page does not load, make sure you use http and not https.

    OStoy application homepage

18.5. Tutorial: Health Check

You can see how Kubernetes responds to pod failure by intentionally crashing your pod and making it unresponsive to the Kubernetes liveness probes.

18.5.1. Preparing your desktop

  1. Split your desktop screen between the OpenShift web console and the OSToy application web console so that you can see the results of your actions immediately.

    Splitscreen desktop with the OSToy application and the web console

    If you cannot split your screen, open the OSToy application web console in another tab so you can quickly switch to the OpenShift web console after activating the features in the application.

  2. From the OpenShift web console, select Workloads > Deployments > ostoy-frontend to view the OSToy deployment.

    The web console deployments page

18.5.2. Crashing the pod

  1. From the OSToy application web console, click Home in the left menu, and enter a message in the Crash Pod box, for example, This is goodbye!.
  2. Click Crash Pod.

    OSToy crash pod selection

    The pod crashes and Kubernetes should restart the pod.

    OSToy pod crash message

18.5.3. Viewing the revived pod

  1. From the OpenShift web console, quickly switch to the Deployments screen. You will see that the pod turns yellow, meaning it is down. It should quickly revive and turn blue. The revival process happens quickly so you might miss it.

    Deployment details page

Verification

  1. From the web console, click Pods > ostoy-frontend-xxxxxxx-xxxx to change to the pods screen.

    Pod overview page
  2. Click the Events sub-tab and verify that the container crashed and restarted.

    Pod events list

18.5.4. Making the application malfunction

Keep the pod events page open from the previous procedure.

  • From the OSToy application, click Toggle Health in the Toggle Health Status tile. Watch Current Health switch to I’m not feeling all that well.

    OSToy toggle health tile

Verification

After the previous step, the application stops responding with a 200 HTTP code. After 3 consecutive failures, Kubernetes will stop the pod and restart it. From the web console, switch back to the pod events page and you will see that the liveness probe failed and the pod restarted.

The following image shows an example of what you should see on your pod events page.

Pod events list

A. The pod has three consecutive failures.

B. Kubernetes stops the pod.

C. Kubernetes restarts the pod.
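
You can observe the same sequence from the CLI. The following sketch lists recent events and the restart count of the front-end pod (not part of the official procedure):

    $ oc get events --sort-by=.lastTimestamp | tail -20
    $ oc get pods -l app=ostoy-frontend

The RESTARTS column increases each time the liveness probe forces a container restart.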

18.6. Tutorial: Persistent volumes for cluster storage

Red Hat OpenShift Service on AWS (ROSA) (classic architecture) and Red Hat OpenShift Service on AWS (ROSA) support storing persistent volumes with either Amazon Web Services (AWS) Elastic Block Store (EBS) or AWS Elastic File System (EFS).

18.6.1. Using persistent volumes

Use the following procedures to create a file, store it on a persistent volume in your cluster, and confirm that it still exists after pod failure and re-creation.

18.6.1.1. Viewing a persistent volume claim

  1. Navigate to the cluster’s OpenShift web console.
  2. Click Storage in the left menu, then click PersistentVolumeClaims to see a list of all the persistent volume claims.
  3. Click a persistent volume claim to see its size, access mode, storage class, and other claim details.

    Note

    The access mode is ReadWriteOnce (RWO), which means the volume can be mounted as read-write by a single node, and the pods on that node can read and write to the volume. The equivalent CLI check is shown after this note.
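
    The following sketch shows the CLI check for the claim created by the OSToy manifest (assuming the ostoy project):

    $ oc get pvc ostoy-pvc -n ostoy
    $ oc describe pvc ostoy-pvc -n ostoy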

18.6.1.2. Storing your file

  1. In the OSToy app console, click Persistent Storage in the left menu.
  2. In the Filename box, enter a file name with a .txt extension, for example test-pv.txt.
  3. In the File contents box, enter a sentence of text, for example OpenShift is the greatest thing since sliced bread!.
  4. Click Create file.

    cloud experts storage ostoy createfile
  5. Scroll to Existing files on the OSToy app console.
  6. Click the file you created to see the file name and contents.

    cloud experts storage ostoy viewfile

18.6.1.3. Crashing the pod

  1. On the OSToy app console, click Home in the left menu.
  2. Click Crash pod.

18.6.1.4. Confirming persistent storage

  1. Wait for the pod to re-create.
  2. On the OSToy app console, click Persistent Storage in the left menu.
  3. Find the file you created, and open it to view and confirm the contents.

    cloud experts storage ostoy existingfile

Verification

The deployment YAML file shows that we mounted the directory /var/demo_files to our persistent volume claim.

  1. Retrieve the name of your front-end pod by running the following command:

    $ oc get pods
  2. Start a secure shell (SSH) session in your container by running the following command:

    $ oc rsh <pod_name>
  3. Go to the directory by running the following command:

    $ cd /var/demo_files
  4. Optional: See all the files you created by running the following command:

    $ ls
  5. Open the file to view the contents by running the following command:

    $ cat test-pv.txt
  6. Verify that the output is the text you entered in the OSToy app console.

    Example terminal

    $ oc get pods
    NAME                                  READY     STATUS    RESTARTS   AGE
    ostoy-frontend-5fc8d486dc-wsw24       1/1       Running   0          18m
    ostoy-microservice-6cf764974f-hx4qm   1/1       Running   0          18m
    
    $ oc rsh ostoy-frontend-5fc8d486dc-wsw24
    
    $ cd /var/demo_files/
    
    $ ls
    lost+found   test-pv.txt
    
    $ cat test-pv.txt
    OpenShift is the greatest thing since sliced bread!

18.6.1.5. Ending the session

  • Type exit in your terminal to quit the session and return to the CLI.

18.6.2. Additional resources

18.7. Tutorial: ConfigMaps, secrets, and environment variables

This tutorial shows how to configure the OSToy application by using config maps, secrets, and environment variables.

18.7.1. Configuration using ConfigMaps

Config maps allow you to decouple configuration artifacts from container image content to keep containerized applications portable.

Procedure

  • In the OSToy app, in the left menu, click Config Maps, displaying the contents of the config map available to the OSToy application. The code snippet shows an example of a config map configuration:

    Example output

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: ostoy-configmap-files
    data:
      config.json:  '{ "default": "123" }'
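
    You can confirm the same contents from the CLI, or from inside the front-end pod, where the config map is mounted at /var/config according to the deployment manifest (a sketch; replace the pod name with your own):

    $ oc get configmap ostoy-configmap-files -o jsonpath='{.data.config\.json}'
    $ oc rsh <frontend_pod_name> cat /var/config/config.json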

18.7.2. Configuration using secrets

Kubernetes Secret objects allow you to store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. Putting this information in a secret is safer and more flexible than putting it in plain text into a pod definition or a container image.

Procedure

  • In the OSToy app, in the left menu, click Secrets, displaying the contents of the secrets available to the OSToy application. The code snippet shows an example of a secret configuration:

    Example output

    USERNAME=my_user
    PASSWORD=@OtBl%XAp!#632Y5FwC@MTQk
    SMTP=localhost
    SMTP_PORT=25
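
    The OSToy page shows the decoded values. The Secret itself stores the same data base64-encoded, which you can verify from the CLI (a sketch):

    $ oc get secret ostoy-secret -o jsonpath='{.data.secret\.txt}' | base64 --decode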

18.7.3. Configuration using environment variables

Using environment variables is an easy way to change application behavior without requiring code changes. It allows different deployments of the same application to potentially behave differently based on the environment variables. Red Hat OpenShift Service on AWS makes it simple to set, view, and update environment variables for pods or deployments.

Procedure

  • In the OSToy app, in the left menu, click ENV Variables, displaying the environment variables available to the OSToy application. The code snippet shows an example of an environmental variable configuration:

    Example output

    {
      "npm_config_local_prefix": "/opt/app-root/src",
      "STI_SCRIPTS_PATH": "/usr/libexec/s2i",
      "npm_package_version": "1.7.0",
      "APP_ROOT": "/opt/app-root",
      "NPM_CONFIG_PREFIX": "/opt/app-root/src/.npm-global",
      "OSTOY_MICROSERVICE_PORT_8080_TCP_PORT": "8080",
      "NODE": "/usr/bin/node",
      "LD_PRELOAD": "libnss_wrapper.so",
      "KUBERNETES_SERVICE_HOST": "172.30.0.1",
      "OSTOY_MICROSERVICE_PORT": "tcp://172.30.60.255:8080",
      "OSTOY_PORT": "tcp://172.30.152.25:8080",
      "npm_package_name": "ostoy",
      "OSTOY_SERVICE_PORT_8080_TCP": "8080",
      "_": "/usr/bin/node"
      "ENV_TOY_CONFIGMAP": "ostoy-configmap -env"
    }
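
    You can list and change these values from the CLI as well. The following sketch adds a hypothetical example variable to the front-end deployment, which triggers a new rollout, and then lists the variables defined on the deployment:

    $ oc set env deployment/ostoy-frontend DEMO_GREETING=hello   # DEMO_GREETING is a hypothetical example variable
    $ oc set env deployment/ostoy-frontend --list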

18.8. Tutorial: Networking

This tutorial shows how the OSToy app uses intra-cluster networking to separate functions by using microservices and visualize the scaling of pods.

OSToy Diagram

The diagram shows there are at least two separate pods, each with its own service.

One pod functions as the front-end web application, with a service and a publicly accessible route. The other pod functions as the back-end microservice, with a service object so that the front-end pod can communicate with the microservice, spreading requests across the microservice pods if there is more than one. Because of these communication limits, the microservice cannot be reached from outside the cluster, or from other namespaces or projects if those restrictions are configured. The sole purpose of the microservice is to serve internal web requests and return a JSON object containing the current hostname, which is the pod’s name, and a randomly generated color string. This color string is used to display a box with that color in the tile titled "Intra-cluster Communication".

For more information about the networking limitations, see About network policy.

18.8.1. Intra-cluster networking

You can view your networking configurations in your OSToy application.

Procedure

  1. In the OSToy application, click Networking in the left menu.
  2. Review the networking configuration. The right tile titled "Hostname Lookup" illustrates how the service name created for a pod can be used to translate into an internal ClusterIP address.

    OSToy Networking page
  3. In the right tile ("Hostname Lookup"), enter the name of the microservice's service in the format <service_name>.<namespace>.svc.cluster.local. You can find this service name in the service definition of ostoy-microservice.yaml by running the following command:

    $ oc get service <name_of_service> -o yaml

    Example output

    apiVersion: v1
    kind: Service
    metadata:
      name: ostoy-microservice-svc
      labels:
        app: ostoy-microservice
    spec:
      type: ClusterIP
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        app: ostoy-microservice

    In this example, the full hostname is ostoy-microservice-svc.ostoy.svc.cluster.local.

  4. You see an IP address returned. In this example it is 172.30.165.246. This is the intra-cluster IP address, which is only accessible from within the cluster.

    OSToy DNS
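
    You can cross-check the address from the CLI. The ClusterIP reported for the service should match the IP address shown by OSToy (a sketch):

    $ oc get service ostoy-microservice-svc -o jsonpath='{.spec.clusterIP}{"\n"}'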

18.9. Tutorial: Scaling an application

18.9.1. Scaling

You can scale your pods manually, or scale them automatically by using the Horizontal Pod Autoscaler (HPA). You can also scale your cluster nodes.

18.9.1.1. Manual pod scaling

You can manually scale your application’s pods by using one of the following methods:

  • Changing your ReplicaSet or deployment definition
  • Using the command line
  • Using the web console

This workshop starts by using only one pod for the microservice. By defining replicas: 1 in the deployment definition, the Kubernetes ReplicaSet controller strives to keep one pod alive. You then learn how to define pod autoscaling by using the Horizontal Pod Autoscaler (HPA), which scales out more pods, beyond your initial definition, when the load is high.

Prerequisites

  • An active ROSA cluster
  • A deployed OSToy application

Procedure

  1. In the OSToy app, click the Networking tab in the navigational menu.
  2. In the "Intra-cluster Communication" section, locate the box located beneath "Remote Pods" that randomly changes colors. Inside the box, you see the microservice’s pod name. There is only one box in this example because there is only one microservice pod.

    HPA Menu
  3. Confirm that there is only one pod running for the microservice by running the following command:

    $ oc get pods

    Example output

    NAME                                  READY     STATUS    RESTARTS   AGE
    ostoy-frontend-679cb85695-5cn7x       1/1       Running   0          1h
    ostoy-microservice-86b4c6f559-p594d   1/1       Running   0          1h

  4. Download the ostoy-microservice-deployment.yaml and save it to your local machine.
  5. Change the deployment definition to three pods instead of one by using the following example:

    spec:
      selector:
        matchLabels:
          app: ostoy-microservice
      replicas: 3
  6. Apply the replica changes by running the following command:

    $ oc apply -f ostoy-microservice-deployment.yaml
    Note

    You can also edit the ostoy-microservice-deployment.yaml file in the OpenShift Web Console by going to the Workloads > Deployments > ostoy-microservice > YAML tab.

  7. Confirm that there are now 3 pods by running the following command:

    $ oc get pods

    The output shows that there are now 3 pods for the microservice instead of only one.

    Example output

    NAME                                  READY   STATUS    RESTARTS   AGE
    ostoy-frontend-5fbcc7d9-rzlgz         1/1     Running   0          26m
    ostoy-microservice-6666dcf455-2lcv4   1/1     Running   0          81s
    ostoy-microservice-6666dcf455-5z56w   1/1     Running   0          81s
    ostoy-microservice-6666dcf455-tqzmn   1/1     Running   0          26m

  8. Scale the application by using the CLI or by using the web UI:

    • In the CLI, decrease the number of pods from 3 to 2 by running the following command:

      $ oc scale deployment ostoy-microservice --replicas=2
    • From the navigational menu of the OpenShift web console UI, click Workloads > Deployments > ostoy-microservice.
    • On the left side of the page, locate the blue circle with a "3 Pod" label in the middle.
    • Clicking the arrows next to the circle scales the number of pods. Click the down arrow to scale down to 2.

      UI Scale

Verification

Check your pod counts by using the CLI, the web UI, or the OSToy app:

  • From the CLI, confirm that you are using two pods for the microservice by running the following command:

    $ oc get pods

    Example output

    NAME                                  READY   STATUS    RESTARTS   AGE
    ostoy-frontend-5fbcc7d9-rzlgz         1/1     Running   0          75m
    ostoy-microservice-6666dcf455-2lcv4   1/1     Running   0          50m
    ostoy-microservice-6666dcf455-tqzmn   1/1     Running   0          75m

  • In the web UI, select Workloads > Deployments > ostoy-microservice.

    Verify the workload pods
  • You can also confirm that there are two pods in use by selecting Networking in the navigational menu of the OSToy app. There should be two colored boxes for the two pods.

    UI Scale

18.9.1.2. Pod Autoscaling

Red Hat OpenShift Service on AWS offers a Horizontal Pod Autoscaler (HPA). The HPA uses metrics to increase or decrease the number of pods when necessary.

Procedure

  1. From the navigational menu of the web UI, select Pod Auto Scaling.

    HPA Menu
  2. Create the HPA by running the following command:

    $ oc autoscale deployment/ostoy-microservice --cpu-percent=80 --min=1 --max=10

    This command creates an HPA that maintains between 1 and 10 replicas of the pods controlled by the ostoy-microservice deployment. The HPA increases and decreases the number of replicas to keep the average CPU use across all pods at 80% of the requested CPU, which is 40 millicores because each microservice pod requests 50 millicores.

  3. On the Pod Auto Scaling > Horizontal Pod Autoscaling page, select Increase the load.

    Important

    Because increasing the load generates CPU intensive calculations, the page can become unresponsive. This is an expected response. Click Increase the Load only once. For more information about the process, see the microservice’s GitHub repository.

    After a few minutes, the new pods display on the page represented by colored boxes.

    Note

    The page can experience lag.

Verification

Check your pod counts with one of the following methods:

  • In the OSToy application’s web UI, see the remote pods box:

    HPA Main

    Because there is only one pod, increasing the workload should trigger an increase of pods.

  • In the CLI, run the following command:

    $ oc get pods --field-selector=status.phase=Running | grep microservice

    Example output

    ostoy-microservice-79894f6945-cdmbd   1/1     Running   0          3m14s
    ostoy-microservice-79894f6945-mgwk7   1/1     Running   0          4h24m
    ostoy-microservice-79894f6945-q925d   1/1     Running   0          3m14s

  • You can also verify autoscaling from the OpenShift web console:

    1. In the OpenShift web console navigational menu, click Observe > Dashboards.
    2. In the dashboard, select Kubernetes / Compute Resources / Namespace (Pods) and your namespace ostoy.

      Select metrics
    3. A graph appears showing your resource usage across CPU and memory. The top graph shows recent CPU consumption per pod and the lower graph indicates memory usage. The following lists the callouts in the graph:

      1. The load increased (A).
      2. Two new pods were created (B and C).
      3. The thickness of each graph represents the CPU consumption and indicates which pods handled more load.
      4. The load decreased (D), and the pods were deleted.

        Select metrics
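
In addition to counting pods, you can inspect the autoscaler object itself from the CLI. The following sketch shows the current and desired replica counts and the observed CPU utilization:

    $ oc get hpa ostoy-microservice
    $ oc describe hpa ostoy-microservice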

18.9.1.3. Node Autoscaling

Red Hat OpenShift Service on AWS allows you to use node autoscaling. In this scenario, you will create a new project with a job that has a large workload that the cluster cannot handle. With autoscaling enabled, when the load is larger than your current capacity, the cluster will automatically create new nodes to handle the load.

Prerequisites

  • Autoscaling is enabled on your machine pools.
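
If autoscaling is not yet enabled, the following ROSA CLI sketch enables it on an existing machine pool. The machine pool ID and replica limits are examples; adjust them for your cluster:

    $ rosa list machinepools -c <cluster_name>
    $ rosa edit machinepool -c <cluster_name> <machinepool_id> --enable-autoscaling --min-replicas=2 --max-replicas=4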

Procedure

  1. Create a new project called autoscale-ex by running the following command:

    $ oc new-project autoscale-ex
  2. Create the job by running the following command:

    $ oc create -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/job-work-queue.yaml
  3. After a few minutes, run the following command to see the pods:

    $ oc get pods

    Example output

    NAME                     READY   STATUS    RESTARTS   AGE
    work-queue-5x2nq-24xxn   0/1     Pending   0          10s
    work-queue-5x2nq-57zpt   0/1     Pending   0          10s
    work-queue-5x2nq-58bvs   0/1     Pending   0          10s
    work-queue-5x2nq-6c5tl   1/1     Running   0          10s
    work-queue-5x2nq-7b84p   0/1     Pending   0          10s
    work-queue-5x2nq-7hktm   0/1     Pending   0          10s
    work-queue-5x2nq-7md52   0/1     Pending   0          10s
    work-queue-5x2nq-7qgmp   0/1     Pending   0          10s
    work-queue-5x2nq-8279r   0/1     Pending   0          10s
    work-queue-5x2nq-8rkj2   0/1     Pending   0          10s
    work-queue-5x2nq-96cdl   0/1     Pending   0          10s
    work-queue-5x2nq-96tfr   0/1     Pending   0          10s

  4. Because there are many pods in a Pending state, this status should trigger the autoscaler to create more nodes in your machine pool. Allow time to create these worker nodes.
  5. After a few minutes, use the following command to see how many worker nodes you now have:

    $ oc get nodes

    Example output

    NAME                                         STATUS   ROLES          AGE     VERSION
    ip-10-0-138-106.us-west-2.compute.internal   Ready    infra,worker   22h     v1.23.5+3afdacb
    ip-10-0-153-68.us-west-2.compute.internal    Ready    worker         2m12s   v1.23.5+3afdacb
    ip-10-0-165-183.us-west-2.compute.internal   Ready    worker         2m8s    v1.23.5+3afdacb
    ip-10-0-176-123.us-west-2.compute.internal   Ready    infra,worker   22h     v1.23.5+3afdacb
    ip-10-0-195-210.us-west-2.compute.internal   Ready    master         23h     v1.23.5+3afdacb
    ip-10-0-196-84.us-west-2.compute.internal    Ready    master         23h     v1.23.5+3afdacb
    ip-10-0-203-104.us-west-2.compute.internal   Ready    worker         2m6s    v1.23.5+3afdacb
    ip-10-0-217-202.us-west-2.compute.internal   Ready    master         23h     v1.23.5+3afdacb
    ip-10-0-225-141.us-west-2.compute.internal   Ready    worker         23h     v1.23.5+3afdacb
    ip-10-0-231-245.us-west-2.compute.internal   Ready    worker         2m11s   v1.23.5+3afdacb
    ip-10-0-245-27.us-west-2.compute.internal    Ready    worker         2m8s    v1.23.5+3afdacb
    ip-10-0-245-7.us-west-2.compute.internal     Ready    worker         23h     v1.23.5+3afdacb

    You can see the worker nodes were automatically created to handle the workload.

  6. Return to the OSToy app by entering the following command:

    $ oc project ostoy
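
Optional: when you have finished observing the scale-up, you can delete the example project so that the job’s pods are removed and the autoscaler can scale the worker nodes back down (a cleanup sketch):

    $ oc delete project autoscale-ex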

18.10. Tutorial: Logging

There are various methods to view your logs in Red Hat OpenShift Service on AWS (ROSA). Use the following procedures to forward the logs to AWS CloudWatch and view the logs directly through the pod by using oc logs.

Note

ROSA is not preconfigured with a logging solution.

18.10.1. Forwarding logs to CloudWatch

Install the Red Hat OpenShift Logging Operator and configure log forwarding to send your cluster logs to AWS CloudWatch.

  1. Run the following script to configure your ROSA cluster to forward logs to CloudWatch:

    $ curl https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/resources/configure-cloudwatch.sh | bash
    Note

    Configuring ROSA to send logs to CloudWatch goes beyond the scope of this tutorial. Integrating with AWS and enabling CloudWatch logging are important aspects of ROSA, so a script is included to simplify the configuration process. The script will automatically set up AWS CloudWatch. You can examine the script to understand the steps involved.

    Example output

    Varaibles are set...ok.
    Policy already exists...ok.
    Created RosaCloudWatch-mycluster role.
    Attached role policy.
    Deploying the Red Hat OpenShift Logging Operator
    namespace/openshift-logging configured
    operatorgroup.operators.coreos.com/cluster-logging created
    subscription.operators.coreos.com/cluster-logging created
    Waiting for Red Hat OpenShift Logging Operator deployment to complete...
    Red Hat OpenShift Logging Operator deployed.
    secret/cloudwatch-credentials created
    clusterlogforwarder.logging.openshift.io/instance created
    clusterlogging.logging.openshift.io/instance created
    Complete.

  2. After a few minutes, you should begin to see log groups inside of AWS CloudWatch. Run the following command to see the log groups:

    $ aws logs describe-log-groups --log-group-name-prefix rosa-mycluster

    Example output

    {
        "logGroups": [
            {
                "logGroupName": "rosa-mycluster.application",
                "creationTime": 1724104537717,
                "metricFilterCount": 0,
                "arn": "arn:aws:logs:us-west-2:000000000000:log-group:rosa-mycluster.application:*",
                "storedBytes": 0,
                "logGroupClass": "STANDARD",
                "logGroupArn": "arn:aws:logs:us-west-2:000000000000:log-group:rosa-mycluster.application"
            },
            {
                "logGroupName": "rosa-mycluster.audit",
                "creationTime": 1724104152968,
                "metricFilterCount": 0,
                "arn": "arn:aws:logs:us-west-2:000000000000:log-group:rosa-mycluster.audit:*",
                "storedBytes": 0,
                "logGroupClass": "STANDARD",
                "logGroupArn": "arn:aws:logs:us-west-2:000000000000:log-group:rosa-mycluster.audit"
            },
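
    To view the log events themselves from your terminal, you can tail one of the log groups. This sketch assumes AWS CLI v2, which provides the aws logs tail command, and the rosa-mycluster prefix used above:

    $ aws logs tail rosa-mycluster.application --since 10m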

18.10.2. Outputting the data to the streams and logs

  1. Output a message to stdout.

    1. In the OSToy application, click Home and then click the message box for Log Message (stdout).
    2. Write a message to output to the stdout stream, for example "All is well!".
    3. Click Send Message.

      cloud experts deploying application logging ostoy stdout

  2. Output a message to stderr.

    1. Click the message box for Log Message (stderr).
    2. Write a message to output to the stderr stream, for example "Oh no! Error!".
    3. Click Send Message.

      cloud experts deploying application logging ostoy stderr

18.10.3. Viewing the application logs by using the oc command

  1. Enter the following command in the command line interface (CLI) to retrieve the name of your frontend pod:

    $ oc get pods -o name

    Example output

    pod/ostoy-frontend-679cb85695-5cn7x
    pod/ostoy-microservice-86b4c6f559-p594d

    In this example, the front-end pod name is ostoy-frontend-679cb85695-5cn7x.
  2. Run the following command to see both the stdout and stderr messages:

    $ oc logs <pod-name>

    Example output

    $ oc logs ostoy-frontend-679cb85695-5cn7x
    [...]
    ostoy-frontend-679cb85695-5cn7x: server starting on port 8080
    Redirecting to /home
    stdout: All is well!
    stderr: Oh no! Error!
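
    To stream new entries as you click Send Message in the OSToy UI, you can follow the log instead (a sketch):

    $ oc logs -f <pod-name>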

18.10.4. Viewing the logs with CloudWatch

  1. Navigate to CloudWatch on the AWS web console.
  2. In the left menu, click Logs and then Log groups to see the different groups of logs. You should see 3 groups:

    • rosa-<cluster-name>.application
    • rosa-<cluster-name>.audit
    • rosa-<cluster-name>.infrastructure

      cloud experts deploying application logging cw

  3. Click rosa-<cluster-name>.application.
  4. Click the log stream for the frontend pod.

    cloud experts deploying application logging logstream2

  5. Filter for stdout and stderr.
  6. Expand the row to show the messages you entered earlier and other pertinent information.

    cloud experts deploying application logging stderr

  7. Return to the log streams and select the microservice.
  8. Enter "microservice" in the search bar to see other messages in your logs.
  9. Expand one of the entries to see the color the frontend pod received from the microservice and which pod sent that color to the frontend pod.

    cloud experts deploying application logging messages

Additional resources

18.11. Tutorial: S2I deployments

There are several methods to deploy applications in OpenShift. This tutorial explores using the integrated Source-to-Image (S2I) builder. As mentioned in the OpenShift concepts section, S2I is a tool for building reproducible, Docker-formatted container images.

18.11.1. Prerequisites

You must meet the following requirements before you can use this tutorial.

  1. You have created a ROSA cluster.
  2. Retrieve your login command

    1. If you are not logged in via the CLI, in OpenShift Cluster Manager, click the dropdown arrow next to your name in the upper-right and select Copy Login Command.

      CLI Login
    2. A new tab opens. Enter your username and password, and select the authentication method.
    3. Click Display Token
    4. Copy the command under "Log in with this token".
    5. Log in to the command line interface (CLI) by running the copied command in your terminal. You should see something similar to the following:

      $ oc login --token=RYhFlXXXXXXXXXXXX --server=https://api.osd4-demo.abc1.p1.openshiftapps.com:6443

      Example output

      Logged into "https://api.myrosacluster.abcd.p1.openshiftapps.com:6443" as "rosa-user" using the token provided.
      
      You don't have any projects. You can try to create a new project, by running
      
      oc new-project <project name>

  3. Create a new project from the CLI by running the following command:

    $ oc new-project ostoy-s2i

18.11.2. Fork the OSToy repository

The next section focuses on triggering automated builds based on changes to the source code. You must set up a GitHub webhook to trigger S2I builds when you push code into your GitHub repo. To set up the webhook, you must first fork the repo.

Note

Replace <UserName> with your own GitHub username for the following URLs in this guide.
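
If you also want a local clone of your fork, for example to make the source change described later in this tutorial, the following sketch works with any Git installation:

    $ git clone https://github.com/<UserName>/ostoy.git
    $ cd ostoy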

18.11.3. Using S2I to deploy OSToy on your cluster

  1. Add secret to OpenShift

    The example emulates a .env file and shows how easy it is to move these directly into an OpenShift environment. Files can even be renamed in the Secret. In your CLI enter the following command, replacing <UserName> with your GitHub username:

    $ oc create -f https://raw.githubusercontent.com/<UserName>/ostoy/master/deployment/yaml/secret.yaml
  2. Add ConfigMap to OpenShift

    The example emulates an HAProxy config file, and is typically used for overriding default configurations in an OpenShift application. Files can even be renamed in the ConfigMap.

    In your CLI enter the following command, replacing <UserName> with your GitHub username:

    $ oc create -f https://raw.githubusercontent.com/<UserName>/ostoy/master/deployment/yaml/configmap.yaml
  3. Deploy the microservice

    You must deploy the microservice first to ensure that the SERVICE environment variables are available from the UI application. --context-dir is used here to only build the application defined in the microservice directory in the git repository. Using the app label allows us to ensure the UI application and microservice are both grouped in the OpenShift UI. Run the following command in the CLI to create the microservice, replacing <UserName> with your GitHub username:

    $ oc new-app https://github.com/<UserName>/ostoy \
        --context-dir=microservice \
        --name=ostoy-microservice \
        --labels=app=ostoy

    Example output

    --> Creating resources with label app=ostoy ...
        imagestream.image.openshift.io "ostoy-microservice" created
        buildconfig.build.openshift.io "ostoy-microservice" created
        deployment.apps "ostoy-microservice" created
        service "ostoy-microservice" created
    --> Success
        Build scheduled, use 'oc logs -f buildconfig/ostoy-microservice' to track its progress.
        Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
         'oc expose service/ostoy-microservice'
        Run 'oc status' to view your app.

  4. Check the status of the microservice

    Before moving on to the next step, confirm that the microservice was created and is running correctly by running the following command:

    $ oc status

    Example output

    In project ostoy-s2i on server https://api.myrosacluster.g14t.p1.openshiftapps.com:6443
    
    svc/ostoy-microservice - 172.30.47.74:8080
      dc/ostoy-microservice deploys istag/ostoy-microservice:latest <-
        bc/ostoy-microservice source builds https://github.com/UserName/ostoy on openshift/nodejs:14-ubi8
        deployment #1 deployed 34 seconds ago - 1 pod

    Wait until you see that it was successfully deployed. You can also check this through the web UI.

  5. Deploy the front end UI

    The application has been designed to rely on several environment variables to define external settings. In the following steps, you attach the previously created Secret and ConfigMap and create a PersistentVolume. Enter the following into the CLI:

    $ oc new-app https://github.com/<UserName>/ostoy \
        --env=MICROSERVICE_NAME=OSTOY_MICROSERVICE

    Example output

    --> Creating resources ...
        imagestream.image.openshift.io "ostoy" created
        buildconfig.build.openshift.io "ostoy" created
        deployment.apps "ostoy" created
        service "ostoy" created
    --> Success
        Build scheduled, use 'oc logs -f buildconfig/ostoy' to track its progress.
        Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
         'oc expose service/ostoy'
        Run 'oc status' to view your app.

  6. Update the Deployment

    Update the deployment to use a Recreate deployment strategy (as opposed to the default RollingUpdate) for consistent deployments with persistent volumes. Because the persistent volume is backed by EBS, it supports only the ReadWriteOnce (RWO) access mode. If the deployment is updated without first terminating all existing pods, the new pod might not be able to mount the volume because it is still bound to the existing pod. If you will be using EFS, you do not have to change this.

    $ oc patch deployment ostoy --type=json -p \
        '[{"op": "replace", "path": "/spec/strategy/type", "value": "Recreate"}, {"op": "remove", "path": "/spec/strategy/rollingUpdate"}]'
  7. Set a Liveness probe

    Create a Liveness Probe on the Deployment to ensure the pod is restarted if something isn’t healthy within the application. Enter the following into the CLI:

    $ oc set probe deployment ostoy --liveness --get-url=http://:8080/health
  8. Attach Secret, ConfigMap, and PersistentVolume to Deployment

    Run the following commands to attach your Secret, ConfigMap, and PersistentVolume:

    1. Attach Secret

      $ oc set volume deployment ostoy --add \
          --secret-name=ostoy-secret \
          --mount-path=/var/secret
    2. Attach ConfigMap

      $ oc set volume deployment ostoy --add \
          --configmap-name=ostoy-config \
          -m /var/config
    3. Create and attach PersistentVolume

      $ oc set volume deployment ostoy --add \
          --type=pvc \
          --claim-size=1G \
          -m /var/demo_files
  9. Expose the UI application as an OpenShift Route

    Run the following command to deploy this as an HTTPS application that uses the included TLS wildcard certificates:

    $ oc create route edge --service=ostoy --insecure-policy=Redirect
  10. Browse to your application with the following methods:

    • Running the following command opens a web browser with your OSToy application:

      $ python -m webbrowser "$(oc get route ostoy -o template --template='https://{{.spec.host}}')"
    • You can get the route for the application and copy and paste the route into your browser by running the following command:

      $ oc get route

18.12. Tutorial: Using Source-to-Image (S2I) webhooks for automated deployment

Automatically trigger a build and deploy anytime you change the source code by using a webhook. For more information about this process, see Triggering Builds.

Procedure

  1. To get the GitHub webhook trigger secret, in your terminal, run the following command:

    $ oc get bc/ostoy-microservice -o=jsonpath='{.spec.triggers..github.secret}'

    Example output

    `o_3x9M1qoI2Wj_cz1WiK`

    Important

    You need to use this secret in a later step in this process.

  2. To get the GitHub webhook trigger URL from the OSToy’s buildconfig, run the following command:

    $ oc describe bc/ostoy-microservice

    Example output

    [...]
    Webhook GitHub:
    	URL:	https://api.demo1234.openshift.com:443/apis/build.openshift.io/v1/namespaces/ostoy-s2i/buildconfigs/ostoy-microservice/webhooks/<secret>/github
    [...]

  3. In the GitHub webhook URL, replace the <secret> text with the secret you retrieved. Your URL will resemble the following example output:

    Example output

    https://api.demo1234.openshift.com:443/apis/build.openshift.io/v1/namespaces/ostoy-s2i/buildconfigs/ostoy-microservice/webhooks/o_3x9M1qoI2Wj_czR1WiK/github

  4. Set up the webhook URL in your GitHub repository.

    1. In your repository, click Settings > Webhooks > Add webhook.

      Add Webhook
    2. Paste the GitHub webhook URL with the Secret included into the "Payload URL" field.
    3. Change the "Content type" to application/json.
    4. Click the Add webhook button.

      Finish Add Webhook

      You should see a message from GitHub stating that your webhook was successfully configured. Now, whenever you push a change to your GitHub repository, a new build automatically starts, and upon a successful build, a new deployment starts.

  5. Now, make a change in the source code. Any change automatically triggers a build and deployment. In this example, the colors shown in the OSToy app are generated randomly by the microservice. To test the configuration, change the microservice to produce only grayscale colors.

    1. Go to the source code in your repository https://github.com/<username>/ostoy/blob/master/microservice/app.js.
    2. Edit the file.
    3. Comment out line 8 (containing let randomColor = getRandomColor();).
    4. Uncomment line 9 (containing let randomColor = getRandomGrayScaleColor();).

      7   app.get('/', function(request, response) {
      8   //let randomColor = getRandomColor(); // <-- comment this
      9   let randomColor = getRandomGrayScaleColor(); // <-- uncomment this
      10
      11  response.writeHead(200, {'Content-Type': 'application/json'});
    5. Enter a message for the update, such as "changed box to grayscale colors".
    6. Click Commit at the bottom to commit the changes to the main branch.
  6. In your cluster’s web UI, click Builds > Builds to determine the status of the build. After this build is completed, the deployment begins. You can also check the status by running oc status in your terminal.

    Build Run
  7. After the deployment has finished, return to the OSToy application in your browser. Access the Networking menu item on the left. The box color is now limited to grayscale colors only.

    Gray

18.13. Tutorial: Integrating with AWS Services

Although the OSToy application functions independently, many real-world applications require external services such as databases, object stores, or messaging services.

Objectives

  • Learn how to integrate the OSToy application with other Amazon Web Services (AWS) services, specifically AWS S3 Storage. By the end of this section, the application will securely create and read objects from AWS S3 Storage.
  • Use AWS Controllers for Kubernetes (ACK) to create the necessary services for our application directly from Kubernetes.
  • Use Identity and Access Management (IAM) roles for service accounts to manage access and authentication.
  • Use OSToy to create a basic text file and save it in an S3 bucket.
  • Confirm that the file was successfully added and can be read from the bucket.

18.13.1. AWS Controllers for Kubernetes (ACK)

Use the ACK to create and use AWS services directly from Kubernetes. You can deploy your applications directly in the Kubernetes framework by using a familiar structure to declaratively define and create AWS services such as S3 buckets or Relational Database Service (RDS) databases.

With ACK, you can create an S3 bucket, integrate it with the OSToy application, upload a file to it, and view the file in your application.

18.13.2. IAM roles for service accounts

You can use IAM roles for service accounts to assign IAM roles directly to Kubernetes service accounts. You can use it to grant the ACK controller credentials to deploy services in your AWS account. Use IAM roles for service accounts to automate the management and rotation of temporary credentials.

Pods receive a valid OpenID Connect (OIDC) JSON web token (JWT) and pass it to the AWS STS AssumeRoleWithWebIdentity API operation to receive IAM temporary role credentials. The process relies on the EKS pod identity mutating webhook which modifies pods that require AWS IAM access.

IAM roles for service accounts adheres to the following best practices:

  • Principle of least privilege: You can create IAM permissions for AWS roles that only allow limited access. These permissions are limited to the service account associated with the role and only the pods that use that service account have access.
  • Credential isolation: A pod can only retrieve credentials for the IAM role associated with the service account that the pod is using.
  • Auditing: All AWS resource access can be viewed in CloudTrail.

Additional resources

18.13.3. Installing the ACK controller

Install the ACK controller to create and delete buckets in the S3 service by using a Kubernetes custom resource for the bucket. This tutorial installs the controller through an Operator, which also creates the ack-system namespace and the ack-s3-controller service account for you.

  1. Log in to the cluster console.
  2. On the left menu, click Operators, then OperatorHub.
  3. In the filter box, enter "S3" and select AWS Controller for Kubernetes - Amazon S3.

  4. If a pop-up about community operators appears, click Continue.
  5. Click Install.
  6. Select All namespaces on the cluster under "Installation mode".
  7. Select ack-system under "Installed Namespace".
  8. Select Manual under "Update approval".

    Important

    Make sure Manual Mode is selected so changes to the service account are not overwritten by an automatic operator update.

  9. Click Install.

    Verify that your selections match the values described in the previous steps.

  10. Click Approve.
  11. The installation begins, but it does not complete until you create an IAM role and policy for the ACK controller, as described in the next section.

18.13.4. Creating an IAM role and policy for the ACK controller

  1. Create the AWS IAM role for the ACK controller and assign the S3 policy by using the setup-s3-ack-controller.sh script, which automates the process for you. A sketch of the steps that a script like this performs is shown after this procedure. Use one of the following methods:

    • Download the setup-s3-ack-controller.sh script and run it locally.
    • Run the script directly from your command-line interface (CLI):

      $ curl https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/resources/setup-s3-ack-controller.sh | bash
  2. When the script completes, it restarts the deployment, which updates the service controller pods with the environment variables for IAM roles for service accounts.
  3. Confirm that the environment variables are set by running the following command:

    $ oc describe pod ack-s3-controller -n ack-system | grep "^\s*AWS_"

    Example output

    AWS_ROLE_ARN:                 arn:aws:iam::000000000000:role/ack-s3-controller
    AWS_WEB_IDENTITY_TOKEN_FILE:  /var/run/secrets/eks.amazonaws.com/serviceaccount/token

  4. Confirm successful setup of the ACK controller in the web console by clicking Operators and then Installed operators.

  5. If you do not see a successful Operator installation and the environment variables, manually restart the deployment by running the following command:

    $ oc rollout restart deployment ack-s3-controller -n ack-system
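
The script referenced in step 1 performs steps similar to the manual role setup shown later in this tutorial for the OSToy service account. The following sketch approximates those steps; the exact contents of setup-s3-ack-controller.sh may differ, and the trust policy file name is illustrative:

    $ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
    $ export OIDC_PROVIDER=$(rosa describe cluster -c <cluster-name> -o yaml | awk '/oidc_endpoint_url/ {print $2}' | cut -d '/' -f 3,4)

    # Trust policy that allows the ack-s3-controller service account to assume the role
    $ cat <<EOF > ./ack-s3-controller-trust.json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
          },
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Condition": {
            "StringEquals": {
              "${OIDC_PROVIDER}:sub": "system:serviceaccount:ack-system:ack-s3-controller"
            }
          }
        }
      ]
    }
    EOF

    # Create the role, attach the S3 policy, and annotate the controller service account
    $ aws iam create-role --role-name "ack-s3-controller" --assume-role-policy-document file://ack-s3-controller-trust.json
    $ aws iam attach-role-policy --role-name "ack-s3-controller" --policy-arn "arn:aws:iam::aws:policy/AmazonS3FullAccess"
    $ oc annotate serviceaccount ack-s3-controller -n ack-system eks.amazonaws.com/role-arn="arn:aws:iam::${AWS_ACCOUNT_ID}:role/ack-s3-controller"

    # Restart the controller deployment so that the pods pick up the new credentials
    $ oc rollout restart deployment ack-s3-controller -n ack-system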

18.13.5. Setting up access for the application

Create an AWS IAM role and a service account so that OSToy can read objects from and write objects to an S3 bucket.

  1. Create a new unique project for OSToy by running the following command:

    $ oc new-project ostoy-$(uuidgen | cut -d - -f 2 | tr '[:upper:]' '[:lower:]')
  2. Save the name of the namespace and project to an environment variable by running the following command:

    $ export OSTOY_NAMESPACE=$(oc config view --minify -o 'jsonpath={..namespace}')
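
    You can optionally confirm that the variable was captured; the output should be the name of the project that you just created:

    $ echo ${OSTOY_NAMESPACE}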

18.13.6. Creating an AWS IAM role

  1. Get your AWS account ID by running the following command:

    $ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
  2. Get the OIDC provider by running the following command, replacing <cluster-name> with the name of your cluster:

    $ export OIDC_PROVIDER=$(rosa describe cluster -c <cluster-name> -o yaml | awk '/oidc_endpoint_url/ {print $2}' | cut -d '/' -f 3,4)
  3. Create the trust policy file by running the following command:

    $ cat <<EOF > ./ostoy-sa-trust.json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
          },
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Condition": {
            "StringEquals": {
              "${OIDC_PROVIDER}:sub": "system:serviceaccount:${OSTOY_NAMESPACE}:ostoy-sa"
            }
          }
        }
      ]
    }
    EOF
  4. Create the AWS IAM role to be used with your service account by running the following command:

    $ aws iam create-role --role-name "ostoy-sa-role" --assume-role-policy-document file://ostoy-sa-trust.json
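
    Optionally, confirm that the role exists and carries the expected trust policy; the output should match the ostoy-sa-trust.json document from the previous step:

    $ aws iam get-role --role-name ostoy-sa-role --query Role.AssumeRolePolicyDocument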

18.13.7. Attaching the S3 policy to the IAM role

  1. Get the S3 full access policy ARN by running the following command:

    $ export POLICY_ARN=$(aws iam list-policies --query 'Policies[?PolicyName==`AmazonS3FullAccess`].Arn' --output text)
  2. Attach the policy to the AWS IAM role by running the following command:

    $ aws iam attach-role-policy --role-name "ostoy-sa-role" --policy-arn "${POLICY_ARN}"
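
    Optionally, confirm the attachment by listing the policies attached to the role; the output should include the AmazonS3FullAccess policy ARN:

    $ aws iam list-attached-role-policies --role-name ostoy-sa-role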

18.13.8. Creating the service account for your pod

  1. Get the ARN of the AWS IAM role that you created so that you can include it as an annotation when creating your service account, by running the following command:

    $ export APP_IAM_ROLE_ARN=$(aws iam get-role --role-name=ostoy-sa-role --query Role.Arn --output text)
  2. Create the service account by running the following command:

    $ cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: ostoy-sa
      namespace: ${OSTOY_NAMESPACE}
      annotations:
        eks.amazonaws.com/role-arn: "$APP_IAM_ROLE_ARN"
    EOF
    Important

    Do not change the name of the service account from "ostoy-sa" or you will have to change the trust relationship for the AWS IAM role.

  3. Grant the service account the restricted role by running the following command:

    $ oc adm policy add-scc-to-user restricted system:serviceaccount:${OSTOY_NAMESPACE}:ostoy-sa
  4. Confirm that the annotation was successful by running the following command:

    $ oc describe serviceaccount ostoy-sa -n ${OSTOY_NAMESPACE}

    Example output

    Name:                ostoy-sa
    Namespace:           ostoy
    Labels:              <none>
    Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::000000000000:role/ostoy-sa-role
    Image pull secrets:  ostoy-sa-dockercfg-b2l94
    Mountable secrets:   ostoy-sa-dockercfg-b2l94
    Tokens:              ostoy-sa-token-jlc6d
    Events:              <none>

18.13.9. Creating an S3 bucket

  1. Create an S3 bucket using a manifest file by running the following command:

    $ cat <<EOF | oc apply -f -
    apiVersion: s3.services.k8s.aws/v1alpha1
    kind: Bucket
    metadata:
      name: ${OSTOY_NAMESPACE}-bucket
      namespace: ${OSTOY_NAMESPACE}
    spec:
      name: ${OSTOY_NAMESPACE}-bucket
    EOF
    Important

    The OSToy application expects to find a bucket named <namespace>-bucket. If you use any name other than the namespace of your OSToy project, this feature will not work. For example, if your project is "ostoy", the value for name must be ostoy-bucket.

  2. Confirm the bucket was created by running the following command:

    $ aws s3 ls | grep ${OSTOY_NAMESPACE}-bucket
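
    You can also confirm that the Bucket custom resource was created in your project by querying it with its fully qualified resource name:

    $ oc get buckets.s3.services.k8s.aws -n ${OSTOY_NAMESPACE}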

18.13.10. Redeploying the OSToy app with the new service account

  1. Run your pod with the service account you created.
  2. Deploy the microservice by running the following command:

    $ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-microservice-deployment.yaml
  3. Deploy the ostoy-frontend by running the following command:

    $ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-frontend-deployment.yaml
  4. Patch the ostoy-frontend deployment by running the following command:

    $ oc patch deploy ostoy-frontend -n ${OSTOY_NAMESPACE} --type=merge --patch '{"spec": {"template": {"spec":{"serviceAccount":"ostoy-sa"}}}}'

    The spec section of the patched deployment resembles the following example:

    spec:
      # Uncomment to use with ACK portion of the workshop
      # If you chose a different service account name please replace it.
      serviceAccount: ostoy-sa
      containers:
      - name: ostoy-frontend
        image: quay.io/ostoylab/ostoy-frontend:1.6.0
        imagePullPolicy: IfNotPresent
    [...]

  5. Wait for the pod to update.
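
    You can watch the rollout until the updated pod is ready by running the following command:

    $ oc rollout status deployment ostoy-frontend -n ${OSTOY_NAMESPACE}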

18.13.11. Confirming the environment variables

  • Describe the pods and verify that the AWS_WEB_IDENTITY_TOKEN_FILE and AWS_ROLE_ARN environment variables exist for your application by running the following command:

    $ oc describe pod ostoy-frontend -n ${OSTOY_NAMESPACE} | grep "^\s*AWS_"

    Example output

    AWS_ROLE_ARN:                 arn:aws:iam::000000000000:role/ostoy-sa-role
    AWS_WEB_IDENTITY_TOKEN_FILE:  /var/run/secrets/eks.amazonaws.com/serviceaccount/token

18.13.12. Viewing the bucket contents through OSToy

Use your app to view the contents of your S3 bucket.

  1. Get the route for the newly deployed application by running the following command:

    $ oc get route ostoy-route -n ${OSTOY_NAMESPACE} -o jsonpath='{.spec.host}{"\n"}'
  2. Open a new browser tab and enter the route obtained in the previous step.

    Important

    Be sure to use http:// and not https://.

  3. Click ACK S3 in the left menu in OSToy.
  4. Because the bucket was just created, it should be empty.

18.13.13. Creating files in your S3 bucket

Use OSToy to create a file and upload it to the S3 bucket. Although S3 can accept any kind of file, this tutorial uses text files so that the contents can easily be rendered in the browser.

  1. Click ACK S3 in the left menu in OSToy.
  2. Scroll down to Upload a text file to S3.
  3. Enter a file name for your file.
  4. Enter content for your file.
  5. Click Create file.

  6. Scroll to the top section for existing files and confirm that the file you just created is there.
  7. Click the file name to view the file.

  8. Confirm with the AWS CLI by running the following command to list the contents of your bucket:

    $ aws s3 ls s3://${OSTOY_NAMESPACE}-bucket

    Example output

    2023-05-04 22:20:51         51 OSToy.txt
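
    If you also want to view the file contents from the CLI, you can copy the object to standard output. The file name below assumes that you used OSToy.txt, as in the example output:

    $ aws s3 cp s3://${OSTOY_NAMESPACE}-bucket/OSToy.txt -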
