
Learning about ROSA


Red Hat OpenShift Service on AWS 4

Learning about Red Hat OpenShift Service on AWS

Red Hat OpenShift Documentation Team

Abstract

This document includes workshops on creating a cluster and deploying an application.

Chapter 1. Creating a cluster workshop

1.1. Creating a cluster

Follow this workshop to deploy a sample Red Hat OpenShift Service on AWS (ROSA) cluster. You can then use your cluster in the next workshops.

Workshop objectives

  • Learn to create your cluster prerequisites:

    • Create a sample virtual private cloud (VPC)
    • Create sample OpenID Connect (OIDC) resources
  • Create sample environment variables
  • Deploy a sample ROSA cluster

Prerequisites

  • ROSA version 1.2.31 or later
  • Amazon Web Services (AWS) command-line interface (CLI)
  • ROSA CLI (rosa)

1.1.1. Creating your cluster prerequisites

Before deploying a ROSA cluster, you must have both a VPC and OIDC resources. We will create these resources first. ROSA uses the bring your own VPC (BYO-VPC) model.

1.1.1.1. Creating a VPC
  1. Make sure your AWS CLI (aws) is configured to use a region where ROSA is available. To see the regions where ROSA with hosted control planes is supported, run the following command:

    $ rosa list regions --hosted-cp
  2. Create the VPC. For this workshop, the following script creates the VPC and its required components. It uses the region configured in your aws CLI. The script tags the resources that it creates with the value of the CLUSTER_NAME environment variable, so export that variable first, for example export CLUSTER_NAME=my-rosa-cluster.

    #!/bin/bash
    
    set -e
    ##########
    # This script will create the network requirements for a ROSA cluster. This will be
    # a public cluster. This creates:
    # - VPC
    # - Public and private subnets
    # - Internet Gateway
    # - Relevant route tables
    # - NAT Gateway
    #
    # This will automatically use the region configured for the aws cli
    #
    ##########
    
    VPC_CIDR=10.0.0.0/16
    PUBLIC_CIDR_SUBNET=10.0.1.0/24
    PRIVATE_CIDR_SUBNET=10.0.0.0/24
    
    # Create VPC
    echo -n "Creating VPC..."
    VPC_ID=$(aws ec2 create-vpc --cidr-block $VPC_CIDR --query Vpc.VpcId --output text)
    
    # Create tag name
    aws ec2 create-tags --resources $VPC_ID --tags Key=Name,Value=$CLUSTER_NAME
    
    # Enable dns hostname
    aws ec2 modify-vpc-attribute --vpc-id $VPC_ID --enable-dns-hostnames
    echo "done."
    
    # Create Public Subnet
    echo -n "Creating public subnet..."
    PUBLIC_SUBNET_ID=$(aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block $PUBLIC_CIDR_SUBNET --query Subnet.SubnetId --output text)
    
    aws ec2 create-tags --resources $PUBLIC_SUBNET_ID --tags Key=Name,Value=$CLUSTER_NAME-public
    echo "done."
    
    # Create private subnet
    echo -n "Creating private subnet..."
    PRIVATE_SUBNET_ID=$(aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block $PRIVATE_CIDR_SUBNET --query Subnet.SubnetId --output text)
    
    aws ec2 create-tags --resources $PRIVATE_SUBNET_ID --tags Key=Name,Value=$CLUSTER_NAME-private
    echo "done."
    
    # Create an internet gateway for outbound traffic and attach it to the VPC.
    echo -n "Creating internet gateway..."
    IGW_ID=$(aws ec2 create-internet-gateway --query InternetGateway.InternetGatewayId --output text)
    echo "done."
    
    aws ec2 create-tags --resources $IGW_ID --tags Key=Name,Value=$CLUSTER_NAME
    
    aws ec2 attach-internet-gateway --vpc-id $VPC_ID --internet-gateway-id $IGW_ID > /dev/null 2>&1
    echo "Attached IGW to VPC."
    
    # Create a route table for outbound traffic and associate it to the public subnet.
    echo -n "Creating route table for public subnet..."
    PUBLIC_ROUTE_TABLE_ID=$(aws ec2 create-route-table --vpc-id $VPC_ID --query RouteTable.RouteTableId --output text)
    
    aws ec2 create-tags --resources $PUBLIC_ROUTE_TABLE_ID --tags Key=Name,Value=$CLUSTER_NAME
    echo "done."
    
    aws ec2 create-route --route-table-id $PUBLIC_ROUTE_TABLE_ID --destination-cidr-block 0.0.0.0/0 --gateway-id $IGW_ID > /dev/null 2>&1
    echo "Created default public route."
    
    aws ec2 associate-route-table --subnet-id $PUBLIC_SUBNET_ID --route-table-id $PUBLIC_ROUTE_TABLE_ID > /dev/null 2>&1
    echo "Public route table associated"
    
    # Create a NAT gateway in the public subnet for outgoing traffic from the private network.
    echo -n "Creating NAT Gateway..."
    NAT_IP_ADDRESS=$(aws ec2 allocate-address --domain vpc --query AllocationId --output text)
    
    NAT_GATEWAY_ID=$(aws ec2 create-nat-gateway --subnet-id $PUBLIC_SUBNET_ID --allocation-id $NAT_IP_ADDRESS --query NatGateway.NatGatewayId --output text)
    
    aws ec2 create-tags --resources $NAT_IP_ADDRESS $NAT_GATEWAY_ID --tags Key=Name,Value=$CLUSTER_NAME
    sleep 10
    echo "done."
    
    # Create a route table for the private subnet to the NAT gateway.
    echo -n "Creating a route table for the private subnet to the NAT gateway..."
    PRIVATE_ROUTE_TABLE_ID=$(aws ec2 create-route-table --vpc-id $VPC_ID --query RouteTable.RouteTableId --output text)
    
    aws ec2 create-tags --resources $PRIVATE_ROUTE_TABLE_ID $NAT_IP_ADDRESS --tags Key=Name,Value=$CLUSTER_NAME-private
    
    aws ec2 create-route --route-table-id $PRIVATE_ROUTE_TABLE_ID --destination-cidr-block 0.0.0.0/0 --gateway-id $NAT_GATEWAY_ID > /dev/null 2>&1
    
    aws ec2 associate-route-table --subnet-id $PRIVATE_SUBNET_ID --route-table-id $PRIVATE_ROUTE_TABLE_ID > /dev/null 2>&1
    
    echo "done."
    
    # echo "***********VARIABLE VALUES*********"
    # echo "VPC_ID="$VPC_ID
    # echo "PUBLIC_SUBNET_ID="$PUBLIC_SUBNET_ID
    # echo "PRIVATE_SUBNET_ID="$PRIVATE_SUBNET_ID
    # echo "PUBLIC_ROUTE_TABLE_ID="$PUBLIC_ROUTE_TABLE_ID
    # echo "PRIVATE_ROUTE_TABLE_ID="$PRIVATE_ROUTE_TABLE_ID
    # echo "NAT_GATEWAY_ID="$NAT_GATEWAY_ID
    # echo "IGW_ID="$IGW_ID
    # echo "NAT_IP_ADDRESS="$NAT_IP_ADDRESS
    
    echo "Setup complete."
    echo ""
    echo "To make the cluster create commands easier, please run the following commands to set the environment variables:"
    echo "export PUBLIC_SUBNET_ID=$PUBLIC_SUBNET_ID"
    echo "export PRIVATE_SUBNET_ID=$PRIVATE_SUBNET_ID"
  3. The script outputs export commands for the subnet IDs. Run these commands to store the subnet IDs as environment variables for later use. Copy and run the commands:

    $ export PUBLIC_SUBNET_ID=$PUBLIC_SUBNET_ID
    $ export PRIVATE_SUBNET_ID=$PRIVATE_SUBNET_ID
  4. Confirm your environment variables by running the following command:

    $ echo "Public Subnet: $PUBLIC_SUBNET_ID"; echo "Private Subnet: $PRIVATE_SUBNET_ID"

    Example output

    Public Subnet: subnet-0faeeeb0000000000
    Private Subnet: subnet-011fe340000000000
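
Optionally, you can cross-check the new subnets on the AWS side. The following is a minimal sketch that assumes the PUBLIC_SUBNET_ID and PRIVATE_SUBNET_ID variables exported above:

    $ aws ec2 describe-subnets --subnet-ids $PUBLIC_SUBNET_ID $PRIVATE_SUBNET_ID \
        --query 'Subnets[].{Id:SubnetId,Cidr:CidrBlock,Az:AvailabilityZone}' --output table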

1.1.1.2. Creating your OIDC configuration

In this workshop, we will use the automatic mode when creating the OIDC configuration. We will also store the OIDC ID as an environment variable for later use. The command uses the ROSA CLI to create your cluster’s unique OIDC configuration.

  • Create the OIDC configuration by running the following command:

    $ export OIDC_ID=$(rosa create oidc-config --mode auto --managed --yes -o json | jq -r '.id')
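
To confirm that the variable is set, echo it. If your ROSA CLI version supports it, you can also list the OIDC configurations registered for your account; this is a sketch, not a required step:

    $ echo $OIDC_ID
    $ rosa list oidc-config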

1.1.2. Creating additional environment variables

  • Run the following command to set up environment variables. These variables make it easier to run the command to create a ROSA cluster:

    $ export CLUSTER_NAME=<cluster_name>
    $ export REGION=<VPC_region>
    Tip

    Run rosa whoami to find the VPC region.
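
If the VPC was created with the same aws CLI configuration that you used earlier, the following sketch reads the region from that configuration instead of typing it by hand:

    $ export REGION=$(aws configure get region)
    $ echo $REGION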

1.1.3. Creating a cluster

  1. Optional: Run the following command to create the account-wide roles and policies, including the Operator policies and the AWS IAM roles and policies:

    Important

    Only complete this step if this is the first time you are deploying ROSA in this account and you have not yet created your account roles and policies.

    $ rosa create account-roles --mode auto --yes
  2. Run the following command to create the cluster:

    $ rosa create cluster --cluster-name $CLUSTER_NAME \
    --subnet-ids ${PUBLIC_SUBNET_ID},${PRIVATE_SUBNET_ID} \
    --hosted-cp \
    --region $REGION \
    --oidc-config-id $OIDC_ID \
    --sts --mode auto --yes

The cluster is ready after about 10 minutes. The cluster has a control plane across three AWS availability zones in your selected region and creates two worker nodes in your AWS account.

1.1.4. Checking the installation status

  1. Run one of the following commands to check the status of the cluster:

    • For a detailed view of the cluster status, run:

      $ rosa describe cluster --cluster $CLUSTER_NAME
    • For an abridged view of the cluster status, run:

      $ rosa list clusters
    • To watch the log as it progresses, run:

      $ rosa logs install --cluster $CLUSTER_NAME --watch
  2. Once the state changes to “ready”, your cluster is installed. It might take a few more minutes for the worker nodes to come online.

1.2. Creating an admin user

Creating an administration (admin) user allows you to access your cluster quickly. Follow these steps to create an admin user.

Note

An admin user works well in this tutorial setting. For actual deployment, use a formal identity provider to access the cluster and grant the user admin privileges.

  1. Run the following command to create the admin user:

    rosa create admin --cluster=<cluster-name>

    Example output

    W: It is recommended to add an identity provider to login to this cluster. See 'rosa create idp --help' for more information.
    I: Admin account has been added to cluster 'my-rosa-cluster'. It may take up to a minute for the account to become active.
    I: To login, run the following command:
    oc login https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443 \
    --username cluster-admin \
    --password FWGYL-2mkJI-00000-00000

  2. Copy the login command returned in the previous step and paste it into your terminal. This logs you in to the cluster by using the CLI so you can start using the cluster.

    $ oc login https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443 \
    >    --username cluster-admin \
    >    --password FWGYL-2mkJI-00000-00000

    Example output

    Login successful.
    
    You have access to 79 projects, the list has been suppressed. You can list all projects with 'oc projects'
    
    Using project "default".

  3. To check that you are logged in as the admin user, run one of the following commands:

    • Option 1:

      $ oc whoami

      Example output

      cluster-admin

    • Option 2:

      $ oc get all -n openshift-apiserver

      Only an admin user can run this command without errors.

  4. You can now use the cluster as an admin user, which will suffice for this tutorial. For actual deployment, it is highly recommended to set up an identity provider, which is explained in the next tutorial.

1.3. Setting up an identity provider

To log in to your cluster, set up an identity provider (IDP). This tutorial uses GitHub as an example IDP. See the full list of IDPs supported by ROSA.

  • To view all IDP options, run the following command:

    rosa create idp --help

1.3.1. Setting up an IDP with GitHub

  1. Log in to your GitHub account.
  2. Create a new GitHub organization where you are an administrator.

    Tip

    If you are already an administrator in an existing organization and you want to use that organization, skip to step 9.

    Click the + icon, then click New Organization.

  3. Choose the most applicable plan for your situation or click Join for free.
  4. Enter an organization account name, an email, and whether it is a personal or business account. Then, click Next.

  5. Optional: Add the GitHub IDs of other users to grant additional access to your ROSA cluster. You can also add them later.
  6. Click Complete Setup.
  7. Optional: Enter the requested information on the following page.
  8. Click Submit.
  9. Go back to the terminal and enter the following command to set up the GitHub IDP:

    rosa create idp --cluster=<cluster name> --interactive
  10. Enter the following values:

    Type of identity provider: github
    Identity Provider Name: <IDP-name>
    Restrict to members of: organizations
    GitHub organizations: <organization-account-name>
  11. The CLI will provide you with a link. Copy and paste the link into a browser and press Enter. The link pre-fills the required information to register this application for OAuth. You do not need to modify any of the information.

  12. Click Register application.

  13. The next page displays a Client ID. Copy the ID and paste it in the terminal where it asks for Client ID.

    Note

    Do not close the tab.

  14. The CLI will ask for a Client Secret. Go back to your browser and click Generate a new client secret.

  15. A secret is generated for you. Copy your secret because it will never be visible again.
  16. Paste your secret into the terminal and press Enter.
  17. Leave GitHub Enterprise Hostname blank.
  18. Select claim.
  19. Wait approximately 1 minute for the IDP to be created and the configuration to land on your cluster.

  20. Copy the returned link and paste it into your browser. The new IDP should be available under your chosen name. Click your IDP and use your GitHub credentials to access the cluster.
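
If you prefer to skip the interactive prompts, the same IDP can be created non-interactively. This is a sketch only, assuming your ROSA CLI version supports these flags; substitute the client ID and client secret that GitHub generated in the steps above:

    $ rosa create idp --cluster=<cluster name> --type github --name <IDP-name> \
        --organizations <organization-account-name> \
        --client-id <github_client_id> --client-secret <github_client_secret>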

1.3.2. Granting other users access to the cluster

To grant access to other cluster users, you need to add their GitHub user IDs to the GitHub organization used for this cluster.

  1. In GitHub, go to the Your organizations page.
  2. Click your profile icon, then Your organizations. Then click <your-organization-name>. In our example, it is my-rosa-cluster.

  3. Click Invite someone.

  4. Enter the GitHub ID of the new user, select the correct user, and click Invite.
  5. Once the new user accepts the invitation, they will be able to log in to the ROSA cluster using the Hybrid Cloud Console link and their GitHub credentials.

1.4. Granting admin privileges

Administration (admin) privileges are not automatically granted to users that you add to your cluster. If you want to grant admin-level privileges to certain users, you will need to manually grant them to each user. You can grant admin privileges from either the ROSA command-line interface (CLI) or the Red Hat OpenShift Cluster Manager web user interface (UI).

Red Hat offers two types of admin privileges:

  • cluster-admin: cluster-admin privileges give the admin user full privileges within the cluster.
  • dedicated-admin: dedicated-admin privileges allow the admin user to complete most administrative tasks with certain limitations to prevent cluster damage. It is best practice to use dedicated-admin when elevated privileges are needed.

For more information on admin privileges, see the administering a cluster documentation.

1.4.1. Using the ROSA CLI

  1. Assuming you are the user who created the cluster, run one of the following commands to grant admin privileges:

    • For cluster-admin:

      $ rosa grant user cluster-admin --user <idp_user_name> --cluster=<cluster-name>
    • For dedicated-admin:

      $ rosa grant user dedicated-admin --user <idp_user_name> --cluster=<cluster-name>
  2. Verify that the admin privileges were added by running the following command:

    $ rosa list users --cluster=<cluster-name>

    Example output

    $ rosa list users --cluster=my-rosa-cluster
    ID                 GROUPS
    <idp_user_name>    cluster-admins

  3. If you are currently logged into the Red Hat Hybrid Cloud Console, log out of the console and log back in to the cluster to see a new perspective with the "Administrator Panel". You might need an incognito or private window.


  4. You can also test that admin privileges were added to your account by running the following command. Only cluster-admin users can run this command without errors.

    $ oc get all -n openshift-apiserver

1.4.2. Using the Red Hat OpenShift Cluster Manager UI

  1. Log in to the OpenShift Cluster Manager.
  2. Select your cluster.
  3. Click the Access Control tab.
  4. Click the Cluster roles and Access tab in the sidebar.
  5. Click Add user.

  6. On the pop-up screen, enter the user ID.
  7. Select whether you want to grant the user cluster-admins or dedicated-admins privileges.

1.5. Accessing your cluster

You can connect to your cluster using the command-line interface (CLI) or the Red Hat Hybrid Cloud Console user interface (UI).

1.5.1. Accessing your cluster using the CLI

To access the cluster using the CLI, you must have the oc CLI installed. If you are following the tutorials, you already installed the oc CLI.

  1. Log in to the OpenShift Cluster Manager.
  2. Click your username in the top right corner.
  3. Click Copy Login Command.

  4. This opens a new tab with a choice of identity providers (IDPs). Click the IDP you want to use. For example, "rosa-github".

  5. A new tab opens. Click Display token.
  6. Run the following command in your terminal:

    $ oc login --token=sha256~GBAfS4JQ0t1UTKYHbWAK6OUWGUkdMGz000000000000 --server=https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443

    Example output

    Logged into "https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443" as "rosa-user" using the token provided.
    
    You have access to 79 projects, the list has been suppressed. You can list all projects with 'oc projects'
    
    Using project "default".

  7. Confirm that you are logged in by running the following command:

    $ oc whoami

    Example output

    rosa-user

  8. You can now access your cluster.

1.5.2. Accessing the cluster via the Hybrid Cloud Console

  1. Log in to the OpenShift Cluster Manager.

    1. To retrieve the Hybrid Cloud Console URL, run:

      rosa describe cluster -c <cluster-name> | grep Console
  2. Click your IDP. For example, "rosa-github".

  3. Enter your user credentials.
  4. You should be logged in. If you are following the tutorials, you will be a cluster-admin and should see the Hybrid Cloud Console webpage with the Administrator panel visible.

1.6. Managing worker nodes

In Red Hat OpenShift Service on AWS (ROSA), changing aspects of your worker nodes is performed through the use of machine pools. A machine pool allows users to manage many machines as a single entity. Every ROSA cluster has a default machine pool that is created when the cluster is created. For more information, see the machine pool documentation.

1.6.1. Creating a machine pool

You can create a machine pool with either the command-line interface (CLI) or the user interface (UI).

1.6.1.1. Creating a machine pool with the CLI
  1. Run the following command:

    $ rosa create machinepool --cluster=<cluster-name> --name=<machinepool-name> --replicas=<number-nodes>

    Example input

     $ rosa create machinepool --cluster=my-rosa-cluster --name=new-mp --replicas=2

    Example output

    I: Machine pool 'new-mp' created successfully on cluster 'my-rosa-cluster'
    I: To view all machine pools, run 'rosa list machinepools -c my-rosa-cluster'

  2. Optional: Add node labels or taints to specific nodes in a new machine pool by running the following command:

    rosa create machinepool --cluster=<cluster-name> --name=<machinepool-name> --replicas=<number-nodes> --labels='<key>=<value>'

    Example input

    $ rosa create machinepool --cluster=my-rosa-cluster --name=db-nodes-mp --replicas=2 --labels='app=db','tier=backend'

    Example output

    I: Machine pool 'db-nodes-mp' created successfully on cluster 'my-rosa-cluster'

    This creates two additional nodes that can be managed as a unit and also assigns them the labels shown.

  3. Run the following command to confirm machine pool creation and the assigned labels:

    rosa list machinepools --cluster=<cluster-name>

    Example output

    ID       AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS    TAINTS    AVAILABILITY ZONE  SUBNET                    DISK SIZE  VERSION  AUTOREPAIR
    workers  Yes          2/2-4     m5.xlarge                          us-east-1f         subnet-<subnet_id>  300 GiB    4.14.36  Yes

1.6.1.2. Creating a machine pool with the UI
  1. Log in to the OpenShift Cluster Manager and click your cluster.

  2. Click the Machine pools tab.


  3. Click Add machine pool.
  4. Enter the desired configuration.

    Tip

    You can also expand the Edit node labels and taints section to add node labels and taints to the nodes in the machine pool.

  5. Click the Add machine pool button to save.
  6. You will see the new machine pool you created.

1.6.2. Scaling worker nodes

Edit a machine pool to scale the number of worker nodes in that specific machine pool. You can use either the CLI or the UI to scale worker nodes.

1.6.2.1. Scaling worker nodes using the CLI
  1. Run the following command to see the default machine pool that is created with each cluster:

    rosa list machinepools --cluster=<cluster-name>

    Example output

    ID          AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS            TAINTS    AVAILABILITY ZONES
    Default     No           2         m5.xlarge                                  us-east-1a

  2. To scale the default machine pool out to a different number of nodes, run the following command:

    rosa edit machinepool --cluster=<cluster-name> --replicas=<number-nodes> <machinepool-name>

    Example input

    rosa edit machinepool --cluster=my-rosa-cluster --replicas 3 Default

  3. Run the following command to confirm that the machine pool has scaled:

    rosa describe cluster --cluster=<cluster-name> | grep Compute

    Example input

    $ rosa describe cluster --cluster=my-rosa-cluster | grep Compute

    Example output

     - Compute (Autoscaled):    2-4
     - Compute (current):       2

1.6.2.2. Scaling worker nodes using the UI
  1. Click the three dots to the right of the machine pool you want to edit.
  2. Click Edit.
  3. Enter the desired number of nodes, and click Save.
  4. Confirm that the cluster has scaled by selecting the cluster, clicking the Overview tab, and scrolling to the Compute listing. The Compute listing should equal the scaled number of nodes. For example, 3/3.

1.6.2.3. Adding node labels
  1. Use the following command to add node labels:

    rosa edit machinepool --cluster=<cluster-name> --replicas=<number-nodes> --labels='key=value' <machinepool-name>

    Example input

    rosa edit machinepool --cluster=my-rosa-cluster --replicas=2 --labels 'foo=bar','baz=one' new-mp

    This adds 2 labels to the new machine pool.

Important

This command replaces the machine pool configuration with the newly defined configuration. If you want to add another label and keep the existing label, you must state both the new and the preexisting labels; otherwise, the command replaces all preexisting labels with the one you wanted to add. Similarly, if you want to delete a label, run the command and state the labels you want to keep, excluding the one you want to delete.
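
For example, to keep the two labels from the previous command while adding a third, state all three. The new=label pair is only illustrative:

    $ rosa edit machinepool --cluster=my-rosa-cluster --replicas=2 --labels 'foo=bar','baz=one','new=label' new-mp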

1.6.3. Mixing node types

You can also mix different worker node machine types in the same cluster by using new machine pools. You cannot change the node type of a machine pool once it is created, but you can create a new machine pool with different nodes by adding the --instance-type flag.

  1. For example, to change the database nodes to a different node type, run the following command:

    rosa create machinepool --cluster=<cluster-name> --name=<mp-name> --replicas=<number-nodes> --labels='<key=pair>' --instance-type=<type>

    Example input

    rosa create machinepool --cluster=my-rosa-cluster --name=db-nodes-large-mp --replicas=2 --labels='app=db','tier=backend' --instance-type=m5.2xlarge

  2. To see all the instance types available, run the following command:

    rosa list instance-types
  3. To make step-by-step changes, use the --interactive flag:

    rosa create machinepool -c <cluster-name> --interactive
  4. Run the following command to list the machine pools and see the new, larger instance type:

    rosa list machinepools -c <cluster-name>

1.7. Autoscaling

The cluster autoscaler adds or removes worker nodes from a cluster based on pod resources.

The cluster autoscaler increases the size of the cluster when:

  • Pods fail to schedule on the current nodes due to insufficient resources.
  • Another node is necessary to meet deployment needs.

The cluster autoscaler does not increase the cluster resources beyond the limits that you specify.

The cluster autoscaler decreases the size of the cluster when:

  • Some nodes are consistently not needed for a significant period. For example, when a node has low resource use and all of its important pods can fit on other nodes.

1.7.1. Enabling autoscaling for an existing machine pool using the CLI

Note

Cluster autoscaling can be enabled at cluster creation and when creating a new machine pool by using the --enable-autoscaling option.
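
For example, a minimal sketch of creating a new machine pool with autoscaling enabled from the start, using the same minimum and maximum values as the procedure below:

    $ rosa create machinepool -c <cluster-name> --name <machinepool-name> \
        --enable-autoscaling --min-replicas=2 --max-replicas=4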

  1. Autoscaling is set based on machine pool availability. To find out which machine pools are available for autoscaling, run the following command:

    $ rosa list machinepools -c <cluster-name>

    Example output

    ID       AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS    TAINTS    AVAILABILITY ZONE  SUBNET                    DISK SIZE  VERSION  AUTOREPAIR
    workers  No           2/2       m5.xlarge                          us-east-1f         subnet-<subnet_id>  300 GiB    4.14.36  Yes

  2. Run the following command to add autoscaling to an available machine pool:

    $ rosa edit machinepool -c <cluster-name> --enable-autoscaling <machinepool-name> --min-replicas=<num> --max-replicas=<num>

    Example input

    $ rosa edit machinepool -c my-rosa-cluster --enable-autoscaling workers --min-replicas=2 --max-replicas=4

    The above command creates an autoscaler for the worker nodes that scales between 2 and 4 nodes depending on the resources.

1.7.2. Enabling autoscaling for an existing machine pool using the UI

Note

Cluster autoscaling can be enabled at cluster creation by checking the Enable autoscaling checkbox when creating machine pools.

  1. Go to the Machine pools tab and click the three dots on the right.
  2. Click Edit, then Enable autoscaling.
  3. Edit the number of minimum and maximum node counts or leave the default numbers.
  4. Click Save.
  5. Run the following command to confirm that autoscaling was added:

    $ rosa list machinepools -c <cluster-name>

    Example output

    ID       AUTOSCALING  REPLICAS  INSTANCE TYPE  LABELS    TAINTS    AVAILABILITY ZONE  SUBNET                    DISK SIZE  VERSION  AUTOREPAIR
    workers  Yes          2/2-4     m5.xlarge                          us-east-1f         subnet-<subnet_id>  300 GiB    4.14.36  Yes

1.8. Upgrading your cluster

Red Hat OpenShift Service on AWS (ROSA) executes all cluster upgrades as part of the managed service. You do not need to run any commands or make changes to the cluster. You can schedule the upgrades at a convenient time.

Ways to schedule a cluster upgrade include:

  • Manually using the command line interface (CLI): Start a one-time immediate upgrade or schedule a one-time upgrade for a future date and time.
  • Manually using the Red Hat OpenShift Cluster Manager user interface (UI): Start a one-time immediate upgrade or schedule a one-time upgrade for a future date and time.
  • Automated upgrades: Set an upgrade window for recurring z-stream (patch) upgrades whenever a new version is available without needing to manually schedule it. Minor versions have to be manually scheduled.

For more details about cluster upgrades, run the following command:

$ rosa upgrade cluster --help

1.8.1. Manually upgrading your cluster using the CLI

  1. Check if there is an upgrade available by running the following command:

    $ rosa list upgrade -c <cluster-name>

    Example output

    $ rosa list upgrade -c <cluster-name>
    VERSION  NOTES
    4.14.7   recommended
    4.14.6
    ...

    In the above example, versions 4.14.7 and 4.14.6 are both available.

  2. Schedule the cluster to upgrade within the hour by running the following command:

    $ rosa upgrade cluster -c <cluster-name> --control-plane --version <desired-version>
  3. Optional: Schedule the cluster to upgrade at a later date and time by running the following command:

    $ rosa upgrade cluster -c <cluster-name> --version <desired-version> --schedule-date <future-date-for-update> --schedule-time <future-time-for-update>

1.8.2. Manually upgrading your cluster using the UI

  1. Log in to the OpenShift Cluster Manager, and select the cluster you want to upgrade.
  2. Click the Settings tab.
  3. If an upgrade is available, click Update.

  4. Select the version to which you want to upgrade in the new window.
  5. Schedule a time for the upgrade or begin it immediately.

1.8.3. Setting up automatic recurring upgrades

  1. Log in to the OpenShift Cluster Manager, and select the cluster you want to upgrade.
  2. Click the Settings tab.

    1. Under Update Strategy, click Recurring updates.
  3. Set the day and time for the upgrade to occur.
  4. Click Save.

1.9. Deleting your cluster

You can delete your Red Hat OpenShift Service on AWS (ROSA) cluster using either the command-line interface (CLI) or the user interface (UI).

1.9.1. Deleting a ROSA cluster using the CLI

  1. Optional: List your clusters to make sure you are deleting the correct one by running the following command:

    $ rosa list clusters
  2. Delete a cluster by running the following command:

    $ rosa delete cluster --cluster <cluster-name>
    Warning

    This command is non-recoverable.

  3. The CLI prompts you to confirm that you want to delete the cluster. Press y and then Enter. The cluster and all its associated infrastructure will be deleted.

    Note

    All AWS STS and IAM roles and policies will remain and must be deleted manually once the cluster deletion is complete by following the steps below.

  4. The CLI outputs the commands to delete the OpenID Connect (OIDC) provider and Operator IAM roles resources that were created. Wait until the cluster finishes deleting before deleting these resources. Perform a quick status check by running the following command:

    $ rosa list clusters
  5. Once the cluster is deleted, delete the OIDC provider by running the following command:

    $ rosa delete oidc-provider -c <clusterID> --mode auto --yes
  6. Delete the Operator IAM roles by running the following command:

    $ rosa delete operator-roles -c <clusterID> --mode auto --yes
    Note

    This command requires the cluster ID and not the cluster name.

  7. Only remove the remaining account roles if they are no longer needed by other clusters in the same account. If you want to create other ROSA clusters in this account, do not perform this step.

    To delete the account roles, you need to know the prefix used when creating them. The default is "ManagedOpenShift" unless you specified otherwise.

    Delete the account roles by running the following command:

    $ rosa delete account-roles --prefix <prefix> --mode auto --yes
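
If you are not sure which prefix was used, you can list the account roles in your AWS account first and read the prefix from their names:

    $ rosa list account-roles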

1.9.2. Deleting a ROSA cluster using the UI

  1. Log in to the OpenShift Cluster Manager, and locate the cluster you want to delete.
  2. Click the three dots to the right of the cluster.

  3. In the dropdown menu, click Delete cluster.

  4. Enter the name of the cluster to confirm deletion, and click Delete.

1.10. Obtaining support

Finding the right help when you need it is important. These are some of the resources at your disposal when you need assistance.

1.10.1. Adding support contacts

You can add additional email addresses for communications about your cluster.

  1. On the Red Hat OpenShift Cluster Manager user interface (UI), click Cluster List in the side navigation tabs.
  2. Select the cluster that needs support.
  3. Click the Support tab.
  4. Click Add notification contact, and enter the additional email addresses.

1.10.2. Contacting Red Hat for support using the UI

  1. On the OpenShift Cluster Manager UI, click the Support tab.
  2. Click Open support case.

1.10.3. Contacting Red Hat for support using the support page

  1. Go to the Red Hat support page.
  2. Click Open a new Case.

  3. Log in to your Red Hat account.
  4. Select the reason for contacting support.

  5. Select Red Hat OpenShift Service on AWS.

  6. Click Continue.
  7. Enter a summary of the issue and the details of your request. Upload any files, logs, and screenshots. The more details you provide, the better Red Hat support can help your case.

    Note

    Relevant suggestions that might help with your issue will appear at the bottom of this page.

  8. Click Continue.
  9. Answer the questions in the new fields.
  10. Click Continue.
  11. Enter the following information about your case:

    1. Support level: Premium
    2. Severity: Review the Red Hat Support Severity Level Definitions to choose the correct one.
    3. Group: If this is related to a few other cases, you can select the corresponding group.
    4. Language
    5. Send notifications: Add any additional email addresses to keep notified of activity.
    6. Red Hat associates: If you are working with anyone from Red Hat and want to keep them in the loop, you can enter their email address here.
    7. Alternate Case ID: If you want to attach your own ID to it, you can enter it here.
  12. Click Continue.
  13. On the review screen make sure you select the correct cluster ID that you are contacting support about.

  14. Click Submit.
  15. You will be contacted according to the response time commitment for the severity level that you indicated.

Chapter 2. Deploying an application workshop

2.1. Workshop overview

2.1.1. Introduction

After successfully provisioning your cluster, follow this workshop to deploy an application on it to understand the concepts of deploying and operating container-based applications.

Workshop objectives

  • Deploy a Node.js-based application by using Source-to-Image (S2I) and Kubernetes deployment objects
  • Set up a continuous delivery (CD) pipeline to automatically push source code changes
  • Experience self-healing applications
  • Explore configuration management through ConfigMaps, secrets, and environment variables
  • Use persistent storage to share data across pod restarts
  • Explore networking in Kubernetes and applications
  • Familiarize yourself with ROSA and Kubernetes functionality
  • Automatically scale pods based on loads from the Horizontal Pod Autoscaler (HPA)

Prerequisites

  • A ROSA cluster provisioned in the previous workshop

2.1.2. About the OSToy application

OSToy is a Node.js application that deploys to a ROSA cluster to help explore the functionality of Kubernetes.

This application has a user interface where you can:

  • Write messages to the log (stdout / stderr)
  • Intentionally crash the application to view self-healing
  • Toggle a liveness probe and monitor OpenShift behavior
  • Read ConfigMaps, secrets, and environment variables
  • Read and write files when connected to shared storage
  • Check network connectivity, intra-cluster DNS, and intra-communication with the included microservice
  • Increase the load to view automatic scaling of the pods by using the HPA
2.1.2.1. OSToy application diagram
2.1.2.2. Understanding the OSToy UI
  1. Pod name
  2. Home: Application home page
  3. Persistent Storage: Writes data to the persistent volume bound to the application
  4. Config Maps: Shows ConfigMaps available to the application and the key:value pairs
  5. Secrets: Shows secrets available to the application and the key:value pairs
  6. ENV Variables: Shows environment variables available to the application
  7. Networking: Networking tools
  8. Pod Auto Scaling: Increase the load of the pods and test the Horizontal Pod Autoscaler (HPA)
  9. ACK S3: Integrate with AWS S3 to read and write objects to a bucket
  10. About: Application information
2.1.2.3. Lab resources
  • OSToy application source code
  • OSToy front-end container image
  • OSToy microservice container image
  • Deployment definition YAML files:

    ostoy-frontend-deployment.yaml

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ostoy-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ostoy-frontend
      labels:
        app: ostoy
    spec:
      selector:
        matchLabels:
          app: ostoy-frontend
      strategy:
        type: Recreate
      replicas: 1
      template:
        metadata:
          labels:
            app: ostoy-frontend
        spec:
          # Uncomment to use with ACK portion of the workshop
          # If you chose a different service account name please replace it.
          # serviceAccount: ostoy-sa
          containers:
          - name: ostoy-frontend
            securityContext:
              allowPrivilegeEscalation: false
              runAsNonRoot: true
              seccompProfile:
                type: RuntimeDefault
              capabilities:
                drop:
                - ALL
            image: quay.io/ostoylab/ostoy-frontend:1.6.0
            imagePullPolicy: IfNotPresent
            ports:
            - name: ostoy-port
              containerPort: 8080
            resources:
              requests:
                memory: "256Mi"
                cpu: "100m"
              limits:
                memory: "512Mi"
                cpu: "200m"
            volumeMounts:
            - name: configvol
              mountPath: /var/config
            - name: secretvol
              mountPath: /var/secret
            - name: datavol
              mountPath: /var/demo_files
            livenessProbe:
              httpGet:
                path: /health
                port: 8080
              initialDelaySeconds: 10
              periodSeconds: 5
            env:
            - name: ENV_TOY_SECRET
              valueFrom:
                secretKeyRef:
                  name: ostoy-secret-env
                  key: ENV_TOY_SECRET
            - name: MICROSERVICE_NAME
              value: OSTOY_MICROSERVICE_SVC
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumes:
            - name: configvol
              configMap:
                name: ostoy-configmap-files
            - name: secretvol
              secret:
                defaultMode: 420
                secretName: ostoy-secret
            - name: datavol
              persistentVolumeClaim:
                claimName: ostoy-pvc
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: ostoy-frontend-svc
      labels:
        app: ostoy-frontend
    spec:
      type: ClusterIP
      ports:
        - port: 8080
          targetPort: ostoy-port
          protocol: TCP
          name: ostoy
      selector:
        app: ostoy-frontend
    ---
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: ostoy-route
    spec:
      to:
        kind: Service
        name: ostoy-frontend-svc
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: ostoy-secret-env
    type: Opaque
    data:
      ENV_TOY_SECRET: VGhpcyBpcyBhIHRlc3Q=
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: ostoy-configmap-files
    data:
      config.json:  '{ "default": "123" }'
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: ostoy-secret
    data:
      secret.txt: VVNFUk5BTUU9bXlfdXNlcgpQQVNTV09SRD1AT3RCbCVYQXAhIzYzMlk1RndDQE1UUWsKU01UUD1sb2NhbGhvc3QKU01UUF9QT1JUPTI1
    type: Opaque

    ostoy-microservice-deployment.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ostoy-microservice
      labels:
        app: ostoy
    spec:
      selector:
        matchLabels:
          app: ostoy-microservice
      replicas: 1
      template:
        metadata:
          labels:
            app: ostoy-microservice
        spec:
          containers:
          - name: ostoy-microservice
            securityContext:
              allowPrivilegeEscalation: false
              runAsNonRoot: true
              seccompProfile:
                type: RuntimeDefault
              capabilities:
                drop:
                - ALL
            image: quay.io/ostoylab/ostoy-microservice:1.5.0
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 8080
              protocol: TCP
            resources:
              requests:
                memory: "128Mi"
                cpu: "50m"
              limits:
                memory: "256Mi"
                cpu: "100m"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: ostoy-microservice-svc
      labels:
        app: ostoy-microservice
    spec:
      type: ClusterIP
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        app: ostoy-microservice

  • S3 bucket manifest for ACK S3

    s3-bucket.yaml

    apiVersion: s3.services.k8s.aws/v1alpha1
    kind: Bucket
    metadata:
      name: ostoy-bucket
      namespace: ostoy
    spec:
      name: ostoy-bucket

Note

To simplify deployment of the OSToy application, all of the objects required in the above deployment manifests are grouped together. For a typical enterprise deployment, a separate manifest file for each Kubernetes object is recommended.

2.2. Deploying the OSToy application with Kubernetes

Deploying an application involves creating a container image, storing it in an image repository, and defining a Deployment object that uses that image.

Deploying an application involves the following steps:

  • Create the images for the front-end and back-end microservice containers
  • Store the container images in an image repository
  • Create the Kubernetes Deployment object for the application
  • Deploy the application
Note

This workshop focuses on application deployment, so you apply remote manifest files that use existing images.

2.2.1. Retrieving the login command

Procedure

  1. Confirm you are logged in to the command-line interface (CLI) by running the following command:

    rosa whoami

    If you are logged in to the command-line interface, skip to "Creating a new project". If you are not logged in to the command-line interface, continue this procedure.

  2. Access your cluster with the web console.
  3. Click the dropdown arrow next to your login name in the upper right corner, and select Copy Login Command.

    A new tab opens.

  4. Select your authentication method.
  5. Click Display Token.
  6. Copy the command under Log in with this token.
  7. From your terminal, paste and run the copied command. If the login is successful, you will see the following confirmation message:

    $ oc login --token=<your_token> --server=https://api.osd4-demo.abc1.p1.openshiftapps.com:6443
    Logged into "https://api.myrosacluster.abcd.p1.openshiftapps.com:6443" as "rosa-user" using the token provided.
    
    You don't have any projects. You can try to create a new project, by running
    
    oc new-project <project name>

2.2.2. Creating a new project

Use your preferred interface to create a new project.

2.2.2.1. Creating a new project using the CLI

Procedure

  • Create a new project named ostoy in your cluster by running the following command:

    $ oc new-project ostoy

    Example output

    Now using project "ostoy" on server "https://api.myrosacluster.abcd.p1.openshiftapps.com:6443".

    • Optional: Create a unique project name by running the following command:

      $ oc new-project ostoy-$(uuidgen | cut -d - -f 2 | tr '[:upper:]' '[:lower:]')
2.2.2.2. Creating a new project using the web console

Procedure

  1. From the web console, click Home → Projects.
  2. On the Projects page, click Create Project.

  3. In the Create Project box, enter a project name in the Name field.
  4. Click Create.

2.2.3. Deploying the back-end microservice

The microservice serves internal web requests and returns a JSON object containing the current hostname and a randomly generated color string.

Procedure

  • Deploy the microservice by running the following command:

    $ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-microservice-deployment.yaml

    Example output

    $ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-microservice-deployment.yaml
    deployment.apps/ostoy-microservice created
    service/ostoy-microservice-svc created

2.2.4. Deploying the front-end microservice

The front-end deployment uses the Node.js front-end for the application and additional Kubernetes objects.

The front-end deployment defines the following objects:

  • Persistent volume claim
  • Deployment object
  • Service
  • Route
  • ConfigMaps
  • Secrets

Procedure

  • Deploy the application front-end and create the objects by running the following command:

    $ oc apply -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/ostoy-frontend-deployment.yaml

    Example output

    persistentvolumeclaim/ostoy-pvc created
    deployment.apps/ostoy-frontend created
    service/ostoy-frontend-svc created
    route.route.openshift.io/ostoy-route created
    configmap/ostoy-configmap-env created
    secret/ostoy-secret-env created
    configmap/ostoy-configmap-files created
    secret/ostoy-secret created

    All objects should be created successfully.

2.2.5. Obtaining the route to your application

Obtain the route to access the application.

Procedure

  • Get the route to your application by running the following command:

    $ oc get route

    Example output

    NAME          HOST/PORT                                                 PATH   SERVICES             PORT    TERMINATION   WILDCARD
    ostoy-route   ostoy-route-ostoy.apps.<your-rosa-cluster>.abcd.p1.openshiftapps.com          ostoy-frontend-svc   <all>                 None

2.2.6. Viewing the application

Procedure

  1. Copy the ostoy-route-ostoy.apps.<your-rosa-cluster>.abcd.p1.openshiftapps.com URL output from the previous step.
  2. Paste the copied URL into your web browser and press Enter. You should see the homepage of your application. If the page does not load, make sure you use http and not https.
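
If you prefer the CLI, the following sketch prints only the hostname of the route created earlier, assuming the ostoy project:

    $ oc get route ostoy-route -n ostoy -o jsonpath='{.spec.host}'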

2.3. Health check

See how Kubernetes responds to pod failure by intentionally crashing your pod and making it unresponsive to the Kubernetes liveness probes.

2.3.1. Preparing your desktop

Procedure

  • From the OpenShift web console, select Workloads > Deployments > ostoy-frontend to view the OSToy deployment.

2.3.2. Crashing the pod

Procedure

  1. From the OSToy application web console, click Home in the left menu, and enter a message in the Crash Pod box, for example, This is goodbye!.
  2. Click Crash Pod.

    The pod crashes and Kubernetes restarts the pod.

2.3.3. Viewing the revived pod

Procedure

  • From the OpenShift web console, quickly switch to the Deployments screen. You will see that the pod turns yellow, which means it is down. It should quickly revive and turn blue. The revival process happens quickly.

Verification

  1. From the web console, click Pods > ostoy-frontend-xxxxxxx-xxxx to change to the pods screen.

  2. Click the Events subtab, and verify that the container crashed and restarted.

2.3.4. Making the application malfunction

Procedure

Keep the pod events page from the previous section open in a separate tab.

  1. From the OSToy application, click Toggle Health in the Toggle Health Status tile. Watch Current Health switch to I’m not feeling all that well.

Verification

After you make the application malfunction, the application stops responding with a 200 HTTP code. After 3 consecutive failures, Kubernetes stops the pod and restarts it.

  • From the web console, switch back to the pod events page to see that the liveness probe failed and the pod restarted.

The following image shows an example of what you will see on your pod events page.

A. The pod has three consecutive failures.

B. Kubernetes stops the pod.

C. Kubernetes restarts the pod.
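
This restart behavior is driven by the liveness probe defined in the ostoy-frontend deployment shown in the lab resources. To inspect the probe on the running cluster, a minimal sketch:

    $ oc get deployment ostoy-frontend -n ostoy -o jsonpath='{.spec.template.spec.containers[0].livenessProbe}'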

2.4. Persistent volumes for cluster storage

Red Hat OpenShift Service on AWS (classic architecture) and Red Hat OpenShift Service on AWS (ROSA) support storing persistent volumes with either Amazon Web Services (AWS) Elastic Block Store (EBS) or AWS Elastic File System (EFS).

2.4.1. Using persistent volumes

Use the following procedures to create a file, store it on a persistent volume in your cluster, and confirm that it still exists after pod failure and re-creation.

2.4.1.1. Viewing a persistent volume claim

Procedure

  1. Navigate to the cluster’s OpenShift web console.
  2. Click Storage in the left menu, then click PersistentVolumeClaims to see a list of all the persistent volume claims.
  3. Click a persistent volume claim to see the size, access mode, storage class, and other additional claim details.

    Note

    The access mode is ReadWriteOnce (RWO). This means that the volume can only be mounted to one node and the pod or pods can read and write to the volume.
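
You can view the same claim details from the CLI. The following is a minimal sketch, assuming the ostoy project from the previous module:

    $ oc get pvc ostoy-pvc -n ostoy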

2.4.1.2. Storing your file

Procedure

  1. In the OSToy app console, click Persistent Storage in the left menu.
  2. In the Filename box, enter a file name with a .txt extension, for example test-pv.txt.
  3. In the File contents box, enter a sentence of text, for example OpenShift is the greatest thing since sliced bread!.
  4. Click Create file.

  5. Scroll to Existing files on the OSToy app console.
  6. Click the file you created to see the file name and contents.

2.4.1.3. Crashing the pod

Procedure

  1. On the OSToy app console, click Home in the left menu.
  2. Click Crash pod.
2.4.1.4. Confirming persistent storage

Procedure

  1. Wait for the pod to re-create.
  2. On the OSToy app console, click Persistent Storage in the left menu.
  3. Find the file you created, and open it to view and confirm the contents.

Verification

The deployment YAML file shows that we mounted the directory /var/demo_files to our persistent volume claim.

  1. Retrieve the name of your front-end pod by running the following command:

    $ oc get pods
  2. Start a remote shell session in your container by running the following command:

    $ oc rsh <pod_name>
  3. Go to the directory by running the following command:

    $ cd /var/demo_files
  4. Optional: See all the files you created by running the following command:

    $ ls
  5. Open the file to view the contents by running the following command:

    $ cat test-pv.txt
  6. Verify that the output is the text you entered in the OSToy app console.

    Example terminal

    $ oc get pods
    NAME                                  READY     STATUS    RESTARTS   AGE
    ostoy-frontend-5fc8d486dc-wsw24       1/1       Running   0          18m
    ostoy-microservice-6cf764974f-hx4qm   1/1       Running   0          18m
    
    $ oc rsh ostoy-frontend-5fc8d486dc-wsw24
    
    $ cd /var/demo_files/
    
    $ ls
    lost+found   test-pv.txt
    
    $ cat test-pv.txt
    OpenShift is the greatest thing since sliced bread!

2.4.1.5. Ending the session

Procedure

  • Type exit in your terminal to quit the session and return to the CLI.

2.5. ConfigMaps, secrets, and environment variables

This tutorial shows how to configure the OSToy application by using config maps, secrets, and environment variables.

2.5.1. Configuration using ConfigMaps

Config maps allow you to decouple configuration artifacts from container image content to keep containerized applications portable.

Procedure

  • In the OSToy app, in the left menu, click Config Maps, displaying the contents of the config map available to the OSToy application. The code snippet shows an example of a config map configuration:

    Example output

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: ostoy-configmap-files
    data:
      config.json:  '{ "default": "123" }'

2.5.2. Configuration using secrets

Kubernetes Secret objects allow you to store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. Putting this information in a secret is safer and more flexible than putting it in plain text into a pod definition or a container image.

Procedure

  • In the OSToy app, in the left menu, click Secrets, displaying the contents of the secrets available to the OSToy application. The code snippet shows an example of a secret configuration:

    Example output

    USERNAME=my_user
    PASSWORD=@OtBl%XAp!#632Y5FwC@MTQk
    SMTP=localhost
    SMTP_PORT=25
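
The values are stored base64-encoded in the Secret object. The following sketch decodes the secret.txt key of the ostoy-secret shown in the lab resources, assuming the ostoy project:

    $ oc get secret ostoy-secret -n ostoy -o jsonpath='{.data.secret\.txt}' | base64 -d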

2.5.3. Configuration using environment variables

Using environment variables is an easy way to change application behavior without requiring code changes. It allows different deployments of the same application to potentially behave differently based on the environment variables. Red Hat OpenShift Service on AWS makes it simple to set, view, and update environment variables for pods or deployments.

Procedure

  • In the OSToy app, in the left menu, click ENV Variables to display the environment variables available to the OSToy application. The code snippet shows an example of an environment variable configuration:

    Example output

    {
      "npm_config_local_prefix": "/opt/app-root/src",
      "STI_SCRIPTS_PATH": "/usr/libexec/s2i",
      "npm_package_version": "1.7.0",
      "APP_ROOT": "/opt/app-root",
      "NPM_CONFIG_PREFIX": "/opt/app-root/src/.npm-global",
      "OSTOY_MICROSERVICE_PORT_8080_TCP_PORT": "8080",
      "NODE": "/usr/bin/node",
      "LD_PRELOAD": "libnss_wrapper.so",
      "KUBERNETES_SERVICE_HOST": "172.30.0.1",
      "OSTOY_MICROSERVICE_PORT": "tcp://172.30.60.255:8080",
      "OSTOY_PORT": "tcp://172.30.152.25:8080",
      "npm_package_name": "ostoy",
      "OSTOY_SERVICE_PORT_8080_TCP": "8080",
      "_": "/usr/bin/node"
      "ENV_TOY_CONFIGMAP": "ostoy-configmap -env"
    }
    Copy to Clipboard Toggle word wrap
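
A minimal sketch of managing these variables from the CLI with oc set env. The deployment name (ostoy-frontend) and the ENV_TOY_DEMO variable are illustrative assumptions:

    # List the environment variables currently set on the deployment
    $ oc set env deployment/ostoy-frontend --list
    # Set a hypothetical variable; this triggers a new rollout
    $ oc set env deployment/ostoy-frontend ENV_TOY_DEMO=hello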

2.6. Networking

The OSToy application uses intra-cluster networking to separate functions by using microservices.

In this workshop, there are at least two separate pods, each with its own service. One pod functions as the front end web application with a service and a publicly accessible route. The other pod functions as the backend microservice with a service object so that the front end pod can communicate with the microservice.

If there is more than one pod, communication occurs across the pods. The microservice is not accessible from outside the cluster or from other namespaces or projects. The purpose of the microservice is to serve internal web requests and return a JSON object containing the current hostname (the pod’s name) and a randomly generated color string. This color string displays a box with that color on the OSToy application web console.

For more information about the networking limitations, see About network policy.

2.6.1. Intra-cluster networking

You can view your networking configurations in your OSToy application.

Procedure

  1. In the OSToy application web console, click Networking in the left menu.
  2. Review the networking configuration. The tile "Hostname Lookup" illustrates how the service name created for a pod translates into an internal ClusterIP address.

  3. In the "Hostname Lookup" tile, enter the name of the service created for the microservice, following the format <service_name>.<namespace>.svc.cluster.local. You can find the service name in the service definition of ostoy-microservice.yaml, or by running the following command:

    $ oc get service <name_of_service> -o yaml
    Copy to Clipboard Toggle word wrap

    Example output

    apiVersion: v1
    kind: Service
    metadata:
      name: ostoy-microservice-svc
      labels:
        app: ostoy-microservice
    spec:
      type: ClusterIP
      ports:
        - port: 8080
          targetPort: 8080
          protocol: TCP
      selector:
        app: ostoy-microservice
    Copy to Clipboard Toggle word wrap

    In this example, the full hostname is ostoy-microservice-svc.ostoy.svc.cluster.local.

  4. An IP address is returned. In this example it is 172.30.165.246. This is the intra-cluster IP address, which is only accessible from within the cluster.
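
    To confirm this resolution from inside the cluster, you can open a shell in the front-end pod and query the microservice through its service hostname. This sketch assumes the port from the service definition above and that getent and curl are available in the container image, which depends on the base image:

    # Open a shell in the front-end pod (substitute your pod name)
    $ oc rsh <frontend_pod_name>
    # Resolve the service hostname to its ClusterIP
    $ getent hosts ostoy-microservice-svc.ostoy.svc.cluster.local
    # Request the JSON response from the microservice
    $ curl http://ostoy-microservice-svc.ostoy.svc.cluster.local:8080/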

2.7. Scaling an application

Manually or automatically scale your pods by using the Horizontal Pod Autoscaler (HPA). You can also scale your cluster nodes.

Prerequisites

  • An active ROSA cluster
  • A deployed OSToy application

2.7.1. Manual pod scaling

You can manually scale your application’s pods by using one of the following methods:

  • Changing your ReplicaSet or deployment definition
  • Using the command line
  • Using the web console

This workshop starts by using only one pod for the microservice. By defining a replica count of 1 in your deployment definition, the underlying ReplicaSet strives to keep one pod alive. You then learn how to define pod autoscaling by using the Horizontal Pod Autoscaler (HPA), which scales out more pods based on load when necessary.

Procedure

  1. In the OSToy app, click the Networking tab in the navigational menu.
  2. In the "Intra-cluster Communication" section, locate the box that randomly changes colors. Inside the box, you see the microservice’s pod name. There is only one box in this example because there is only one microservice pod.

  3. Confirm that there is only one pod running for the microservice by running the following command:

    $ oc get pods
    Copy to Clipboard Toggle word wrap

    Example output

    NAME                                  READY     STATUS    RESTARTS   AGE
    ostoy-frontend-679cb85695-5cn7x       1/1       Running   0          1h
    ostoy-microservice-86b4c6f559-p594d   1/1       Running   0          1h
    Copy to Clipboard Toggle word wrap

  4. Download the ostoy-microservice-deployment.yaml and save it to your local machine.
  5. Change the deployment definition to three pods instead of one by using the following example:

    spec:
      selector:
        matchLabels:
          app: ostoy-microservice
      replicas: 3
    Copy to Clipboard Toggle word wrap
  6. Apply the replica changes by running the following command:

    $ oc apply -f ostoy-microservice-deployment.yaml
    Copy to Clipboard Toggle word wrap
    Note

    You can also edit the ostoy-microservice-deployment.yaml file in the OpenShift Web Console by going to the Workloads > Deployments > ostoy-microservice > YAML tab.

  7. Confirm that there are now 3 pods by running the following command:

    $ oc get pods
    Copy to Clipboard Toggle word wrap

    The output shows that there are now 3 pods for the microservice instead of only one.

    Example output

    NAME                                  READY   STATUS    RESTARTS   AGE
    ostoy-frontend-5fbcc7d9-rzlgz         1/1     Running   0          26m
    ostoy-microservice-6666dcf455-2lcv4   1/1     Running   0          81s
    ostoy-microservice-6666dcf455-5z56w   1/1     Running   0          81s
    ostoy-microservice-6666dcf455-tqzmn   1/1     Running   0          26m
    Copy to Clipboard Toggle word wrap

  8. Scale the application by using the command-line interface (CLI) or by using the web user interface (UI):

    • In the CLI, decrease the number of pods from 3 to 2 by running the following command:

      $ oc scale deployment ostoy-microservice --replicas=2
      Copy to Clipboard Toggle word wrap
    • From the navigational menu of the OpenShift web console UI, click Workloads > Deployments > ostoy-microservice.
    • Locate the blue circle with a "3 Pod" label in the middle.
    • Select the arrows next to the circle to scale the number of pods. Select the down arrow to scale down to 2 pods.

Verification

Check your pod counts by using the CLI, the web UI, or the OSToy application:

  • From the CLI, confirm that you are using two pods for the microservice by running the following command:

    $ oc get pods
    Copy to Clipboard Toggle word wrap

    Example output

    NAME                                  READY   STATUS    RESTARTS   AGE
    ostoy-frontend-5fbcc7d9-rzlgz         1/1     Running   0          75m
    ostoy-microservice-6666dcf455-2lcv4   1/1     Running   0          50m
    ostoy-microservice-6666dcf455-tqzmn   1/1     Running   0          75m
    Copy to Clipboard Toggle word wrap

  • In the web UI, select Workloads > Deployments > ostoy-microservice.

  • You can also confirm that there are two pods by selecting Networking in the navigation menu of the OSToy application. There should be two colored boxes for the two pods.

2.7.2. Pod autoscaling

Red Hat OpenShift Service on AWS offers a Horizontal Pod Autoscaler (HPA). The HPA uses metrics to increase or decrease the number of pods when necessary.

Procedure

  1. From the navigational menu of the web UI, select Pod Auto Scaling.

  2. Create the HPA by running the following command:

    $ oc autoscale deployment/ostoy-microservice --cpu-percent=80 --min=1 --max=10
    Copy to Clipboard Toggle word wrap

    This command creates an HPA that maintains between 1 and 10 replicas of the pods controlled by the ostoy-microservice deployment. The HPA increases and decreases the number of replicas to keep the average CPU use across all pods at 80% of the requested CPU, which corresponds to 40 millicores. An equivalent HorizontalPodAutoscaler manifest is sketched after this procedure.

  3. On the Pod Auto Scaling > Horizontal Pod Autoscaling page, select Increase the load.

    Important

    Because increasing the load generates CPU-intensive calculations, the page can become unresponsive. This is expected. Click Increase the load only once. For more information about the process, see the microservice’s GitHub repository.

    After a few minutes, the new pods display on the page represented by colored boxes.

    Note

    The page can experience lag.
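
For reference, the oc autoscale command above is roughly equivalent to creating a HorizontalPodAutoscaler manifest such as the following sketch. This is an illustrative assumption of what the generated object looks like; the exact API version and defaults on your cluster may differ:

    # Sketch of an HPA equivalent to the oc autoscale command (assumed API version)
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: ostoy-microservice
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: ostoy-microservice
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 80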

Verification

Check your pod counts with one of the following methods:

  • In the OSToy application’s web UI, see the remote pods box:

    Because there is initially only one pod, increasing the workload should trigger an increase in the number of pods.

  • In the CLI, run the following command:

    $ oc get pods --field-selector=status.phase=Running | grep microservice
    Copy to Clipboard Toggle word wrap

    Example output

    ostoy-microservice-79894f6945-cdmbd   1/1     Running   0          3m14s
    ostoy-microservice-79894f6945-mgwk7   1/1     Running   0          4h24m
    ostoy-microservice-79894f6945-q925d   1/1     Running   0          3m14s
    Copy to Clipboard Toggle word wrap

  • You can also verify autoscaling from the OpenShift Cluster Manager:

    1. In the OpenShift web console navigational menu, click Observe > Dashboards.
    2. In the dashboard, select Kubernetes / Compute Resources / Namespace (Pods) and your namespace ostoy.

    3. A graph appears showing your resource usage across CPU and memory. The top graph shows recent CPU consumption per pod and the lower graph indicates memory usage. The following lists the callouts in the graph:

      1. The load increased (A).
      2. Two new pods were created (B and C).
      3. The thickness of each graph represents the CPU consumption and indicates which pods handled more load.
      4. The load decreased (D), and the pods were deleted.

2.7.3. Node autoscaling

Red Hat OpenShift Service on AWS allows you to use node autoscaling. In this scenario, you will create a new project with a job that has a large workload that the cluster cannot handle. With autoscaling enabled, when the load is larger than your current capacity, the cluster will automatically create new nodes to handle the load.

Prerequisites

  • Autoscaling is enabled on your machine pools.
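
If autoscaling is not yet enabled, the following sketch shows one way to enable it on an existing machine pool with the rosa CLI. The cluster name, machine pool ID, and replica limits are illustrative assumptions; list your machine pools first to find the actual values, and note that available flags can vary between rosa CLI versions:

    # List machine pools to find the machine pool ID (assumed cluster name)
    $ rosa list machinepools --cluster=<cluster_name>
    # Enable autoscaling with assumed replica limits
    $ rosa edit machinepool --cluster=<cluster_name> \
        --enable-autoscaling \
        --min-replicas=2 \
        --max-replicas=4 \
        <machinepool_id>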

Procedure

  1. Create a new project called autoscale-ex by running the following command:

    $ oc new-project autoscale-ex
    Copy to Clipboard Toggle word wrap
  2. Create the job by running the following command:

    $ oc create -f https://raw.githubusercontent.com/openshift-cs/rosaworkshop/master/rosa-workshop/ostoy/yaml/job-work-queue.yaml
    Copy to Clipboard Toggle word wrap
  3. After a few minutes, run the following command to see the pods:

    $ oc get pods
    Copy to Clipboard Toggle word wrap

    Example output

    NAME                     READY   STATUS    RESTARTS   AGE
    work-queue-5x2nq-24xxn   0/1     Pending   0          10s
    work-queue-5x2nq-57zpt   0/1     Pending   0          10s
    work-queue-5x2nq-58bvs   0/1     Pending   0          10s
    work-queue-5x2nq-6c5tl   1/1     Running   0          10s
    work-queue-5x2nq-7b84p   0/1     Pending   0          10s
    work-queue-5x2nq-7hktm   0/1     Pending   0          10s
    work-queue-5x2nq-7md52   0/1     Pending   0          10s
    work-queue-5x2nq-7qgmp   0/1     Pending   0          10s
    work-queue-5x2nq-8279r   0/1     Pending   0          10s
    work-queue-5x2nq-8rkj2   0/1     Pending   0          10s
    work-queue-5x2nq-96cdl   0/1     Pending   0          10s
    work-queue-5x2nq-96tfr   0/1     Pending   0          10s
    Copy to Clipboard Toggle word wrap

  4. Because there are many pods in a Pending state, the autoscaler creates more nodes in your machine pool. Allow time for these worker nodes to be created.
  5. After a few minutes, use the following command to see how many worker nodes you now have:

    $ oc get nodes
    Copy to Clipboard Toggle word wrap

    Example output

    NAME                                         STATUS   ROLES          AGE     VERSION
    ip-10-0-138-106.us-west-2.compute.internal   Ready    infra,worker   22h     v1.23.5+3afdacb
    ip-10-0-153-68.us-west-2.compute.internal    Ready    worker         2m12s   v1.23.5+3afdacb
    ip-10-0-165-183.us-west-2.compute.internal   Ready    worker         2m8s    v1.23.5+3afdacb
    ip-10-0-176-123.us-west-2.compute.internal   Ready    infra,worker   22h     v1.23.5+3afdacb
    ip-10-0-195-210.us-west-2.compute.internal   Ready    master         23h     v1.23.5+3afdacb
    ip-10-0-196-84.us-west-2.compute.internal    Ready    master         23h     v1.23.5+3afdacb
    ip-10-0-203-104.us-west-2.compute.internal   Ready    worker         2m6s    v1.23.5+3afdacb
    ip-10-0-217-202.us-west-2.compute.internal   Ready    master         23h     v1.23.5+3afdacb
    ip-10-0-225-141.us-west-2.compute.internal   Ready    worker         23h     v1.23.5+3afdacb
    ip-10-0-231-245.us-west-2.compute.internal   Ready    worker         2m11s   v1.23.5+3afdacb
    ip-10-0-245-27.us-west-2.compute.internal    Ready    worker         2m8s    v1.23.5+3afdacb
    ip-10-0-245-7.us-west-2.compute.internal     Ready    worker         23h     v1.23.5+3afdacb
    Copy to Clipboard Toggle word wrap

    You can see that worker nodes were automatically created to handle the workload.

  6. Return to the OSToy application by entering the following command:

    $ oc project ostoy
    Copy to Clipboard Toggle word wrap

2.8. S2I deployments

The integrated Source-to-Image (S2I) builder is one method to deploy applications in OpenShift. S2I is a tool for building reproducible, Docker-formatted container images. For more information, see OpenShift concepts.

Prerequisites

  • A ROSA cluster

2.8.1. Retrieving your login command

Procedure

  1. Confirm you are logged in to the command-line interface (CLI) by running the following command:

    $ rosa whoami
    Copy to Clipboard Toggle word wrap

    If you are logged in to the command-line interface, skip to "Creating a new project". If you are not logged in to the command-line interface, continue this procedure.

  2. If you are not logged in to the command-line interface (CLI), in OpenShift Cluster Manager, click the dropdown arrow next to your name in the upper-right and select Copy Login Command.

  3. A new tab opens. Enter your username and password, and select the authentication method.
  4. Click Display Token.
  5. Copy the command under "Log in with this token".
  6. Log in to the CLI by running the copied command in your terminal.

    Example input

    $ oc login --token=RYhFlXXXXXXXXXXXX --server=https://api.osd4-demo.abc1.p1.openshiftapps.com:6443
    Copy to Clipboard Toggle word wrap

    Example output

    Logged into "https://api.myrosacluster.abcd.p1.openshiftapps.com:6443" as "rosa-user" using the token provided.
    
    You don't have any projects. You can try to create a new project, by running
    
    oc new-project <project name>
    Copy to Clipboard Toggle word wrap

2.8.2. Creating a new project

  • Create a new project from the CLI by running the following command:

    $ oc new-project ostoy-s2i
    Copy to Clipboard Toggle word wrap

2.8.3. Forking the OSToy repository

To trigger automated builds when the source code changes, you must set up a GitHub webhook. The webhook triggers S2I builds when you push code to your GitHub repository. To set up the webhook, you must first fork the repository.

Important

Replace <UserName> with your own GitHub username for the following URLs in this guide.

2.8.4. Using S2I to deploy OSToy on your cluster

Procedure

  1. Add a secret to OpenShift.

    This example emulates a .env file. Files are easily moved directly into an OpenShift environment and can even be renamed in the secret.

    • Run the following command, replacing <UserName> with your GitHub username:

      $ oc create -f https://raw.githubusercontent.com/<UserName>/ostoy/master/deployment/yaml/secret.yaml
      Copy to Clipboard Toggle word wrap
  2. Add a ConfigMap to OpenShift.

    This example emulates an HAProxy config file, which is typically used for overriding default configurations in an OpenShift application. Files can be renamed in the ConfigMap.

    • Run the following command, replacing <UserName> with your GitHub username:

      $ oc create -f https://raw.githubusercontent.com/<UserName>/ostoy/master/deployment/yaml/configmap.yaml
      Copy to Clipboard Toggle word wrap
  3. Deploy the microservice.

    You must deploy the microservice to ensure that the service environment variables are available from the UI application.

    --context-dir builds the application defined in the microservice directory in the Git repository. The app label ensures the user interface (UI) application and microservice are both grouped in the OpenShift UI.

    • Run the following command to create the microservice, replacing <UserName> with your GitHub username:

      $ oc new-app https://github.com/<UserName>/ostoy \
          --context-dir=microservice \
          --name=ostoy-microservice \
          --labels=app=ostoy
      Copy to Clipboard Toggle word wrap

      Example output

      --> Creating resources with label app=ostoy ...
          imagestream.image.openshift.io "ostoy-microservice" created
          buildconfig.build.openshift.io "ostoy-microservice" created
          deployment.apps "ostoy-microservice" created
          service "ostoy-microservice" created
      --> Success
          Build scheduled, use 'oc logs -f buildconfig/ostoy-microservice' to track its progress.
          Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
           'oc expose service/ostoy-microservice'
          Run 'oc status' to view your app.
      Copy to Clipboard Toggle word wrap

  4. Check the status of the microservice.

    • Check that the microservice was created and is running correctly by running the following command:

      $ oc status
      Copy to Clipboard Toggle word wrap

      Example output

      In project ostoy-s2i on server https://api.myrosacluster.g14t.p1.openshiftapps.com:6443
      
      svc/ostoy-microservice - 172.30.47.74:8080
        dc/ostoy-microservice deploys istag/ostoy-microservice:latest <-
          bc/ostoy-microservice source builds https://github.com/UserName/ostoy on openshift/nodejs:14-ubi8
          deployment #1 deployed 34 seconds ago - 1 pod
      Copy to Clipboard Toggle word wrap

      Wait until you see that the microservice was successfully deployed. You can also check this through the web UI.

  5. Deploy the front end UI.

    The application relies on several environment variables to define external settings.

    • Create the front-end UI application, passing the microservice name as an environment variable, by running the following command:

      $ oc new-app https://github.com/<UserName>/ostoy \
          --env=MICROSERVICE_NAME=OSTOY_MICROSERVICE
      Copy to Clipboard Toggle word wrap

      Example output

      --> Creating resources ...
          imagestream.image.openshift.io "ostoy" created
          buildconfig.build.openshift.io "ostoy" created
          deployment.apps "ostoy" created
          service "ostoy" created
      --> Success
          Build scheduled, use 'oc logs -f buildconfig/ostoy' to track its progress.
          Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
           'oc expose service/ostoy'
          Run 'oc status' to view your app.
      Copy to Clipboard Toggle word wrap

  6. Update the deployment to use a Recreate deployment strategy by running the following command:

    $ oc patch deployment ostoy --type=json -p \
        '[{"op": "replace", "path": "/spec/strategy/type", "value": "Recreate"}, {"op": "remove", "path": "/spec/strategy/rollingUpdate"}]'
    Copy to Clipboard Toggle word wrap
  7. Set a liveness probe.

    Create a liveness probe to ensure the pod restarts if something is wrong in the application.

    • Run the following command:

      $ oc set probe deployment ostoy --liveness --get-url=http://:8080/health
      Copy to Clipboard Toggle word wrap
  8. Attach the secret, ConfigMap, and persistent volume to the deployment.

    1. Run the following command to attach your secret:

      $ oc set volume deployment ostoy --add \
          --secret-name=ostoy-secret \
          --mount-path=/var/secret
      Copy to Clipboard Toggle word wrap
    2. Run the following command to attach your ConfigMap:

      $ oc set volume deployment ostoy --add \
          --configmap-name=ostoy-config \
          -m /var/config
      Copy to Clipboard Toggle word wrap
    3. Run the following command to create and attach your persistent volume:

      $ oc set volume deployment ostoy --add \
          --type=pvc \
          --claim-size=1G \
          -m /var/demo_files
      Copy to Clipboard Toggle word wrap
  9. Expose the UI application as an OpenShift Route.

    • Run the following command to deploy the application as an HTTPS application that uses the included TLS wildcard certificates:

      $ oc create route edge --service=ostoy --insecure-policy=Redirect
      Copy to Clipboard Toggle word wrap
  10. Browse to your application by using one of the following methods:

    • Run the following command to open a web browser with your OSToy application:

      $ python -m webbrowser "$(oc get route ostoy -o template --template='https://{{.spec.host}}')"
      Copy to Clipboard Toggle word wrap
    • Run the following command to get the route for the application and copy and paste the route into your browser:

      $ oc get route
      Copy to Clipboard Toggle word wrap

2.9. Using Source-to-Image (S2I) webhooks for automated deployment

Use a webhook to automatically trigger a build and deployment any time you change the source code. For more information about this process, see Triggering Builds.

Procedure

  1. Obtain the GitHub webhook trigger secret by running the following command:

    $ oc get bc/ostoy-microservice -o=jsonpath='{.spec.triggers..github.secret}'
    Copy to Clipboard Toggle word wrap

    Example output

    `o_3x9M1qoI2Wj_cz1WiK`
    Copy to Clipboard Toggle word wrap

    Important

    You need to use this secret in a later step in this process.

  2. Obtain the GitHub webhook trigger URL from the OSToy buildconfig by running the following command:

    $ oc describe bc/ostoy-microservice
    Copy to Clipboard Toggle word wrap

    Example output

    [...]
    Webhook GitHub:
    	URL:	https://api.demo1234.openshift.com:443/apis/build.openshift.io/v1/namespaces/ostoy-s2i/buildconfigs/ostoy-microservice/webhooks/<secret>/github
    [...]
    Copy to Clipboard Toggle word wrap

  3. In the GitHub webhook URL, replace the <secret> text with the secret you retrieved. Your URL will resemble the following example output:

    Example output

    https://api.demo1234.openshift.com:443/apis/build.openshift.io/v1/namespaces/ostoy-s2i/buildconfigs/ostoy-microservice/webhooks/o_3x9M1qoI2Wj_cz1WiK/github
    Copy to Clipboard Toggle word wrap

  4. Set up the webhook URL in your GitHub repository.

    1. In your repository, click Settings > Webhooks > Add webhook.

    2. Paste the GitHub webhook URL with the Secret included in the "Payload URL" field.
    3. Change the "Content type" to application/json.
    4. Click the Add webhook button.

      You should see a message from GitHub stating that your webhook was successfully configured. Now, whenever you push a change to your GitHub repository, a new build automatically starts, and upon a successful build, a new deployment starts.

  5. Make a change in the source code. Any change automatically triggers a build and deployment. In this example, the colors displayed by the OSToy microservice are randomly selected. To test the configuration, change the code so that the boxes display only grayscale colors.

    1. Go to the source code in your repository at https://github.com/<UserName>/ostoy/blob/master/microservice/app.js.
    2. Edit the file.
    3. Comment out line 8 (containing let randomColor = getRandomColor();).
    4. Uncomment line 9 (containing let randomColor = getRandomGrayScaleColor();).

      7   app.get('/', function(request, response) {
      8   //let randomColor = getRandomColor(); // <-- comment this
      9   let randomColor = getRandomGrayScaleColor(); // <-- uncomment this
      10
      11  response.writeHead(200, {'Content-Type': 'application/json'});
      Copy to Clipboard Toggle word wrap
    5. Enter a message for the update, such as "changed box to grayscale colors".
    6. Click Commit at the bottom to commit the changes to the main branch.
  6. In your cluster’s web UI, click Builds > Builds to determine the status of the build. After this build is completed, the deployment begins. You can also check the status by running oc status in your terminal.

  7. After the deployment has finished, return to the OSToy application in your browser. Access the Networking menu item on the left. The box color is now limited to grayscale colors only.
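
If you prefer to confirm the webhook-triggered build from the CLI, the following sketch uses standard oc commands; the build config name assumes the ostoy-microservice naming used in this workshop:

    # List builds to confirm that the push started a new one
    $ oc get builds
    # Follow the logs of the latest build for the build config
    $ oc logs -f bc/ostoy-microservice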

Legal Notice

Copyright © 2025 Red Hat

OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).

Modified versions must remove all Red Hat trademarks.

Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.

Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.
