Chapter 12. Expanding the cluster

You can expand a cluster installed with the Assisted Installer by adding hosts using the user interface or the API.

12.1. Checking for multi-architecture support

You must check that your cluster can support multiple architectures before you add a node with a different architecture.

Procedure

  1. Log in to the cluster using the CLI.
  2. Check that your cluster uses the multi-architecture release payload by running the following command:

    $ oc adm release info -o json | jq .metadata.metadata

Verification

  • If you see the following output, your cluster supports multiple architectures:

    {
      "release.openshift.io/architecture": "multi"
    }
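
To see which CPU architectures are already present in the cluster, you can also list the architecture reported by each node. The following command is a minimal sketch that uses standard node status fields; it is not part of the official procedure:

    $ oc get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture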

12.2. Installing a multi-architecture cluster

A cluster with an x86_64 control plane can support worker nodes that have two different CPU architectures. Mixed-architecture clusters combine the strengths of each architecture and support a variety of workloads.

For example, you can add arm64, IBM Power®, or IBM Z® worker nodes to an existing OpenShift Container Platform cluster with an x86_64 control plane.

The main steps of the installation are as follows:

  1. Create and register a multi-architecture cluster.
  2. Create an x86_64 infrastructure environment, download the ISO discovery image for x86_64, and add the control plane. The control plane must have the x86_64 architecture.
  3. Create an arm64, IBM Power®, or IBM Z® infrastructure environment, download the ISO discovery images for arm64, IBM Power®, or IBM Z®, and add the worker nodes.

Supported platforms

The following table lists the platforms that support a mixed-architecture cluster for each OpenShift Container Platform version. Use the appropriate platforms for the version you are installing.

OpenShift Container Platform 4.12.0
  • Supported platforms: Microsoft Azure (TP)
  • Day 1 control plane architecture: x86_64
  • Day 2 node architecture: arm64

OpenShift Container Platform 4.13.0
  • Supported platforms: Microsoft Azure, Amazon Web Services, Bare metal (TP)
  • Day 1 control plane architecture: x86_64
  • Day 2 node architecture: arm64

OpenShift Container Platform 4.14.0
  • Supported platforms: Microsoft Azure, Amazon Web Services, Bare metal, Google Cloud Platform, IBM Power®, IBM Z®
  • Day 1 control plane architecture: x86_64
  • Day 2 node architecture: arm64 (Microsoft Azure, Amazon Web Services, Bare metal, Google Cloud Platform), ppc64le (IBM Power®), s390x (IBM Z®)

Important

Technology Preview (TP) features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
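
If you are adding Day 2 nodes to an existing cluster, you can check which row of the table applies by querying the installed OpenShift Container Platform version. This is a minimal sketch, not part of the official procedure:

    $ oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}'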

Main steps

  1. Start the procedure for installing OpenShift Container Platform using the API. For details, see Installing with the Assisted Installer API in the Additional Resources section.
  2. When you reach the "Registering a new cluster" step of the installation, register the cluster as a multi-architecture cluster:

    $ curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "$(jq --null-input \
       --slurpfile pull_secret ~/Downloads/pull-secret.txt '
    {
       "name": "testcluster",
       "openshift_version": "<version-number>-multi", 1
       "cpu_architecture" : "multi" 2
       "high_availability_mode": "full" 3
       "base_dns_domain": "example.com",
       "pull_secret": $pull_secret[0] | tojson
    }
    ')" | jq '.id'
    Note
    1
    Append the -multi suffix to the OpenShift Container Platform version number; for example, "4.12-multi".
    2
    Set the CPU architecture to "multi".
    3
    Use the full value to indicate Multi-Node OpenShift Container Platform.
  3. When you reach the "Registering a new infrastructure environment" step of the installation, set cpu_architecture to x86_64:

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "$(jq --null-input \
     --slurpfile pull_secret ~/Downloads/pull-secret.txt \
     --arg cluster_id ${CLUSTER_ID} '
       {
         "name": "testcluster-infra-env",
         "image_type":"full-iso",
         "cluster_id": $cluster_id,
         "cpu_architecture" : "x86_64"
         "pull_secret": $pull_secret[0] | tojson
       }
    ')" | jq '.id'
  4. When you reach the "Adding hosts" step of the installation, set host_role to master:

    Note

    For more information, see Assigning Roles to Hosts in Additional Resources.

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \
    -X PATCH \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '
       {
         "host_role":"master"
       }
    ' | jq
  5. Download the discovery image for the x86_64 architecture.
  6. Boot the x86_64 architecture hosts using the generated discovery image.
  7. Start the installation and wait for the cluster to be fully installed.
  8. Repeat the "Registering a new infrastructure environment" step of the installation. This time, set cpu_architecture to one of the following: ppc64le (for IBM Power®), s390x (for IBM Z®), or arm64. For example:

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "$(jq --null-input \
     --slurpfile pull_secret ~/Downloads/pull-secret.txt \
     --arg cluster_id ${CLUSTER_ID} '
       {
         "name": "testcluster-infra-env-arm64",
         "image_type":"full-iso",
         "cluster_id": $cluster_id,
         "cpu_architecture" : "arm64",
         "pull_secret": $pull_secret[0] | tojson
       }
    ')" | jq '.id'
  9. Repeat the "Adding hosts" step of the installation. This time, set host_role to worker:

    Note

    For more details, see Assigning Roles to Hosts in Additional Resources.

    $ curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \
    -X PATCH \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '
       {
         "host_role":"worker"
       }
    ' | jq
  10. Download the discovery image for the arm64, ppc64le, or s390x architecture. For an example of retrieving the image URL through the API, see the sketch after this procedure.
  11. Boot the arm64, ppc64le, or s390x hosts using the generated discovery image.
  12. Start the installation and wait for the cluster to be fully installed.
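
Steps 5 and 10 ask you to download the discovery image for an infrastructure environment but do not show an API call. As a sketch, you can retrieve the image URL in the same way as in "Adding hosts with the API" later in this chapter, assuming ${INFRA_ENV_ID} holds the ID returned when you registered the corresponding infrastructure environment:

    $ curl -s https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID} \
    -H "Authorization: Bearer ${API_TOKEN}" | jq -r '.download_url'
    $ curl -L -s '<iso_url>' --output discovery.iso

Replace <iso_url> with the URL printed by the first command.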

Verification

  • View the arm64, ppc64le, or s390x worker nodes in the cluster by running the following command:

    $ oc get nodes -o wide
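
To narrow the output to a single architecture, you can filter on the standard kubernetes.io/arch node label. A minimal sketch, assuming arm64 worker nodes were added:

    $ oc get nodes -l kubernetes.io/arch=arm64 -o wide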

12.3. Adding hosts with the web console

You can add hosts to clusters that were created using the Assisted Installer.

Important
  • Adding hosts to Assisted Installer clusters is only supported for clusters running OpenShift Container Platform version 4.11 and later.
  • When adding a control plane node during Day 2 operations, ensure that the new node shares the same subnet as the Day 1 network. The subnet is specified in the machineNetwork field of the install-config.yaml file. This requirement applies to cluster-managed networks such as bare metal or vSphere, and not to user-managed networks.

Procedure

  1. Log in to OpenShift Cluster Manager and click the cluster that you want to expand.
  2. Click Add hosts and download the discovery ISO for the new host, adding an SSH public key and configuring cluster-wide proxy settings as needed.
  3. Optional: Modify ignition files as needed.
  4. Boot the target host using the discovery ISO, and wait for the host to be discovered in the console.
  5. Select the host role. It can be either a worker or a control plane host.
  6. Start the installation.
  7. As the installation proceeds, it generates pending certificate signing requests (CSRs) for the host. When prompted, approve the pending CSRs to complete the installation. You can also approve them from the command line; see the sketch after this procedure.

    When the host is successfully installed, it is listed as a host in the cluster web console.

Important

New hosts will be encrypted using the same method as the original cluster.
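
If you prefer to review and approve the pending CSRs from the command line rather than the web console, the same oc commands used elsewhere in this chapter apply. A minimal sketch:

    $ oc get csr | grep Pending
    $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve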

12.4. Adding hosts with the API

You can add hosts to clusters using the Assisted Installer REST API.

Prerequisites

  • Install the Red Hat OpenShift Cluster Manager CLI (ocm).
  • Log in to Red Hat OpenShift Cluster Manager as a user with cluster creation privileges.
  • Install jq.
  • Ensure that all the required DNS records exist for the cluster that you want to expand.
Important

When adding a control plane node during Day 2 operations, ensure that the new node shares the same subnet as the Day 1 network. The subnet is specified in the machineNetwork field of the install-config.yaml file. This requirement applies to cluster-managed networks such as bare metal or vSphere, and not to user-managed networks.

Procedure

  1. Authenticate against the Assisted Installer REST API and generate an API token for your session. The generated token is valid for 15 minutes only.
  2. Set the $API_URL variable by running the following command:

    $ export API_URL=<api_url> 1
    1
    Replace <api_url> with the Assisted Installer API URL, for example, https://api.openshift.com
  3. Import the cluster by running the following commands:

    1. Set the $CLUSTER_ID variable:

      1. Log in to the cluster and run the following command:

        $ export CLUSTER_ID=$(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')
      2. Display the $CLUSTER_ID variable output:

        $ echo ${CLUSTER_ID}
    2. Set the $CLUSTER_REQUEST variable that is used to import the cluster:

      $ export CLUSTER_REQUEST=$(jq --null-input --arg openshift_cluster_id "$CLUSTER_ID" \
      '{
        "api_vip_dnsname": "<api_vip>", 1
        "openshift_cluster_id": "<cluster_id>", 2
        "name": "<openshift_cluster_name>" 3
      }')
      1
      Replace <api_vip> with the hostname for the cluster’s API server. This can be the DNS domain for the API server or the IP address of the single node which the host can reach. For example, api.compute-1.example.com.
      2
      The openshift_cluster_id value is taken from the $CLUSTER_ID variable that you set in the previous substep; the jq --arg option passes it into the request.
      3
      Replace <openshift_cluster_name> with the plain text name for the cluster. The cluster name should match the cluster name that was set during the Day 1 cluster installation.
    3. Import the cluster and set the $CLUSTER_ID variable. Run the following command:

      $ CLUSTER_ID=$(curl "$API_URL/api/assisted-install/v2/clusters/import" -H "Authorization: Bearer ${API_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' \
        -d "$CLUSTER_REQUEST" | tee /dev/stderr | jq -r '.id')
  4. Generate the InfraEnv resource for the cluster and set the $INFRA_ENV_ID variable by running the following commands:

    1. Download the pull secret file from Red Hat OpenShift Cluster Manager at console.redhat.com.
    2. Set the $INFRA_ENV_REQUEST variable:

      $ export INFRA_ENV_REQUEST=$(jq --null-input \
          --slurpfile pull_secret <path_to_pull_secret_file> \1
          --arg ssh_pub_key "$(cat <path_to_ssh_pub_key>)" \2
          --arg cluster_id "$CLUSTER_ID" '{
        "name": "<infraenv_name>", 3
        "pull_secret": $pull_secret[0] | tojson,
        "cluster_id": $cluster_id,
        "ssh_authorized_key": $ssh_pub_key,
        "image_type": "<iso_image_type>" 4
      }')
      1
      Replace <path_to_pull_secret_file> with the path to the local file containing the downloaded pull secret from Red Hat OpenShift Cluster Manager at console.redhat.com.
      2
      Replace <path_to_ssh_pub_key> with the path to the public SSH key required to access the host. If you do not set this value, you cannot access the host while in discovery mode.
      3
      Replace <infraenv_name> with the plain text name for the InfraEnv resource.
      4
      Replace <iso_image_type> with the ISO image type, either full-iso or minimal-iso.
    3. Post the $INFRA_ENV_REQUEST to the /v2/infra-envs API and set the $INFRA_ENV_ID variable:

      $ INFRA_ENV_ID=$(curl "$API_URL/api/assisted-install/v2/infra-envs" -H "Authorization: Bearer ${API_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' -d "$INFRA_ENV_REQUEST" | tee /dev/stderr | jq -r '.id')
  5. Get the URL of the discovery ISO for the cluster host by running the following command:

    $ curl -s "$API_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID" -H "Authorization: Bearer ${API_TOKEN}" | jq -r '.download_url'

    Example output

    https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.12

  6. Download the ISO:

    $ curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1
    1
    Replace <iso_url> with the URL for the ISO from the previous step.
  7. Boot the new worker host from the downloaded rhcos-live-minimal.iso.
  8. Get the list of hosts in the cluster that are not installed. Keep running the following command until the new host shows up; a polling sketch is provided after this procedure:

    $ curl -s "$API_URL/api/assisted-install/v2/clusters/$CLUSTER_ID" -H "Authorization: Bearer ${API_TOKEN}" | jq -r '.hosts[] | select(.status != "installed").id'

    Example output

    2294ba03-c264-4f11-ac08-2f1bb2f8c296

  9. Set the $HOST_ID variable for the new host, for example:

    $ HOST_ID=<host_id> 1
    1
    Replace <host_id> with the host ID from the previous step.
  10. Check that the host is ready to install by running the following command:

    Note

    Ensure that you copy the entire command including the complete jq expression.

    $ curl -s $API_URL/api/assisted-install/v2/clusters/$CLUSTER_ID -H "Authorization: Bearer ${API_TOKEN}" | jq '
    def host_name($host):
        if (.suggested_hostname // "") == "" then
            if (.inventory // "") == "" then
                "Unknown hostname, please wait"
            else
                .inventory | fromjson | .hostname
            end
        else
            .suggested_hostname
        end;
    
    def is_notable($validation):
        ["failure", "pending", "error"] | any(. == $validation.status);
    
    def notable_validations($validations_info):
        [
            $validations_info // "{}"
            | fromjson
            | to_entries[].value[]
            | select(is_notable(.))
        ];
    
    {
        "Hosts validations": {
            "Hosts": [
                .hosts[]
                | select(.status != "installed")
                | {
                    "id": .id,
                    "name": host_name(.),
                    "status": .status,
                    "notable_validations": notable_validations(.validations_info)
                }
            ]
        },
        "Cluster validations info": {
            "notable_validations": notable_validations(.validations_info)
        }
    }
    ' -r

    Example output

    {
      "Hosts validations": {
        "Hosts": [
          {
            "id": "97ec378c-3568-460c-bc22-df54534ff08f",
            "name": "localhost.localdomain",
            "status": "insufficient",
            "notable_validations": [
              {
                "id": "ntp-synced",
                "status": "failure",
                "message": "Host couldn't synchronize with any NTP server"
              },
              {
                "id": "api-domain-name-resolved-correctly",
                "status": "error",
                "message": "Parse error for domain name resolutions result"
              },
              {
                "id": "api-int-domain-name-resolved-correctly",
                "status": "error",
                "message": "Parse error for domain name resolutions result"
              },
              {
                "id": "apps-domain-name-resolved-correctly",
                "status": "error",
                "message": "Parse error for domain name resolutions result"
              }
            ]
          }
        ]
      },
      "Cluster validations info": {
        "notable_validations": []
      }
    }

  11. When the previous command shows that the host is ready, start the installation using the /v2/infra-envs/{infra_env_id}/hosts/{host_id}/actions/install API by running the following command:

    $ curl -X POST -s "$API_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/hosts/$HOST_ID/actions/install"  -H "Authorization: Bearer ${API_TOKEN}"
  12. As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the host.

    Important

    You must approve the CSRs to complete the installation.

    Keep running the following API call to monitor the cluster installation:

    $ curl -s "$API_URL/api/assisted-install/v2/clusters/$CLUSTER_ID" -H "Authorization: Bearer ${API_TOKEN}" | jq '{
        "Cluster day-2 hosts":
            [
                .hosts[]
                | select(.status != "installed")
                | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at}
            ]
    }'

    Example output

    {
      "Cluster day-2 hosts": [
        {
          "id": "a1c52dde-3432-4f59-b2ae-0a530c851480",
          "requested_hostname": "control-plane-1",
          "status": "added-to-existing-cluster",
          "status_info": "Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs",
          "progress": {
            "current_stage": "Done",
            "installation_percentage": 100,
            "stage_started_at": "2022-07-08T10:56:20.476Z",
            "stage_updated_at": "2022-07-08T10:56:20.476Z"
          },
          "status_updated_at": "2022-07-08T10:56:20.476Z",
          "updated_at": "2022-07-08T10:57:15.306369Z",
          "infra_env_id": "b74ec0c3-d5b5-4717-a866-5b6854791bd3",
          "cluster_id": "8f721322-419d-4eed-aa5b-61b50ea586ae",
          "created_at": "2022-07-06T22:54:57.161614Z"
        }
      ]
    }

  13. Optional: Run the following command to see all the events for the cluster:

    $ curl -s "$API_URL/api/assisted-install/v2/events?cluster_id=$CLUSTER_ID" -H "Authorization: Bearer ${API_TOKEN}" | jq -c '.[] | {severity, message, event_time, host_id}'

    Example output

    {"severity":"info","message":"Host compute-0: updated status from insufficient to known (Host is ready to be installed)","event_time":"2022-07-08T11:21:46.346Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"}
    {"severity":"info","message":"Host compute-0: updated status from known to installing (Installation is in progress)","event_time":"2022-07-08T11:28:28.647Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"}
    {"severity":"info","message":"Host compute-0: updated status from installing to installing-in-progress (Starting installation)","event_time":"2022-07-08T11:28:52.068Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"}
    {"severity":"info","message":"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae","event_time":"2022-07-08T11:29:47.802Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"}
    {"severity":"info","message":"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)","event_time":"2022-07-08T11:29:48.259Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"}
    {"severity":"info","message":"Host: compute-0, reached installation stage Rebooting","event_time":"2022-07-08T11:29:48.261Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"}

  14. Log in to the cluster and approve the pending CSRs to complete the installation.
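
Steps 8 and 12 ask you to keep re-running an API call until the state changes. The following polling loop is a minimal sketch for step 8; it assumes the same $API_URL, $API_TOKEN, and $CLUSTER_ID variables as above, and the 10-second interval is arbitrary. Because jq -e returns a non-zero exit status when it produces no output, the loop exits as soon as the ID of a not-yet-installed host is printed:

    $ until curl -s "$API_URL/api/assisted-install/v2/clusters/$CLUSTER_ID" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    | jq -e -r '.hosts[] | select(.status != "installed").id'; do sleep 10; done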

Verification

  • Check that the new host was successfully added to the cluster with a status of Ready:

    $ oc get nodes

    Example output

    NAME                           STATUS   ROLES           AGE   VERSION
    control-plane-1.example.com    Ready    master,worker   56m   v1.25.0
    compute-1.example.com          Ready    worker          11m   v1.25.0

12.5. Installing a primary control plane node on a healthy cluster

This procedure describes how to install a primary control plane node on a healthy OpenShift Container Platform cluster.

If the cluster is unhealthy, additional operations are required before you can manage it. See Additional Resources for more information.

Prerequisites

  • You are using OpenShift Container Platform 4.11 or later with the correct etcd-operator version.
  • You have installed a healthy cluster with a minimum of three nodes.
  • You have created a single control plane node.

Procedure

  1. Retrieve pending CertificateSigningRequests (CSRs):

    $ oc get csr | grep Pending

    Example output

    csr-5sd59   8m19s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   <none>              Pending
    csr-xzqts   10s     kubernetes.io/kubelet-serving                 system:node:worker-6                                                   <none>              Pending

  2. Approve pending CSRs:

    $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
    Important

    You must approve the CSRs to complete the installation.

  3. Confirm the primary node is in Ready status:

    $ oc get nodes

    Example output

    NAME       STATUS   ROLES    AGE     VERSION
    master-0   Ready    master   4h42m   v1.24.0+3882f8f
    worker-1   Ready    worker   4h29m   v1.24.0+3882f8f
    master-2   Ready    master   4h43m   v1.24.0+3882f8f
    master-3   Ready    master   4h27m   v1.24.0+3882f8f
    worker-4   Ready    worker   4h30m   v1.24.0+3882f8f
    master-5   Ready    master   105s    v1.24.0+3882f8f

    Note

    The etcd-operator requires a Machine Custom Resource (CR) referencing the new node when the cluster runs with a functional Machine API.

  4. Link the Machine CR with BareMetalHost and Node:

    1. Create the BareMetalHost CR with a unique .metadata.name value:

      apiVersion: metal3.io/v1alpha1
      kind: BareMetalHost
      metadata:
        name: custom-master3
        namespace: openshift-machine-api
        annotations:
      spec:
        automatedCleaningMode: metadata
        bootMACAddress: 00:00:00:00:00:02
        bootMode: UEFI
        customDeploy:
          method: install_coreos
        externallyProvisioned: true
        online: true
        userData:
          name: master-user-data-managed
          namespace: openshift-machine-api
      $ oc create -f <filename>
    2. Apply the BareMetalHost CR:

      $ oc apply -f <filename>
    3. Create the Machine CR with a unique .metadata.name value:

      apiVersion: machine.openshift.io/v1beta1
      kind: Machine
      metadata:
        annotations:
          machine.openshift.io/instance-state: externally provisioned
          metal3.io/BareMetalHost: openshift-machine-api/custom-master3
        finalizers:
        - machine.machine.openshift.io
        generation: 3
        labels:
          machine.openshift.io/cluster-api-cluster: test-day2-1-6qv96
          machine.openshift.io/cluster-api-machine-role: master
          machine.openshift.io/cluster-api-machine-type: master
        name: custom-master3
        namespace: openshift-machine-api
      spec:
        metadata: {}
        providerSpec:
          value:
            apiVersion: baremetal.cluster.k8s.io/v1alpha1
            customDeploy:
              method: install_coreos
            hostSelector: {}
            image:
              checksum: ""
              url: ""
            kind: BareMetalMachineProviderSpec
            metadata:
              creationTimestamp: null
            userData:
              name: master-user-data-managed
      $ oc create -f <filename>
    4. Apply the Machine CR:

      $ oc apply -f <filename>
    5. Link BareMetalHost, Machine, and Node using the link-machine-and-node.sh script. A sketch for verifying the link appears after this procedure:

      #!/bin/bash
      
      # Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238.
      # This script will link Machine object and Node object. This is needed
      # in order to have IP address of the Node present in the status of the Machine.
      
      # set -x
      set -e
      
      machine="$1"
      node="$2"
      
      if [ -z "$machine" ] || [ -z "$node" ]; then
          echo "Usage: $0 MACHINE NODE"
          exit 1
      fi
      
      # uid=$(echo "${node}" | cut -f1 -d':')
      node_name=$(echo "${node}" | cut -f2 -d':')
      
      oc proxy &
      proxy_pid=$!
      function kill_proxy {
          kill $proxy_pid
      }
      trap kill_proxy EXIT SIGINT
      
      HOST_PROXY_API_PATH="http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts"
      
      function print_nics() {
          local ips
          local eob
          declare -a ips
      
          readarray -t ips < <(echo "${1}" \
                               | jq '.[] | select(. | .type == "InternalIP") | .address' \
                               | sed 's/"//g')
      
          eob=','
          for (( i=0; i<${#ips[@]}; i++ )); do
              if [ $((i+1)) -eq ${#ips[@]} ]; then
                  eob=""
              fi
              cat <<- EOF
                {
                  "ip": "${ips[$i]}",
                  "mac": "00:00:00:00:00:00",
                  "model": "unknown",
                  "speedGbps": 10,
                  "vlanId": 0,
                  "pxe": true,
                  "name": "eth1"
                }${eob}
      EOF
          done
      }
      
      function wait_for_json() {
          local name
          local url
          local curl_opts
          local timeout
      
          local start_time
          local curr_time
          local time_diff
      
          name="$1"
          url="$2"
          timeout="$3"
          shift 3
          curl_opts="$@"
          echo -n "Waiting for $name to respond"
          start_time=$(date +%s)
          until curl -g -X GET "$url" "${curl_opts[@]}" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do
              echo -n "."
              curr_time=$(date +%s)
              time_diff=$((curr_time - start_time))
              if [[ $time_diff -gt $timeout ]]; then
                  printf '\nTimed out waiting for %s' "${name}"
                  return 1
              fi
              sleep 5
          done
          echo " Success!"
          return 0
      }
      wait_for_json oc_proxy "${HOST_PROXY_API_PATH}" 10 -H "Accept: application/json" -H "Content-Type: application/json"
      
      addresses=$(oc get node -n openshift-machine-api "${node_name}" -o json | jq -c '.status.addresses')
      
      machine_data=$(oc get machines.machine.openshift.io -n openshift-machine-api -o json "${machine}")
      host=$(echo "$machine_data" | jq '.metadata.annotations["metal3.io/BareMetalHost"]' | cut -f2 -d/ | sed 's/"//g')
      
      if [ -z "$host" ]; then
          echo "Machine $machine is not linked to a host yet." 1>&2
          exit 1
      fi
      
      # The address structure on the host doesn't match the node, so extract
      # the values we want into separate variables so we can build the patch
      # we need.
      hostname=$(echo "${addresses}" | jq '.[] | select(. | .type == "Hostname") | .address' | sed 's/"//g')
      
      set +e
      read -r -d '' host_patch << EOF
      {
        "status": {
          "hardware": {
            "hostname": "${hostname}",
            "nics": [
      $(print_nics "${addresses}")
            ],
            "systemVendor": {
              "manufacturer": "Red Hat",
              "productName": "product name",
              "serialNumber": ""
            },
            "firmware": {
              "bios": {
                "date": "04/01/2014",
                "vendor": "SeaBIOS",
                "version": "1.11.0-2.el7"
              }
            },
            "ramMebibytes": 0,
            "storage": [],
            "cpu": {
              "arch": "x86_64",
              "model": "Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz",
              "clockMegahertz": 2199.998,
              "count": 4,
              "flags": []
            }
          }
        }
      }
      EOF
      set -e
      
      echo "PATCHING HOST"
      echo "${host_patch}" | jq .
      
      curl -s \
           -X PATCH \
           "${HOST_PROXY_API_PATH}/${host}/status" \
           -H "Content-type: application/merge-patch+json" \
           -d "${host_patch}"
      
      oc get baremetalhost -n openshift-machine-api -o yaml "${host}"
      $ bash link-machine-and-node.sh custom-master3 worker-5
  5. Confirm etcd members:

    $ oc rsh -n openshift-etcd etcd-worker-2
    etcdctl member list -w table

    Example output

    +--------+---------+--------+--------------+--------------+---------+
    |   ID   |  STATUS |  NAME  |  PEER ADDRS  | CLIENT ADDRS | LEARNER |
    +--------+---------+--------+--------------+--------------+---------+
    |2c18942f| started |worker-3|192.168.111.26|192.168.111.26|  false  |
    |61e2a860| started |worker-2|192.168.111.25|192.168.111.25|  false  |
    |ead4f280| started |worker-5|192.168.111.28|192.168.111.28|  false  |
    +--------+---------+--------+--------------+--------------+---------+

  6. Confirm the etcd-operator configuration applies to all nodes:

    $ oc get clusteroperator etcd

    Example output

    NAME   VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
    etcd   4.11.5    True        False         False      5h54m

  7. Confirm etcd-operator health:

    $ oc rsh -n openshift-etcd etcd-worker-0
    etcdctl endpoint health

    Example output

    192.168.111.26 is healthy: committed proposal: took = 11.297561ms
    192.168.111.25 is healthy: committed proposal: took = 13.892416ms
    192.168.111.28 is healthy: committed proposal: took = 11.870755ms

  8. Confirm node health:

    $ oc get nodes

    Example output

    NAME       STATUS   ROLES    AGE     VERSION
    master-0   Ready    master   6h20m   v1.24.0+3882f8f
    worker-1   Ready    worker   6h7m    v1.24.0+3882f8f
    master-2   Ready    master   6h20m   v1.24.0+3882f8f
    master-3   Ready    master   6h4m    v1.24.0+3882f8f
    worker-4   Ready    worker   6h7m    v1.24.0+3882f8f
    master-5   Ready    master   99m     v1.24.0+3882f8f

  9. Confirm the ClusterOperators health:

    $ oc get ClusterOperators

    Example output

    NAME                                      VERSION AVAILABLE PROGRESSING DEGRADED SINCE MSG
    authentication                            4.11.5  True      False       False    5h57m
    baremetal                                 4.11.5  True      False       False    6h19m
    cloud-controller-manager                  4.11.5  True      False       False    6h20m
    cloud-credential                          4.11.5  True      False       False    6h23m
    cluster-autoscaler                        4.11.5  True      False       False    6h18m
    config-operator                           4.11.5  True      False       False    6h19m
    console                                   4.11.5  True      False       False    6h4m
    csi-snapshot-controller                   4.11.5  True      False       False    6h19m
    dns                                       4.11.5  True      False       False    6h18m
    etcd                                      4.11.5  True      False       False    6h17m
    image-registry                            4.11.5  True      False       False    6h7m
    ingress                                   4.11.5  True      False       False    6h6m
    insights                                  4.11.5  True      False       False    6h12m
    kube-apiserver                            4.11.5  True      False       False    6h16m
    kube-controller-manager                   4.11.5  True      False       False    6h16m
    kube-scheduler                            4.11.5  True      False       False    6h16m
    kube-storage-version-migrator             4.11.5  True      False       False    6h19m
    machine-api                               4.11.5  True      False       False    6h15m
    machine-approver                          4.11.5  True      False       False    6h19m
    machine-config                            4.11.5  True      False       False    6h18m
    marketplace                               4.11.5  True      False       False    6h18m
    monitoring                                4.11.5  True      False       False    6h4m
    network                                   4.11.5  True      False       False    6h20m
    node-tuning                               4.11.5  True      False       False    6h18m
    openshift-apiserver                       4.11.5  True      False       False    6h8m
    openshift-controller-manager              4.11.5  True      False       False    6h7m
    openshift-samples                         4.11.5  True      False       False    6h12m
    operator-lifecycle-manager                4.11.5  True      False       False    6h18m
    operator-lifecycle-manager-catalog        4.11.5  True      False       False    6h19m
    operator-lifecycle-manager-pkgsvr         4.11.5  True      False       False    6h12m
    service-ca                                4.11.5  True      False       False    6h19m
    storage                                   4.11.5  True      False       False    6h19m

  10. Confirm the ClusterVersion:

    $ oc get ClusterVersion

    Example output

    NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
    version   4.11.5    True        False         5h57m   Cluster version is 4.11.5

  11. Remove the old control plane node:

    1. Delete the BareMetalHost CR:

      $ oc delete bmh -n openshift-machine-api custom-master3
    2. Confirm the Machine is unhealthy:

      $ oc get machine -A

      Example output

      NAMESPACE              NAME                              PHASE    AGE
      openshift-machine-api  custom-master3                    Running  14h
      openshift-machine-api  test-day2-1-6qv96-master-0        Failed   20h
      openshift-machine-api  test-day2-1-6qv96-master-1        Running  20h
      openshift-machine-api  test-day2-1-6qv96-master-2        Running  20h
      openshift-machine-api  test-day2-1-6qv96-worker-0-8w7vr  Running  19h
      openshift-machine-api  test-day2-1-6qv96-worker-0-rxddj  Running  19h

    3. Delete the Machine CR:

      $ oc delete machine -n openshift-machine-api   test-day2-1-6qv96-master-0
      machine.machine.openshift.io "test-day2-1-6qv96-master-0" deleted
    4. Confirm removal of the Node CR:

      $ oc get nodes

      Example output

      NAME       STATUS   ROLES    AGE   VERSION
      worker-1   Ready    worker   19h   v1.24.0+3882f8f
      master-2   Ready    master   20h   v1.24.0+3882f8f
      master-3   Ready    master   19h   v1.24.0+3882f8f
      worker-4   Ready    worker   19h   v1.24.0+3882f8f
      master-5   Ready    master   15h   v1.24.0+3882f8f

  12. Check etcd-operator logs to confirm status of the etcd cluster:

    $ oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf

    Example output

    E0927 07:53:10.597523       1 base_controller.go:272] ClusterMemberRemovalController reconciliation failed: cannot remove member: 192.168.111.23 because it is reported as healthy but it doesn't have a machine nor a node resource

  13. Remove the physical machine to allow the etcd-operator to reconcile the cluster members, and then confirm the members and endpoint health:

    $ oc rsh -n openshift-etcd etcd-worker-2
    etcdctl member list -w table; etcdctl endpoint health

    Example output

    +--------+---------+--------+--------------+--------------+---------+
    |   ID   |  STATUS |  NAME  |  PEER ADDRS  | CLIENT ADDRS | LEARNER |
    +--------+---------+--------+--------------+--------------+---------+
    |2c18942f| started |worker-3|192.168.111.26|192.168.111.26|  false  |
    |61e2a860| started |worker-2|192.168.111.25|192.168.111.25|  false  |
    |ead4f280| started |worker-5|192.168.111.28|192.168.111.28|  false  |
    +--------+---------+--------+--------------+--------------+---------+
    192.168.111.26 is healthy: committed proposal: took = 10.458132ms
    192.168.111.25 is healthy: committed proposal: took = 11.047349ms
    192.168.111.28 is healthy: committed proposal: took = 11.414402ms
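
The purpose of the link-machine-and-node.sh script in step 4 is to make the IP addresses of the node appear in the status of the Machine. As a minimal sketch for spot-checking the result, using the custom-master3 example name from this procedure:

    $ oc get machine -n openshift-machine-api custom-master3 -o jsonpath='{.status.addresses}{"\n"}'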

12.6. Installing a primary control plane node on an unhealthy cluster

This procedure describes how to install a primary control plane node on an unhealthy OpenShift Container Platform cluster.

Prerequisites

  • You have installed a healthy cluster with a minimum of three nodes.
  • You have created the Day 2 control plane.
  • You have created a single control plane node.

Procedure

  1. Confirm initial state of the cluster:

    $ oc get nodes

    Example output

    NAME       STATUS     ROLES    AGE   VERSION
    worker-1   Ready      worker   20h   v1.24.0+3882f8f
    master-2   NotReady   master   20h   v1.24.0+3882f8f
    master-3   Ready      master   20h   v1.24.0+3882f8f
    worker-4   Ready      worker   20h   v1.24.0+3882f8f
    master-5   Ready      master   15h   v1.24.0+3882f8f

  2. Confirm the etcd-operator detects the cluster as unhealthy:

    $ oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf

    Example output

    E0927 08:24:23.983733       1 base_controller.go:272] DefragController reconciliation failed: cluster is unhealthy: 2 of 3 members are available, worker-2 is unhealthy

  3. Confirm the etcd members:

    $ oc rsh -n openshift-etcd etcd-worker-3
    etcdctl member list -w table

    Example output

    +--------+---------+--------+--------------+--------------+---------+
    |   ID   | STATUS  |  NAME  |  PEER ADDRS  | CLIENT ADDRS | LEARNER |
    +--------+---------+--------+--------------+--------------+---------+
    |2c18942f| started |worker-3|192.168.111.26|192.168.111.26|  false  |
    |61e2a860| started |worker-2|192.168.111.25|192.168.111.25|  false  |
    |ead4f280| started |worker-5|192.168.111.28|192.168.111.28|  false  |
    +--------+---------+--------+--------------+--------------+---------+

  4. Confirm that etcdctl reports an unhealthy member of the cluster:

    $ etcdctl endpoint health

    Example output

    {"level":"warn","ts":"2022-09-27T08:25:35.953Z","logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000680380/192.168.111.25","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\""}
    192.168.111.28 is healthy: committed proposal: took = 12.465641ms
    192.168.111.26 is healthy: committed proposal: took = 12.297059ms
    192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded
    Error: unhealthy cluster

  5. Remove the unhealthy control plane by deleting the Machine Custom Resource:

    $ oc delete machine -n openshift-machine-api test-day2-1-6qv96-master-2
    Note

    The Machine and Node Custom Resources (CRs) will not be deleted if the unhealthy cluster cannot run successfully.

  6. Confirm that etcd-operator has not removed the unhealthy machine:

    $ oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf -f

    Example output

    I0927 08:58:41.249222       1 machinedeletionhooks.go:135] skip removing the deletion hook from machine test-day2-1-6qv96-master-2 since its member is still present with any of: [{InternalIP } {InternalIP 192.168.111.26}]

  7. Remove the unhealthy etcd member manually. Start by listing the current members:

    $ oc rsh -n openshift-etcd etcd-worker-3
    etcdctl member list -w table

    Example output

    +--------+---------+--------+--------------+--------------+---------+
    |   ID   |  STATUS |  NAME  |  PEER ADDRS  | CLIENT ADDRS | LEARNER |
    +--------+---------+--------+--------------+--------------+---------+
    |2c18942f| started |worker-3|192.168.111.26|192.168.111.26|  false  |
    |61e2a860| started |worker-2|192.168.111.25|192.168.111.25|  false  |
    |ead4f280| started |worker-5|192.168.111.28|192.168.111.28|  false  |
    +--------+---------+--------+--------------+--------------+---------+

  8. Confirm that etcdctl reports an unhealthy member of the cluster:

    $ etcdctl endpoint health

    Example output

    {"level":"warn","ts":"2022-09-27T10:31:07.227Z","logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0000d6e00/192.168.111.25","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\""}
    192.168.111.28 is healthy: committed proposal: took = 13.038278ms
    192.168.111.26 is healthy: committed proposal: took = 12.950355ms
    192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded
    Error: unhealthy cluster

  9. Remove the unhealthy etcd member by specifying its member ID; a sketch for looking up the full ID appears after this procedure:

    $ etcdctl member remove 61e2a86084aafa62

    Example output

    Member 61e2a86084aafa62 removed from cluster 6881c977b97990d7

  10. Confirm the etcd members by running the following command:

    $ etcdctl member list -w table

    Example output

    +----------+---------+--------+--------------+--------------+-------+
    |    ID    | STATUS  |  NAME  |  PEER ADDRS  | CLIENT ADDRS |LEARNER|
    +----------+---------+--------+--------------+--------------+-------+
    | 2c18942f | started |worker-3|192.168.111.26|192.168.111.26| false |
    | ead4f280 | started |worker-5|192.168.111.28|192.168.111.28| false |
    +----------+---------+--------+--------------+--------------+-------+

  11. Review and approve the Certificate Signing Requests:

    1. Review the Certificate Signing Requests (CSRs):

      $ oc get csr | grep Pending

      Example output

      csr-5sd59   8m19s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   <none>              Pending
      csr-xzqts   10s     kubernetes.io/kubelet-serving                 system:node:worker-6                                                   <none>              Pending

    2. Approve all pending CSRs:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
      Note

      You must approve the CSRs to complete the installation.

  12. Confirm ready status of the control plane node:

    $ oc get nodes

    Example output

    NAME       STATUS   ROLES    AGE     VERSION
    worker-1   Ready    worker   22h     v1.24.0+3882f8f
    master-3   Ready    master   22h     v1.24.0+3882f8f
    worker-4   Ready    worker   22h     v1.24.0+3882f8f
    master-5   Ready    master   17h     v1.24.0+3882f8f
    master-6   Ready    master   2m52s   v1.24.0+3882f8f

  13. Validate the Machine, Node, and BareMetalHost Custom Resources.

    The etcd-operator requires Machine CRs to be present if the cluster is running with a functional Machine API. When present, Machine CRs are displayed during the Running phase.

  14. Create Machine Custom Resource linked with BareMetalHost and Node.

    Make sure there is a Machine CR referencing the newly added node.

    Important

    Boot-it-yourself will not create BareMetalHost and Machine CRs, so you must create them. Failure to create the BareMetalHost and Machine CRs will generate errors when running etcd-operator.

  15. Add the BareMetalHost Custom Resource. Create a BareMetalHost manifest for the new node, as shown in the previous section, and apply it:

    $ oc create -f <filename>
  16. Add the Machine Custom Resource. Create a Machine manifest for the new node, as shown in the previous section, and apply it:

    $ oc create -f <filename>
  17. Link BareMetalHost, Machine, and Node by running the link-machine-and-node.sh script:

    #!/bin/bash
    
    # Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238.
    # This script will link Machine object and Node object. This is needed
    # in order to have IP address of the Node present in the status of the Machine.
    
    # set -x
    set -e
    
    machine="$1"
    node="$2"
    
    if [ -z "$machine" ] || [ -z "$node" ]; then
        echo "Usage: $0 MACHINE NODE"
        exit 1
    fi
    
    # uid=$(echo "${node}" | cut -f1 -d':')
    node_name=$(echo "${node}" | cut -f2 -d':')
    
    oc proxy &
    proxy_pid=$!
    function kill_proxy {
        kill $proxy_pid
    }
    trap kill_proxy EXIT SIGINT
    
    HOST_PROXY_API_PATH="http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts"
    
    function print_nics() {
        local ips
        local eob
        declare -a ips
    
        readarray -t ips < <(echo "${1}" \
                             | jq '.[] | select(. | .type == "InternalIP") | .address' \
                             | sed 's/"//g')
    
        eob=','
        for (( i=0; i<${#ips[@]}; i++ )); do
            if [ $((i+1)) -eq ${#ips[@]} ]; then
                eob=""
            fi
            cat <<- EOF
              {
                "ip": "${ips[$i]}",
                "mac": "00:00:00:00:00:00",
                "model": "unknown",
                "speedGbps": 10,
                "vlanId": 0,
                "pxe": true,
                "name": "eth1"
              }${eob}
    EOF
        done
    }
    
    function wait_for_json() {
        local name
        local url
        local curl_opts
        local timeout
    
        local start_time
        local curr_time
        local time_diff
    
        name="$1"
        url="$2"
        timeout="$3"
        shift 3
        curl_opts="$@"
        echo -n "Waiting for $name to respond"
        start_time=$(date +%s)
        until curl -g -X GET "$url" "${curl_opts[@]}" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do
            echo -n "."
            curr_time=$(date +%s)
            time_diff=$((curr_time - start_time))
            if [[ $time_diff -gt $timeout ]]; then
                printf '\nTimed out waiting for %s' "${name}"
                return 1
            fi
            sleep 5
        done
        echo " Success!"
        return 0
    }
    wait_for_json oc_proxy "${HOST_PROXY_API_PATH}" 10 -H "Accept: application/json" -H "Content-Type: application/json"
    
    addresses=$(oc get node -n openshift-machine-api "${node_name}" -o json | jq -c '.status.addresses')
    
    machine_data=$(oc get machines.machine.openshift.io -n openshift-machine-api -o json "${machine}")
    host=$(echo "$machine_data" | jq '.metadata.annotations["metal3.io/BareMetalHost"]' | cut -f2 -d/ | sed 's/"//g')
    
    if [ -z "$host" ]; then
        echo "Machine $machine is not linked to a host yet." 1>&2
        exit 1
    fi
    
    # The address structure on the host doesn't match the node, so extract
    # the values we want into separate variables so we can build the patch
    # we need.
    hostname=$(echo "${addresses}" | jq '.[] | select(. | .type == "Hostname") | .address' | sed 's/"//g')
    
    set +e
    read -r -d '' host_patch << EOF
    {
      "status": {
        "hardware": {
          "hostname": "${hostname}",
          "nics": [
    $(print_nics "${addresses}")
          ],
          "systemVendor": {
            "manufacturer": "Red Hat",
            "productName": "product name",
            "serialNumber": ""
          },
          "firmware": {
            "bios": {
              "date": "04/01/2014",
              "vendor": "SeaBIOS",
              "version": "1.11.0-2.el7"
            }
          },
          "ramMebibytes": 0,
          "storage": [],
          "cpu": {
            "arch": "x86_64",
            "model": "Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz",
            "clockMegahertz": 2199.998,
            "count": 4,
            "flags": []
          }
        }
      }
    }
    EOF
    set -e
    
    echo "PATCHING HOST"
    echo "${host_patch}" | jq .
    
    curl -s \
         -X PATCH \
         "${HOST_PROXY_API_PATH}/${host}/status" \
         -H "Content-type: application/merge-patch+json" \
         -d "${host_patch}"
    
    oc get baremetalhost -n openshift-machine-api -o yaml "${host}"
    $ bash link-machine-and-node.sh custom-master3 worker-3
  18. Confirm the etcd members by running the following command:

    $ oc rsh -n openshift-etcd etcd-worker-3
    etcdctl member list -w table

    Example output

    +---------+-------+--------+--------------+--------------+-------+
    |   ID    | STATUS|  NAME  |   PEER ADDRS | CLIENT ADDRS |LEARNER|
    +---------+-------+--------+--------------+--------------+-------+
    | 2c18942f|started|worker-3|192.168.111.26|192.168.111.26| false |
    | ead4f280|started|worker-5|192.168.111.28|192.168.111.28| false |
    | 79153c5a|started|worker-6|192.168.111.29|192.168.111.29| false |
    +---------+-------+--------+--------------+--------------+-------+

  19. Confirm the etcd operator has configured all nodes:

    $ oc get clusteroperator etcd

    Example output

    NAME   VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
    etcd   4.11.5    True        False         False      22h

  20. Confirm the health of etcd:

    $ oc rsh -n openshift-etcd etcd-worker-3
    etcdctl endpoint health

    Example output

    192.168.111.26 is healthy: committed proposal: took = 9.105375ms
    192.168.111.28 is healthy: committed proposal: took = 9.15205ms
    192.168.111.29 is healthy: committed proposal: took = 10.277577ms

  21. Confirm the health of the nodes:

    $ oc get Nodes

    Example output

    NAME       STATUS   ROLES    AGE   VERSION
    worker-1   Ready    worker   22h   v1.24.0+3882f8f
    master-3   Ready    master   22h   v1.24.0+3882f8f
    worker-4   Ready    worker   22h   v1.24.0+3882f8f
    master-5   Ready    master   18h   v1.24.0+3882f8f
    master-6   Ready    master   40m   v1.24.0+3882f8f

  22. Confirm the health of the ClusterOperators:

    $ oc get ClusterOperators

    Example output

    NAME                               VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
    authentication                     4.11.5    True        False         False      150m
    baremetal                          4.11.5    True        False         False      22h
    cloud-controller-manager           4.11.5    True        False         False      22h
    cloud-credential                   4.11.5    True        False         False      22h
    cluster-autoscaler                 4.11.5    True        False         False      22h
    config-operator                    4.11.5    True        False         False      22h
    console                            4.11.5    True        False         False      145m
    csi-snapshot-controller            4.11.5    True        False         False      22h
    dns                                4.11.5    True        False         False      22h
    etcd                               4.11.5    True        False         False      22h
    image-registry                     4.11.5    True        False         False      22h
    ingress                            4.11.5    True        False         False      22h
    insights                           4.11.5    True        False         False      22h
    kube-apiserver                     4.11.5    True        False         False      22h
    kube-controller-manager            4.11.5    True        False         False      22h
    kube-scheduler                     4.11.5    True        False         False      22h
    kube-storage-version-migrator      4.11.5    True        False         False      148m
    machine-api                        4.11.5    True        False         False      22h
    machine-approver                   4.11.5    True        False         False      22h
    machine-config                     4.11.5    True        False         False      110m
    marketplace                        4.11.5    True        False         False      22h
    monitoring                         4.11.5    True        False         False      22h
    network                            4.11.5    True        False         False      22h
    node-tuning                        4.11.5    True        False         False      22h
    openshift-apiserver                4.11.5    True        False         False      163m
    openshift-controller-manager       4.11.5    True        False         False      22h
    openshift-samples                  4.11.5    True        False         False      22h
    operator-lifecycle-manager         4.11.5    True        False         False      22h
    operator-lifecycle-manager-catalog 4.11.5    True        False         False      22h
    operator-lifecycle-manager-pkgsvr  4.11.5    True        False         False      22h
    service-ca                         4.11.5    True        False         False      22h
    storage                            4.11.5    True        False         False      22h

  23. Confirm the ClusterVersion:

    $ oc get ClusterVersion

    Example output

    NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
    version   4.11.5    True        False         22h     Cluster version is 4.11.5
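
In step 9 of this procedure, etcdctl member remove takes the full hexadecimal member ID (61e2a86084aafa62 in the example), while the table output truncates it. A minimal sketch for looking up the full ID from inside an etcd pod, assuming worker-2 is the unhealthy member:

    $ oc rsh -n openshift-etcd etcd-worker-3
    etcdctl member list | grep worker-2

The default (non-table) listing prints the full hexadecimal ID as the first field of each line.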

12.7. Additional resources
