Chapter 9. Adding RHEL compute machines to an OpenShift Container Platform cluster

In OpenShift Container Platform, you can add Red Hat Enterprise Linux (RHEL) compute machines to a user-provisioned infrastructure cluster or an installer-provisioned infrastructure cluster on the x86_64 architecture. You can use RHEL as the operating system only on compute machines.

9.1. About adding RHEL compute nodes to a cluster

In OpenShift Container Platform 4.14, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines in your cluster if you use a user-provisioned or installer-provisioned infrastructure installation on the x86_64 architecture. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane machines in your cluster.

If you choose to use RHEL compute machines in your cluster, you are responsible for all operating system life cycle management and maintenance. You must perform system updates, apply patches, and complete all other required tasks.

For installer-provisioned infrastructure clusters, you must manually add RHEL compute machines because automatic scaling in installer-provisioned infrastructure clusters adds Red Hat Enterprise Linux CoreOS (RHCOS) compute machines by default.

Important
  • Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster.
  • Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines.
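
To confirm that swap is off on a RHEL host, one quick check is the swapon utility, which prints nothing when no swap devices are active:

  $ swapon --show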

9.2. System requirements for RHEL compute nodes

The Red Hat Enterprise Linux (RHEL) compute machine hosts in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements:

  • You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information.
  • Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10% for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity.
  • Each system must meet the following hardware requirements:

    • Physical or virtual system, or an instance running on a public or private IaaS.
    • Base OS: RHEL 8.6 and later with "Minimal" installation option.

      Important

      Adding RHEL 7 compute machines to an OpenShift Container Platform cluster is not supported.

      If you have RHEL 7 compute machines that were previously supported in a past OpenShift Container Platform version, you cannot upgrade them to RHEL 8. You must deploy new RHEL 8 hosts and remove the old RHEL 7 hosts. See the "Deleting nodes" section for more information.

      For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes.

    • If you deployed OpenShift Container Platform in FIPS mode, you must enable FIPS on the RHEL machine before you boot it. See Installing a RHEL 8 system with FIPS mode enabled in the RHEL 8 documentation.
Important

To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. A quick FIPS-mode check appears after this list.

  • NetworkManager 1.0 or later.
  • 1 vCPU.
  • Minimum 8 GB RAM.
  • Minimum 15 GB hard disk space for the file system containing /var/.
  • Minimum 1 GB hard disk space for the file system containing /usr/local/bin/.
  • Minimum 1 GB hard disk space for the file system containing the system's temporary directory. The temporary system directory is determined according to the rules defined in the tempfile module in the Python standard library; a quick check for this path appears after this list.

    • Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=true attribute must be set.
    • Each system must be able to access the cluster’s API endpoints by using DNS-resolvable hostnames. Any network security access control that is in place must allow system access to the cluster’s API service endpoints.
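
As referenced in the FIPS item above, one way to confirm that a RHEL 8 host is running in FIPS mode is the fips-mode-setup utility, which reports whether FIPS mode is enabled:

  $ fips-mode-setup --check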
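
Two more quick checks, offered as sketches: the first prints the temporary directory that the tempfile rules select (assuming python3 is installed on the host), and the second probes the cluster's API endpoint, where the cluster name and base domain are placeholders and any HTTP response demonstrates reachability:

  $ python3 -c 'import tempfile; print(tempfile.gettempdir())'
  $ curl -k https://api.<cluster_name>.<base_domain>:6443/version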

9.2.1. Certificate signing requests management

Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
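
For example, pending CSRs can be listed, inspected, and approved with the oc CLI; the full procedure appears in "Approving the certificate signing requests for your machines" later in this chapter:

  $ oc get csr
  $ oc describe csr <csr_name>
  $ oc adm certificate approve <csr_name>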

9.3. Preparing an image for your cloud

Amazon Machine Images (AMI) are required because AWS cannot use other image formats directly. You can use the AMIs that Red Hat provides, or you can manually import your own images. The AMI must exist before the EC2 instance can be provisioned, so you need a valid AMI ID that selects the correct RHEL version for the compute machines.

9.3.1. Listing latest available RHEL images on AWS

AMI IDs correspond to native boot images for AWS. Because an AMI must exist before the EC2 instance is provisioned, you will need to know the AMI ID before configuration. The AWS Command Line Interface (CLI) is used to list the available Red Hat Enterprise Linux (RHEL) image IDs.

Prerequisites

  • You have installed the AWS CLI.

Procedure

  • Use this command to list RHEL 8.4 Amazon Machine Images (AMI):

    $ aws ec2 describe-images --owners 309956199498 \ 1
    --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \ 2
    --filters "Name=name,Values=RHEL-8.4*" \ 3
    --region us-east-1 \ 4
    --output table 5
    1
    The --owners command option shows Red Hat images based on the account ID 309956199498.
    Important

    This account ID is required to display AMI IDs for images that are provided by Red Hat.

    2
    The --query command option sets how the images are sorted with the parameters 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]'. In this case, the images are sorted by the creation date, and the table is structured to show the creation date, the name of the image, and the AMI IDs.
    3
    The --filters command option sets which version of RHEL is shown. In this example, because the filter is set to "Name=name,Values=RHEL-8.4*", RHEL 8.4 AMIs are shown.
    4
    The --region command option sets the region where an AMI is stored.
    5
    The --output command option sets how the results are displayed.
Note

When creating a RHEL compute machine for AWS, ensure that the AMI is RHEL 8.4 or 8.5.

Example output

------------------------------------------------------------------------------------------------------------
|                                              DescribeImages                                              |
+---------------------------+-----------------------------------------------------+------------------------+
|  2021-03-18T14:23:11.000Z |  RHEL-8.4.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2  |  ami-07eeb4db5f7e5a8fb |
|  2021-03-18T14:38:28.000Z |  RHEL-8.4.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2   |  ami-069d22ec49577d4bf |
|  2021-05-18T19:06:34.000Z |  RHEL-8.4.0_HVM-20210504-arm64-2-Hourly2-GP2        |  ami-01fc429821bf1f4b4 |
|  2021-05-18T20:09:47.000Z |  RHEL-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2       |  ami-0b0af3577fe5e3532 |
+---------------------------+-----------------------------------------------------+------------------------+
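
To extract only the most recent x86_64 AMI ID instead of the full table, a variant of the same query can sort by creation date and take the last entry; this is a sketch, with the region given as an example:

    $ aws ec2 describe-images --owners 309956199498 \
        --filters "Name=name,Values=RHEL-8.4*" "Name=architecture,Values=x86_64" \
        --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
        --region us-east-1 \
        --output text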

9.4. Preparing the machine to run the playbook

Before you can add compute machines that use Red Hat Enterprise Linux (RHEL) as the operating system to an OpenShift Container Platform 4.14 cluster, you must prepare a RHEL 8 machine to run an Ansible playbook that adds the new node to the cluster. This machine is not part of the cluster but must be able to access it.

Prerequisites

  • Install the OpenShift CLI (oc) on the machine that you run the playbook on.
  • Log in as a user with cluster-admin permission.

Procedure

  1. Ensure that the kubeconfig file for the cluster and the installation program that you used to install the cluster are on the RHEL 8 machine. One way to accomplish this is to use the same machine that you used to install the cluster.
  2. Configure the machine to access all of the RHEL hosts that you plan to use as compute machines. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN.
  3. Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts.

    Important

    If you use SSH key-based authentication, you must manage the key with an SSH agent. A minimal sketch appears at the end of this procedure.

  4. If you have not already done so, register the machine with RHSM and attach a pool with an OpenShift subscription to it:

    1. Register the machine with RHSM:

      # subscription-manager register --username=<user_name> --password=<password>
    2. Pull the latest subscription data from RHSM:

      # subscription-manager refresh
    3. List the available subscriptions:

      # subscription-manager list --available --matches '*OpenShift*'
    4. In the output for the previous command, find the pool ID for an OpenShift Container Platform subscription and attach it:

      # subscription-manager attach --pool=<pool_id>
  5. Enable the repositories required by OpenShift Container Platform 4.14:

    # subscription-manager repos \
        --enable="rhel-8-for-x86_64-baseos-rpms" \
        --enable="rhel-8-for-x86_64-appstream-rpms" \
        --enable="rhocp-4.14-for-rhel-8-x86_64-rpms"
  6. Install the required packages, including openshift-ansible:

    # yum install openshift-ansible openshift-clients jq

    The openshift-ansible package provides installation program utilities and pulls in other packages that you require to add a RHEL compute node to your cluster, such as Ansible, playbooks, and related configuration files. The openshift-clients package provides the oc CLI, and the jq package improves the display of JSON output on your command line.
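
If you use SSH key-based authentication, as noted in step 3, a minimal sketch for loading the key into an agent and testing access to a host follows; the key file name is a placeholder, and the hostname comes from the inventory example later in this chapter:

    $ eval "$(ssh-agent -s)"
    $ ssh-add ~/.ssh/<key_name>
    $ ssh <user_name>@mycluster-rhel8-0.example.com 'hostname'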

9.5. Preparing a RHEL compute node

Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories.

  1. On each host, register with RHSM:

    # subscription-manager register --username=<user_name> --password=<password>
  2. Pull the latest subscription data from RHSM:

    # subscription-manager refresh
  3. List the available subscriptions:

    # subscription-manager list --available --matches '*OpenShift*'
  4. In the output for the previous command, find the pool ID for an OpenShift Container Platform subscription and attach it:

    # subscription-manager attach --pool=<pool_id>
  5. Disable all yum repositories:

    1. Disable all the enabled RHSM repositories:

      # subscription-manager repos --disable="*"
    2. List the remaining yum repositories and note their names under repo id, if any:

      # yum repolist
    3. Use yum-config-manager to disable the remaining yum repositories:

      # yum-config-manager --disable <repo_id>

      Alternatively, disable all repositories:

      # yum-config-manager --disable \*

      Note that this might take a few minutes if you have a large number of available repositories.

  6. Enable only the repositories required by OpenShift Container Platform 4.14:

    # subscription-manager repos \
        --enable="rhel-8-for-x86_64-baseos-rpms" \
        --enable="rhel-8-for-x86_64-appstream-rpms" \
        --enable="rhocp-4.14-for-rhel-8-x86_64-rpms" \
        --enable="fast-datapath-for-rhel-8-x86_64-rpms"
  7. Stop and disable firewalld on the host:

    # systemctl disable --now firewalld.service
    Note

    You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker.

9.6. Attaching the role permissions to RHEL instance in AWS

Using the Amazon IAM console in your browser, you can select the needed roles and assign them to a worker node.

Procedure

  1. From the AWS IAM console, create your desired IAM role.
  2. Attach the IAM role to the desired worker node.
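
If you prefer the AWS CLI to the console, the following sketch attaches an existing role to a running instance through an instance profile; the profile name, role name, and instance ID are placeholders:

    $ aws iam create-instance-profile --instance-profile-name <profile_name>
    $ aws iam add-role-to-instance-profile --instance-profile-name <profile_name> \
        --role-name <role_name>
    $ aws ec2 associate-iam-instance-profile --instance-id <instance_id> \
        --iam-instance-profile Name=<profile_name>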

9.7. Tagging a RHEL worker node as owned or shared

A cluster uses the value of the kubernetes.io/cluster/<clusterid>,Value=(owned|shared) tag to determine the lifetime of the resources related to the AWS cluster.

  • The owned tag value should be added if the resource should be destroyed as part of destroying the cluster.
  • The shared tag value should be added if the resource continues to exist after the cluster has been destroyed. This tagging denotes that the cluster uses this resource, but there is a separate owner for the resource.

Procedure

  • With RHEL compute machines, the RHEL worker instance must be tagged with kubernetes.io/cluster/<clusterid>=owned or kubernetes.io/cluster/<clusterid>=shared.
Note

Do not tag all existing security groups with the kubernetes.io/cluster/<name>,Value=<clusterid> tag, or the Elastic Load Balancing (ELB) will not be able to create a load balancer.
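
For example, a sketch of applying the owned tag with the AWS CLI; the instance ID and cluster ID are placeholders:

  $ aws ec2 create-tags --resources <instance_id> \
      --tags Key=kubernetes.io/cluster/<clusterid>,Value=owned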

9.8. Adding a RHEL compute machine to your cluster

You can add compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift Container Platform 4.14 cluster.

Prerequisites

  • You installed the required packages and performed the necessary configuration on the machine that you run the playbook on.
  • You prepared the RHEL hosts for installation.

Procedure

Perform the following steps on the machine that you prepared to run the playbook:

  1. Create an Ansible inventory file that is named /<path>/inventory/hosts that defines your compute machine hosts and required variables:

    [all:vars]
    ansible_user=root 1
    #ansible_become=True 2
    
    openshift_kubeconfig_path="~/.kube/config" 3
    
    [new_workers] 4
    mycluster-rhel8-0.example.com
    mycluster-rhel8-1.example.com
    1
    Specify the user name that runs the Ansible tasks on the remote compute machines.
    2
    If you do not specify root for the ansible_user, you must set ansible_become to True and assign the user sudo permissions.
    3
    Specify the path and file name of the kubeconfig file for your cluster.
    4
    List each RHEL machine to add to your cluster. You must provide the fully-qualified domain name for each host. This name is the hostname that the cluster uses to access the machine, so set the correct public or private name to access the machine.
  2. Navigate to the Ansible playbook directory:

    $ cd /usr/share/ansible/openshift-ansible
  3. Run the playbook:

    $ ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1
    1
    For <path>, specify the path to the Ansible inventory file that you created.

9.9. Approving the certificate signing requests for your machines

When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

  • You added machines to your cluster.

Procedure

  1. Confirm that the cluster recognizes the machines:

    $ oc get nodes

    Example output

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  63m  v1.27.3
    master-1  Ready     master  63m  v1.27.3
    master-2  Ready     master  64m  v1.27.3

    The output lists all of the machines that you created.

    Note

    The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.

  2. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

    $ oc get csr

    Example output

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    ...

    In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

  3. If the CSRs were not approved automatically, approve them after all of the pending CSRs for the machines that you added are in the Pending status:

    Note

    Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.

    Note

    For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. A minimal approval-loop sketch appears after this procedure.

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> 1
      1
      <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
      Note

      Some Operators might not become available until some CSRs are approved.

  4. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

    $ oc get csr

    Example output

    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending
    csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
    ...

  5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name> 1
      1
      <csr_name> is the name of a CSR from the list of current CSRs.
    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
  6. After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

    $ oc get nodes

    Example output

    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  73m  v1.27.3
    master-1  Ready     master  73m  v1.27.3
    master-2  Ready     master  74m  v1.27.3
    worker-0  Ready     worker  11m  v1.27.3
    worker-1  Ready     worker  11m  v1.27.3

    Note

    It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
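
As referenced in the note in step 3, a minimal sketch of an approval loop follows. It approves every CSR that has no status yet, and it omits the node-identity verification that a production method must add:

    $ while true; do
          oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
              | xargs --no-run-if-empty oc adm certificate approve
          sleep 60
      done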

9.10. Required parameters for the Ansible hosts file

You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster.

ansible_user

    Description: The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent.

    Values: A user name on the system. The default value is root.

ansible_become

    Description: If the value of ansible_user is not root, you must set ansible_become to True, and the user that you specify as the ansible_user must be configured for passwordless sudo access.

    Values: True. If the value is not True, do not specify or define this parameter.

openshift_kubeconfig_path

    Description: Specifies a path and file name to a local directory that contains the kubeconfig file for your cluster.

    Values: The path and name of the configuration file.
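
For example, a sketch of an inventory file for a non-root user; the user name ansible and the kubeconfig path are placeholders, and the user must have passwordless sudo access on the remote hosts:

    [all:vars]
    ansible_user=ansible
    ansible_become=True
    openshift_kubeconfig_path="/home/ansible/.kube/config"

    [new_workers]
    mycluster-rhel8-0.example.com
    mycluster-rhel8-1.example.com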

9.10.1. Optional: Removing RHCOS compute machines from a cluster

After you add the Red Hat Enterprise Linux (RHEL) compute machines to your cluster, you can optionally remove the Red Hat Enterprise Linux CoreOS (RHCOS) compute machines to free up resources.

Prerequisites

  • You have added RHEL compute machines to your cluster.

Procedure

  1. View the list of machines and record the node names of the RHCOS compute machines:

    $ oc get nodes -o wide
  2. For each RHCOS compute machine, delete the node:

    1. Mark the node as unschedulable by running the oc adm cordon command:

      $ oc adm cordon <node_name> 1
      1
      Specify the node name of one of the RHCOS compute machines.
    2. Drain all the pods from the node:

      $ oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1
      1
      Specify the node name of the RHCOS compute machine that you isolated.
    3. Delete the node:

      $ oc delete nodes <node_name> 1
      1
      Specify the node name of the RHCOS compute machine that you drained.
  3. Review the list of compute machines to ensure that only the RHEL nodes remain:

    $ oc get nodes -o wide
  4. Remove the RHCOS machines from the load balancer for your cluster’s compute machines. You can delete the virtual machines or reimage the physical hardware for the RHCOS compute machines.