Chapter 11. Expanding the cluster
You can expand a cluster installed with the Assisted Installer by adding hosts using the user interface or the API.
11.1. Checking for multi-architecture support
You must check that your cluster can support multiple architectures before you add a node with a different architecture.
Procedure
- Log in to the cluster using the CLI.
Check that your cluster uses the architecture payload by running the following command:
$ oc adm release info -o json | jq .metadata.metadata
Verification
If you see the following output, your cluster supports multiple architectures:
{ "release.openshift.io/architecture": "multi" }{ "release.openshift.io/architecture": "multi" }Copy to Clipboard Copied! Toggle word wrap Toggle overflow
11.2. Installing multi-architecture compute clusters
A cluster with an x86_64 or arm64 control plane can support worker nodes that have two different CPU architectures. Multi-architecture clusters combine the strengths of each architecture and support a variety of workloads.
For example, you can add arm64, IBM Power® (ppc64le), or IBM Z® (s390x) worker nodes to an existing OpenShift Container Platform cluster with an x86_64 control plane.
The main steps of the installation are as follows:
- Create and register a multi-architecture compute cluster.
- Create an x86_64 or arm64 infrastructure environment, download the ISO discovery image for the environment, and add the control plane. An arm64 infrastructure environment is available for Amazon Web Services (AWS) and Google Cloud (GC) only.
- Create an arm64, ppc64le, or s390x infrastructure environment, download the ISO discovery images for arm64, ppc64le, or s390x, and add the worker nodes.
Supported platforms
For the supported platforms for each OpenShift Container Platform version, see About clusters with multi-architecture compute machines. Use the appropriate platforms for the version you are installing.
Main steps
- Start the procedure for installing OpenShift Container Platform using the API. For details, see Installing with the Assisted Installer API in the Additional Resources section.
When you reach the "Registering a new cluster" step of the installation, register the cluster as a multi-architecture compute cluster:
1. Use the -multi suffix for the OpenShift Container Platform version number; for example, "4.19-multi".
2. Set the CPU architecture to "multi".
3. Set the number of control plane nodes to "3", "4", or "5". The option of 4 or 5 control plane nodes is available from OpenShift Container Platform 4.18 and later. Single-node OpenShift is not supported for a multi-architecture compute cluster. The control_plane_count field replaces high_availability_mode, which is deprecated.
When you reach the "Registering a new infrastructure environment" step of the installation, set
cpu_architecturetox86_64:Copy to Clipboard Copied! Toggle word wrap Toggle overflow When you reach the "Adding hosts" step of the installation, set
host_roletomaster:NoteFor more information, see Assigning Roles to Hosts in Additional Resources.
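The corresponding API calls are sketched below, assuming that $API_URL, $API_TOKEN, $INFRA_ENV_ID, and $HOST_ID are set as in Installing with the Assisted Installer API; the infrastructure environment name, <cluster_id>, and <pull_secret_contents> placeholders are illustrative:

$ curl -s -X POST "$API_URL/api/assisted-install/v2/infra-envs" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{
          "name": "multi-arch-x86-infraenv",
          "cluster_id": "<cluster_id>",
          "cpu_architecture": "x86_64",
          "pull_secret": "<pull_secret_contents>"
        }' | jq -r '.id'

$ curl -s -X PATCH "$API_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/hosts/$HOST_ID" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{"host_role": "master"}'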
- Download the discovery image for the x86_64 architecture.
- Boot the x86_64 architecture hosts using the generated discovery image.
- Start the installation and wait for the cluster to be fully installed.
- Repeat the "Registering a new infrastructure environment" step of the installation. This time, set cpu_architecture to one of the following: ppc64le (for IBM Power®), s390x (for IBM Z®), or arm64.
- Repeat the "Adding hosts" step of the installation. This time, set host_role to worker.

Note: For more details, see Assigning Roles to Hosts in Additional Resources.
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Download the discovery image for the arm64, ppc64le or s390x architecture.
- Boot the architecture hosts using the generated discovery image.
- Start the installation and wait for the cluster to be fully installed.
Verification
View the arm64, ppc64le, or s390x worker nodes in the cluster by running the following command:
$ oc get nodes -o wide
11.3. Adding hosts with the web console
You can add hosts to clusters that were created using the Assisted Installer.
- Adding hosts to Assisted Installer clusters is only supported for clusters running OpenShift Container Platform version 4.11 and later.
- When adding a control plane node during Day 2 operations, ensure that the new node shares the same subnet as the Day 1 network. The subnet is specified in the machineNetwork field of the install-config.yaml file, as illustrated after this list. This requirement applies to cluster-managed networks such as bare metal or vSphere, and not to user-managed networks.
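For reference, the Day 1 subnet appears in install-config.yaml similar to the following snippet; the CIDR value is illustrative:

networking:
  machineNetwork:
  - cidr: 192.168.111.0/24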
Procedure
- Log in to OpenShift Cluster Manager and click the cluster that you want to expand.
- Click Add hosts and download the discovery ISO for the new host, adding an SSH public key and configuring cluster-wide proxy settings as needed.
- Optional: Modify ignition files as needed.
- Boot the target host using the discovery ISO, and wait for the host to be discovered in the console.
- Select the host role. It can be either a worker or a control plane host.
- Start the installation.
As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the host. When prompted, approve the pending CSRs to complete the installation.
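For example, you can list and approve the pending CSRs from the command line with the standard oc commands; the CSR name shown is a placeholder:

$ oc get csr | grep Pending
$ oc adm certificate approve <csr_name>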
When the host is successfully installed, it is listed as a host in the cluster web console.
New hosts will be encrypted using the same method as the original cluster.
11.4. Adding hosts with the API
You can add hosts to clusters using the Assisted Installer REST API.
Prerequisites
- Install the Red Hat OpenShift Cluster Manager CLI (ocm).
- Log in to Red Hat OpenShift Cluster Manager as a user with cluster creation privileges.
- Install jq.
- Ensure that all the required DNS records exist for the cluster that you want to expand.
When adding a control plane node during Day 2 operations, ensure that the new node shares the same subnet as the Day 1 network. The subnet is specified in the machineNetwork field of the install-config.yaml file. This requirement applies to cluster-managed networks such as bare metal or vSphere, and not to user-managed networks.
Procedure
- Authenticate against the Assisted Installer REST API and generate an API token for your session. The generated token is valid for 15 minutes only.
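For example, one way to set the token, assuming the ocm CLI from the prerequisites is already logged in (regenerate the token if it expires during the procedure):

$ export API_TOKEN=$(ocm token)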
Set the $API_URL variable by running the following command:

$ export API_URL=<api_url>

Replace <api_url> with the Assisted Installer API URL, for example, https://api.openshift.com.
Import the cluster by running the following commands:
Set the $CLUSTER_ID variable. Log in to the cluster and run the following command:

$ export CLUSTER_ID=$(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')

Display the $CLUSTER_ID variable output:

$ echo ${CLUSTER_ID}
Set the $CLUSTER_REQUEST variable that is used to import the cluster:
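A minimal sketch of the variable, assuming jq is installed and using the placeholders explained in the numbered notes that follow:

$ export CLUSTER_REQUEST=$(jq --null-input '{
    "api_vip_dnsname": "<api_vip>",
    "openshift_cluster_id": "<cluster_id>",
    "name": "<openshift_cluster_name>"
  }')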
1. Replace <api_vip> with the hostname for the cluster's API server. This can be the DNS domain for the API server or the IP address of the single node which the host can reach. For example, api.compute-1.example.com.
2. Replace <cluster_id> with the $CLUSTER_ID output from the previous substep.
3. Replace <openshift_cluster_name> with the plain text name for the cluster. The cluster name should match the cluster name that was set during the Day 1 cluster installation.
Import the cluster and set the $CLUSTER_ID variable. Run the following command:

$ CLUSTER_ID=$(curl "$API_URL/api/assisted-install/v2/clusters/import" -H "Authorization: Bearer ${API_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' \
  -d "$CLUSTER_REQUEST" | tee /dev/stderr | jq -r '.id')
Generate the InfraEnv resource for the cluster and set the $INFRA_ENV_ID variable by running the following commands:

- Download the pull secret file from Red Hat OpenShift Cluster Manager at console.redhat.com.
Set the $INFRA_ENV_REQUEST variable:
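A minimal sketch of the variable, assuming jq is installed; the placeholders are explained in the numbered notes that follow:

$ export INFRA_ENV_REQUEST=$(jq --null-input \
    --slurpfile pull_secret <path_to_pull_secret_file> \
    --arg ssh_pub_key "$(cat <path_to_ssh_pub_key>)" \
    --arg cluster_id "$CLUSTER_ID" '{
      "name": "<infraenv_name>",
      "pull_secret": ($pull_secret[0] | tojson),
      "cluster_id": $cluster_id,
      "ssh_authorized_key": $ssh_pub_key,
      "image_type": "<iso_image_type>"
    }')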
1. Replace <path_to_pull_secret_file> with the path to the local file containing the downloaded pull secret from Red Hat OpenShift Cluster Manager at console.redhat.com.
2. Replace <path_to_ssh_pub_key> with the path to the public SSH key required to access the host. If you do not set this value, you cannot access the host while in discovery mode.
3. Replace <infraenv_name> with the plain text name for the InfraEnv resource.
4. Replace <iso_image_type> with the ISO image type, either full-iso or minimal-iso.
Post the $INFRA_ENV_REQUEST to the /v2/infra-envs API and set the $INFRA_ENV_ID variable:

$ INFRA_ENV_ID=$(curl "$API_URL/api/assisted-install/v2/infra-envs" -H "Authorization: Bearer ${API_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' -d "$INFRA_ENV_REQUEST" | tee /dev/stderr | jq -r '.id')
Get the URL of the discovery ISO for the cluster host by running the following command:
curl -s "$API_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID" -H "Authorization: Bearer ${API_TOKEN}" | jq -r '.download_url'$ curl -s "$API_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID" -H "Authorization: Bearer ${API_TOKEN}" | jq -r '.download_url'Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.12
https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.12Copy to Clipboard Copied! Toggle word wrap Toggle overflow Download the ISO:
$ curl -L -s '<iso_url>' --output rhcos-live-minimal.iso

Replace <iso_url> with the URL for the ISO from the previous step.
- Boot the new worker host from the downloaded rhcos-live-minimal.iso.
- Get the list of hosts in the cluster that are not installed. Keep running the following command until the new host shows up:

$ curl -s "$API_URL/api/assisted-install/v2/clusters/$CLUSTER_ID" -H "Authorization: Bearer ${API_TOKEN}" | jq -r '.hosts[] | select(.status != "installed").id'

Example output
2294ba03-c264-4f11-ac08-2f1bb2f8c296
Set the $HOST_ID variable for the new host, for example:

$ HOST_ID=<host_id>

Replace <host_id> with the host ID from the previous step.
Check that the host is ready to install by running the following command:
Note: Ensure that you copy the entire command, including the complete jq expression.
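The full documented command pipes the cluster resource through a long jq expression that summarizes the host validations; a simpler check of the host status, assuming the standard v2 host endpoint, is:

$ curl -s "$API_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/hosts/$HOST_ID" \
    -H "Authorization: Bearer ${API_TOKEN}" | jq '{id, status, status_info}'

A host that is ready to install typically reports a status of known.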
When the previous command shows that the host is ready, start the installation using the /v2/infra-envs/{infra_env_id}/hosts/{host_id}/actions/install API by running the following command:
$ curl -X POST -s "$API_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/hosts/$HOST_ID/actions/install" -H "Authorization: Bearer ${API_TOKEN}"

As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the host.

Important: You must approve the CSRs to complete the installation.
Keep running the following API call to monitor the cluster installation:
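For example, the following query, assuming the same cluster endpoint used earlier in this procedure, summarizes each host's status and current installation stage:

$ curl -s "$API_URL/api/assisted-install/v2/clusters/$CLUSTER_ID" \
    -H "Authorization: Bearer ${API_TOKEN}" \
    | jq '.hosts[] | {id, status, status_info, stage: .progress.current_stage}'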
Optional: Run the following command to see all the events for the cluster:
curl -s "$API_URL/api/assisted-install/v2/events?cluster_id=$CLUSTER_ID" -H "Authorization: Bearer ${API_TOKEN}" | jq -c '.[] | {severity, message, event_time, host_id}'$ curl -s "$API_URL/api/assisted-install/v2/events?cluster_id=$CLUSTER_ID" -H "Authorization: Bearer ${API_TOKEN}" | jq -c '.[] | {severity, message, event_time, host_id}'Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Log in to the cluster and approve the pending CSRs to complete the installation.
Verification
Check that the new host was successfully added to the cluster with a status of Ready:

$ oc get nodes

Example output

NAME                          STATUS   ROLES           AGE   VERSION
control-plane-1.example.com   Ready    master,worker   56m   v1.25.0
compute-1.example.com         Ready    worker          11m   v1.25.0
11.5. Replacing a control plane node in a healthy cluster
You can replace a control plane (master) node in a healthy OpenShift Container Platform cluster that has three to five control plane nodes by adding a new control plane node and removing an existing control plane node.
If the cluster is unhealthy, you must perform additional operations before you can manage the control plane nodes. See Replacing a control plane node in an unhealthy cluster for more information.
11.5.1. Adding a new control plane node
Add the new control plane node, and verify that it is healthy. In the example below, the new node is node-5.
Prerequisites
- You are using OpenShift Container Platform 4.11 or later.
- You have installed a healthy cluster with at least three control plane nodes.
- You have created a single control plane node to be added to the cluster for Day 2.
Procedure
Retrieve pending Certificate Signing Requests (CSRs) for the new Day 2 control plane node:
$ oc get csr | grep Pending

Example output
csr-5sd59   8m19s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   <none>   Pending
csr-xzqts   10s     kubernetes.io/kubelet-serving                 system:node:node-5                                                           <none>   Pending

Approve all pending CSRs for the new node (node-5 in this example):

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

Important: You must approve the CSRs to complete the installation.
Confirm that the new control plane node is in Ready status:

$ oc get nodes
Note: The etcd operator requires a Machine custom resource (CR) that references the new node when the cluster runs with a Machine API. The Machine API is automatically activated when the cluster has three or more control plane nodes.

Create the BareMetalHost and Machine CRs and link them to the new control plane's Node CR.
Create the BareMetalHost CR with a unique .metadata.name value:
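The exact BareMetalHost definition depends on your hardware; the following is a minimal sketch for a host named node-5, in which the MAC address, boot mode, and user data secret name are illustrative assumptions:

apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: node-5
  namespace: openshift-machine-api
spec:
  automatedCleaningMode: metadata
  bootMACAddress: "00:00:00:00:00:02"   # illustrative MAC address of the host
  bootMode: UEFI
  customDeploy:
    method: install_coreos
  externallyProvisioned: true           # the host was already provisioned by the Assisted Installer
  online: true
  userData:
    name: master-user-data-managed
    namespace: openshift-machine-api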
Apply the BareMetalHost CR:

$ oc apply -f <filename>

Replace <filename> with the name of the BareMetalHost CR.
Create the Machine CR using the unique .metadata.name value:
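A minimal sketch of the Machine CR for a bare-metal host named node-5; the annotation, labels, and provider details shown are illustrative assumptions, and <cluster_name> is explained in the note that follows:

apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
  name: node-5
  namespace: openshift-machine-api
  annotations:
    # Associates the Machine with the BareMetalHost created in the previous step.
    metal3.io/BareMetalHost: openshift-machine-api/node-5
  labels:
    machine.openshift.io/cluster-api-cluster: <cluster_name>
    machine.openshift.io/cluster-api-machine-role: master
    machine.openshift.io/cluster-api-machine-type: master
spec:
  metadata: {}
  providerSpec:
    value:
      apiVersion: baremetal.cluster.k8s.io/v1alpha1
      kind: BareMetalMachineProviderSpec
      customDeploy:
        method: install_coreos
      userData:
        name: master-user-data-managed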
Replace <cluster_name> with the name of the specific cluster, for example, test-day2-1-6qv96.

To get the cluster name, run the following command:

$ oc get infrastructure cluster -o=jsonpath='{.status.infrastructureName}{"\n"}'

Apply the Machine CR:

$ oc apply -f <filename>

Replace <filename> with the name of the Machine CR.
Link BareMetalHost, Machine, and Node by running the link-machine-and-node.sh script:

Copy the link-machine-and-node.sh script below to a local machine:

Make the script executable:
$ chmod +x link-machine-and-node.sh

Run the script:

$ bash link-machine-and-node.sh node-5 node-5

Note: The first node-5 instance represents the machine, and the second represents the node.
Confirm members of etcd by executing into one of the pre-existing control plane nodes:

Open a remote shell session to the control plane node:

$ oc rsh -n openshift-etcd etcd-node-0

List the etcd members:

# etcdctl member list -w table
Monitor the etcd operator configuration process until completion:

$ oc get clusteroperator etcd

Example output (upon completion)

NAME   VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
etcd   4.11.5    True        False         False      5h54m
Confirm etcd health by running the following commands:

Open a remote shell session to the control plane node:

$ oc rsh -n openshift-etcd etcd-node-0

Check endpoint health:

# etcdctl endpoint health

Example output

192.168.111.24 is healthy: committed proposal: took = 10.383651ms
192.168.111.26 is healthy: committed proposal: took = 11.297561ms
192.168.111.25 is healthy: committed proposal: took = 13.892416ms
192.168.111.28 is healthy: committed proposal: took = 11.870755ms
Verify that all nodes are ready:
$ oc get nodes

Verify that the cluster Operators are all available:
$ oc get ClusterOperators

Verify that the cluster version is correct:
$ oc get ClusterVersion

Example output

NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.5    True        False         5h57m   Cluster version is 4.11.5
11.5.2. Removing the existing control plane node
Remove the control plane node that you are replacing. This is node-0 in the example below.
Prerequisites
- You have added a new healthy control plane node.
Procedure
Delete the BareMetalHost CR of the pre-existing control plane node:

$ oc delete bmh -n openshift-machine-api node-0

Confirm that the machine is unhealthy:
$ oc get machine -A

Delete the Machine CR:

$ oc delete machine -n openshift-machine-api node-0
machine.machine.openshift.io "node-0" deleted

Confirm removal of the Node CR:

$ oc get nodes

Check the etcd-operator logs to confirm the status of the etcd cluster:

$ oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf

Example output
E0927 07:53:10.597523       1 base_controller.go:272] ClusterMemberRemovalController reconciliation failed: cannot remove member: 192.168.111.23 because it is reported as healthy but it doesn't have a machine nor a node resource

Remove the physical machine to allow the etcd operator to reconcile the cluster members:

Open a remote shell session to the control plane node:
$ oc rsh -n openshift-etcd etcd-node-1

Monitor the progress of etcd operator reconciliation by checking members and endpoint health:

# etcdctl member list -w table; etcdctl endpoint health
11.6. Replacing a control plane node in an unhealthy cluster
You can replace an unhealthy control plane (master) node in an OpenShift Container Platform cluster that has three to five control plane nodes by removing the unhealthy control plane node and adding a new one.
For details on replacing a control plane node in a healthy cluster, see Replacing a control plane node in a healthy cluster.
11.6.1. Removing an unhealthy control plane node
Remove the unhealthy control plane node from the cluster. This is node-0 in the example below.
Prerequisites
- You have installed a cluster with at least three control plane nodes.
- At least one of the control plane nodes is not ready.
Procedure
Check the node status to confirm that a control plane node is not ready:
$ oc get nodes

Confirm in the etcd-operator logs that the cluster is unhealthy:

$ oc logs -n openshift-etcd-operator deployment/etcd-operator

Example output
E0927 08:24:23.983733       1 base_controller.go:272] DefragController reconciliation failed: cluster is unhealthy: 2 of 3 members are available, node-0 is unhealthy

Confirm the etcd members by running the following commands:

Open a remote shell session to the control plane node:

$ oc rsh -n openshift-etcd etcd-node-1

List the etcd members:

# etcdctl member list -w table
Confirm that etcdctl endpoint health reports an unhealthy member of the cluster:

# etcdctl endpoint health

Example output
{"level":"warn","ts":"2022-09-27T08:25:35.953Z","logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000680380/192.168.111.25","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\""} 192.168.111.28 is healthy: committed proposal: took = 12.465641ms 192.168.111.26 is healthy: committed proposal: took = 12.297059ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster{"level":"warn","ts":"2022-09-27T08:25:35.953Z","logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000680380/192.168.111.25","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\""} 192.168.111.28 is healthy: committed proposal: took = 12.465641ms 192.168.111.26 is healthy: committed proposal: took = 12.297059ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy clusterCopy to Clipboard Copied! Toggle word wrap Toggle overflow Remove the unhealthy control plane by deleting the
Machinecustom resource (CR):oc delete machine -n openshift-machine-api node-0
$ oc delete machine -n openshift-machine-api node-0Copy to Clipboard Copied! Toggle word wrap Toggle overflow NoteThe
MachineandNodeCRs might not be deleted because they are protected by finalizers. If this occurs, you must delete theMachineCR manually by removing all finalizers.Verify in the
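For example, one way to clear the finalizers manually, assuming the Machine CR is named node-0, is to patch its metadata:

$ oc patch machine node-0 -n openshift-machine-api \
    --type merge --patch '{"metadata":{"finalizers":null}}'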
Verify in the etcd-operator logs whether the unhealthy machine has been removed:

$ oc logs -n openshift-etcd-operator deployment/etcd-operator

Example output
I0927 08:58:41.249222       1 machinedeletionhooks.go:135] skip removing the deletion hook from machine node-0 since its member is still present with any of: [{InternalIP } {InternalIP 192.168.111.25}]

If you see that removal has been skipped, as in the above log example, manually remove the unhealthy etcd member:

Open a remote shell session to the control plane node:

$ oc rsh -n openshift-etcd etcd-node-1

List the etcd members:

# etcdctl member list -w table
etcdctlendpoint health reports an unhealthy member of the cluster:etcdctl endpoint health
# etcdctl endpoint healthCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
{"level":"warn","ts":"2022-09-27T10:31:07.227Z","logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0000d6e00/192.168.111.25","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\""} 192.168.111.28 is healthy: committed proposal: took = 13.038278ms 192.168.111.26 is healthy: committed proposal: took = 12.950355ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster{"level":"warn","ts":"2022-09-27T10:31:07.227Z","logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0000d6e00/192.168.111.25","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\""} 192.168.111.28 is healthy: committed proposal: took = 13.038278ms 192.168.111.26 is healthy: committed proposal: took = 12.950355ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy clusterCopy to Clipboard Copied! Toggle word wrap Toggle overflow Remove the unhealthy
etcdctlmember from the cluster:etcdctl member remove 61e2a86084aafa62
# etcdctl member remove 61e2a86084aafa62Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Member 61e2a86084aafa62 removed from cluster 6881c977b97990d7
Member 61e2a86084aafa62 removed from cluster 6881c977b97990d7Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the unhealthy
etcdctlmember was removed by running the following command:etcdctl member list -w table
# etcdctl member list -w tableCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
11.6.2. Adding a new control plane node
Add a new control plane node to replace the unhealthy node that you removed. In the example below, the new node is node-5.
Prerequisites
- You have installed a control plane node for Day 2. For more information, see Adding hosts with the web console or Adding hosts with the API.
Procedure
Retrieve pending Certificate Signing Requests (CSRs) for the new Day 2 control plane node:
$ oc get csr | grep Pending

Example output

csr-5sd59   8m19s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   <none>   Pending
csr-xzqts   10s     kubernetes.io/kubelet-serving                 system:node:node-5                                                           <none>   Pending

Approve all pending CSRs for the new node (node-5 in this example):

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

Note: You must approve the CSRs to complete the installation.
Confirm that the control plane node is in Ready status:

$ oc get nodes

The etcd operator requires a Machine CR referencing the new node when the cluster runs with a Machine API. The Machine API is automatically activated when the cluster has three control plane nodes.

Create the BareMetalHost and Machine CRs and link them to the new control plane's Node CR.

Important: Boot-it-yourself will not create BareMetalHost and Machine CRs, so you must create them. Failure to create the BareMetalHost and Machine CRs will generate errors in the etcd operator.
Create the BareMetalHost CR with a unique .metadata.name value:

Apply the BareMetalHost CR:

$ oc apply -f <filename>

Replace <filename> with the name of the BareMetalHost CR.
Create the Machine CR using the unique .metadata.name value:

Apply the Machine CR:

$ oc apply -f <filename>

Replace <filename> with the name of the Machine CR.
Link BareMetalHost, Machine, and Node by running the link-machine-and-node.sh script:

Copy the link-machine-and-node.sh script below to a local machine:

Make the script executable:

$ chmod +x link-machine-and-node.sh

Run the script:

$ bash link-machine-and-node.sh node-5 node-5

Note: The first node-5 instance represents the machine, and the second represents the node.
Confirm members of etcd by running the following commands:

Open a remote shell session to the control plane node:

$ oc rsh -n openshift-etcd etcd-node-1

List the etcd members:

# etcdctl member list -w table
Monitor the etcd operator configuration process until completion:

$ oc get clusteroperator etcd

Example output (upon completion)

NAME   VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
etcd   4.11.5    True        False         False      22h
Confirm etcd health by running the following commands:

Open a remote shell session to the control plane node:

$ oc rsh -n openshift-etcd etcd-node-1

Check endpoint health:

# etcdctl endpoint health

Example output

192.168.111.26 is healthy: committed proposal: took = 9.105375ms
192.168.111.28 is healthy: committed proposal: took = 9.15205ms
192.168.111.29 is healthy: committed proposal: took = 10.277577ms
Confirm the health of the nodes:
$ oc get nodes

Verify that the cluster Operators are all available:
$ oc get ClusterOperators

Verify that the cluster version is correct:
$ oc get ClusterVersion

Example output

NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.5    True        False         22h     Cluster version is 4.11.5