Backup and restore
Backing up and restoring your OpenShift Container Platform cluster
Abstract
Chapter 1. Backup and restore
1.1. Overview of backup and restore operations in OpenShift Container Platform
As a cluster administrator, you might need to stop an OpenShift Container Platform cluster for a period and restart it later. Some reasons for restarting a cluster are that you need to perform maintenance on a cluster or want to reduce resource costs. In OpenShift Container Platform, you can perform a graceful shutdown of a cluster so that you can easily restart the cluster later.
You must back up etcd data before shutting down a cluster; etcd is the key-value store for OpenShift Container Platform, which persists the state of all resource objects. An etcd backup plays a crucial role in disaster recovery. In OpenShift Container Platform, you can also replace an unhealthy etcd member.
When you want to get your cluster running again, restart the cluster gracefully.
A cluster's certificates expire one year after the installation date. You can shut down a cluster and expect it to restart gracefully while the certificates are still valid. Although the cluster automatically recovers the expired control plane certificates, you must still approve the certificate signing requests (CSRs).
You might run into several situations where OpenShift Container Platform does not work as expected, such as:
- You have a cluster that is not functional after the restart because of unexpected conditions, such as node failure, or network connectivity issues.
- You have deleted something critical in the cluster by mistake.
- You have lost the majority of your control plane hosts, leading to etcd quorum loss.
You can always recover from a disaster situation by restoring your cluster to its previous state using the saved etcd snapshots.
1.2. Application backup and restore operations
As a cluster administrator, you can back up and restore applications running on OpenShift Container Platform by using the OpenShift API for Data Protection (OADP).
OADP backs up and restores Kubernetes resources and internal images, at the granularity of a namespace, by using Velero 1.7. OADP backs up and restores persistent volumes (PVs) by using snapshots or Restic. For details, see OADP features.
1.2.1. OADP requirements
OADP has the following requirements:
- You must be logged in as a user with a cluster-admin role.
- You must have object storage for storing backups, such as one of the following storage types:
- Amazon Web Services
- Microsoft Azure
- Google Cloud Platform
- Multicloud Object Gateway
- S3-compatible object storage, such as Noobaa or Minio
The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
To back up PVs with snapshots, you must have cloud storage that has a native snapshot API or supports Container Storage Interface (CSI) snapshots, such as the following providers:
- Amazon Web Services
- Microsoft Azure
- Google Cloud Platform
- CSI snapshot-enabled cloud storage, such as Ceph RBD or Ceph FS
If you do not want to back up PVs by using snapshots, you can use Restic, which is installed by the OADP Operator by default.
1.2.2. Backing up and restoring applications
You back up applications by creating a Backup custom resource (CR). You can configure the following backup options:
- Backup hooks to run commands before or after the backup operation
- Scheduled backups
- Restic backups
You restore applications by creating a Restore CR. You can configure restore hooks to run commands in init containers or in the application container during the restore operation.
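For orientation, the following is a minimal sketch of what a Backup CR and a matching Restore CR might look like. The names and the included namespace are placeholders, not values defined elsewhere in this guide; see the OADP backup and restore procedures for the full set of supported fields.

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-example            # placeholder name
  namespace: openshift-adp        # namespace where the OADP Operator runs
spec:
  includedNamespaces:
    - <application_namespace>     # placeholder: namespace to back up
---
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-example           # placeholder name
  namespace: openshift-adp
spec:
  backupName: backup-example      # references the Backup CR above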
Chapter 2. Shutting down the cluster gracefully
This document describes the process to gracefully shut down your cluster. You might need to temporarily shut down your cluster for maintenance reasons, or to save on resource costs.
2.1. Prerequisites
- Take an etcd backup prior to shutting down the cluster.
2.2. Shutting down the cluster
You can shut down your cluster in a graceful manner so that it can be restarted at a later date.
You can shut down a cluster until a year from the installation date and expect it to restart gracefully. After a year from the installation date, the cluster certificates expire.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.

Important: It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues when restarting the cluster.
Procedure
- If you are shutting the cluster down for an extended period, determine the date on which certificates expire:

$ oc -n openshift-kube-apiserver-operator get secret kube-apiserver-to-kubelet-signer -o jsonpath='{.metadata.annotations.auth\.openshift\.io/certificate-not-after}'

Example output

2022-08-05T14:37:50Z

To ensure that the cluster can restart gracefully, plan to restart it on or before the specified date. As the cluster restarts, the process might require you to manually approve the pending certificate signing requests (CSRs) to recover kubelet certificates.
- Shut down all of the nodes in the cluster. You can do this from your cloud provider's web console, or run the following loop:

$ for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/${node} -- chroot /host shutdown -h 1; done

-h 1 indicates how long, in minutes, this process lasts before the control-plane nodes are shut down. For large-scale clusters with 10 nodes or more, set to 10 minutes or longer to make sure all the compute nodes have time to shut down first.

Shutting down the nodes using one of these methods allows pods to terminate gracefully, which reduces the chance for data corruption.

Note: Adjust the shutdown time to be longer for large-scale clusters:

$ for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/${node} -- chroot /host shutdown -h 10; done

Note: It is not necessary to drain control plane nodes (also known as the master nodes) of the standard pods that ship with OpenShift Container Platform prior to shutdown.
Cluster administrators are responsible for ensuring a clean restart of their own workloads after the cluster is restarted. If you drained control plane nodes prior to shutdown because of custom workloads, you must mark the control plane nodes as schedulable before the cluster will be functional again after restart.
- Shut off any cluster dependencies that are no longer needed, such as external storage or an LDAP server. Be sure to consult your vendor’s documentation before doing so.
Chapter 3. Restarting the cluster gracefully
This document describes the process to restart your cluster after a graceful shutdown.
Even though the cluster is expected to be functional after the restart, the cluster might not recover due to unexpected conditions, for example:
- etcd data corruption during shutdown
- Node failure due to hardware
- Network connectivity issues
If your cluster fails to recover, follow the steps to restore to a previous cluster state.
3.1. Prerequisites
- You have gracefully shut down your cluster.
3.2. Restarting the cluster
You can restart your cluster after it has been shut down gracefully.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- This procedure assumes that you gracefully shut down the cluster.
Procedure
- Power on any cluster dependencies, such as external storage or an LDAP server.
- Start all cluster machines.

Use the appropriate method for your cloud environment to start the machines, for example, from your cloud provider's web console.

Wait approximately 10 minutes before continuing to check the status of control plane nodes (also known as the master nodes).
- Verify that all control plane nodes are ready:

$ oc get nodes -l node-role.kubernetes.io/master

The control plane nodes are ready if the status is Ready, as shown in the following output:

NAME                           STATUS   ROLES    AGE   VERSION
ip-10-0-168-251.ec2.internal   Ready    master   75m   v1.20.0
ip-10-0-170-223.ec2.internal   Ready    master   75m   v1.20.0
ip-10-0-211-16.ec2.internal    Ready    master   75m   v1.20.0

If the control plane nodes are not ready, check whether there are any pending certificate signing requests (CSRs) that must be approved.
Get the list of current CSRs:

$ oc get csr

Review the details of a CSR to verify that it is valid:

$ oc describe csr <csr_name>

<csr_name> is the name of a CSR from the list of current CSRs.

Approve each valid CSR:

$ oc adm certificate approve <csr_name>
- After the control plane nodes are ready, verify that all worker nodes are ready:

$ oc get nodes -l node-role.kubernetes.io/worker

The worker nodes are ready if the status is Ready, as shown in the following output:

NAME                           STATUS   ROLES    AGE   VERSION
ip-10-0-179-95.ec2.internal    Ready    worker   64m   v1.20.0
ip-10-0-182-134.ec2.internal   Ready    worker   64m   v1.20.0
ip-10-0-250-100.ec2.internal   Ready    worker   64m   v1.20.0

If the worker nodes are not ready, check whether there are any pending certificate signing requests (CSRs) that must be approved.
Get the list of current CSRs:

$ oc get csr

Review the details of a CSR to verify that it is valid:

$ oc describe csr <csr_name>

<csr_name> is the name of a CSR from the list of current CSRs.

Approve each valid CSR:

$ oc adm certificate approve <csr_name>
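If many CSRs are pending, you can approve them in bulk. The following one-liner is a convenience sketch rather than part of the documented procedure, so review the pending requests before approving them:

$ oc get csr -o name | xargs oc adm certificate approve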
- Verify that the cluster started properly.

Check that there are no degraded cluster Operators:

$ oc get clusteroperators

Check that there are no cluster Operators with the DEGRADED condition set to True.

Check that all nodes are in the Ready state:

$ oc get nodes

Check that the status for all nodes is Ready.
If the cluster did not start properly, you might need to restore your cluster using an etcd backup.
Chapter 4. Application backup and restore
4.1. OADP features and plug-ins
OpenShift API for Data Protection (OADP) features provide options for backing up and restoring applications.
The default plug-ins enable Velero to integrate with certain cloud providers and to back up and restore OpenShift Container Platform resources.
4.1.1. OADP features
OpenShift API for Data Protection (OADP) supports the following features:
- Backup
You can back up all resources in your cluster or you can filter the resources by type, namespace, or label.
OADP backs up Kubernetes objects and internal images by saving them as an archive file on object storage. OADP backs up persistent volumes (PVs) by creating snapshots with the native cloud snapshot API or with the Container Storage Interface (CSI). For cloud providers that do not support snapshots, OADP backs up resources and PV data with Restic.
- Restore
- You can restore resources and PVs from a backup. You can restore all objects in a backup or filter the restored objects by namespace, PV, or label.
- Schedule
- You can schedule backups at specified intervals.
- Hooks
- You can use hooks to run commands in a container on a pod, for example, fsfreeze to freeze a file system. You can configure a hook to run before or after a backup or restore. Restore hooks can run in an init container or in the application container.
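As a sketch of how hooks are expressed, a Backup CR declares pre and post hooks under spec.hooks. The structure below follows the upstream Velero backup hook format; the hook name, namespace, container, and fsfreeze path are placeholders rather than values taken from this guide.

spec:
  hooks:
    resources:
      - name: freeze-hook                  # hook name (placeholder)
        includedNamespaces:
          - <application_namespace>        # placeholder namespace
        pre:
          - exec:
              container: <container_name>  # container to run the command in
              command:
                - /sbin/fsfreeze
                - --freeze
                - /var/lib/data            # placeholder mount path
        post:
          - exec:
              container: <container_name>
              command:
                - /sbin/fsfreeze
                - --unfreeze
                - /var/lib/data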
4.1.2. OADP plug-ins
The OpenShift API for Data Protection (OADP) provides default Velero plug-ins that are integrated with storage providers to support backup and snapshot operations. You can create custom plug-ins based on the Velero plug-ins.
OADP also provides plug-ins for OpenShift Container Platform resource backups and Container Storage Interface (CSI) snapshots.
| OADP plug-in | Function | Storage location |
|---|---|---|
| aws | Backs up and restores Kubernetes objects by using object store. | AWS S3 |
| aws | Backs up and restores volumes by using snapshots. | AWS EBS |
| azure | Backs up and restores Kubernetes objects by using object store. | Microsoft Azure Blob storage |
| azure | Backs up and restores volumes by using snapshots. | Microsoft Azure Managed Disks |
| gcp | Backs up and restores Kubernetes objects by using object store. | Google Cloud Storage |
| gcp | Backs up and restores volumes by using snapshots. | Google Compute Engine Disks |
| openshift | Backs up and restores OpenShift Container Platform resources by using object store. [1] | Object store |
| csi | Backs up and restores volumes by using CSI snapshots. [2] | Cloud storage that supports CSI snapshots |
1. Mandatory.
2. The csi plug-in uses the Velero CSI beta snapshot API.
4.1.3. About OADP Velero plug-ins
You can configure two types of plug-ins when you install Velero:
- Default cloud provider plug-ins
- Custom plug-ins
Both types of plug-in are optional, but most users configure at least one cloud provider plug-in.
4.1.3.1. Default Velero cloud provider plug-ins
You can install any of the following default Velero cloud provider plug-ins when you configure the oadp_v1alpha1_dpa.yaml file during deployment:
- aws (Amazon Web Services)
- gcp (Google Cloud Platform)
- azure (Microsoft Azure)
- openshift (OpenShift Velero plug-in)
- csi (Container Storage Interface)
- kubevirt (KubeVirt)
You specify the desired default plug-ins in the oadp_v1alpha1_dpa.yaml file during deployment.
Example file
The following .yaml file installs the openshift, aws, azure, and gcp plug-ins:
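A minimal sketch of such a file, assuming the default openshift-adp namespace and a placeholder name, might look like the following; the structure follows the oadp.openshift.io/v1alpha1 DataProtectionApplication schema:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample            # placeholder name
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
        - azure
        - gcp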
4.1.3.2. Custom Velero plug-ins
You can install a custom Velero plug-in by specifying the plug-in image and name when you configure the oadp_v1alpha1_dpa.yaml file during deployment.
You specify the desired custom plug-ins in the oadp_v1alpha1_dpa.yaml file during deployment.
Example file
The following .yaml file installs the default openshift, azure, and gcp plug-ins and a custom plug-in that has the name custom-plugin-example and the image quay.io/example-repo/custom-velero-plugin:
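A minimal sketch of such a file might look like the following. The metadata values are placeholders; only the custom-plugin-example name and the quay.io/example-repo/custom-velero-plugin image come from the description above, and the customPlugins field follows the OADP DataProtectionApplication schema:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample            # placeholder name
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - azure
        - gcp
      customPlugins:
        - name: custom-plugin-example
          image: quay.io/example-repo/custom-velero-plugin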
4.2. Installing and configuring OADP
4.2.1. About installing OADP
As a cluster administrator, you install the OpenShift API for Data Protection (OADP) by installing the OADP Operator. The OADP Operator installs Velero 1.7.
To back up Kubernetes resources and internal images, you must have object storage as a backup location, such as one of the following storage types:
- Amazon Web Services
- Microsoft Azure
- Google Cloud Platform
- Multicloud Object Gateway
- S3-compatible object storage, such as Noobaa or Minio
The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
You can back up persistent volumes (PVs) by using snapshots or Restic.
To back up PVs with snapshots, you must have a cloud provider that supports either a native snapshot API or Container Storage Interface (CSI) snapshots, such as one of the following cloud providers:
- Amazon Web Services
- Microsoft Azure
- Google Cloud Platform
- CSI snapshot-enabled cloud provider, such as OpenShift Container Storage
If your cloud provider does not support snapshots or if your storage is NFS, you can back up applications with Restic.
You create a Secret object for your storage provider credentials and then you install the Data Protection Application.
Additional resources
- Overview of backup locations and snapshot locations in the Velero documentation.
4.2.2. Installing and configuring the OpenShift API for Data Protection with Amazon Web Services
You install the OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) by installing the OADP Operator, configuring AWS for Velero, and then installing the Data Protection Application.
The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details.
4.2.2.1. Installing the OADP Operator
You install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.7 by using Operator Lifecycle Manager (OLM).
The OADP Operator installs Velero 1.7.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
Procedure
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Use the Filter by keyword field to find the OADP Operator.
- Select the OADP Operator and click Install.
- Click Install to install the Operator in the openshift-adp project.
- Click Operators → Installed Operators to verify the installation.
4.2.2.2. Configuring Amazon Web Services
You configure Amazon Web Services (AWS) for the OpenShift API for Data Protection (OADP).
Prerequisites
- You must have the AWS CLI installed.
Procedure
- Set the BUCKET variable:

$ BUCKET=<your_bucket>

- Set the REGION variable:

$ REGION=<your_region>

- Create an AWS S3 bucket:
$ aws s3api create-bucket \
  --bucket $BUCKET \
  --region $REGION \
  --create-bucket-configuration LocationConstraint=$REGION

us-east-1 does not support a LocationConstraint. If your region is us-east-1, omit --create-bucket-configuration LocationConstraint=$REGION.
- Create an IAM user:

$ aws iam create-user --user-name velero

If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster.
- Create a velero-policy.json file.
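The following heredoc is a representative sketch based on the permissions documented for the upstream Velero AWS plug-in; verify the exact policy against the OADP documentation for your version before using it:

$ cat > velero-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeVolumes",
                "ec2:DescribeSnapshots",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:CreateSnapshot",
                "ec2:DeleteSnapshot"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObject",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::${BUCKET}/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": [
                "arn:aws:s3:::${BUCKET}"
            ]
        }
    ]
}
EOF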
- Attach the policies to give the velero user the necessary permissions:

$ aws iam put-user-policy \
  --user-name velero \
  --policy-name velero \
  --policy-document file://velero-policy.json

- Create an access key for the velero user:

$ aws iam create-access-key --user-name velero

- Create a credentials-velero file:

$ cat << EOF > ./credentials-velero
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
EOF

You use the credentials-velero file to create a Secret object for AWS before you install the Data Protection Application.
4.2.2.3. Creating a secret for backup and snapshot locations
You create a Secret object for the backup and snapshot locations if they use the same credentials.
The default name of the Secret is cloud-credentials.
Prerequisites
- Your object storage and cloud storage must use the same credentials.
- You must configure object storage for Velero.
- You must create a credentials-velero file for the object storage in the appropriate format.

Note: The DataProtectionApplication custom resource (CR) requires a Secret for installation. If no spec.backupLocations.credential.name value is specified, the default name is used.

If you do not want to specify the backup locations or the snapshot locations, you must create a Secret with the default name by using an empty credentials-velero file.
Procedure
- Create a Secret with the default name:

$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
4.2.2.3.1. Configuring secrets for different backup and snapshot location credentials
If your backup and snapshot locations use different credentials, you create separate profiles in the credentials-velero file.
Then, you create a Secret object and specify the profiles in the DataProtectionApplication custom resource (CR).
Procedure
- Create a credentials-velero file with separate profiles for the backup and snapshot locations, as in the following example.
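A sketch of such a file follows; the profile names backupStorage and volumeSnapshot are illustrative and must match the profile values you reference in the DataProtectionApplication CR:

[backupStorage]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>

[volumeSnapshot]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>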
- Create a Secret object with the credentials-velero file:

$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero

- Add the profiles to the DataProtectionApplication CR, as in the following example.
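The following fragment is a sketch of how the profiles might be referenced; the bucket, prefix, and regions are placeholders, and the profile names must match the credentials-velero file above:

spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>
        config:
          region: <region>
          profile: "backupStorage"
        credential:
          key: cloud
          name: cloud-credentials
  snapshotLocations:
    - velero:
        provider: aws
        config:
          region: <region>
          profile: "volumeSnapshot"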
4.2.2.4. Configuring the Data Protection Application
You can configure Velero resource allocations and enable self-signed CA certificates.
4.2.2.4.1. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
- Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example.
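A sketch of the relevant fragment follows; the CPU and memory values are illustrative, not required values:

spec:
  configuration:
    velero:
      podConfig:
        resourceAllocations:
          limits:
            cpu: "1"
            memory: 512Mi
          requests:
            cpu: 500m
            memory: 256Mi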
4.2.2.4.2. Enabling self-signed CA certificates
You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
- Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest, as in the following example.
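A sketch of the relevant fragment follows; the bucket, prefix, and region are placeholders, and the caCert value is the Base64-encoded certificate string for your object storage endpoint:

spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>
          caCert: <base64_encoded_cert_string>
        config:
          region: <region>
          insecureSkipTLSVerify: "false"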
4.2.2.5. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
- If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.
- If the backup and snapshot locations use different credentials, you must create a Secret with the default name, cloud-credentials, which contains separate profiles for the backup and snapshot location credentials.

Note: If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
- Click YAML View and update the parameters of the DataProtectionApplication manifest, noting the following points. A sketch of the manifest appears after these notes.
  - The openshift plug-in is mandatory in order to back up and restore namespaces on an OpenShift Container Platform cluster.
  - Set the Restic enable value to false if you want to disable the Restic installation. Restic deploys a daemon set, which means that each worker node has Restic pods running. You configure Restic for backups by adding spec.defaultVolumesToRestic: true to the Backup CR.
  - Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
  - Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
  - Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials, is used. If you specify a custom name, the custom name is used for the backup location.
  - You do not need to specify a snapshot location if you use CSI snapshots or Restic to back up PVs.
  - The snapshot location must be in the same region as the PVs.
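The following manifest is a sketch that matches the notes above. The bucket, prefix, and regions are placeholders, and the structure follows the oadp.openshift.io/v1alpha1 DataProtectionApplication schema, so adjust it to your environment:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample                    # placeholder name
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift                   # mandatory plug-in
        - aws
    restic:
      enable: true                    # set to false to disable the Restic installation
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>       # backup storage location
          prefix: <prefix>            # prefix for Velero backups
        config:
          region: <region>
          profile: "default"
        credential:
          key: cloud
          name: cloud-credentials     # Secret name; omit to use the default
  snapshotLocations:
    - velero:
        provider: aws
        config:
          region: <region>            # must be in the same region as the PVs
          profile: "default"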
- Click Create.
- Verify the installation by viewing the OADP resources:

$ oc get all -n openshift-adp
4.2.2.5.1. Enabling CSI in the DataProtectionApplication CR
You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.
Prerequisites
- The cloud provider must support CSI snapshots.
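Broadly, the change is to add the csi plug-in and the EnableCSI feature flag to the Velero configuration in the DataProtectionApplication CR. The fragment below is a sketch of that change; confirm the exact fields against the OADP documentation for your version:

spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - csi                  # adds the Velero CSI plug-in
      featureFlags:
        - EnableCSI            # enables CSI snapshot support in Velero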
4.2.3. Installing and configuring the OpenShift API for Data Protection with Microsoft Azure
You install the OpenShift API for Data Protection (OADP) with Microsoft Azure by installing the OADP Operator, configuring Azure for Velero, and then installing the Data Protection Application.
The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details.
4.2.3.1. Installing the OADP Operator
You install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.7 by using Operator Lifecycle Manager (OLM).
The OADP Operator installs Velero 1.7.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
Procedure
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Use the Filter by keyword field to find the OADP Operator.
- Select the OADP Operator and click Install.
- Click Install to install the Operator in the openshift-adp project.
- Click Operators → Installed Operators to verify the installation.
4.2.3.2. Configuring Microsoft Azure
You configure Microsoft Azure for the OpenShift API for Data Protection (OADP).
Prerequisites
- You must have the Azure CLI installed.
Procedure
- Log in to Azure:

$ az login

- Set the AZURE_RESOURCE_GROUP variable:

$ AZURE_RESOURCE_GROUP=Velero_Backups

- Create an Azure resource group:

$ az group create -n $AZURE_RESOURCE_GROUP --location CentralUS

Specify your location.
- Set the AZURE_STORAGE_ACCOUNT_ID variable:

$ AZURE_STORAGE_ACCOUNT_ID="velero$(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')"

- Create an Azure storage account.
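A representative sketch of the command, based on the storage account settings documented for the upstream Velero Azure plug-in, is:

$ az storage account create \
  --name $AZURE_STORAGE_ACCOUNT_ID \
  --resource-group $AZURE_RESOURCE_GROUP \
  --sku Standard_GRS \
  --encryption-services blob \
  --https-only true \
  --kind BlobStorage \
  --access-tier Hot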
- Set the BLOB_CONTAINER variable:

$ BLOB_CONTAINER=velero

- Create an Azure Blob storage container:

$ az storage container create \
  -n $BLOB_CONTAINER \
  --public-access off \
  --account-name $AZURE_STORAGE_ACCOUNT_ID

- Obtain the storage account access key:

$ AZURE_STORAGE_ACCOUNT_ACCESS_KEY=`az storage account keys list \
  --account-name $AZURE_STORAGE_ACCOUNT_ID \
  --query "[?keyName == 'key1'].value" -o tsv`
- Create a credentials-velero file. The AZURE_STORAGE_ACCOUNT_ACCESS_KEY value is mandatory: you cannot back up internal images if the credentials-velero file contains only the service principal credentials.

You use the credentials-velero file to create a Secret object for Azure before you install the Data Protection Application.
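A sketch of the file follows. The service principal values (subscription, tenant, client ID, and client secret) come from the Azure AD application you use for Velero and are placeholders here; the variable names follow the upstream Velero Azure plug-in:

$ cat << EOF > ./credentials-velero
AZURE_SUBSCRIPTION_ID=<azure_subscription_id>
AZURE_TENANT_ID=<azure_tenant_id>
AZURE_CLIENT_ID=<azure_client_id>
AZURE_CLIENT_SECRET=<azure_client_secret>
AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=${AZURE_STORAGE_ACCOUNT_ACCESS_KEY}
AZURE_CLOUD_NAME=AzurePublicCloud
EOF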
4.2.3.3. Creating a secret for backup and snapshot locations
You create a Secret object for the backup and snapshot locations if they use the same credentials.
The default name of the Secret is cloud-credentials-azure.
Prerequisites
- Your object storage and cloud storage must use the same credentials.
- You must configure object storage for Velero.
- You must create a credentials-velero file for the object storage in the appropriate format.

Note: The DataProtectionApplication custom resource (CR) requires a Secret for installation. If no spec.backupLocations.credential.name value is specified, the default name is used.

If you do not want to specify the backup locations or the snapshot locations, you must create a Secret with the default name by using an empty credentials-velero file.
Procedure
- Create a Secret with the default name:

$ oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero
The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
4.2.3.3.1. Configuring secrets for different backup and snapshot location credentials
If your backup and snapshot locations use different credentials, you create two Secret objects:
- Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR).
- Snapshot location Secret with the default name, cloud-credentials-azure. This Secret is not specified in the DataProtectionApplication CR.
Procedure
- Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider.
- Create a Secret for the snapshot location with the default name:

$ oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero

- Create a credentials-velero file for the backup location in the appropriate format for your object storage.
- Create a Secret for the backup location with a custom name:

$ oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero

- Add the Secret with the custom name to the spec.backupLocations.credential block of the DataProtectionApplication CR so that the backup location uses the custom-named Secret.
4.2.3.4. Configuring the Data Protection Application
You can configure Velero resource allocations and enable self-signed CA certificates.
4.2.3.4.1. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
- Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest.
4.2.3.4.2. Enabling self-signed CA certificates
You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
- Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest.
4.2.3.5. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
- If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-azure.
- If the backup and snapshot locations use different credentials, you must create two Secrets:
  - Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR.
  - Secret with the default name, cloud-credentials-azure, for the snapshot location. This Secret is not referenced in the DataProtectionApplication CR.

Note: If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
- Click YAML View and update the parameters of the DataProtectionApplication manifest, noting the following points. A sketch of the manifest appears after these notes.
  - The openshift plug-in is mandatory in order to back up and restore namespaces on an OpenShift Container Platform cluster.
  - Set the Restic enable value to false if you want to disable the Restic installation. Restic deploys a daemon set, which means that each worker node has Restic pods running. You configure Restic for backups by adding spec.defaultVolumesToRestic: true to the Backup CR.
  - Specify the Azure resource group.
  - Specify the Azure storage account ID.
  - Specify the Azure subscription ID.
  - If you do not specify a credential name, the default name, cloud-credentials-azure, is used. If you specify a custom name, the custom name is used for the backup location.
  - Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
  - Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
  - You do not need to specify a snapshot location if you use CSI snapshots or Restic to back up PVs.
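The following manifest is a sketch that matches the notes above. The bucket, prefix, and Azure identifiers are placeholders, and the structure follows the oadp.openshift.io/v1alpha1 DataProtectionApplication schema, so adjust it to your environment:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample                              # placeholder name
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift                             # mandatory plug-in
        - azure
    restic:
      enable: true                              # set to false to disable the Restic installation
  backupLocations:
    - velero:
        provider: azure
        default: true
        objectStorage:
          bucket: <bucket_name>                 # backup storage location
          prefix: <prefix>                      # prefix for Velero backups
        config:
          resourceGroup: <azure_resource_group>
          storageAccount: <azure_storage_account_id>
          subscriptionId: <azure_subscription_id>
        credential:
          key: cloud
          name: cloud-credentials-azure         # Secret name; omit to use the default
  snapshotLocations:
    - velero:
        provider: azure
        config:
          resourceGroup: <azure_resource_group>
          subscriptionId: <azure_subscription_id>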
- Click Create.
- Verify the installation by viewing the OADP resources:

$ oc get all -n openshift-adp
4.2.3.5.1. Enabling CSI in the DataProtectionApplication CR
You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.
Prerequisites
- The cloud provider must support CSI snapshots.
4.2.4. Installing and configuring the OpenShift API for Data Protection with Google Cloud Platform
You install the OpenShift API for Data Protection (OADP) with Google Cloud Platform (GCP) by installing the OADP Operator, configuring GCP for Velero, and then installing the Data Protection Application.
The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details.
4.2.4.1. Installing the OADP Operator
You install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.7 by using Operator Lifecycle Manager (OLM).
The OADP Operator installs Velero 1.7.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
Procedure
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Use the Filter by keyword field to find the OADP Operator.
- Select the OADP Operator and click Install.
- Click Install to install the Operator in the openshift-adp project.
- Click Operators → Installed Operators to verify the installation.
4.2.4.2. Configuring Google Cloud Platform
You configure Google Cloud Platform (GCP) for the OpenShift API for Data Protection (OADP).
Prerequisites
- You must have the gcloud and gsutil CLI tools installed. See the Google Cloud documentation for details.
Procedure
- Log in to GCP:

$ gcloud auth login

- Set the BUCKET variable:

$ BUCKET=<bucket>

Specify your bucket name.

- Create the storage bucket:

$ gsutil mb gs://$BUCKET/
- Set the PROJECT_ID variable to your active project:

$ PROJECT_ID=$(gcloud config get-value project)

- Create a service account:

$ gcloud iam service-accounts create velero \
  --display-name "Velero service account"

- List your service accounts:

$ gcloud iam service-accounts list

- Set the SERVICE_ACCOUNT_EMAIL variable to match its email value:

$ SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
  --filter="displayName:Velero service account" \
  --format 'value(email)')

- Attach the policies to give the velero user the necessary permissions.
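The permissions are assigned to a ROLE_PERMISSIONS shell array that the next command expands. The following list is a representative set based on the upstream Velero GCP plug-in documentation; verify it against the OADP documentation for your version:

$ ROLE_PERMISSIONS=(
    compute.disks.get
    compute.disks.create
    compute.disks.createSnapshot
    compute.snapshots.get
    compute.snapshots.create
    compute.snapshots.useReadOnly
    compute.snapshots.delete
    compute.zones.get
)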
- Create the velero.server custom role:

$ gcloud iam roles create velero.server \
  --project $PROJECT_ID \
  --title "Velero Server" \
  --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"

- Add IAM policy binding to the project:

$ gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
  --role projects/$PROJECT_ID/roles/velero.server

- Update the IAM service account:

$ gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}

- Save the IAM service account keys to the credentials-velero file in the current directory:

$ gcloud iam service-accounts keys create credentials-velero \
  --iam-account $SERVICE_ACCOUNT_EMAIL

You use the credentials-velero file to create a Secret object for GCP before you install the Data Protection Application.
4.2.4.3. Creating a secret for backup and snapshot locations
You create a Secret object for the backup and snapshot locations if they use the same credentials.
The default name of the Secret is cloud-credentials-gcp.
Prerequisites
- Your object storage and cloud storage must use the same credentials.
- You must configure object storage for Velero.
- You must create a credentials-velero file for the object storage in the appropriate format.
Procedure
- Create a Secret with the default name:

$ oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero
The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
4.2.4.3.1. Configuring secrets for different backup and snapshot location credentials
If your backup and snapshot locations use different credentials, you create two Secret objects:
- Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR).
- Snapshot location Secret with the default name, cloud-credentials-gcp. This Secret is not specified in the DataProtectionApplication CR.
Procedure
- Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider.
- Create a Secret for the snapshot location with the default name:

$ oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero

- Create a credentials-velero file for the backup location in the appropriate format for your object storage.
- Create a Secret for the backup location with a custom name:

$ oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero

- Add the Secret with the custom name to the spec.backupLocations.credential block of the DataProtectionApplication CR so that the backup location uses the custom-named Secret.
4.2.4.4. Configuring the Data Protection Application
You can configure Velero resource allocations and enable self-signed CA certificates.
4.2.4.4.1. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
- Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest.
4.2.4.4.2. Enabling self-signed CA certificates
You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
- Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest.
4.2.4.5. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
- If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-gcp.
- If the backup and snapshot locations use different credentials, you must create two Secrets:
  - Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR.
  - Secret with the default name, cloud-credentials-gcp, for the snapshot location. This Secret is not referenced in the DataProtectionApplication CR.

Note: If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
- Click YAML View and update the parameters of the DataProtectionApplication manifest, noting the following points. A sketch of the manifest appears after these notes.
  - The openshift plug-in is mandatory in order to back up and restore namespaces on an OpenShift Container Platform cluster.
  - Set the Restic enable value to false if you want to disable the Restic installation. Restic deploys a daemon set, which means that each worker node has Restic pods running. You configure Restic for backups by adding spec.defaultVolumesToRestic: true to the Backup CR.
  - If you do not specify a credential name, the default name, cloud-credentials-gcp, is used. If you specify a custom name, the custom name is used for the backup location.
  - Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
  - Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
  - You do not need to specify a snapshot location if you use CSI snapshots or Restic to back up PVs.
  - The snapshot location must be in the same region as the PVs.
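The following manifest is a sketch that matches the notes above. The bucket, prefix, project, and region are placeholders, and the structure follows the oadp.openshift.io/v1alpha1 DataProtectionApplication schema, so adjust it to your environment:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample                      # placeholder name
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift                     # mandatory plug-in
        - gcp
    restic:
      enable: true                      # set to false to disable the Restic installation
  backupLocations:
    - velero:
        provider: gcp
        default: true
        objectStorage:
          bucket: <bucket_name>         # backup storage location
          prefix: <prefix>              # prefix for Velero backups
        credential:
          key: cloud
          name: cloud-credentials-gcp   # Secret name; omit to use the default
  snapshotLocations:
    - velero:
        provider: gcp
        config:
          project: <project_id>
          snapshotLocation: <region>    # must be in the same region as the PVs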
- Click Create.
- Verify the installation by viewing the OADP resources:

$ oc get all -n openshift-adp
4.2.4.5.1. Enabling CSI in the DataProtectionApplication CR
You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.
Prerequisites
- The cloud provider must support CSI snapshots.
4.2.5. Installing and configuring the OpenShift API for Data Protection with Multicloud Object Gateway
You install the OpenShift API for Data Protection (OADP) with Multicloud Object Gateway (MCG) by installing the OADP Operator, creating a Secret object, and then installing the Data Protection Application.
MCG is a component of OpenShift Container Storage (OCS). You configure MCG as a backup location in the DataProtectionApplication custom resource (CR).
The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
If your cloud provider has a native snapshot API, configure a snapshot location. If your cloud provider does not support snapshots or if your storage is NFS, you can create backups with Restic.
You do not need to specify a snapshot location in the DataProtectionApplication CR for Restic or Container Storage Interface (CSI) snapshots.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. For details, see Using Operator Lifecycle Manager on restricted networks.
4.2.5.1. Installing the OADP Operator
You install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.7 by using Operator Lifecycle Manager (OLM).
The OADP Operator installs Velero 1.7.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
Procedure
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Use the Filter by keyword field to find the OADP Operator.
- Select the OADP Operator and click Install.
- Click Install to install the Operator in the openshift-adp project.
- Click Operators → Installed Operators to verify the installation.
4.2.5.2. Retrieving Multicloud Object Gateway credentials
You must retrieve the Multicloud Object Gateway (MCG) credentials in order to create a Secret custom resource (CR) for the OpenShift API for Data Protection (OADP).
MCG is a component of OpenShift Container Storage.
Prerequisites
- You must deploy OpenShift Container Storage by using the appropriate OpenShift Container Storage deployment guide.
Procedure
- Obtain the S3 endpoint, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource.
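For example, assuming OpenShift Container Storage is installed in the openshift-storage namespace, you might run:

$ oc describe noobaa -n openshift-storage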
credentials-velerofile:cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF
$ cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOFCopy to Clipboard Copied! Toggle word wrap Toggle overflow You use the
credentials-velerofile to create aSecretobject when you install the Data Protection Application.
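For the first step, assuming OpenShift Container Storage is installed in the openshift-storage namespace, a command along the following lines displays the NooBaa S3 endpoint and points to the secret that holds the access keys:

  $ oc describe noobaa -n openshift-storage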
4.2.5.3. Creating a secret for backup and snapshot locations
You create a Secret object for the backup and snapshot locations if they use the same credentials.
The default name of the Secret is cloud-credentials.
Prerequisites
- Your object storage and cloud storage must use the same credentials.
- You must configure object storage for Velero.
- You must create a credentials-velero file for the object storage in the appropriate format.
Procedure
- Create a Secret with the default name:

  $ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
4.2.5.3.1. Configuring secrets for different backup and snapshot location credentials
If your backup and snapshot locations use different credentials, you create two Secret objects:
- Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR).
- Snapshot location Secret with the default name, cloud-credentials. This Secret is not specified in the DataProtectionApplication CR.
Procedure
- Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider.
- Create a Secret for the snapshot location with the default name:

  $ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero

- Create a credentials-velero file for the backup location in the appropriate format for your object storage.
- Create a Secret for the backup location with a custom name:

  $ oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero

- Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example:
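A minimal sketch of the backupLocations block with a custom-named credential Secret, assuming the OADP 1.0 API (the provider and storage values are illustrative):

  apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: velero-sample
    namespace: openshift-adp
  spec:
    backupLocations:
      - velero:
          provider: aws
          default: true
          credential:
            key: cloud
            name: <custom_secret>   # 1
          objectStorage:
            bucket: <bucket_name>
            prefix: <prefix>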
  1 - Backup location Secret with a custom name.
4.2.5.4. Configuring the Data Protection Application
You can configure Velero resource allocations and enable self-signed CA certificates.
4.2.5.4.1. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
- Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example:
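A minimal sketch, assuming the OADP 1.0 API; the limits and requests values are illustrative and should be adjusted to your environment:

  apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: velero-sample
    namespace: openshift-adp
  spec:
    configuration:
      velero:
        podConfig:
          resourceAllocations:
            limits:
              cpu: "1"
              memory: 512Mi
            requests:
              cpu: 500m
              memory: 256Mi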
4.2.5.4.2. Enabling self-signed CA certificates
You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
- Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest:
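A minimal sketch of the relevant fields, assuming the OADP 1.0 API; the certificate string and storage values are placeholders:

  apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: velero-sample
    namespace: openshift-adp
  spec:
    backupLocations:
      - velero:
          provider: aws
          default: true
          objectStorage:
            bucket: <bucket_name>
            prefix: <prefix>
            caCert: <base64_encoded_cert_string>   # self-signed CA certificate, base64-encoded
          config:
            insecureSkipTLSVerify: "false"         # keep false so that the CA certificate is used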
4.2.5.5. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
- If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.
- If the backup and snapshot locations use different credentials, you must create two Secrets:
  - Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR.
  - Secret with the default name, cloud-credentials, for the snapshot location. This Secret is not referenced in the DataProtectionApplication CR.

  Note: If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
- Click YAML View and update the parameters of the DataProtectionApplication manifest:
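A minimal sketch of such a manifest for Multicloud Object Gateway, assuming the OADP 1.0 API; the S3 URL, bucket, and prefix values are placeholders, and the numbered comments correspond to the callouts that follow:

  apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: <dpa_sample>
    namespace: openshift-adp
  spec:
    configuration:
      velero:
        defaultPlugins:
          - aws
          - openshift                   # 1
      restic:
        enable: true                    # 2
    backupLocations:
      - velero:
          provider: aws
          default: true
          config:
            profile: "default"
            region: minio
            s3Url: <url>                # 3
            s3ForcePathStyle: "true"
          credential:
            key: cloud
            name: cloud-credentials     # 4
          objectStorage:
            bucket: <bucket_name>       # 5
            prefix: <prefix>            # 6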
  1 - The openshift plug-in is mandatory in order to back up and restore namespaces on an OpenShift Container Platform cluster.
  2 - Set to false if you want to disable the Restic installation. Restic deploys a daemon set, which means that each worker node has Restic pods running. You configure Restic for backups by adding spec.defaultVolumesToRestic: true to the Backup CR.
  3 - Specify the URL of the S3 endpoint.
  4 - If you do not specify this value, the default name, cloud-credentials, is used. If you specify a custom name, the custom name is used for the backup location.
  5 - Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
  6 - Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
- Click Create.
- Verify the installation by viewing the OADP resources:

  $ oc get all -n openshift-adp
4.2.5.5.1. Enabling CSI in the DataProtectionApplication CR
You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.
Prerequisites
- The cloud provider must support CSI snapshots.
4.2.6. Installing and configuring the OpenShift API for Data Protection with OpenShift Container Storage
You install the OpenShift API for Data Protection (OADP) with OpenShift Container Storage (OCS) by installing the OADP Operator and configuring a backup location and a snapshot location. Then, you install the Data Protection Application.
You can configure Multicloud Object Gateway or any S3-compatible object storage as a backup location in the DataProtectionApplication custom resource (CR).
The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
If the cloud provider has a native snapshot API, you can configure cloud storage as a snapshot location in the DataProtectionApplication CR. You do not need to specify a snapshot location for Restic or Container Storage Interface (CSI) snapshots.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. For details, see Using Operator Lifecycle Manager on restricted networks.
4.2.6.1. Installing the OADP Operator
You install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.7 by using Operator Lifecycle Manager (OLM).
The OADP Operator installs Velero 1.7.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
Procedure
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Use the Filter by keyword field to find the OADP Operator.
- Select the OADP Operator and click Install.
- Click Install to install the Operator in the openshift-adp project.
- Click Operators → Installed Operators to verify the installation.
After you install the OADP Operator, you configure object storage as a backup location and cloud storage as a snapshot location, if the cloud provider supports a native snapshot API.
If the cloud provider does not support snapshots or if your storage is NFS, you can create backups with Restic. Restic does not require a snapshot location.
4.2.6.2. Creating a secret for backup and snapshot locations
You create a Secret object for the backup and snapshot locations if they use the same credentials.
The default name of the Secret is cloud-credentials, unless you specify a default plug-in for the backup storage provider.
Prerequisites
- Your object storage and cloud storage must use the same credentials.
- You must configure object storage for Velero.
- You must create a credentials-velero file for the object storage in the appropriate format.
Procedure
- Create a Secret with the default name:

  $ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
4.2.6.2.1. Configuring secrets for different backup and snapshot location credentials
If your backup and snapshot locations use different credentials, you create two Secret objects:
- Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR).
- Snapshot location Secret with the default name, cloud-credentials. This Secret is not specified in the DataProtectionApplication CR.
Procedure
- Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider.
- Create a Secret for the snapshot location with the default name:

  $ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero

- Create a credentials-velero file for the backup location in the appropriate format for your object storage.
- Create a Secret for the backup location with a custom name:

  $ oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero

- Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example:
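As in the earlier section, a minimal sketch of the backupLocations block with a custom-named credential Secret, assuming the OADP 1.0 API (values are illustrative):

  apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: velero-sample
    namespace: openshift-adp
  spec:
    backupLocations:
      - velero:
          provider: aws
          default: true
          credential:
            key: cloud
            name: <custom_secret>   # 1
          objectStorage:
            bucket: <bucket_name>
            prefix: <prefix>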
  1 - Backup location Secret with a custom name.
4.2.6.3. Configuring the Data Protection Application
You can configure Velero resource allocations and enable self-signed CA certificates.
4.2.6.3.1. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
- Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example:
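As in the earlier section, a minimal sketch assuming the OADP 1.0 API, with illustrative resource values:

  apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: velero-sample
    namespace: openshift-adp
  spec:
    configuration:
      velero:
        podConfig:
          resourceAllocations:
            limits:
              cpu: "1"
              memory: 512Mi
            requests:
              cpu: 500m
              memory: 256Mi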
4.2.6.3.2. Enabling self-signed CA certificates
You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
- Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest:
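As in the earlier section, a minimal sketch assuming the OADP 1.0 API, with placeholder values:

  apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: velero-sample
    namespace: openshift-adp
  spec:
    backupLocations:
      - velero:
          provider: aws
          default: true
          objectStorage:
            bucket: <bucket_name>
            prefix: <prefix>
            caCert: <base64_encoded_cert_string>   # self-signed CA certificate, base64-encoded
          config:
            insecureSkipTLSVerify: "false"         # keep false so that the CA certificate is used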
4.2.6.4. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
- If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.
- If the backup and snapshot locations use different credentials, you must create two Secrets:
  - Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR.
  - Secret with the default name, cloud-credentials, for the snapshot location. This Secret is not referenced in the DataProtectionApplication CR.

  Note: If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
- Click YAML View and update the parameters of the DataProtectionApplication manifest:
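A minimal sketch of such a manifest, assuming the OADP 1.0 API; the gcp provider, Secret name, bucket, and prefix are placeholders, and the numbered comments correspond to the callouts that follow:

  apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: <dpa_sample>
    namespace: openshift-adp
  spec:
    configuration:
      velero:
        defaultPlugins:
          - gcp                             # 1
          - csi                             # 2
          - openshift                       # 3
      restic:
        enable: true                        # 4
    backupLocations:
      - velero:
          provider: gcp                     # 5
          default: true
          credential:
            key: cloud
            name: cloud-credentials-gcp     # 6
          objectStorage:
            bucket: <bucket_name>           # 7
            prefix: <prefix>                # 8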
  1 - Specify the default plug-in for the backup provider, for example, gcp, if appropriate.
  2 - Specify the csi default plug-in if you use CSI snapshots to back up PVs. The csi plug-in uses the Velero CSI beta snapshot APIs. You do not need to configure a snapshot location.
  3 - The openshift plug-in is mandatory in order to back up and restore namespaces on an OpenShift Container Platform cluster.
  4 - Set to false if you want to disable the Restic installation. Restic deploys a daemon set, which means that each worker node has Restic pods running. You configure Restic for backups by adding spec.defaultVolumesToRestic: true to the Backup CR.
  5 - Specify the backup provider.
  6 - If you use a default plug-in for the backup provider, you must specify the correct default name for the Secret, for example, cloud-credentials-gcp. If you specify a custom name, the custom name is used for the backup location. If you do not specify a Secret name, the default name is used.
  7 - Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
  8 - Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
- Click Create.
- Verify the installation by viewing the OADP resources:

  $ oc get all -n openshift-adp
4.2.6.4.1. Enabling CSI in the DataProtectionApplication CR
You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.
Prerequisites
- The cloud provider must support CSI snapshots.
4.2.7. Uninstalling the OpenShift API for Data Protection
You uninstall the OpenShift API for Data Protection (OADP) by deleting the OADP Operator. See Deleting Operators from a cluster for details.
4.3. Backing up and restoring
4.3.1. Backing up applications
You back up applications by creating a Backup custom resource (CR).
The Backup CR creates backup files for Kubernetes resources and internal images on S3 object storage, and creates snapshots for persistent volumes (PVs) if the cloud provider uses a native snapshot API or the Container Storage Interface (CSI) to create snapshots, such as OpenShift Container Storage 4. For more information, see CSI volume snapshots.
The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
If your cloud provider has a native snapshot API or supports Container Storage Interface (CSI) snapshots, the Backup CR backs up persistent volumes by creating snapshots. For more information, see the Overview of CSI volume snapshots in the OpenShift Container Platform documentation.
If your cloud provider does not support snapshots or if your applications are on NFS data volumes, you can create backups by using Restic.
You can create backup hooks to run commands before or after the backup operation.
You can schedule backups by creating a Schedule CR instead of a Backup CR.
4.3.1.1. Creating a Backup CR
You back up Kubernetes resources, internal images, and persistent volumes (PVs) by creating a Backup custom resource (CR).
Prerequisites
- You must install the OpenShift API for Data Protection (OADP) Operator.
- The DataProtectionApplication CR must be in a Ready state.
- Backup location prerequisites:
  - You must have S3 object storage configured for Velero.
  - You must have a backup location configured in the DataProtectionApplication CR.
- Snapshot location prerequisites:
  - Your cloud provider must have a native snapshot API or support Container Storage Interface (CSI) snapshots.
  - For CSI snapshots, you must create a VolumeSnapshotClass CR to register the CSI driver.
  - You must have a volume location configured in the DataProtectionApplication CR.
Procedure
- Retrieve the backupStorageLocations CRs:

  $ oc get backupStorageLocations

  Example output

  NAME              PHASE       LAST VALIDATED   AGE   DEFAULT
  velero-sample-1   Available   11s              31m

- Create a Backup CR, as in the following example:
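A minimal sketch of a Backup CR, assuming the Velero v1 API; the namespace, storage location, and TTL values are illustrative:

  apiVersion: velero.io/v1
  kind: Backup
  metadata:
    name: <backup>
    labels:
      velero.io/storage-location: default
    namespace: openshift-adp
  spec:
    hooks: {}
    includedNamespaces:
      - <namespace>                       # namespaces to back up
    storageLocation: <velero_sample_1>    # name of a backupStorageLocations CR
    ttl: 720h0m0s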
- Verify that the status of the Backup CR is Completed:

  $ oc get backup -n openshift-adp <backup> -o jsonpath='{.status.phase}'
4.3.1.2. Backing up persistent volumes with CSI snapshots
You back up persistent volumes with Container Storage Interface (CSI) snapshots by creating a VolumeSnapshotClass custom resource (CR) to register the CSI driver before you create the Backup CR.
Prerequisites
- The cloud provider must support CSI snapshots.
- You must enable CSI in the DataProtectionApplication CR.
Procedure
- Create a VolumeSnapshotClass CR, as in the following examples for Ceph RBD, Ceph FS, or another cloud provider:
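Minimal sketches of such VolumeSnapshotClass CRs; the velero.io/csi-volumesnapshot-class label is what registers the class with the Velero CSI plug-in, and the driver names below assume default OpenShift Container Storage deployments:

  Ceph RBD

  apiVersion: snapshot.storage.k8s.io/v1
  kind: VolumeSnapshotClass
  metadata:
    name: ocs-storagecluster-rbdplugin-snapclass
    labels:
      velero.io/csi-volumesnapshot-class: "true"
  driver: openshift-storage.rbd.csi.ceph.com
  deletionPolicy: Retain

  Ceph FS

  apiVersion: snapshot.storage.k8s.io/v1
  kind: VolumeSnapshotClass
  metadata:
    name: ocs-storagecluster-cephfsplugin-snapclass
    labels:
      velero.io/csi-volumesnapshot-class: "true"
  driver: openshift-storage.cephfs.csi.ceph.com
  deletionPolicy: Retain

  Other cloud providers

  apiVersion: snapshot.storage.k8s.io/v1
  kind: VolumeSnapshotClass
  metadata:
    name: <volume_snapshot_class_name>
    labels:
      velero.io/csi-volumesnapshot-class: "true"
  driver: <csi_driver>
  deletionPolicy: Retain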
You can now create a Backup CR.
4.3.1.3. Backing up applications with Restic
You back up Kubernetes resources, internal images, and persistent volumes with Restic by editing the Backup custom resource (CR).
You do not need to specify a snapshot location in the DataProtectionApplication CR.
Prerequisites
- You must install the OpenShift API for Data Protection (OADP) Operator.
- You must not disable the default Restic installation by setting spec.configuration.restic.enable to false in the DataProtectionApplication CR.
- The DataProtectionApplication CR must be in a Ready state.
Procedure
- Edit the Backup CR, as in the following example:
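A minimal sketch, assuming the Velero v1 Backup API; only the defaultVolumesToRestic field differs from a snapshot-based backup:

  apiVersion: velero.io/v1
  kind: Backup
  metadata:
    name: <backup>
    labels:
      velero.io/storage-location: default
    namespace: openshift-adp
  spec:
    defaultVolumesToRestic: true          # 1
    includedNamespaces:
      - <namespace>
    storageLocation: <velero_sample_1>
    ttl: 720h0m0s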
  1 - Add defaultVolumesToRestic: true to the spec block.
4.3.1.4. Creating backup hooks
You create backup hooks to run commands in a container in a pod by editing the Backup custom resource (CR).
Pre hooks run before the pod is backed up. Post hooks run after the backup.
Procedure
- Add a hook to the spec.hooks block of the Backup CR, as in the following example:
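A minimal sketch of a Backup CR with pre and post hooks, assuming the Velero v1 API; the hook name, label selector, and commands are illustrative, and the numbered comments correspond to the callouts that follow:

  apiVersion: velero.io/v1
  kind: Backup
  metadata:
    name: <backup>
    namespace: openshift-adp
  spec:
    includedNamespaces:
      - <namespace>
    hooks:
      resources:
        - name: <hook_name>
          includedNamespaces:
            - <namespace>               # 1: namespaces to which the hook applies
          includedResources:
            - pods                      # 2: pods are the only supported resource
          labelSelector:                # 3: optional; the hook applies only to matching objects
            matchLabels:
              app: <label>
          pre:                          # 4: hooks to run before the backup
            - exec:
                container: <container>  # 5: optional; defaults to the first container in the pod
                command:                # 6: commands that the hook runs
                  - /bin/uname
                  - -a
                onError: Fail           # 7: Fail or Continue; the default is Fail
                timeout: 30s            # 8: how long to wait for the commands to run
          post:                         # 9: hooks to run after the backup, same parameters as pre
            - exec:
                container: <container>
                command:
                  - /bin/uname
                  - -a
                onError: Continue
                timeout: 30s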
- Array of namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces.
- 2
- Currently, pods are the only supported resource.
- 3
- Optional: This hook only applies to objects matching the label selector.
- 4
- Array of hooks to run before the backup.
- 5
- Optional: If the container is not specified, the command runs in the first container in the pod.
- 6
- Array of commands that the hook runs.
- 7
- Allowed values for error handling are
FailandContinue. The default isFail. - 8
- Optional: How long to wait for the commands to run. The default is
30s. - 9
- This block defines an array of hooks to run after the backup, with the same parameters as the pre-backup hooks.
4.3.1.5. Scheduling backups
You schedule backups by creating a Schedule custom resource (CR) instead of a Backup CR.
Leave enough time in your backup schedule for a backup to finish before another backup is created.
For example, if a backup of a namespace typically takes 10 minutes, do not schedule backups more frequently than every 15 minutes.
Prerequisites
- You must install the OpenShift API for Data Protection (OADP) Operator.
- The DataProtectionApplication CR must be in a Ready state.
Procedure
- Retrieve the backupStorageLocations CRs:

  $ oc get backupStorageLocations

  Example output

  NAME              PHASE       LAST VALIDATED   AGE   DEFAULT
  velero-sample-1   Available   11s              31m

- Create a Schedule CR, as in the following example:
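A minimal sketch of a Schedule CR, assuming the Velero v1 API; the cron expression and template values are illustrative, and the template block takes the same fields as a Backup CR spec:

  apiVersion: velero.io/v1
  kind: Schedule
  metadata:
    name: <schedule>
    namespace: openshift-adp
  spec:
    schedule: "0 7 * * *"                 # cron expression; for example, daily at 7:00
    template:
      hooks: {}
      includedNamespaces:
        - <namespace>
      storageLocation: <velero_sample_1>
      ttl: 720h0m0s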
- Verify that the status of the Schedule CR is Completed after the scheduled backup runs:

  $ oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}'
4.3.2. Restoring applications
You restore application backups by creating a Restore custom resource (CR).
You can create restore hooks to run commands in init containers, before the application container starts, or in the application container itself.
4.3.2.1. Creating a Restore CR
You restore a Backup custom resource (CR) by creating a Restore CR.
Prerequisites
- You must install the OpenShift API for Data Protection (OADP) Operator.
- The DataProtectionApplication CR must be in a Ready state.
- You must have a Velero Backup CR.
- Adjust the requested size so the persistent volume (PV) capacity matches the requested size at backup time.
Procedure
- Create a Restore CR, as in the following example:
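A minimal sketch of a Restore CR, assuming the Velero v1 API; the numbered comment corresponds to the callout that follows:

  apiVersion: velero.io/v1
  kind: Restore
  metadata:
    name: <restore>
    namespace: openshift-adp
  spec:
    backupName: <backup>     # 1
    restorePVs: true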
  1 - Name of the Backup CR.
- Verify that the status of the Restore CR is Completed:

  $ oc get restore -n openshift-adp <restore> -o jsonpath='{.status.phase}'

- Verify that the backup resources have been restored:
  $ oc get all -n <namespace> 1

  1 - Namespace that you backed up.
4.3.2.2. Creating restore hooks
You create restore hooks to run commands in a container in a pod while restoring your application by editing the Restore custom resource (CR).
You can create two types of restore hooks:
An
inithook adds an init container to a pod to perform setup tasks before the application container starts.If you restore a Restic backup, the
restic-waitinit container is added before the restore hook init container.-
An
exechook runs commands or scripts in a container of a restored pod.
Procedure
Add a hook to the
spec.hooksblock of theRestoreCR, as in the following example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Optional: Array of namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces.
- 2
- Currently, pods are the only supported resource.
- 3
- Optional: This hook only applies to objects matching the label selector.
- 4
- Optional: If the container is not specified, the command runs in the first container in the pod.
- 5
- Array of commands that the hook runs.
- 6
- Optional: If the
waitTimeoutis not specified, the restore waits indefinitely. You can specify how long to wait for a container to start and for preceding hooks in the container to complete. The wait timeout starts when the container is restored and might require time for the container to pull the image and mount the volumes. - 7
- Optional: How long to wait for the commands to run. The default is
30s. - 8
- Allowed values for error handling are
FailandContinue:-
Continue: Only command failures are logged. -
Fail: No more restore hooks run in any container in any pod. The status of theRestoreCR will bePartiallyFailed.
-
4.4. Troubleshooting
You can debug Velero custom resources (CRs) by using the OpenShift CLI tool or the Velero CLI tool. The Velero CLI tool provides more detailed logs and information.
You can check installation issues, backup and restore CR issues, and Restic issues.
You can collect logs, CR information, and Prometheus metric data by using the must-gather tool.
You can obtain the Velero CLI tool by:
- Downloading the Velero CLI tool
- Accessing the Velero binary in the Velero deployment in the cluster
4.4.1. Downloading the Velero CLI tool
You can download and install the Velero CLI tool by following the instructions on the Velero documentation page.
The page includes instructions for:
- macOS by using Homebrew
- GitHub
- Windows by using Chocolatey
Prerequisites
- You have access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled.
- You have installed kubectl locally.
Procedure
- Open a browser and navigate to "Install the CLI" on the Velero website.
- Follow the appropriate procedure for macOS, GitHub, or Windows.
Download the Velero version appropriate for your version of OADP, according to the table that follows:
  Table 4.2. OADP-Velero version relationship

  OADP version   Velero version
  0.2.6          1.6.0
  0.5.5          1.7.1
  1.0.0          1.7.1
  1.0.1          1.7.1
  1.0.2          1.7.1
  1.0.3          1.7.1
4.4.2. Accessing the Velero binary in the Velero deployment in the cluster
You can use a shell command to access the Velero binary in the Velero deployment in the cluster.
Prerequisites
- Your DataProtectionApplication custom resource has a status of Reconcile complete.
Procedure
- Enter the following command to set the needed alias:

  $ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'
4.4.3. Debugging Velero resources with the OpenShift CLI tool
You can debug a failed backup or restore by checking Velero custom resources (CRs) and the Velero pod log with the OpenShift CLI tool.
Velero CRs
Use the oc describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR:
$ oc describe <velero_cr> <cr_name>
Velero pod logs
Use the oc logs command to retrieve the Velero pod logs:
$ oc logs pod/<velero>
Velero pod debug logs
You can specify the Velero log level in the DataProtectionApplication resource as shown in the following example.
This option is available starting from OADP 1.0.3.
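A minimal sketch, assuming the OADP 1.0.3 DataProtectionApplication API; the value shown is illustrative:

  apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: velero-sample
    namespace: openshift-adp
  spec:
    configuration:
      velero:
        logLevel: warning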
The following logLevel values are available:
- trace
- debug
- info
- warning
- error
- fatal
- panic
It is recommended to use debug for most logs.
4.4.4. Debugging Velero resources with the Velero CLI tool
You can debug Backup and Restore custom resources (CRs) and retrieve logs with the Velero CLI tool.
The Velero CLI tool provides more detailed information than the OpenShift CLI tool.
Syntax
Use the oc exec command to run a Velero CLI command:
$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
  <backup_restore_cr> <command> <cr_name>
Example
$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
  backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql
Help option
Use the velero --help option to list all Velero CLI commands:
$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
  --help
Describe command
Use the velero describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR:
$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
  <backup_restore_cr> describe <cr_name>
Example
$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
  backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql
Logs command
Use the velero logs command to retrieve the logs of a Backup or Restore CR:
$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
  <backup_restore_cr> logs <cr_name>
Example
$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
  restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf
4.4.5. Installation issues
You might encounter issues caused by using invalid directories or incorrect credentials when you install the Data Protection Application.
4.4.5.1. Backup storage contains invalid directories
The Velero pod log displays the error message, Backup storage contains invalid top-level directories.
Cause
The object storage contains top-level directories that are not Velero directories.
Solution
If the object storage is not dedicated to Velero, you must specify a prefix for the bucket by setting the spec.backupLocations.velero.objectStorage.prefix parameter in the DataProtectionApplication manifest.
4.4.5.2. Incorrect AWS credentials
The oadp-aws-registry pod log displays the error message, InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
The Velero pod log displays the error message, NoCredentialProviders: no valid providers in chain.
Cause
The credentials-velero file used to create the Secret object is incorrectly formatted.
Solution
Ensure that the credentials-velero file is correctly formatted, as in the following example:
Example credentials-velero file
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
4.4.6. Backup and Restore CR issues
You might encounter these common issues with Backup and Restore custom resources (CRs).
4.4.6.1. Backup CR cannot retrieve volume
The Backup CR displays the error message, InvalidVolume.NotFound: The volume ‘vol-xxxx’ does not exist.
Cause
The persistent volume (PV) and the snapshot locations are in different regions.
Solution
- Edit the value of the spec.snapshotLocations.velero.config.region key in the DataProtectionApplication manifest so that the snapshot location is in the same region as the PV.
- Create a new Backup CR.
4.4.6.2. Backup CR status remains in progress
The status of a Backup CR remains in the InProgress phase and does not complete.
Cause
If a backup is interrupted, it cannot be resumed.
Solution
- Retrieve the details of the Backup CR:

  $ oc -n {namespace} exec deployment/velero -c velero -- ./velero \
    backup describe <backup>

- Delete the Backup CR:

  $ oc delete backup <backup> -n openshift-adp

  You do not need to clean up the backup location because a Backup CR in progress has not uploaded files to object storage.
Create a new
BackupCR.
4.4.7. Restic issues
You might encounter these issues when you back up applications with Restic.
4.4.7.1. Restic permission error for NFS data volumes with root_squash enabled
The Restic pod log displays the error message, controller=pod-volume-backup error="fork/exec/usr/bin/restic: permission denied".
Cause
If your NFS data volumes have root_squash enabled, Restic maps to nfsnobody and does not have permission to create backups.
Solution
You can resolve this issue by creating a supplemental group for Restic and adding the group ID to the DataProtectionApplication manifest:
- Create a supplemental group for Restic on the NFS data volume.
- Set the setgid bit on the NFS directories so that group ownership is inherited.
- Add the spec.configuration.restic.supplementalGroups parameter and the group ID to the DataProtectionApplication manifest, as in the following example:
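A minimal sketch of the relevant block, assuming the OADP 1.0 API; the group ID is a placeholder, and the numbered comment corresponds to the callout that follows:

  apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: velero-sample
    namespace: openshift-adp
  spec:
    configuration:
      restic:
        enable: true
        supplementalGroups:
          - <group_id>   # 1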
  1 - Specify the supplemental group ID.
- Wait for the Restic pods to restart so that the changes are applied.
4.4.7.2. Restore CR of Restic backup is "PartiallyFailed", "Failed", or remains "InProgress"
The Restore CR of a Restic backup completes with a PartiallyFailed or Failed status or it remains InProgress and does not complete.
If the status is PartiallyFailed or Failed, the Velero pod log displays the error message, level=error msg="unable to successfully complete restic restores of pod’s volumes".
If the status is InProgress, the Restore CR logs are unavailable and no errors appear in the Restic pod logs.
Cause
The DeploymentConfig object redeploys the Restore pod, causing the Restore CR to fail.
Solution
- Create a Restore CR that excludes the ReplicationController, DeploymentConfig, and TemplateInstances resources:

  $ velero restore create --from-backup=<backup> -n openshift-adp \
    --include-namespaces <namespace> \
    --exclude-resources replicationcontroller,deploymentconfig,templateinstances.template.openshift.io \
    --restore-volumes=true

- Verify that the status of the Restore CR is Completed:

  $ oc get restore -n openshift-adp <restore> -o jsonpath='{.status.phase}'

- Create a Restore CR that includes the ReplicationController and DeploymentConfig resources:

  $ velero restore create --from-backup=<backup> -n openshift-adp \
    --include-namespaces <namespace> \
    --include-resources replicationcontroller,deploymentconfig \
    --restore-volumes=true

- Verify that the status of the Restore CR is Completed:

  $ oc get restore -n openshift-adp <restore> -o jsonpath='{.status.phase}'

- Verify that the backup resources have been restored:

  $ oc get all -n <namespace>
4.4.7.3. Restic Backup CR cannot be recreated after bucket is emptied
If you create a Restic Backup CR for a namespace, empty the S3 bucket, and then recreate the Backup CR for the same namespace, the recreated Backup CR fails.
The velero pod log displays the error message, msg="Error checking repository for stale locks".
Cause
Velero does not create the Restic repository from the ResticRepository manifest if the Restic directories are deleted on object storage. See Velero issue 4421 for details.
4.4.8. Using the must-gather tool
You can collect logs, metrics, and information about OADP custom resources by using the must-gather tool.
The must-gather data must be attached to all customer cases.
You can run the must-gather tool with the following data collection options:
- Full must-gather data collection collects Prometheus metrics, pod logs, and Velero CR information for all namespaces where the OADP Operator is installed.
- Essential must-gather data collection collects pod logs and Velero CR information for a specific duration of time, for example, one hour or 24 hours. Prometheus metrics and duplicate logs are not included.
- must-gather data collection with timeout. Data collection can take a long time if there are many failed Backup CRs. You can improve performance by setting a timeout value.
- Prometheus metrics data dump downloads an archive file containing the metrics data collected by Prometheus.
Prerequisites
- You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role.
- You must have the OpenShift CLI installed.
Procedure
- Navigate to the directory where you want to store the must-gather data.
- Run the oc adm must-gather command for one of the following data collection options:

  Full must-gather data collection, including Prometheus metrics:

  $ oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.0

  The data is saved as must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

  Essential must-gather data collection, without Prometheus metrics, for a specific time duration:

  $ oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.0 \
    -- /usr/bin/gather_<time>_essential 1

  1 - Specify the time in hours. Allowed values are 1h, 6h, 24h, 72h, or all, for example, gather_1h_essential or gather_all_essential.

  must-gather data collection with timeout:

  $ oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.0 \
    -- /usr/bin/gather_with_timeout <timeout> 1

  1 - Specify a timeout value in seconds.

  Prometheus metrics data dump:

  $ oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.0 \
    -- /usr/bin/gather_metrics_dump

  This operation can take a long time. The data is saved as must-gather/metrics/prom_data.tar.gz.
Viewing metrics data with the Prometheus console
You can view the metrics data with the Prometheus console.
Procedure
- Decompress the prom_data.tar.gz file:

  $ tar -xvzf must-gather/metrics/prom_data.tar.gz

- Create a local Prometheus instance:

  $ make prometheus-run

  The command outputs the Prometheus URL.

  Output

  Started Prometheus on http://localhost:9090

- Launch a web browser and navigate to the URL to view the data by using the Prometheus web console.
- After you have viewed the data, delete the Prometheus instance and data:

  $ make prometheus-cleanup
Chapter 5. Control plane backup and restore
5.1. Backing up etcd
etcd is the key-value store for OpenShift Container Platform, which persists the state of all resource objects.
Back up your cluster’s etcd data regularly and store it in a secure location, ideally outside the OpenShift Container Platform environment. Do not take an etcd backup before the first certificate rotation completes, which occurs 24 hours after installation; otherwise the backup will contain expired certificates. It is also recommended to take etcd backups during non-peak usage hours because the etcd snapshot has a high I/O cost.
Be sure to take an etcd backup after you upgrade your cluster. This is important because when you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.7.2 cluster must use an etcd backup that was taken from 4.7.2.
Back up your cluster’s etcd data by performing a single invocation of the backup script on a control plane host (also known as the master host). Do not take a backup for each control plane host.
After you have an etcd backup, you can restore to a previous cluster state.
5.1.1. Backing up etcd data
Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd.
Only save a backup from a single control plane host (also known as the master host). Do not take a backup from each control plane host in the cluster.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have checked whether the cluster-wide proxy is enabled.

  Tip: You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
Procedure
- Start a debug session for a control plane node:

  $ oc debug node/<node_name>

- Change your root directory to the host:

  sh-4.2# chroot /host
If the cluster-wide proxy is enabled, be sure that you have exported the
NO_PROXY,HTTP_PROXY, andHTTPS_PROXYenvironment variables. Run the
cluster-backup.shscript and pass in the location to save the backup to.TipThe
cluster-backup.shscript is maintained as a component of the etcd Cluster Operator and is a wrapper around theetcdctl snapshot savecommand./usr/local/bin/cluster-backup.sh /home/core/assets/backup
sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backupCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example script output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow In this example, two files are created in the
/home/core/assets/backup/directory on the control plane host:-
snapshot_<datetimestamp>.db: This file is the etcd snapshot. Thecluster-backup.shscript confirms its validity. static_kuberesources_<datetimestamp>.tar.gz: This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot.NoteIf etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required to restore from the etcd snapshot.
Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted.
-
5.2. Replacing an unhealthy etcd member
This document describes the process to replace a single unhealthy etcd member.
This process depends on whether the etcd member is unhealthy because the machine is not running or the node is not ready, or whether it is unhealthy because the etcd pod is crashlooping.
If you have lost the majority of your control plane hosts (also known as the master hosts), leading to etcd quorum loss, then you must follow the disaster recovery procedure to restore to a previous cluster state instead of this procedure.
If the control plane certificates are not valid on the member being replaced, then you must follow the procedure to recover from expired control plane certificates instead of this procedure.
If a control plane node is lost and a new one is created, the etcd cluster Operator handles generating the new TLS certificates and adding the node as an etcd member.
5.2.1. Prerequisites
- Take an etcd backup prior to replacing an unhealthy etcd member.
5.2.2. Identifying an unhealthy etcd member
You can identify if your cluster has an unhealthy etcd member.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
Procedure
- Check the status of the EtcdMembersAvailable status condition using the following command:

  $ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="EtcdMembersAvailable")]}{.message}{"\n"}'

- Review the output:

  2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy

  This example output shows that the ip-10-0-131-183.ec2.internal etcd member is unhealthy.
5.2.3. Determining the state of the unhealthy etcd member
The steps to replace an unhealthy etcd member depend on which of the following states your etcd member is in:
- The machine is not running or the node is not ready
- The etcd pod is crashlooping
This procedure determines which state your etcd member is in. This enables you to know which procedure to follow to replace the unhealthy etcd member.
If you are aware that the machine is not running or the node is not ready, but you expect it to return to a healthy state soon, then you do not need to perform a procedure to replace the etcd member. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have identified an unhealthy etcd member.
Procedure
- Determine if the machine is not running:

  $ oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{"\t"}{@.status.providerStatus.instanceState}{"\n"}' | grep -v running

  Example output

  ip-10-0-131-183.ec2.internal  stopped 1

  1 - This output lists the node and the status of the node’s machine. If the status is anything other than running, then the machine is not running.
If the machine is not running, then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure.
Determine if the node is not ready.
  If either of the following scenarios is true, then the node is not ready.

  - If the machine is running, then check whether the node is unreachable:

    $ oc get nodes -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{"\t"}{range .spec.taints[*]}{.key}{" "}' | grep unreachable

    Example output

    ip-10-0-131-183.ec2.internal  node-role.kubernetes.io/master node.kubernetes.io/unreachable node.kubernetes.io/unreachable 1

    1 - If the node is listed with an unreachable taint, then the node is not ready.
  - If the node is still reachable, then check whether the node is listed as NotReady:

    $ oc get nodes -l node-role.kubernetes.io/master | grep "NotReady"

    Example output

    ip-10-0-131-183.ec2.internal   NotReady   master   122m   v1.20.0 1

    1 - If the node is listed as NotReady, then the node is not ready.
If the node is not ready, then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure.
Determine if the etcd pod is crashlooping.
If the machine is running and the node is ready, then check whether the etcd pod is crashlooping.
  - Verify that all control plane nodes (also known as the master nodes) are listed as Ready:

    $ oc get nodes -l node-role.kubernetes.io/master

    Example output

    NAME                           STATUS   ROLES    AGE     VERSION
    ip-10-0-131-183.ec2.internal   Ready    master   6h13m   v1.20.0
    ip-10-0-164-97.ec2.internal    Ready    master   6h13m   v1.20.0
    ip-10-0-154-204.ec2.internal   Ready    master   6h13m   v1.20.0

  - Check whether the status of an etcd pod is either Error or CrashloopBackoff:

    $ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd

    Example output

    etcd-ip-10-0-131-183.ec2.internal   2/3   Error     7   6h9m 1
    etcd-ip-10-0-164-97.ec2.internal    3/3   Running   0   6h6m
    etcd-ip-10-0-154-204.ec2.internal   3/3   Running   0   6h6m

    1 - Because the status of this pod is Error, the etcd pod is crashlooping.
If the etcd pod is crashlooping, then follow the Replacing an unhealthy etcd member whose etcd pod is crashlooping procedure.
5.2.4. Replacing the unhealthy etcd member
Depending on the state of your unhealthy etcd member, use one of the following procedures:
5.2.4.1. Replacing an unhealthy etcd member whose machine is not running or whose node is not ready
This procedure details the steps to replace an etcd member that is unhealthy either because the machine is not running or because the node is not ready.
Prerequisites
- You have identified the unhealthy etcd member.
- You have verified that either the machine is not running or the node is not ready.
- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.

  Important: It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues.
Procedure
Remove the unhealthy member.
Choose a pod that is not on the affected node:
In a terminal that has access to the cluster as a
cluster-adminuser, run the following command:oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd
$ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcdCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
etcd-ip-10-0-131-183.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m
etcd-ip-10-0-131-183.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124mCopy to Clipboard Copied! Toggle word wrap Toggle overflow Connect to the running etcd container, passing in the name of a pod that is not on the affected node:
In a terminal that has access to the cluster as a
cluster-adminuser, run the following command:oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internalCopy to Clipboard Copied! Toggle word wrap Toggle overflow View the member list:
etcdctl member list -w table
sh-4.2# etcdctl member list -w tableCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure. The
$ etcdctl endpoint healthcommand will list the removed member until the procedure of replacement is finished and a new member is added.Remove the unhealthy etcd member by providing the ID to the
sh-4.2# etcdctl member remove 6fc1e7c9db35841d
Example output
Member 6fc1e7c9db35841d removed from cluster ead669ce1fbfb346
View the member list again and verify that the member was removed:
sh-4.2# etcdctl member list -w table
Example output
You can now exit the node shell.
Important: After you remove the member, the cluster might be unreachable for a short time while the remaining etcd instances reboot.
Remove the old secrets for the unhealthy etcd member that was removed.
List the secrets for the unhealthy etcd member that was removed.
$ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1
- 1
- Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.
There is a peer, serving, and metrics secret as shown in the following output:
Example output
etcd-peer-ip-10-0-131-183.ec2.internal              kubernetes.io/tls   2      47m
etcd-serving-ip-10-0-131-183.ec2.internal           kubernetes.io/tls   2      47m
etcd-serving-metrics-ip-10-0-131-183.ec2.internal   kubernetes.io/tls   2      47m
Delete the secrets for the unhealthy etcd member that was removed.
Delete the peer secret:
$ oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal
Delete the serving secret:
$ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal
Delete the metrics secret:
$ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal
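Alternatively, the three deletions can be combined into one small loop. This is a convenience sketch, not part of the documented procedure; the node name shown is the example value used above and must be replaced with the member you noted earlier.
# Delete the peer, serving, and metrics secrets for the removed member in one pass.
node=ip-10-0-131-183.ec2.internal
for prefix in etcd-peer etcd-serving etcd-serving-metrics; do
  oc delete secret -n openshift-etcd "${prefix}-${node}"
done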
Delete and recreate the control plane machine (also known as the master machine). After this machine is recreated, a new revision is forced and etcd scales up automatically.
If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new master using the same method that was used to originally create it.
Obtain the machine for the unhealthy member.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get machines -n openshift-machine-api -o wide
Example output
- 1
- This is the control plane machine for the unhealthy node, ip-10-0-131-183.ec2.internal.
Save the machine configuration to a file on your file system:
$ oc get machine clustername-8qw5l-master-0 \ 1
    -n openshift-machine-api \
    -o yaml \
    > new-master-machine.yaml
- 1
- Specify the name of the control plane machine for the unhealthy node.
Edit the new-master-machine.yaml file that was created in the previous step to assign a new name and remove unnecessary fields.
Remove the entire status section.
Change the metadata.name field to a new name.
It is recommended to keep the same base name as the old machine and change the ending number to the next available number. In this example, clustername-8qw5l-master-0 is changed to clustername-8qw5l-master-3.
Remove the spec.providerID field:
providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f
Remove the metadata.annotations and metadata.generation fields:
annotations:
  machine.openshift.io/instance-state: running
...
generation: 2
Remove the metadata.resourceVersion and metadata.uid fields:
resourceVersion: "13291"
uid: a282eb70-40a2-4e89-8009-d05dd420d31a
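For illustration only, a trimmed new-master-machine.yaml might look like the following minimal sketch. The label keys and overall structure are typical of Machine API manifests on AWS but are assumptions here; your saved file is authoritative, and the provider-specific configuration under spec.providerSpec must be kept exactly as exported.
apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: clustername-8qw5l
    machine.openshift.io/cluster-api-machine-role: master
    machine.openshift.io/cluster-api-machine-type: master
  name: clustername-8qw5l-master-3
  namespace: openshift-machine-api
spec:
  providerSpec:
    value:
      # Keep the provider-specific configuration exactly as exported; only the fields called out
      # in the sub-steps above (status, providerID, annotations, generation, resourceVersion, uid)
      # are removed, and metadata.name is updated.
      ...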
Delete the machine of the unhealthy member:
$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1
- 1
- Specify the name of the control plane machine for the unhealthy node.
Verify that the machine was deleted:
$ oc get machines -n openshift-machine-api -o wide
Example output
Create the new machine using the new-master-machine.yaml file:
$ oc apply -f new-master-machine.yaml
Verify that the new machine has been created:
$ oc get machines -n openshift-machine-api -o wide
Example output
- 1
- The new machine, clustername-8qw5l-master-3, is being created and is ready once the phase changes from Provisioning to Running.
It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
Verification
Verify that all etcd pods are running properly.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd
Example output
etcd-ip-10-0-133-53.ec2.internal                 3/3     Running     0          7m49s
etcd-ip-10-0-164-97.ec2.internal                 3/3     Running     0          123m
etcd-ip-10-0-154-204.ec2.internal                3/3     Running     0          124m
If the output from the previous command only lists two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge 1
- 1
- The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
Verify that there are exactly three etcd members.
Connect to the running etcd container, passing in the name of a pod that was not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
View the member list:
sh-4.2# etcdctl member list -w table
Example output
If the output from the previous command lists more than three etcd members, you must carefully remove the unwanted member.
Warning: Be sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss.
5.2.4.2. Replacing an unhealthy etcd member whose etcd pod is crashlooping
This procedure details the steps to replace an etcd member that is unhealthy because the etcd pod is crashlooping.
Prerequisites
- You have identified the unhealthy etcd member.
- You have verified that the etcd pod is crashlooping.
- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.
Important: It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues.
Procedure
Stop the crashlooping etcd pod.
Debug the node that is crashlooping.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc debug node/ip-10-0-131-183.ec2.internal 1
- 1
- Replace this with the name of the unhealthy node.
Change your root directory to the host:
sh-4.2# chroot /host
Move the existing etcd pod file out of the kubelet manifest directory:
sh-4.2# mkdir /var/lib/etcd-backup
sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/
Move the etcd data directory to a different location:
sh-4.2# mv /var/lib/etcd/ /tmp
You can now exit the node shell.
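Optionally, you can confirm from the node, in the debug shell after chroot /host, that etcd is no longer running before you continue. This check is a hedged addition that reuses the crictl command shown later in this document; an empty result is expected, although it might take a few moments for the kubelet to stop the static pod.
sh-4.2# crictl ps | grep etcd | grep -v operator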
Remove the unhealthy member.
Choose a pod that is not on the affected node.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd
Example output
etcd-ip-10-0-131-183.ec2.internal                2/3     Error       7          6h9m
etcd-ip-10-0-164-97.ec2.internal                 3/3     Running     0          6h6m
etcd-ip-10-0-154-204.ec2.internal                3/3     Running     0          6h6m
Connect to the running etcd container, passing in the name of a pod that is not on the affected node.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
View the member list:
sh-4.2# etcdctl member list -w table
Example output
Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure.
Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:
sh-4.2# etcdctl member remove 62bcf33650a7170a
Example output
Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346
View the member list again and verify that the member was removed:
sh-4.2# etcdctl member list -w table
Example output
You can now exit the node shell.
Remove the old secrets for the unhealthy etcd member that was removed.
List the secrets for the unhealthy etcd member that was removed.
$ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1
- 1
- Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.
There is a peer, serving, and metrics secret as shown in the following output:
Example output
etcd-peer-ip-10-0-131-183.ec2.internal              kubernetes.io/tls   2      47m
etcd-serving-ip-10-0-131-183.ec2.internal           kubernetes.io/tls   2      47m
etcd-serving-metrics-ip-10-0-131-183.ec2.internal   kubernetes.io/tls   2      47m
Delete the secrets for the unhealthy etcd member that was removed.
Delete the peer secret:
$ oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal
Delete the serving secret:
$ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal
Delete the metrics secret:
$ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal
Force etcd redeployment.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "single-master-recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge 1
- 1
- The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
When the etcd cluster Operator performs a redeployment, it ensures that all control plane nodes (also known as the master nodes) have a functioning etcd pod.
Verification
Verify that the new member is available and healthy.
Connect to the running etcd container again.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal
Verify that all members are healthy:
sh-4.2# etcdctl endpoint health --cluster
Example output
https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms
https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms
https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms
5.3. Disaster recovery
5.3.1. About disaster recovery
The disaster recovery documentation provides information for administrators on how to recover from several disaster situations that might occur with their OpenShift Container Platform cluster. As an administrator, you might need to follow one or more of the following procedures to return your cluster to a working state.
Disaster recovery requires you to have at least one healthy control plane host (also known as the master host).
- Restoring to a previous cluster state
This solution handles situations where you want to restore your cluster to a previous state, for example, if an administrator deletes something critical. This also includes situations where you have lost the majority of your control plane hosts, leading to etcd quorum loss and the cluster going offline. As long as you have taken an etcd backup, you can follow this procedure to restore your cluster to a previous state.
If applicable, you might also need to recover from expired control plane certificates.
Warning: Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. This procedure should only be used as a last resort.
Prior to performing a restore, see About restoring cluster state for more information on the impact to the cluster.
Note: If you have a majority of your masters still available and have an etcd quorum, then follow the procedure to replace a single unhealthy etcd member.
- Recovering from expired control plane certificates
- This solution handles situations where your control plane certificates have expired. For example, if you shut down your cluster before the first certificate rotation, which occurs 24 hours after installation, your certificates will not be rotated and will expire. You can follow this procedure to recover from expired control plane certificates.
5.3.2. Restoring to a previous cluster state
To restore the cluster to a previous state, you must have previously backed up etcd data by creating a snapshot. You will use this snapshot to restore the cluster state.
5.3.2.1. About restoring cluster state
You can use an etcd backup to restore your cluster to a previous state. This can be used to recover from the following situations:
- The cluster has lost the majority of control plane hosts (quorum loss).
- An administrator has deleted something critical and must restore to recover the cluster.
Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. This should only be used as a last resort.
If you are able to retrieve data using the Kubernetes API server, then etcd is available and you should not restore using an etcd backup.
Restoring etcd effectively takes a cluster back in time and all clients will experience a conflicting, parallel history. This can impact the behavior of watching components like kubelets, Kubernetes controller managers, SDN controllers, and persistent volume controllers.
It can cause Operator churn when the content in etcd does not match the actual content on disk, causing Operators for the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, and etcd to get stuck when files on disk conflict with content in etcd. This can require manual actions to resolve the issues.
In extreme cases, the cluster can lose track of persistent volumes, delete critical workloads that no longer exist, reimage machines, and rewrite CA bundles with expired certificates.
5.3.2.2. Restoring to a previous cluster state
You can use a saved etcd backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts (also known as the master hosts).
When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.7.2 cluster must use an etcd backup that was taken from 4.7.2.
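To confirm the z-stream version of the running cluster before you select a backup, you can check the ClusterVersion resource. This check is a convenience addition, not part of the documented procedure.
$ oc get clusterversion
The VERSION column shows the current cluster version, which must match the z-stream of the backup that you intend to restore.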
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
- A healthy control plane host to use as the recovery host.
- SSH access to control plane hosts.
- A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz.
For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and recreate the other non-recovery control plane machines, one by one.
Procedure
- Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on.
Establish SSH connectivity to each of the control plane nodes, including the recovery host.
The Kubernetes API server becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal.
Important: If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.
Copy the etcd backup directory to the recovery control plane host.
This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host.
Stop the static pods on any other control plane nodes.
Note: It is not required to manually stop the pods on the recovery host. The recovery script will stop the pods on the recovery host.
- Access a control plane host that is not the recovery host.
Move the existing etcd pod file out of the kubelet manifest directory:
$ sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp
Verify that the etcd pods are stopped.
$ sudo crictl ps | grep etcd | grep -v operator
The output of this command should be empty. If it is not empty, wait a few minutes and check again.
Move the existing Kubernetes API server pod file out of the kubelet manifest directory:
$ sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp
Verify that the Kubernetes API server pods are stopped.
$ sudo crictl ps | grep kube-apiserver | grep -v operator
The output of this command should be empty. If it is not empty, wait a few minutes and check again.
Move the etcd data directory to a different location:
$ sudo mv /var/lib/etcd/ /tmp
- Repeat this step on each of the other control plane hosts that is not the recovery host.
- Access the recovery control plane host.
If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables.
Tip: You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
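If the proxy is enabled, the exports might look like the following sketch. The placeholder values are assumptions and must be replaced with the httpProxy, httpsProxy, and noProxy values reported for your cluster.
# Hypothetical placeholder values; substitute the values from oc get proxy cluster -o yaml.
$ export HTTP_PROXY=http://<proxy_host>:<proxy_port>
$ export HTTPS_PROXY=http://<proxy_host>:<proxy_port>
$ export NO_PROXY=<comma_separated_no_proxy_list>
The restore command in the next step uses sudo -E, so these exported variables are passed through to the script.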
Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory:
$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup
Example script output
Note: The restore process can cause nodes to enter the NotReady state if the node certificates were updated after the last etcd backup.
Check the nodes to ensure they are in the Ready state.
Run the following command:
$ oc get nodes -w
Sample output
It can take several minutes for all nodes to report their state.
If any nodes are in the NotReady state, log in to the nodes and remove all of the PEM files from the /var/lib/kubelet/pki directory on each node. You can SSH into the nodes or use the terminal window in the web console.
$ ssh -i <ssh-key-path> core@<master-hostname>
Sample pki directory
sh-4.4# pwd
/var/lib/kubelet/pki
sh-4.4# ls
kubelet-client-2022-04-28-11-24-09.pem  kubelet-server-2022-04-28-11-24-15.pem
kubelet-client-current.pem              kubelet-server-current.pem
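The removal command itself is not shown in the procedure; a minimal sketch, assuming the directory layout in the sample above, is:
# Remove all PEM files from the kubelet pki directory on the NotReady node.
sh-4.4# rm -f /var/lib/kubelet/pki/*.pem
The kubelet requests new certificates after it is restarted in the next step, which generates the CSRs that you approve later in this procedure.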
Restart the kubelet service on all control plane hosts.
From the recovery host, run the following command:
$ sudo systemctl restart kubelet.service
- Repeat this step on all other control plane hosts.
Approve the pending CSRs:
Get the list of current CSRs:
$ oc get csr
Example output
Review the details of a CSR to verify that it is valid:
$ oc describe csr <csr_name> 1
- 1
- <csr_name> is the name of a CSR from the list of current CSRs.
Approve each valid node-bootstrapper CSR:
$ oc adm certificate approve <csr_name>
For user-provisioned installations, approve each valid kubelet serving CSR:
$ oc adm certificate approve <csr_name>
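If many CSRs are pending, you can approve them in bulk. The following is a hedged sketch of a commonly used loop that approves every CSR that does not yet have a status; review the list first and verify that this selection is appropriate for your cluster before running it.
# Approve all CSRs that have not yet been approved or denied (use with care).
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve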
Verify that the single member control plane has started successfully.
From the recovery host, verify that the etcd container is running.
$ sudo crictl ps | grep etcd | grep -v operator
Example output
3ad41b7908e32       36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009   About a minute ago   Running   etcd   0   7c05f8af362f0
From the recovery host, verify that the etcd pod is running.
$ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd
Note: If you attempt to run oc login prior to running this command and receive the following error, wait a few moments for the authentication controllers to start and try again.
Unable to connect to the server: EOF
Example output
NAME                                READY   STATUS    RESTARTS   AGE
etcd-ip-10-0-143-125.ec2.internal   1/1     Running   1          2m47s
If the status is Pending, or the output lists more than one running etcd pod, wait a few minutes and check again.
- Repeat this step for each lost control plane host that is not the recovery host.
Delete and recreate the other non-recovery control plane machines, one by one. After these machines are recreated, a new revision is forced and etcd scales up automatically.
If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new control plane node using the same method that was used to originally create it.
Warning: Do not delete and recreate the machine for the recovery host.
Obtain the machine for one of the lost control plane hosts.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get machines -n openshift-machine-api -o wide
Example output:
- 1
- This is the control plane machine for the lost control plane host, ip-10-0-131-183.ec2.internal.
Save the machine configuration to a file on your file system:
$ oc get machine clustername-8qw5l-master-0 \ 1
    -n openshift-machine-api \
    -o yaml \
    > new-master-machine.yaml
- 1
- Specify the name of the control plane machine for the lost control plane host.
Edit the new-master-machine.yaml file that was created in the previous step to assign a new name and remove unnecessary fields.
Remove the entire status section.
Change the metadata.name field to a new name.
It is recommended to keep the same base name as the old machine and change the ending number to the next available number. In this example, clustername-8qw5l-master-0 is changed to clustername-8qw5l-master-3.
Remove the spec.providerID field:
providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f
Remove the metadata.annotations and metadata.generation fields:
annotations:
  machine.openshift.io/instance-state: running
...
generation: 2
Remove the metadata.resourceVersion and metadata.uid fields:
resourceVersion: "13291"
uid: a282eb70-40a2-4e89-8009-d05dd420d31a
Delete the machine of the lost control plane host:
$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1
- 1
- Specify the name of the control plane machine for the lost control plane host.
Verify that the machine was deleted:
$ oc get machines -n openshift-machine-api -o wide
Example output:
Create the new machine using the new-master-machine.yaml file:
$ oc apply -f new-master-machine.yaml
Verify that the new machine has been created:
$ oc get machines -n openshift-machine-api -o wide
Example output:
- 1
- The new machine, clustername-8qw5l-master-3, is being created and is ready after the phase changes from Provisioning to Running.
It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
- Repeat these steps for each lost control plane host that is not the recovery host.
In a separate terminal window, log in to the cluster as a user with the cluster-admin role by using the following command:
$ oc login -u <cluster_admin> 1
- 1
- For <cluster_admin>, specify a user name with the cluster-admin role.
Force etcd redeployment.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge 1
- 1
- The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up.
Verify all nodes are updated to the latest revision.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7 1
- 1
- In this example, the latest revision number is 7.
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
After etcd is redeployed, force new rollouts for the control plane. The Kubernetes API server will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer.
In a terminal that has access to the cluster as a cluster-admin user, run the following commands.
Force a new rollout for the Kubernetes API server:
$ oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
Verify all nodes are updated to the latest revision.
$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7 1
- 1
- In this example, the latest revision number is 7.
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
Force a new rollout for the Kubernetes controller manager:
$ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
Verify all nodes are updated to the latest revision.
$ oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7 1
- 1
- In this example, the latest revision number is 7.
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
Force a new rollout for the Kubernetes scheduler:
$ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
Verify all nodes are updated to the latest revision.
$ oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:
AllNodesAtLatestRevision
3 nodes are at revision 7 1
- 1
- In this example, the latest revision number is 7.
If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
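As a convenience, and not part of the documented procedure, the four revision checks above can be run in one loop; this sketch reuses the documented jsonpath expression unchanged.
# Check the NodeInstallerProgressing condition for all four control plane operators in one pass.
for resource in etcd kubeapiserver kubecontrollermanager kubescheduler; do
  echo "=== ${resource} ==="
  oc get "${resource}" -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'
done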
Verify that all control plane hosts have started and joined the cluster.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd
Example output
etcd-ip-10-0-143-125.ec2.internal                2/2     Running     0          9h
etcd-ip-10-0-154-194.ec2.internal                2/2     Running     0          9h
etcd-ip-10-0-173-171.ec2.internal                2/2     Running     0          9h
To ensure that all workloads return to normal operation following a recovery procedure, restart each pod that stores Kubernetes API information. This includes OpenShift Container Platform components such as routers, Operators, and third-party components.
Note that it might take several minutes after completing this procedure for all services to be restored. For example, authentication by using oc login might not immediately work until the OAuth server pods are restarted.
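As one hedged illustration of such a restart, you can delete the OAuth server pods mentioned above so that their controller recreates them. The namespace shown is an assumption and should be verified on your cluster before you run the command.
# Restart the OAuth server pods by deleting them; the controller recreates them automatically.
$ oc delete pods -n openshift-authentication --all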
5.3.2.3. Issues and workarounds for restoring a persistent storage state
If your OpenShift Container Platform cluster uses persistent storage of any form, a state of the cluster is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object. When you restore from an etcd backup, the status of the workloads in OpenShift Container Platform is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated.
The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OpenShift Container Platform cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice-versa.
The following are some example scenarios that produce an out-of-date status:
- MySQL database is running in a pod backed up by a PV object. Restoring OpenShift Container Platform from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume.
- Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OpenShift Container Platform is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start.
- Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to not work. You might have to manually update the credentials required by those drivers or Operators.
- A device is removed or renamed from OpenShift Container Platform nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from the /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist.
To fix this problem, an administrator must:
- Manually remove the PVs with invalid devices.
- Remove symlinks from respective nodes.
- Delete LocalVolume or LocalVolumeSet objects (see Storage → Configuring persistent storage → Persistent storage using local volumes → Deleting the Local Storage Operator Resources).
5.3.3. Recovering from expired control plane certificates
5.3.3.1. Recovering from expired control plane certificates
The cluster can automatically recover from expired control plane certificates.
However, you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. For user-provisioned installations, you might also need to approve pending kubelet serving CSRs.
Use the following steps to approve the pending CSRs:
Procedure
Get the list of current CSRs:
$ oc get csr
Example output
Review the details of a CSR to verify that it is valid:
$ oc describe csr <csr_name> 1
- 1
- <csr_name> is the name of a CSR from the list of current CSRs.
Approve each valid node-bootstrapper CSR:
$ oc adm certificate approve <csr_name>
For user-provisioned installations, approve each valid kubelet serving CSR:
$ oc adm certificate approve <csr_name>
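After approving the CSRs, you can confirm that the nodes return to the Ready state as their kubelet certificates are renewed. This verification is a convenience addition that reuses a command shown earlier in this document.
# All nodes should eventually report a Ready status once their certificates are renewed.
$ oc get nodes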
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.