Backup and restore
Backing up and restoring your OpenShift Container Platform cluster
Abstract
This document describes how to back up etcd data, shut down and restart your OpenShift Container Platform cluster gracefully, recover from disaster scenarios by restoring to a previous cluster state, and back up and restore applications by using the OpenShift API for Data Protection (OADP).
Chapter 1. Backup and restore
1.1. Control plane backup and restore operations
As a cluster administrator, you might need to stop an OpenShift Container Platform cluster for a period and restart it later. Some reasons for restarting a cluster are that you need to perform maintenance on a cluster or want to reduce resource costs. In OpenShift Container Platform, you can perform a graceful shutdown of a cluster so that you can easily restart the cluster later.
You must back up etcd data before shutting down a cluster; etcd is the key-value store for OpenShift Container Platform, which persists the state of all resource objects. An etcd backup plays a crucial role in disaster recovery. In OpenShift Container Platform, you can also replace an unhealthy etcd member.
When you want to get your cluster running again, restart the cluster gracefully.
A cluster’s certificates expire one year after the installation date. You can shut down a cluster and expect it to restart gracefully while the certificates are still valid. Although the cluster automatically retrieves the expired control plane certificates, you must still approve the certificate signing requests (CSRs).
You might run into several situations where OpenShift Container Platform does not work as expected, such as:
- You have a cluster that is not functional after the restart because of unexpected conditions, such as node failure or network connectivity issues.
- You have deleted something critical in the cluster by mistake.
- You have lost the majority of your control plane hosts, leading to etcd quorum loss.
You can always recover from a disaster situation by restoring your cluster to its previous state using the saved etcd snapshots.
1.2. Application backup and restore operations
As a cluster administrator, you can back up and restore applications running on OpenShift Container Platform by using the OpenShift API for Data Protection (OADP).
OADP backs up and restores Kubernetes resources and internal images, at the granularity of a namespace, by using the version of Velero that is appropriate for the version of OADP you install, according to the table in Downloading the Velero CLI tool. OADP backs up and restores persistent volumes (PVs) by using snapshots or Restic. For details, see OADP features.
1.2.1. OADP requirements
OADP has the following requirements:
- You must be logged in as a user with a cluster-admin role.
- You must have object storage for storing backups, such as one of the following storage types:
- OpenShift Data Foundation
- Amazon Web Services
- Microsoft Azure
- Google Cloud Platform
- S3-compatible object storage
- IBM Cloud® Object Storage S3
If you want to use CSI backup on OCP 4.11 and later, install OADP 1.1.x.
OADP 1.0.x does not support CSI backup on OCP 4.11 and later. OADP 1.0.x includes Velero 1.7.x and expects the API group snapshot.storage.k8s.io/v1beta1, which is not present on OCP 4.11 and later.
The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
To back up PVs with snapshots, you must have cloud storage that has a native snapshot API or supports Container Storage Interface (CSI) snapshots, such as the following providers:
- Amazon Web Services
- Microsoft Azure
- Google Cloud Platform
- CSI snapshot-enabled cloud storage, such as Ceph RBD or Ceph FS
If you do not want to back up PVs by using snapshots, you can use Restic, which is installed by the OADP Operator by default.
1.2.2. Backing up and restoring applications
You back up applications by creating a Backup custom resource (CR). See Creating a Backup CR. You can configure the following backup options:
- Creating backup hooks to run commands before or after the backup operation
- Scheduling backups
- Backing up applications with File System Backup: Kopia or Restic
- You restore application backups by creating a Restore custom resource (CR). See Creating a Restore CR. A minimal sketch of both a Backup CR and a Restore CR follows this list.
- You can configure restore hooks to run commands in init containers or in the application container during the restore operation.
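The following is a minimal sketch of both CRs; it assumes that OADP is installed in the openshift-adp namespace and that <backup_name>, <restore_name>, and <app_namespace> are placeholders for your own values:
$ cat <<EOF | oc create -f -
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: <backup_name>
  namespace: openshift-adp      # namespace where the OADP Operator runs
spec:
  includedNamespaces:
  - <app_namespace>             # namespace of the application to back up
EOF
$ cat <<EOF | oc create -f -
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: <restore_name>
  namespace: openshift-adp
spec:
  backupName: <backup_name>     # name of the Backup CR to restore from
EOF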
Chapter 2. Shutting down the cluster gracefully
This document describes the process to gracefully shut down your cluster. You might need to temporarily shut down your cluster for maintenance reasons, or to save on resource costs.
2.1. Prerequisites
Take an etcd backup prior to shutting down the cluster.
Important: It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues when restarting the cluster.
For example, the following conditions can cause the restarted cluster to malfunction:
- etcd data corruption during shutdown
- Node failure due to hardware
- Network connectivity issues
If your cluster fails to recover, follow the steps to restore to a previous cluster state.
2.2. Shutting down the cluster
You can shut down your cluster in a graceful manner so that it can be restarted at a later date.
You can shut down a cluster for up to a year from the installation date and expect it to restart gracefully. After a year from the installation date, the cluster certificates expire. However, you might need to manually approve the pending certificate signing requests (CSRs) to recover kubelet certificates when the cluster restarts.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.
Procedure
If you are shutting the cluster down for an extended period, determine the date on which certificates expire and run the following command:
$ oc -n openshift-kube-apiserver-operator get secret kube-apiserver-to-kubelet-signer -o jsonpath='{.metadata.annotations.auth\.openshift\.io/certificate-not-after}'
Example output
2022-08-05T14:37:50Z
To ensure that the cluster can restart gracefully, plan to restart it on or before the specified date. As the cluster restarts, the process might require you to manually approve the pending certificate signing requests (CSRs) to recover kubelet certificates.
Mark all the nodes in the cluster as unschedulable. You can do this from your cloud provider’s web console, or by running the following loop:
$ for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do echo ${node} ; oc adm cordon ${node} ; done
Evacuate the pods using the following method:
$ for node in $(oc get nodes -l node-role.kubernetes.io/worker -o jsonpath='{.items[*].metadata.name}'); do echo ${node} ; oc adm drain ${node} --delete-emptydir-data --ignore-daemonsets=true --timeout=15s --force ; done
Shut down all of the nodes in the cluster. You can do this from your cloud provider's web console, or by running the following loop. Shutting down the nodes by using one of these methods allows pods to terminate gracefully, which reduces the chance for data corruption.
Note: Ensure that the control plane node with the API VIP assigned is the last node processed in the loop. Otherwise, the shutdown command fails.
$ for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/${node} -- chroot /host shutdown -h 1; done
The -h 1 option indicates how long, in minutes, this process lasts before the control plane nodes are shut down. For large-scale clusters with 10 nodes or more, set it to -h 10 or longer to make sure all the compute nodes have time to shut down first; see the sketch after this note.
Note: It is not necessary to drain control plane nodes of the standard pods that ship with OpenShift Container Platform prior to shutdown. Cluster administrators are responsible for ensuring a clean restart of their own workloads after the cluster is restarted. If you drained control plane nodes prior to shutdown because of custom workloads, you must mark the control plane nodes as schedulable before the cluster will be functional again after restart.
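For example, a variant of the shutdown loop for a large-scale cluster might look like the following sketch; the 10-minute delay is illustrative only and should be tuned to your environment:
$ for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do
    # Delay the halt by 10 minutes so that compute nodes have time to shut down first.
    oc debug node/${node} -- chroot /host shutdown -h 10
  done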
Shut off any cluster dependencies that are no longer needed, such as external storage or an LDAP server. Be sure to consult your vendor’s documentation before doing so.
Important: If you deployed your cluster on a cloud-provider platform, do not shut down, suspend, or delete the associated cloud resources. If you delete the cloud resources of a suspended virtual machine, OpenShift Container Platform might not restore successfully.
Chapter 3. Restarting the cluster gracefully
This document describes the process to restart your cluster after a graceful shutdown.
Even though the cluster is expected to be functional after the restart, the cluster might not recover due to unexpected conditions, for example:
- etcd data corruption during shutdown
- Node failure due to hardware
- Network connectivity issues
If your cluster fails to recover, follow the steps to restore to a previous cluster state.
3.1. Prerequisites
- You have gracefully shut down your cluster.
3.2. Restarting the cluster
You can restart your cluster after it has been shut down gracefully.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- This procedure assumes that you gracefully shut down the cluster.
Procedure
Turn on the control plane nodes.
If you are using the admin.kubeconfig from the cluster installation and the API virtual IP address (VIP) is up, complete the following steps:
- Set the KUBECONFIG environment variable to the admin.kubeconfig path.
- For each control plane node in the cluster, run the following command:
  $ oc adm uncordon <node>
If you do not have access to your admin.kubeconfig credentials, complete the following steps:
- Use SSH to connect to a control plane node.
- Copy the localhost-recovery.kubeconfig file to the /root directory.
- Use that file to run the following command for each control plane node in the cluster:
  $ oc adm uncordon <node>
- Power on any cluster dependencies, such as external storage or an LDAP server.
Start all cluster machines.
Use the appropriate method for your cloud environment to start the machines, for example, from your cloud provider’s web console.
Wait approximately 10 minutes before continuing to check the status of control plane nodes.
Verify that all control plane nodes are ready.
$ oc get nodes -l node-role.kubernetes.io/master
The control plane nodes are ready if the status is Ready, as shown in the following output:
NAME                           STATUS   ROLES    AGE   VERSION
ip-10-0-168-251.ec2.internal   Ready    master   75m   v1.28.5
ip-10-0-170-223.ec2.internal   Ready    master   75m   v1.28.5
ip-10-0-211-16.ec2.internal    Ready    master   75m   v1.28.5
If the control plane nodes are not ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved.
Get the list of current CSRs:
$ oc get csr
Review the details of a CSR to verify that it is valid:
$ oc describe csr <csr_name>
<csr_name> is the name of a CSR from the list of current CSRs.
Approve each valid CSR:
$ oc adm certificate approve <csr_name>
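After you have reviewed the pending CSRs and confirmed that they are valid, you can approve them in one pass with a command such as the following sketch:
# Approve every CSR that does not yet have a status, that is, every pending CSR.
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve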
After the control plane nodes are ready, verify that all worker nodes are ready.
$ oc get nodes -l node-role.kubernetes.io/worker
The worker nodes are ready if the status is Ready, as shown in the following output:
NAME                           STATUS   ROLES    AGE   VERSION
ip-10-0-179-95.ec2.internal    Ready    worker   64m   v1.28.5
ip-10-0-182-134.ec2.internal   Ready    worker   64m   v1.28.5
ip-10-0-250-100.ec2.internal   Ready    worker   64m   v1.28.5
If the worker nodes are not ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved.
Get the list of current CSRs:
$ oc get csr
Review the details of a CSR to verify that it is valid:
$ oc describe csr <csr_name>
<csr_name> is the name of a CSR from the list of current CSRs.
Approve each valid CSR:
$ oc adm certificate approve <csr_name>
After the control plane and compute nodes are ready, mark all the nodes in the cluster as schedulable by running the following command:
$ for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do echo ${node} ; oc adm uncordon ${node} ; done
Verify that the cluster started properly.
Check that there are no degraded cluster Operators.
$ oc get clusteroperators
Check that there are no cluster Operators with the DEGRADED condition set to True.
Check that all nodes are in the Ready state:
$ oc get nodes
Check that the status for all nodes is Ready.
If the cluster did not start properly, you might need to restore your cluster using an etcd backup.
Chapter 4. OADP Application backup and restore
4.1. Introduction to OpenShift API for Data Protection
The OpenShift API for Data Protection (OADP) product safeguards customer applications on OpenShift Container Platform. It offers comprehensive disaster recovery protection, covering OpenShift Container Platform applications, application-related cluster resources, persistent volumes, and internal images. OADP is also capable of backing up both containerized applications and virtual machines (VMs).
However, OADP does not serve as a disaster recovery solution for etcd or OpenShift Container Platform Operators.
OADP support is provided for customer workload namespaces and cluster-scoped resources.
Full cluster backup and restore are not supported.
4.1.1. OpenShift API for Data Protection APIs
OpenShift API for Data Protection (OADP) provides APIs that enable multiple approaches to customizing backups and preventing the inclusion of unnecessary or inappropriate resources.
OADP provides the following APIs:
4.2. OADP release notes
4.2.1. OADP 1.4 release notes
The release notes for OpenShift API for Data Protection (OADP) describe new features and enhancements, deprecated features, product recommendations, known issues, and resolved issues.
For additional information about OADP, see OpenShift API for Data Protection (OADP) FAQs
4.2.1.1. OADP 1.4.6 release notes
OpenShift API for Data Protection (OADP) 1.4.6 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.4.5.
4.2.1.2. OADP 1.4.5 release notes
The OpenShift API for Data Protection (OADP) 1.4.5 release notes list new features and resolved issues.
4.2.1.2.1. New features
Collecting logs with the must-gather tool has been improved with a Markdown summary
You can collect logs and information about OpenShift API for Data Protection (OADP) custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. This tool generates a Markdown output file with the collected information, which is located in the clusters directory of the must-gather logs. (OADP-5904)
4.2.1.2.2. Resolved issues
- OADP 1.4.5 fixes the following CVEs
4.2.1.3. OADP 1.4.4 release notes
OpenShift API for Data Protection (OADP) 1.4.4 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.4.3.
4.2.1.3.1. Known issues
Issue with restoring stateful applications
When you restore a stateful application that uses the azurefile-csi storage class, the restore operation remains in the Finalizing phase. (OADP-5508)
4.2.1.4. OADP 1.4.3 release notes
The OpenShift API for Data Protection (OADP) 1.4.3 release notes list the following new feature.
4.2.1.4.1. New features
Notable changes in the kubevirt velero plugin in version 0.7.1
With this release, the kubevirt velero plugin has been updated to version 0.7.1. Notable improvements include the following bug fix and new features:
- Virtual machine instances (VMIs) are no longer ignored from backup when the owner VM is excluded.
- Object graphs now include all extra objects during backup and restore operations.
- Optionally generated labels are now added to new firmware Universally Unique Identifiers (UUIDs) during restore operations.
- Switching VM run strategies during restore operations is now possible.
- Clearing a MAC address by label is now supported.
- The restore-specific checks during the backup operation are now skipped.
- The VirtualMachineClusterInstancetype and VirtualMachineClusterPreference custom resource definitions (CRDs) are now supported.
4.2.1.5. OADP 1.4.2 release notes
The OpenShift API for Data Protection (OADP) 1.4.2 release notes list new features, resolved issues and bugs, and known issues.
4.2.1.5.1. New features
Backing up different volumes in the same namespace by using the VolumePolicy feature is now possible
With this release, Velero provides resource policies to back up different volumes in the same namespace by using the VolumePolicy feature. The VolumePolicy feature supports the skip, snapshot, and fs-backup actions. OADP-1071
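The following is a minimal sketch of such a resource policy, stored in a ConfigMap that a Backup CR can reference through its spec.resourcepolicy field; the ConfigMap name and the storage class are placeholders:
$ cat <<EOF > volume-policy.yaml
version: v1
volumePolicies:
- conditions:
    storageClass:
    - <storage_class_to_skip>   # volumes that use this storage class match the policy
  action:
    type: skip                  # the supported actions are skip, snapshot, and fs-backup
EOF
$ oc -n openshift-adp create configmap <policy_configmap> --from-file=volume-policy.yaml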
File system backup and data mover can now use short-term credentials
File system backup and data mover can now use short-term credentials such as AWS Security Token Service (STS) and Google Cloud WIF. With this support, backup is successfully completed without any PartiallyFailed status. OADP-5095
4.2.1.5.2. Resolved issues
DPA now reports errors if VSL contains an incorrect provider value
Previously, if the provider of a Volume Snapshot Location (VSL) spec was incorrect, the Data Protection Application (DPA) reconciled successfully. With this update, DPA reports errors and requests for a valid provider value. OADP-5044
Data Mover restore is successful irrespective of using different OADP namespaces for backup and restore
Previously, when a backup operation was executed by using OADP installed in one namespace but was restored by using OADP installed in a different namespace, the Data Mover restore failed. With this update, Data Mover restore is now successful. OADP-5460
SSE-C backup works with the calculated MD5 of the secret key
Previously, backup failed with the following error:
Requests specifying Server Side Encryption with Customer provided keys must provide the client calculated MD5 of the secret key.
With this update, the missing Server-Side Encryption with Customer-Provided Keys (SSE-C) base64 and MD5 hash are now fixed. As a result, SSE-C backup works with the calculated MD5 of the secret key. In addition, incorrect error handling for the customerKey size is also fixed. OADP-5388
For a complete list of all issues resolved in this release, see the list of OADP 1.4.2 resolved issues in Jira.
4.2.1.5.3. Known issues
The nodeSelector spec is not supported for the Data Mover restore action
When a Data Protection Application (DPA) is created with the nodeSelector field set in the nodeAgent parameter, Data Mover restore partially fails instead of completing the restore operation. OADP-5260
The S3 storage does not use proxy environment when TLS skip verify is specified
In the image registry backup, the S3 storage does not use the proxy environment when the insecureSkipTLSVerify parameter is set to true. OADP-3143
Kopia does not delete artifacts after backup expiration
Even after you delete a backup, Kopia does not delete the volume artifacts from the ${bucket_name}/kopia/$openshift-adp path on the S3 location after the backup expires. For more information, see "About Kopia repository maintenance". OADP-5131
4.2.1.6. OADP 1.4.1 release notes
The OpenShift API for Data Protection (OADP) 1.4.1 release notes list new features, resolved issues and bugs, and known issues.
4.2.1.6.1. New features
New DPA fields to update client qps and burst
You can now change Velero Server Kubernetes API queries per second and burst values by using the new Data Protection Application (DPA) fields. The new DPA fields are spec.configuration.velero.client-qps and spec.configuration.velero.client-burst, which both default to 100. OADP-4076
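For example, the following sketch sets both fields on an existing DPA; the DPA name dpa-sample and the values 150 and 200 are placeholders, and the field types must match your DPA CRD schema:
$ oc -n openshift-adp patch dpa dpa-sample --type merge \
    -p '{"spec":{"configuration":{"velero":{"client-qps":150,"client-burst":200}}}}'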
Enabling non-default algorithms with Kopia
With this update, you can now configure the hash, encryption, and splitter algorithms in Kopia to select non-default options to optimize performance for different backup workloads.
To configure these algorithms, set the env variable of a velero pod in the podConfig section of the DataProtectionApplication (DPA) configuration. If this variable is not set, or an unsupported algorithm is chosen, Kopia will default to its standard algorithms. OADP-4640
4.2.1.6.2. Resolved issues
Restoring a backup without pods is now successful
Previously, restoring a backup without pods and having StorageClass VolumeBindingMode set as WaitForFirstConsumer, resulted in the PartiallyFailed status with an error: fail to patch dynamic PV, err: context deadline exceeded. With this update, patching dynamic PV is skipped and restoring a backup is successful without any PartiallyFailed status. OADP-4231
PodVolumeBackup CR now displays correct message
Previously, the PodVolumeBackup custom resource (CR) generated an incorrect message, which was: get a podvolumebackup with status "InProgress" during the server starting, mark it as "Failed". With this update, the message produced is now:
found a podvolumebackup with status "InProgress" during the server starting, mark it as "Failed".
Overriding imagePullPolicy is now possible with DPA
Previously, OADP set the imagePullPolicy parameter to Always for all images. With this update, OADP checks if each image contains sha256 or sha512 digest, then it sets imagePullPolicy to IfNotPresent; otherwise imagePullPolicy is set to Always. You can now override this policy by using the new spec.containerImagePullPolicy DPA field. OADP-4172
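A minimal sketch of overriding this policy on an existing DPA; the DPA name dpa-sample is a placeholder:
$ oc -n openshift-adp patch dpa dpa-sample --type merge \
    -p '{"spec":{"containerImagePullPolicy":"IfNotPresent"}}'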
OADP Velero can now retry updating the restore status if initial update fails
Previously, OADP Velero failed to update the restored CR status. This left the status at InProgress indefinitely. Components which relied on the backup and restore CR status to determine the completion would fail. With this update, the restore CR status for a restore correctly proceeds to the Completed or Failed status. OADP-3227
Restoring BuildConfig Build from a different cluster is successful without any errors
Previously, when performing a restore of the BuildConfig Build resource from a different cluster, the application generated an error on TLS verification to the internal image registry. The resulting error was failed to verify certificate: x509: certificate signed by unknown authority error. With this update, the restore of the BuildConfig build resources to a different cluster can proceed successfully without generating the failed to verify certificate error. OADP-4692
Restoring an empty PVC is successful
Previously, downloading data failed while restoring an empty persistent volume claim (PVC). It failed with the following error:
data path restore failed: Failed to run kopia restore: Unable to load
snapshot : snapshot not found
With this update, the data download completes correctly when restoring an empty PVC, and the error message is not generated. OADP-3106
There is no Velero memory leak in CSI and DataMover plugins
Previously, a Velero memory leak was caused by using the CSI and DataMover plugins. When the backup ended, the Velero plugin instance was not deleted and the memory leak consumed memory until an Out of Memory (OOM) condition was generated in the Velero pod. With this update, there is no resulting Velero memory leak when using the CSI and DataMover plugins. OADP-4448
Post-hook operation does not start before the related PVs are released
Previously, due to the asynchronous nature of the Data Mover operation, a post-hook might be attempted before the Data Mover persistent volume claim (PVC) releases the persistent volumes (PVs) of the related pods. This problem would cause the backup to fail with a PartiallyFailed status. With this update, the post-hook operation is not started until the related PVs are released by the Data Mover PVC, eliminating the PartiallyFailed backup status. OADP-3140
Deploying a DPA works as expected in namespaces with more than 37 characters
When you install the OADP Operator in a namespace with more than 37 characters to create a new DPA, labeling the "cloud-credentials" Secret fails and the DPA reports the following error:
The generated label name is too long.
With this update, creating a DPA does not fail in namespaces with more than 37 characters in the name. OADP-3960
Restore is successfully completed by overriding the timeout error
Previously, in a large-scale environment, the restore operation would result in a PartiallyFailed status with the error: fail to patch dynamic PV, err: context deadline exceeded. With this update, the resourceTimeout Velero server argument is used to override this timeout error, resulting in a successful restore. OADP-4344
For a complete list of all issues resolved in this release, see the list of OADP 1.4.1 resolved issues in Jira.
4.2.1.6.3. Known issues
Cassandra application pods enter into the CrashLoopBackoff status after restoring OADP
After OADP restores, the Cassandra application pods might enter the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are returning an error or are in the CrashLoopBackoff state after restoring OADP. The StatefulSet controller then recreates these pods and they run normally. OADP-4407
Deployment referencing ImageStream is not restored properly leading to corrupted pod and volume contents
During a File System Backup (FSB) restore operation, a Deployment resource referencing an ImageStream is not restored properly. The restored pod that runs the FSB, and the postHook is terminated prematurely.
During the restore operation, the OpenShift Container Platform controller updates the spec.template.spec.containers[0].image field in the Deployment resource with an updated ImageStreamTag hash. The update triggers the rollout of a new pod, terminating the pod on which velero runs the FSB along with the post-hook. For more information about image stream trigger, see Triggering updates on image stream changes.
The workaround for this behavior is a two-step restore process:
Perform a restore excluding the Deployment resources, for example:
$ velero restore create <RESTORE_NAME> \
    --from-backup <BACKUP_NAME> \
    --exclude-resources=deployment.apps
Once the first restore is successful, perform a second restore by including these resources, for example:
$ velero restore create <RESTORE_NAME> \
    --from-backup <BACKUP_NAME> \
    --include-resources=deployment.apps
4.2.1.7. OADP 1.4.0 release notes
The OpenShift API for Data Protection (OADP) 1.4.0 release notes list resolved issues and known issues.
4.2.1.7.1. Resolved issues
Restore works correctly in OpenShift Container Platform 4.16
Previously, while restoring the deleted application namespace, the restore operation partially failed with the resource name may not be empty error in OpenShift Container Platform 4.16. With this update, restore works as expected in OpenShift Container Platform 4.16. OADP-4075
Data Mover backups work properly in the OpenShift Container Platform 4.16 cluster
Previously, Velero used an earlier version of the SDK in which the Spec.SourceVolumeMode field did not exist. As a consequence, Data Mover backups failed in the OpenShift Container Platform 4.16 cluster on the external snapshotter with version 4.2. With this update, the external snapshotter is upgraded to version 7.0 and later. As a result, backups do not fail in the OpenShift Container Platform 4.16 cluster. OADP-3922
For a complete list of all issues resolved in this release, see the list of OADP 1.4.0 resolved issues in Jira.
4.2.1.7.2. Known issues
Backup fails when checksumAlgorithm is not set for MCG
While performing a backup of any application with Noobaa as the backup location, if the checksumAlgorithm configuration parameter is not set, backup fails. To fix this problem, if you do not provide a value for checksumAlgorithm in the Backup Storage Location (BSL) configuration, an empty value is added. The empty value is only added for BSLs that are created using Data Protection Application (DPA) custom resource (CR), and this value is not added if BSLs are created using any other method. OADP-4274
For a complete list of all known issues in this release, see the list of OADP 1.4.0 known issues in Jira.
4.2.1.7.3. Upgrade notes
Always upgrade to the next minor version. Do not skip versions. To update to a later version, upgrade only one channel at a time. For example, to upgrade from OpenShift API for Data Protection (OADP) 1.1 to 1.3, upgrade first to 1.2, and then to 1.3.
4.2.1.7.3.1. Changes from OADP 1.3 to 1.4
The Velero server has been updated from version 1.12 to 1.14. Note that there are no changes in the Data Protection Application (DPA).
This changes the following:
- The velero-plugin-for-csi code is now available in the Velero code, which means an init container is no longer required for the plugin.
- Velero changed the client Burst and QPS defaults from 30 and 20 to 100 and 100, respectively.
- The velero-plugin-for-aws plugin updated the default value of the spec.config.checksumAlgorithm field in BackupStorageLocation objects (BSLs) from "" (no checksum calculation) to the CRC32 algorithm. For more information, see Velero plugins for AWS Backup Storage Location. The checksum algorithm types are known to work only with AWS. Several S3 providers require the md5sum to be disabled by setting the checksum algorithm to "". Confirm md5sum algorithm support and configuration with your storage provider.
  In OADP 1.4, the default value for BSLs created within the DPA for this configuration is "". This default value means that the md5sum is not checked, which is consistent with OADP 1.3. For BSLs created within the DPA, update it by using the spec.backupLocations[].velero.config.checksumAlgorithm field in the DPA, as shown in the sketch after this list. If your BSLs are created outside the DPA, you can update this configuration by using spec.config.checksumAlgorithm in the BSLs.
4.2.1.7.3.2. Backing up the DPA configuration
You must back up your current DataProtectionApplication (DPA) configuration.
Procedure
Save your current DPA configuration by running the following command:
Example command
$ oc get dpa -n openshift-adp -o yaml > dpa.orig.backup
4.2.1.7.3.3. Upgrading the OADP Operator
Use the following procedure when upgrading the OpenShift API for Data Protection (OADP) Operator.
Procedure
- Change your subscription channel for the OADP Operator from stable-1.3 to stable-1.4; a command-line sketch follows this list.
- Wait for the Operator and containers to update and restart.
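The channel change can also be made from the command line, as in the following sketch; the Subscription name redhat-oadp-operator is an assumption, so verify the actual name with oc get subscription -n openshift-adp first:
$ oc -n openshift-adp patch subscription redhat-oadp-operator --type merge \
    -p '{"spec":{"channel":"stable-1.4"}}'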
4.2.1.7.4. Converting DPA to the new version
To upgrade from OADP 1.3 to 1.4, no Data Protection Application (DPA) changes are required.
4.2.1.7.5. Verifying the upgrade
Use the following procedure to verify the upgrade.
Procedure
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp
Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:
$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
Example output
{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
Verify the
typeis set toReconciled. Verify the backup storage location and confirm that the
PHASEisAvailableby running the following command:oc get backupstoragelocations.velero.io -n openshift-adp
$ oc get backupstoragelocations.velero.io -n openshift-adpCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true
NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h trueCopy to Clipboard Copied! Toggle word wrap Toggle overflow
4.2.2. OADP 1.3 release notes
The release notes for OpenShift API for Data Protection (OADP) 1.3 describe new features and enhancements, deprecated features, product recommendations, known issues, and resolved issues.
4.2.2.1. OADP 1.3.7 release notes
OpenShift API for Data Protection (OADP) 1.3.7 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.3.6.
The following Common Vulnerabilities and Exposures (CVEs) have been fixed in OADP 1.3.7
4.2.2.1.1. New features
Collecting logs with the must-gather tool has been improved with a Markdown summary
You can collect logs and information about OADP custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. This tool generates a Markdown output file with the collected information, which is located in the must-gather logs clusters directory. OADP-5384
4.2.2.2. OADP 1.3.6 release notes
OpenShift API for Data Protection (OADP) 1.3.6 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.3.5.
4.2.2.3. OADP 1.3.5 release notes
OpenShift API for Data Protection (OADP) 1.3.5 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.3.4.
4.2.2.4. OADP 1.3.4 release notes
The OpenShift API for Data Protection (OADP) 1.3.4 release notes list resolved issues and known issues.
4.2.2.4.1. Resolved issues
The backup spec.resourcepolicy.kind parameter is now case-insensitive
Previously, the backup spec.resourcepolicy.kind parameter was only supported with a lower-level string. With this fix, it is now case-insensitive. OADP-2944
Use olm.maxOpenShiftVersion to prevent cluster upgrade to OCP 4.16 version
The cluster operator-lifecycle-manager operator must not be upgraded between minor OpenShift Container Platform versions. Using the olm.maxOpenShiftVersion parameter prevents upgrading to OpenShift Container Platform 4.16 version when OADP 1.3 is installed. To upgrade to OpenShift Container Platform 4.16 version, upgrade OADP 1.3 on OCP 4.15 version to OADP 1.4. OADP-4803
BSL and VSL are removed from the cluster
Previously, when any Data Protection Application (DPA) was modified to remove the Backup Storage Locations (BSL) or Volume Snapshot Locations (VSL) from the backupLocations or snapshotLocations section, BSL or VSL were not removed from the cluster until the DPA was deleted. With this update, BSL/VSL are removed from the cluster. OADP-3050
DPA reconciles and validates the secret key
Previously, the Data Protection Application (DPA) reconciled successfully on the wrong Volume Snapshot Locations (VSL) secret key name. With this update, DPA validates the secret key name before reconciling on any VSL. OADP-3052
Velero’s cloud credential permissions are now restrictive
Previously, Velero's cloud credential permissions were mounted with 0644 permissions. As a consequence, anyone apart from the owner and group could read the /credentials/cloud file, making it easier to access sensitive information such as storage access keys. With this update, the permissions of this file are updated to 0640, and this file cannot be accessed by users other than the owner and group.
Warning is displayed when ArgoCD managed namespace is included in the backup
A warning is displayed during the backup operation when ArgoCD and Velero manage the same namespace. OADP-4736
The list of security fixes that are included in this release is documented in the RHSA-2024:9960 advisory.
For a complete list of all issues resolved in this release, see the list of OADP 1.3.4 resolved issues in Jira.
4.2.2.4.2. Known issues
Cassandra application pods enter into the CrashLoopBackoff status after restore
After OADP restores, the Cassandra application pods might enter the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are returning an error or are in the CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and they run normally. OADP-3767
defaultVolumesToFSBackup and defaultVolumesToFsBackup flags are not identical
The dpa.spec.configuration.velero.defaultVolumesToFSBackup flag is not identical to the backup.spec.defaultVolumesToFsBackup flag, which can lead to confusion. OADP-3692
PodVolumeRestore works even though the restore is marked as failed
The podvolumerestore continues the data transfer even though the restore is marked as failed. OADP-3039
Velero is unable to skip restoring of initContainer spec
Velero might restore the restore-wait init container even though it is not required. OADP-3759
4.2.2.5. OADP 1.3.3 release notes
The OpenShift API for Data Protection (OADP) 1.3.3 release notes list resolved issues and known issues.
4.2.2.5.1. Resolved issues
OADP fails when its namespace name is longer than 37 characters
When installing the OADP Operator in a namespace with more than 37 characters and when creating a new DPA, labeling the cloud-credentials secret fails. With this release, the issue has been fixed. OADP-4211
OADP image PullPolicy set to Always
In previous versions of OADP, the image PullPolicy of the adp-controller-manager and Velero pods was set to Always. This was problematic in edge scenarios where there could be limited network bandwidth to the registry, resulting in slow recovery time following a pod restart. In OADP 1.3.3, the image PullPolicy of the openshift-adp-controller-manager and Velero pods is set to IfNotPresent.
The list of security fixes that are included in this release is documented in the RHSA-2024:4982 advisory.
For a complete list of all issues resolved in this release, see the list of OADP 1.3.3 resolved issues in Jira.
4.2.2.5.2. Known issues
Cassandra application pods enter into the CrashLoopBackoff status after restoring OADP
After OADP restores, the Cassandra application pods might enter the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are returning an error or are in the CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and they run normally.
4.2.2.6. OADP 1.3.2 release notes
The OpenShift API for Data Protection (OADP) 1.3.2 release notes list resolved issues and known issues.
4.2.2.6.1. Resolved issues
DPA fails to reconcile if a valid custom secret is used for BSL
DPA fails to reconcile if a valid custom secret is used for Backup Storage Location (BSL), but the default secret is missing. The workaround is to create the required default cloud-credentials initially. When the custom secret is re-created, it can be used and checked for its existence.
CVE-2023-45290: oadp-velero-container: Golang net/http: Memory exhaustion in Request.ParseMultipartForm
A flaw was found in the net/http Golang standard library package, which impacts previous versions of OADP. When parsing a multipart form, either explicitly with Request.ParseMultipartForm or implicitly with Request.FormValue, Request.PostFormValue, or Request.FormFile, limits on the total size of the parsed form are not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing long lines to cause the allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion. This flaw has been resolved in OADP 1.3.2.
For more details, see CVE-2023-45290.
CVE-2023-45289: oadp-velero-container: Golang net/http/cookiejar: Incorrect forwarding of sensitive headers and cookies on HTTP redirect
A flaw was found in the net/http/cookiejar Golang standard library package, which impacts previous versions of OADP. When following an HTTP redirect to a domain that is not a subdomain match or exact match of the initial domain, an http.Client does not forward sensitive headers such as Authorization or Cookie. A maliciously crafted HTTP redirect could cause sensitive headers to be unexpectedly forwarded. This flaw has been resolved in OADP 1.3.2.
For more details, see CVE-2023-45289.
CVE-2024-24783: oadp-velero-container: Golang crypto/x509: Verify panics on certificates with an unknown public key algorithm
A flaw was found in the crypto/x509 Golang standard library package, which impacts previous versions of OADP. Verifying a certificate chain that contains a certificate with an unknown public key algorithm causes Certificate.Verify to panic. This affects all crypto/tls clients and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert. The default behavior is for TLS servers to not verify client certificates. This flaw has been resolved in OADP 1.3.2.
For more details, see CVE-2024-24783.
CVE-2024-24784: oadp-velero-plugin-container: Golang net/mail: Comments in display names are incorrectly handled
A flaw was found in the net/mail Golang standard library package, which impacts previous versions of OADP. The ParseAddressList function incorrectly handles comments, text in parentheses, and display names. Because this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers. This flaw has been resolved in OADP 1.3.2.
For more details, see CVE-2024-24784.
CVE-2024-24785: oadp-velero-container: Golang: html/template: errors returned from MarshalJSON methods may break template escaping
A flaw was found in the html/template Golang standard library package, which impacts previous versions of OADP. If errors returned from MarshalJSON methods contain user-controlled data, they may be used to break the contextual auto-escaping behavior of the HTML/template package, allowing subsequent actions to inject unexpected content into the templates. This flaw has been resolved in OADP 1.3.2.
For more details, see CVE-2024-24785.
For a complete list of all issues resolved in this release, see the list of OADP 1.3.2 resolved issues in Jira.
4.2.2.6.2. Known issues
Cassandra application pods enter into the CrashLoopBackoff status after restoring OADP
After OADP restores, the Cassandra application pods might enter the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are returning an error or are in the CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and they run normally.
4.2.2.7. OADP 1.3.1 release notes
The OpenShift API for Data Protection (OADP) 1.3.1 release notes list new features and resolved issues.
4.2.2.7.1. New features
OADP 1.3.0 Data Mover is now fully supported
The OADP built-in Data Mover, introduced in OADP 1.3.0 as a Technology Preview, is now fully supported for both containerized and virtual machine workloads.
4.2.2.7.2. Resolved issues
IBM Cloud(R) Object Storage is now supported as a backup storage provider
IBM Cloud® Object Storage is one of the AWS S3 compatible backup storage providers, which was unsupported previously. With this update, IBM Cloud® Object Storage is now supported as an AWS S3 compatible backup storage provider.
OADP operator now correctly reports the missing region error
Previously, when you specified profile:default without specifying the region in the AWS Backup Storage Location (BSL) configuration, the OADP operator failed to report the missing region error on the Data Protection Application (DPA) custom resource (CR). This update corrects validation of DPA BSL specification for AWS. As a result, the OADP Operator reports the missing region error.
Custom labels are not removed from the openshift-adp namespace
Previously, the openshift-adp-controller-manager pod would reset the labels attached to the openshift-adp namespace. This caused synchronization issues for applications requiring custom labels such as Argo CD, leading to improper functionality. With this update, this issue is fixed and custom labels are not removed from the openshift-adp namespace.
OADP must-gather image collects CRDs
Previously, the OADP must-gather image did not collect the custom resource definitions (CRDs) shipped by OADP. Consequently, you could not use the omg tool to extract data in the support shell. With this fix, the must-gather image collects the CRDs shipped by OADP, and you can use the omg tool to extract data.
Garbage collection has the correct description for the default frequency value
Previously, the garbage-collection-frequency field had a wrong description for the default frequency value. With this update, garbage-collection-frequency has a correct value of one hour for the gc-controller reconciliation default frequency.
FIPS Mode flag is available in OperatorHub
By setting the fips-compliant flag to true, the FIPS mode flag is now added to the OADP Operator listing in OperatorHub. This feature was enabled in OADP 1.3.0 but did not show up in the Red Hat Container catalog as being FIPS enabled.
CSI plugin does not panic with a nil pointer when csiSnapshotTimeout is set to a short duration
Previously, when the csiSnapshotTimeout parameter was set to a short duration, the CSI plugin encountered the following error: plugin panicked: runtime error: invalid memory address or nil pointer dereference.
With this fix, the backup fails with the following error: Timed out awaiting reconciliation of volumesnapshot.
For a complete list of all issues resolved in this release, see the list of OADP 1.3.1 resolved issues in Jira.
4.2.2.7.3. Known issues
Backup and storage restrictions for Single-node OpenShift clusters deployed on IBM Power(R) and IBM Z(R) platforms
Review the following backup and storage related restrictions for Single-node OpenShift clusters that are deployed on IBM Power® and IBM Z® platforms:
- Storage
- Only NFS storage is currently compatible with single-node OpenShift clusters deployed on IBM Power® and IBM Z® platforms.
- Backup
- Only backing up applications with File System Backup, such as kopia and restic, is supported for backup and restore operations.
Cassandra application pods enter in the CrashLoopBackoff status after restoring OADP
After OADP restores, the Cassandra application pods might enter the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods with any error or in the CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and they run normally.
4.2.2.8. OADP 1.3.0 release notes
The OpenShift API for Data Protection (OADP) 1.3.0 release notes list new features, resolved issues and bugs, and known issues.
4.2.2.8.1. New features
Velero built-in DataMover is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OADP 1.3 includes a built-in Data Mover that you can use to move Container Storage Interface (CSI) volume snapshots to a remote object store. The built-in Data Mover allows you to restore stateful applications from the remote object store if a failure, accidental deletion, or corruption of the cluster occurs. It uses Kopia as the uploader mechanism to read the snapshot data and to write to the Unified Repository.
Backing up applications with File System Backup: Kopia or Restic
Velero’s File System Backup (FSB) supports two backup libraries: the Restic path and the Kopia path.
Velero allows users to select between the two paths.
For backup, specify the path during the installation through the uploader-type flag. The valid value is either restic or kopia. This field defaults to kopia if the value is not specified. The selection cannot be changed after the installation.
GCP Cloud authentication
Google Cloud Platform (GCP) authentication enables you to use short-lived Google credentials.
GCP with Workload Identity Federation enables you to use Identity and Access Management (IAM) to grant external identities IAM roles, including the ability to impersonate service accounts. This eliminates the maintenance and security risks associated with service account keys.
AWS ROSA STS authentication
You can use OpenShift API for Data Protection (OADP) with Red Hat OpenShift Service on AWS (ROSA) clusters to backup and restore application data.
ROSA provides seamless integration with a wide range of AWS compute, database, analytics, machine learning, networking, mobile, and other services to speed up the building and delivering of differentiating experiences to your customers.
You can subscribe to the service directly from your AWS account.
After the clusters are created, you can operate your clusters by using the OpenShift web console. The ROSA service also uses OpenShift APIs and command-line interface (CLI) tools.
4.2.2.8.2. Resolved issues
ACM applications were removed and re-created on managed clusters after restore
Applications on managed clusters were deleted and re-created upon restore activation. The OpenShift API for Data Protection (OADP) 1.2 backup and restore process is faster than in older versions. The OADP performance change caused this behavior when restoring ACM resources. Therefore, some resources were restored before other resources, which caused the removal of the applications from managed clusters. OADP-2686
Restic restore was partially failing due to Pod Security standard
During interoperability testing, OpenShift Container Platform 4.14 had the pod security mode set to enforce, which caused the pod to be denied. This was caused by the restore order: the pod was created before the security context constraints (SCC) resource, and because the pod violated the podSecurity standard, the pod was denied. When the restore priority field is set on the Velero server, the restore is successful. OADP-2688
Possible pod volume backup failure if Velero is installed in several namespaces
There was a regression in Pod Volume Backup (PVB) functionality when Velero was installed in several namespaces. The PVB controller was not properly limiting itself to PVBs in its own namespace. OADP-2308
OADP Velero plugins returning "received EOF, stopping recv loop" message
In OADP, Velero plugins are started as separate processes. When the Velero operation completes, either successfully or not, they exit. Therefore, if you see a received EOF, stopping recv loop message in debug logs, it does not mean that an error occurred; it means that a plugin operation has completed. OADP-2176
CVE-2023-39325 Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
In previous releases of OADP, the HTTP/2 protocol was susceptible to a denial of service attack because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection. This resulted in a denial of service due to server resource consumption.
For more information, see CVE-2023-39325 (Rapid Reset Attack)
For a complete list of all issues resolved in this release, see the list of OADP 1.3.0 resolved issues in Jira.
4.2.2.8.3. Known issues
CSI plugin errors on nil pointer when csiSnapshotTimeout is set to a short duration
The CSI plugin errors on a nil pointer when csiSnapshotTimeout is set to a short duration. Sometimes it succeeds in completing the snapshot within the short duration, but it often panics and the backup is marked PartiallyFailed with the following error: plugin panicked: runtime error: invalid memory address or nil pointer dereference.
Backup is marked as PartiallyFailed when volumeSnapshotContent CR has an error
If any of the VolumeSnapshotContent CRs have an error related to removing the VolumeSnapshotBeingCreated annotation, it moves the backup to the WaitingForPluginOperationsPartiallyFailed phase. OADP-2871
Performance issues when restoring 30,000 resources for the first time
When restoring 30,000 resources for the first time, without an existing-resource-policy, the restore takes twice as long as it does during the second and third attempts with an existing-resource-policy set to update. OADP-3071
Post restore hooks might start running before Datadownload operation has released the related PV
Due to the asynchronous nature of the Data Mover operation, a post-hook might be attempted before the related pod's persistent volumes (PVs) are released by the Data Mover persistent volume claim (PVC).
GCP-Workload Identity Federation VSL backup PartiallyFailed
Volume Snapshot Location (VSL) backups are marked PartiallyFailed when GCP Workload Identity Federation is configured on GCP.
For a complete list of all known issues in this release, see the list of OADP 1.3.0 known issues in Jira.
4.2.2.8.4. Upgrade notes
Always upgrade to the next minor version. Do not skip versions. To update to a later version, upgrade only one channel at a time. For example, to upgrade from OpenShift API for Data Protection (OADP) 1.1 to 1.3, upgrade first to 1.2, and then to 1.3.
4.2.2.8.4.1. Changes from OADP 1.2 to 1.3
The Velero server has been updated from version 1.11 to 1.12.
OpenShift API for Data Protection (OADP) 1.3 uses the Velero built-in Data Mover instead of the VolumeSnapshotMover (VSM) or the Volsync Data Mover.
This changes the following:
- The spec.features.dataMover field and the VSM plugin are not compatible with OADP 1.3, and you must remove the configuration from the DataProtectionApplication (DPA) configuration.
- The Volsync Operator is no longer required for Data Mover functionality, and you can remove it.
- The custom resource definitions volumesnapshotbackups.datamover.oadp.openshift.io and volumesnapshotrestores.datamover.oadp.openshift.io are no longer required, and you can remove them.
- The secrets used for the OADP 1.2 Data Mover are no longer required, and you can remove them.
OADP 1.3 supports Kopia, which is an alternative file system backup tool to Restic.
To employ Kopia, use the new spec.configuration.nodeAgent field, as shown in the example that follows. The spec.configuration.restic field is deprecated in OADP 1.3 and will be removed in a future version of OADP. To avoid seeing deprecation warnings, remove the restic key and its values, and use the new nodeAgent syntax instead.
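A minimal sketch of the new nodeAgent syntax, assuming a DPA in the openshift-adp namespace; only the relevant part of the spec is shown:

spec:
  configuration:
    nodeAgent:
      enable: true          # replaces the deprecated restic.enable setting
      uploaderType: kopia   # kopia or restic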
In a future OADP release, it is planned that the kopia tool will become the default uploaderType value.
4.2.2.8.4.2. Upgrading from OADP 1.2 Technology Preview Data Mover
OpenShift API for Data Protection (OADP) 1.2 Data Mover backups cannot be restored with OADP 1.3. To prevent a gap in the data protection of your applications, complete the following steps before upgrading to OADP 1.3:
Procedure
- If your cluster backups are sufficient and Container Storage Interface (CSI) storage is available, back up the applications with a CSI backup.
If you require off-cluster backups:
- Back up the applications with a file system backup that uses the --default-volumes-to-fs-backup=true or backup.spec.defaultVolumesToFsBackup options.
- Back up the applications with your object storage plugins, for example, velero-plugin-for-aws.
The default timeout value for the Restic file system backup is one hour. In OADP 1.3.1 and later, the default timeout value for Restic and Kopia is four hours.
To restore an OADP 1.2 Data Mover backup, you must uninstall OADP, and then install and configure OADP 1.2.
4.2.2.8.4.3. Backing up the DPA configuration
You must back up your current DataProtectionApplication (DPA) configuration.
Procedure
Save your current DPA configuration by running the following command:
Example
$ oc get dpa -n openshift-adp -o yaml > dpa.orig.backup
4.2.2.8.4.4. Upgrading the OADP Operator
Use the following sequence when upgrading the OpenShift API for Data Protection (OADP) Operator.
Procedure
- Change your subscription channel for the OADP Operator from stable-1.2 to stable-1.3.
- Allow time for the Operator and containers to update and restart.
4.2.2.8.4.5. Converting DPA to the new version
If you need to move backups off cluster with the Data Mover, reconfigure the DataProtectionApplication (DPA) manifest as follows.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- In the Provided APIs section, click View more.
- Click Create instance in the DataProtectionApplication box.
Click YAML View to display the current DPA parameters.
Example current DPA
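A sketch of a typical OADP 1.2 DPA that still uses the Data Mover feature and the vsm plugin; names such as dpa-sample and dm-credentials are placeholders:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  features:
    dataMover:
      enable: true
      credentialName: dm-credentials
  configuration:
    velero:
      defaultPlugins:
      - vsm
      - csi
      - openshift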
Update the DPA parameters:
- Remove the features.dataMover key and values from the DPA.
- Remove the VolumeSnapshotMover (VSM) plugin.
- Add the nodeAgent key and values.
Example updated DPA
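A sketch of the same DPA after conversion to OADP 1.3, with the Data Mover configuration removed and the built-in node agent enabled; field names assume the nodeAgent syntax described in the upgrade notes above:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
    velero:
      defaultPlugins:
      - csi
      - openshift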
- Wait for the DPA to reconcile successfully.
4.2.2.8.4.6. Verifying the upgrade
Use the following procedure to verify the upgrade.
Procedure
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp

Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'

Example output

{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}

Verify that the type is set to Reconciled.

Verify the backup storage location and confirm that the PHASE is Available by running the following command:

$ oc get backupstoragelocations.velero.io -n openshift-adp

Example output

NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true
In OADP 1.3, you can start data movement off cluster per backup, rather than configuring it in the DataProtectionApplication (DPA) configuration.
Example command
$ velero backup create example-backup --include-namespaces mysql-persistent --snapshot-move-data=true
Example configuration file
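The configuration-file form is equivalent to the command above; a minimal sketch of such a Backup CR, assuming a backup named example-backup in the openshift-adp namespace:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: example-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
  - mysql-persistent
  snapshotMoveData: true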
4.3. OADP performance
4.3.1. OADP recommended network settings
For a supported experience with OpenShift API for Data Protection (OADP), you should have a stable and resilient network across OpenShift nodes, S3 storage, and supported cloud environments that meets the OpenShift network requirement recommendations.
To ensure successful backup and restore operations for deployments with remote S3 buckets located off cluster over suboptimal data paths, your network settings should meet the following minimum requirements in such conditions:
- Bandwidth (network upload speed to object storage): Greater than 2 Mbps for small backups and 10-100 Mbps depending on the data volume for larger backups.
- Packet loss: 1%
- Packet corruption: 1%
- Latency: 100ms
Ensure that your OpenShift Container Platform network performs optimally and meets OpenShift Container Platform network requirements.
Although Red Hat provides support for standard backup and restore failures, it does not provide support for failures caused by network settings that do not meet the recommended thresholds.
4.4. OADP features and plugins
OpenShift API for Data Protection (OADP) features provide options for backing up and restoring applications.
The default plugins enable Velero to integrate with certain cloud providers and to back up and restore OpenShift Container Platform resources.
4.4.1. OADP features
OpenShift API for Data Protection (OADP) supports the following features:
- Backup
You can use OADP to back up all applications on OpenShift Container Platform, or you can filter the resources by type, namespace, or label.
OADP backs up Kubernetes objects and internal images by saving them as an archive file on object storage. OADP backs up persistent volumes (PVs) by creating snapshots with the native cloud snapshot API or with the Container Storage Interface (CSI). For cloud providers that do not support snapshots, OADP backs up resources and PV data with Restic.
Note: You must exclude Operators from the backup of an application for backup and restore to succeed.
- Restore
You can restore resources and PVs from a backup. You can restore all objects in a backup or filter the objects by namespace, PV, or label.
Note: You must exclude Operators from the backup of an application for backup and restore to succeed.
- Schedule
- You can schedule backups at specified intervals.
- Hooks
- You can use hooks to run commands in a container on a pod, for example, fsfreeze to freeze a file system. You can configure a hook to run before or after a backup or restore. Restore hooks can run in an init container or in the application container.
4.4.2. OADP plugins
The OpenShift API for Data Protection (OADP) provides default Velero plugins that are integrated with storage providers to support backup and snapshot operations. You can create custom plugins based on the Velero plugins.
OADP also provides plugins for OpenShift Container Platform resource backups, OpenShift Virtualization resource backups, and Container Storage Interface (CSI) snapshots.
| OADP plugin | Function | Storage location |
|---|---|---|
| aws | Backs up and restores Kubernetes objects. | AWS S3 |
| aws | Backs up and restores volumes with snapshots. | AWS EBS |
| azure | Backs up and restores Kubernetes objects. | Microsoft Azure Blob storage |
| azure | Backs up and restores volumes with snapshots. | Microsoft Azure Managed Disks |
| gcp | Backs up and restores Kubernetes objects. | Google Cloud Storage |
| gcp | Backs up and restores volumes with snapshots. | Google Compute Engine Disks |
| openshift | Backs up and restores OpenShift Container Platform resources. [1] | Object store |
| kubevirt | Backs up and restores OpenShift Virtualization resources. [2] | Object store |
| csi | Backs up and restores volumes with CSI snapshots. [3] | Cloud storage that supports CSI snapshots |
| vsm | VolumeSnapshotMover relocates snapshots from the cluster into an object store to be used during a restore process to recover stateful applications, in situations such as cluster deletion. [4] | Object store |
1. Mandatory.
2. Virtual machine disks are backed up with CSI snapshots or Restic.
3. The csi plugin uses the Kubernetes CSI snapshot API.
   - OADP 1.1 or later uses snapshot.storage.k8s.io/v1
   - OADP 1.0 uses snapshot.storage.k8s.io/v1beta1
4. OADP 1.2 only.
4.4.3. About OADP Velero plugins
You can configure two types of plugins when you install Velero:
- Default cloud provider plugins
- Custom plugins
Both types of plugin are optional, but most users configure at least one cloud provider plugin.
4.4.3.1. Default Velero cloud provider plugins
You can install any of the following default Velero cloud provider plugins when you configure the oadp_v1alpha1_dpa.yaml file during deployment:
- aws (Amazon Web Services)
- gcp (Google Cloud Platform)
- azure (Microsoft Azure)
- openshift (OpenShift Velero plugin)
- csi (Container Storage Interface)
- kubevirt (KubeVirt)
You specify the desired default plugins in the oadp_v1alpha1_dpa.yaml file during deployment.
Example file
The following .yaml file installs the openshift, aws, azure, and gcp plugins:
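A minimal sketch of such a file, assuming the DataProtectionApplication schema used elsewhere in this document; the name dpa-sample is a placeholder:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws
      - azure
      - gcp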
4.4.3.2. Custom Velero plugins
You can install a custom Velero plugin by specifying the plugin image and name when you configure the oadp_v1alpha1_dpa.yaml file during deployment.
You specify the desired custom plugins in the oadp_v1alpha1_dpa.yaml file during deployment.
Example file
The following .yaml file installs the default openshift, azure, and gcp plugins and a custom plugin that has the name custom-plugin-example and the image quay.io/example-repo/custom-velero-plugin:
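A minimal sketch of such a file, assuming the customPlugins field of the DPA velero configuration; the plugin name and image are the ones named above:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - azure
      - gcp
      customPlugins:
      - name: custom-plugin-example
        image: quay.io/example-repo/custom-velero-plugin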
4.4.3.3. Velero plugins returning "received EOF, stopping recv loop" message
Velero plugins are started as separate processes. After the Velero operation has completed, either successfully or not, they exit. Receiving a received EOF, stopping recv loop message in the debug logs indicates that a plugin operation has completed. It does not mean that an error has occurred.
4.4.4. Supported architectures for OADP
OpenShift API for Data Protection (OADP) supports the following architectures:
- AMD64
- ARM64
- PPC64le
- s390x
OADP 1.2.0 and later versions support the ARM64 architecture.
4.4.5. OADP support for IBM Power and IBM Z
OpenShift API for Data Protection (OADP) is platform neutral. The information that follows relates only to IBM Power® and to IBM Z®.
- OADP 1.1.7 was tested successfully against OpenShift Container Platform 4.11 for both IBM Power® and IBM Z®. The sections that follow give testing and support information for OADP 1.1.7 in terms of backup locations for these systems.
- OADP 1.2.3 was tested successfully against OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15 for both IBM Power® and IBM Z®. The sections that follow give testing and support information for OADP 1.2.3 in terms of backup locations for these systems.
- OADP 1.3.7 was tested successfully against OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15 for both IBM Power® and IBM Z®. The sections that follow give testing and support information for OADP 1.3.7 in terms of backup locations for these systems.
- OADP 1.4.6 was tested successfully against OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17 for both IBM Power® and IBM Z®. The sections that follow give testing and support information for OADP 1.4.6 in terms of backup locations for these systems.
4.4.5.1. OADP support for target backup locations using IBM Power
- IBM Power® running with OpenShift Container Platform 4.11 and 4.12, and OpenShift API for Data Protection (OADP) 1.1.7 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power® with OpenShift Container Platform 4.11 and 4.12, and OADP 1.1.7 against all S3 backup location targets, which are not AWS, as well.
- IBM Power® running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.2.3 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power® with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.2.3 against all S3 backup location targets, which are not AWS, as well.
- IBM Power® running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.3.7 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power® with OpenShift Container Platform 4.13, 4.14, and 4.15, and OADP 1.3.7 against all S3 backup location targets, which are not AWS, as well.
- IBM Power® running with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and OADP 1.4.6 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power® with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and OADP 1.4.6 against all S3 backup location targets, which are not AWS, as well.
4.4.5.2. OADP testing and support for target backup locations using IBM Z
- IBM Z® running with OpenShift Container Platform 4.11 and 4.12, and OpenShift API for Data Protection (OADP) 1.1.7 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z® with OpenShift Container Platform 4.11 and 4.12, and OADP 1.1.7 against all S3 backup location targets, which are not AWS, as well.
- IBM Z® running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.2.3 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z® with OpenShift Container Platform 4.12, 4.13, 4.14 and 4.15, and OADP 1.2.3 against all S3 backup location targets, which are not AWS, as well.
- IBM Z® running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.3.7 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z® with OpenShift Container Platform 4.13, 4.14, and 4.15, and OADP 1.3.7 against all S3 backup location targets, which are not AWS, as well.
- IBM Z® running with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and OADP 1.4.6 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z® with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and OADP 1.4.6 against all S3 backup location targets, which are not AWS, as well.
4.4.5.2.1. Known issue of OADP using IBM Power® and IBM Z® platforms
- Currently, there are backup method restrictions for Single-node OpenShift clusters deployed on IBM Power® and IBM Z® platforms. Only NFS storage is currently compatible with Single-node OpenShift clusters on these platforms. In addition, only the File System Backup (FSB) methods such as Kopia and Restic are supported for backup and restore operations. There is currently no workaround for this issue.
4.4.6. OADP plugins known issues
The following section describes known issues in OpenShift API for Data Protection (OADP) plugins:
4.4.6.1. Velero plugin panics during imagestream backups due to a missing secret
When the backup and the Backup Storage Location (BSL) are managed outside the scope of the Data Protection Application (DPA), the OADP controller, meaning the DPA reconciliation, does not create the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret.
When the backup is run, the OpenShift Velero plugin panics on the imagestream backup, with the following panic error:
024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item"
backup=openshift-adp/<backup name> error="error executing custom action (groupResource=imagestreams.image.openshift.io,
namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked:
runtime error: index out of range with length 1, stack trace: goroutine 94…
4.4.6.1.1. Workaround to avoid the panic error
To avoid the Velero plugin panic error, perform the following steps:
Label the custom BSL with the relevant label:
$ oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl

After the BSL is labeled, wait until the DPA reconciles.

Note: You can force the reconciliation by making any minor change to the DPA itself.

When the DPA reconciles, confirm that the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret has been created and that the correct registry data has been populated into it:

$ oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'
4.4.6.2. OpenShift ADP Controller segmentation fault
If you configure a DPA with both cloudstorage and restic enabled, the openshift-adp-controller-manager pod crashes and restarts indefinitely until the pod fails with a crash loop segmentation fault.
You can have either velero or cloudstorage defined, because they are mutually exclusive fields.
- If you have both velero and cloudstorage defined, the openshift-adp-controller-manager fails.
- If you have neither velero nor cloudstorage defined, the openshift-adp-controller-manager fails.
For more information about this issue, see OADP-1054.
4.4.6.2.1. OpenShift ADP Controller segmentation fault workaround
You must define either velero or cloudstorage when you configure a DPA. If you define both APIs in your DPA, the openshift-adp-controller-manager pod fails with a crash loop segmentation fault.
4.4.7. OADP and FIPS
Federal Information Processing Standards (FIPS) are a set of computer security standards developed by the United States federal government in line with the Federal Information Security Management Act (FISMA).
OpenShift API for Data Protection (OADP) has been tested and works on FIPS-enabled OpenShift Container Platform clusters.
4.5. OADP use cases
Following is a use case for using OADP and ODF to back up an application.
4.5.1.1. Backing up an application using OADP and ODF
In this use case, you back up an application by using OADP and store the backup in an object storage provided by Red Hat OpenShift Data Foundation (ODF).
- You create an object bucket claim (OBC) to configure the backup storage location. You use ODF to configure an Amazon S3-compatible object storage bucket. ODF provides the Multicloud Object Gateway (NooBaa MCG) and the Ceph Object Gateway, also known as RADOS Gateway (RGW), object storage services. In this use case, you use NooBaa MCG as the backup storage location.
- You use the NooBaa MCG service with OADP by using the aws provider plugin.
- You configure the Data Protection Application (DPA) with the backup storage location (BSL).
- You create a backup custom resource (CR) and specify the application namespace to back up.
- You create and verify the backup.
Prerequisites
- You installed the OADP Operator.
- You installed the ODF Operator.
- You have an application with a database running in a separate namespace.
Procedure
Create an OBC manifest file to request a NooBaa MCG bucket, as shown in the example OBC sketch after this procedure.

Create the OBC by running the following command, specifying the file name of the object bucket claim manifest:

$ oc create -f <obc_file_name>

When you create an OBC, ODF creates a secret and a config map with the same name as the object bucket claim. The secret has the bucket credentials, and the config map has information to access the bucket. To get the bucket name and bucket host from the generated config map, run the following command, where test-obc is the name of the OBC:

$ oc extract --to=- cm/test-obc

To get the bucket credentials from the generated secret, run the following command:

$ oc extract --to=- secret/test-obc

Example output

# AWS_ACCESS_KEY_ID
ebYR....xLNMc
# AWS_SECRET_ACCESS_KEY
YXf...+NaCkdyC3QPym

Get the public URL for the S3 endpoint from the s3 route in the openshift-storage namespace by running the following command:

$ oc get route s3 -n openshift-storage

Create a cloud-credentials file with the object bucket credentials, as shown in the following example:

[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>

Create the cloud-credentials secret with the cloud-credentials file content by running the following command:

$ oc create secret generic \
  cloud-credentials \
  -n openshift-adp \
  --from-file cloud=cloud-credentials

Configure the Data Protection Application (DPA) as shown in the example DPA sketch after this procedure, and create the DPA by running the following command:

$ oc apply -f <dpa_filename>

Verify that the DPA is created successfully by running the following command. In the output, confirm that the status object has the type field set to Reconciled. This means that the DPA is successfully created.

$ oc get dpa -o yaml

Verify that the backup storage location (BSL) is available by running the following command:

$ oc get backupstoragelocations.velero.io -n openshift-adp

Example output

NAME           PHASE       LAST VALIDATED   AGE   DEFAULT
dpa-sample-1   Available   3s               15s   true

Configure a backup CR as shown in the example backup CR sketch after this procedure, specifying the namespace of the application to back up, and create the backup CR by running the following command:

$ oc apply -f <backup_cr_filename>

Verification

Verify that the backup object is in the Completed phase by running the following command:

$ oc describe backup test-backup -n openshift-adp
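The manifests referenced in this procedure are sketches only; field names follow the ObjectBucketClaim, DataProtectionApplication, and Backup APIs used elsewhere in this document, and values such as test-obc, test-backup, <bucket_name>, <s3_endpoint_url>, and <application_namespace> are placeholders taken from the earlier steps.

Example OBC (sketch)

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: test-obc
  namespace: openshift-adp
spec:
  storageClassName: openshift-storage.noobaa.io
  generateBucketName: test-backup-bucket

Example DPA (sketch)

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws
      - csi
  backupLocations:
  - velero:
      provider: aws
      default: true
      credential:
        key: cloud
        name: cloud-credentials
      objectStorage:
        bucket: <bucket_name>       # from the OBC config map
        prefix: oadp
      config:
        profile: default
        region: noobaa              # placeholder region for the MCG endpoint
        s3Url: <s3_endpoint_url>    # from the s3 route
        s3ForcePathStyle: "true"

Example backup CR (sketch)

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
  - <application_namespace>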
4.5.2. OpenShift API for Data Protection (OADP) restore use case
Following is a use case for using OADP to restore a backup to a different namespace.
4.5.2.1. Restoring an application to a different namespace using OADP
Restore a backup of an application by using OADP to a new target namespace, test-restore-application. To restore a backup, you create a restore custom resource (CR) as shown in the following example. In the restore CR, the source namespace refers to the application namespace that you included in the backup. You then verify the restore by changing your project to the new restored namespace and verifying the resources.
Prerequisites
- You installed the OADP Operator.
- You have the backup of an application to be restored.
Procedure
Create a restore CR as shown in the following example:
Example restore CR
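A minimal sketch of such a restore CR, assuming a backup named test-backup and the target namespace test-restore-application; field names follow the velero.io/v1 Restore API:

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: test-restore
  namespace: openshift-adp
spec:
  backupName: test-backup
  restorePVs: true
  namespaceMapping:
    <application_namespace>: test-restore-application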
Apply the restore CR by running the following command:

$ oc apply -f <restore_cr_filename>

Verification

Verify that the restore is in the Completed phase by running the following command:

$ oc describe restores.velero.io <restore_name> -n openshift-adp

Change to the restored namespace test-restore-application by running the following command:

$ oc project test-restore-application

Verify the restored resources such as persistent volume claim (pvc), service (svc), deployment, secret, and config map by running the following command:

$ oc get pvc,svc,deployment,secret,configmap
4.5.3. Including a self-signed CA certificate during backup
You can include a self-signed Certificate Authority (CA) certificate in the Data Protection Application (DPA) and then back up an application. You store the backup in a NooBaa bucket provided by Red Hat OpenShift Data Foundation (ODF).
4.5.3.1. Backing up an application and its self-signed CA certificate
The s3.openshift-storage.svc service, provided by ODF, uses a Transport Layer Security protocol (TLS) certificate that is signed with the self-signed service CA.
To prevent a certificate signed by unknown authority error, you must include a self-signed CA certificate in the backup storage location (BSL) section of DataProtectionApplication custom resource (CR). For this situation, you must complete the following tasks:
- Request a NooBaa bucket by creating an object bucket claim (OBC).
- Extract the bucket details.
- Include a self-signed CA certificate in the DataProtectionApplication CR.
- Back up an application.
Prerequisites
- You installed the OADP Operator.
- You installed the ODF Operator.
- You have an application with a database running in a separate namespace.
Procedure
Create an OBC manifest to request a NooBaa bucket, and then create the OBC by running the following command:

$ oc create -f <obc_file_name>

When you create an OBC, ODF creates a secret and a ConfigMap with the same name as the object bucket claim. The secret object contains the bucket credentials, and the ConfigMap object contains information to access the bucket. To get the bucket name and bucket host from the generated config map, run the following command, where test-obc is the name of the OBC:

$ oc extract --to=- cm/test-obc

To get the bucket credentials from the secret object, run the following command:

$ oc extract --to=- secret/test-obc

Example output

# AWS_ACCESS_KEY_ID
ebYR....xLNMc
# AWS_SECRET_ACCESS_KEY
YXf...+NaCkdyC3QPym

Create a cloud-credentials file with the object bucket credentials by using the following example configuration:

[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>

Create the cloud-credentials secret with the cloud-credentials file content by running the following command:

$ oc create secret generic \
  cloud-credentials \
  -n openshift-adp \
  --from-file cloud=cloud-credentials

Extract the service CA certificate from the openshift-service-ca.crt config map by running the following command. Ensure that you encode the certificate in Base64 format and note the value to use in the next step.

$ oc get cm/openshift-service-ca.crt \
  -o jsonpath='{.data.service-ca\.crt}' | base64 -w0; echo

Example output

LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... ....gpwOHMwaG9CRmk5a3....FLS0tLS0K

Configure the DataProtectionApplication CR manifest file with the bucket name and CA certificate, as shown in the DataProtectionApplication CR sketch after this procedure, and create the DataProtectionApplication CR by running the following command:

$ oc apply -f <dpa_filename>

Verify that the DataProtectionApplication CR is created successfully by running the following command:

$ oc get dpa -o yaml

Verify that the backup storage location (BSL) is available by running the following command:

$ oc get backupstoragelocations.velero.io -n openshift-adp

Example output

NAME           PHASE       LAST VALIDATED   AGE   DEFAULT
dpa-sample-1   Available   3s               15s   true

Configure the Backup CR, specifying the namespace of the application to back up, and create the Backup CR by running the following command:

$ oc apply -f <backup_cr_filename>

Verification

Verify that the Backup object is in the Completed phase by running the following command:

$ oc describe backup test-backup -n openshift-adp
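A minimal sketch of the DataProtectionApplication CR referenced above, assuming the backupLocations fields used elsewhere in this document; the caCert value is the Base64-encoded service CA from the earlier step, and the bucket name comes from the OBC config map:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws
      - csi
  backupLocations:
  - velero:
      provider: aws
      default: true
      credential:
        key: cloud
        name: cloud-credentials
      objectStorage:
        bucket: <bucket_name>
        prefix: oadp
        caCert: <ca_cert>        # Base64-encoded service CA certificate
      config:
        profile: default
        region: noobaa           # placeholder
        s3Url: https://s3.openshift-storage.svc
        s3ForcePathStyle: "true"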
4.5.4. Using the legacy-aws Velero plugin
If you are using an AWS S3-compatible backup storage location, you might get a SignatureDoesNotMatch error while backing up your application. This error occurs because some backup storage locations still use the older versions of the S3 APIs, which are incompatible with the newer AWS SDK for Go V2. To resolve this issue, you can use the legacy-aws Velero plugin in the DataProtectionApplication custom resource (CR). The legacy-aws Velero plugin uses the older AWS SDK for Go V1, which is compatible with the legacy S3 APIs, ensuring successful backups.
4.5.4.1. Using the legacy-aws Velero plugin in the DataProtectionApplication CR
In the following use case, you configure the DataProtectionApplication CR with the legacy-aws Velero plugin and then back up an application.
Depending on the backup storage location you choose, you can use either the legacy-aws or the aws plugin in your DataProtectionApplication CR. If you use both of the plugins in the DataProtectionApplication CR, the following error occurs: aws and legacy-aws can not be both specified in DPA spec.configuration.velero.defaultPlugins.
Prerequisites
- You have installed the OADP Operator.
- You have configured an AWS S3-compatible object storage as a backup location.
- You have an application with a database running in a separate namespace.
Procedure
Configure the DataProtectionApplication CR to use the legacy-aws Velero plugin, as shown in the DataProtectionApplication CR sketch after this procedure.

Create the DataProtectionApplication CR by running the following command:

$ oc apply -f <dpa_filename>

Verify that the DataProtectionApplication CR is created successfully by running the following command. In the output, confirm that the status object has the type field set to Reconciled and the status field set to "True". That status indicates that the DataProtectionApplication CR is successfully created.

$ oc get dpa -o yaml

Verify that the backup storage location (BSL) is available by running the following command:

$ oc get backupstoragelocations.velero.io -n openshift-adp

Example output

NAME           PHASE       LAST VALIDATED   AGE   DEFAULT
dpa-sample-1   Available   3s               15s   true

Configure a Backup CR, specifying the namespace of the application to back up, and create the Backup CR by running the following command:

$ oc apply -f <backup_cr_filename>

Verification

Verify that the backup object is in the Completed phase by running the following command:

$ oc describe backups.velero.io test-backup -n openshift-adp
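A minimal sketch of a DataProtectionApplication CR that uses the legacy-aws plugin in place of aws; the bucket, region, endpoint, and credential names are placeholders:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - csi
      - legacy-aws        # use instead of aws; do not specify both
  backupLocations:
  - velero:
      provider: aws
      default: true
      credential:
        key: cloud
        name: cloud-credentials
      objectStorage:
        bucket: <bucket_name>
        prefix: oadp
      config:
        profile: default
        region: <region>
        s3Url: <s3_endpoint_url>
        s3ForcePathStyle: "true"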
4.6. Installing OADP
4.6.1. About installing OADP
As a cluster administrator, you install the OpenShift API for Data Protection (OADP) by installing the OADP Operator. The OADP Operator installs Velero 1.14.
Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.
To back up Kubernetes resources and internal images, you must have object storage as a backup location, such as one of the following storage types:
- Amazon Web Services
- Microsoft Azure
- Google Cloud Platform
- Multicloud Object Gateway
- IBM Cloud® Object Storage S3
- AWS S3 compatible object storage, such as Multicloud Object Gateway or MinIO
You can configure multiple backup storage locations within the same namespace for each individual OADP deployment.
Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa.
For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications.
The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The CloudStorage API is a Technology Preview feature when you use a CloudStorage object and want OADP to use the CloudStorage API to automatically create an S3 bucket for use as a BackupStorageLocation.
The CloudStorage API supports manually creating a BackupStorageLocation object by specifying an existing S3 bucket. The CloudStorage API that creates an S3 bucket automatically is currently only enabled for AWS S3 storage.
You can back up persistent volumes (PVs) by using snapshots or a File System Backup (FSB).
To back up PVs with snapshots, you must have a cloud provider that supports either a native snapshot API or Container Storage Interface (CSI) snapshots, such as one of the following cloud providers:
- Amazon Web Services
- Microsoft Azure
- Google Cloud Platform
- CSI snapshot-enabled cloud provider, such as OpenShift Data Foundation
If you want to use CSI backup on OCP 4.11 and later, install OADP 1.1.x.
OADP 1.0.x does not support CSI backup on OCP 4.11 and later. OADP 1.0.x includes Velero 1.7.x and expects the API group snapshot.storage.k8s.io/v1beta1, which is not present on OCP 4.11 and later.
If your cloud provider does not support snapshots, or if your storage is NFS, you can back up applications with File System Backup, using Kopia or Restic, on object storage. For details, see Backing up applications with File System Backup: Kopia or Restic.
You create a default Secret and then you install the Data Protection Application.
4.6.1.1. AWS S3 compatible backup storage providers
OADP works with many S3-compatible object storage providers. Several object storage providers are certified and tested with every release of OADP. Various S3 providers are known to work with OADP but are not specifically tested and certified. These providers will be supported on a best-effort basis. Additionally, there are a few S3 object storage providers with known issues and limitations that are listed in this documentation.
Red Hat will provide support for OADP on any S3-compatible storage, but support will stop if the S3 endpoint is determined to be the root cause of an issue.
4.6.1.1.1. Certified backup storage providers
The following AWS S3 compatible object storage providers are fully supported by OADP through the AWS plugin for use as backup storage locations:
- MinIO
- Multicloud Object Gateway (MCG)
- Amazon Web Services (AWS) S3
- IBM Cloud® Object Storage S3
- Ceph RADOS Gateway (Ceph Object Gateway)
- Red Hat Container Storage
- Red Hat OpenShift Data Foundation
- NetApp ONTAP S3 Object Storage
Google Cloud and Microsoft Azure have their own Velero object store plugins.
4.6.1.1.2. Unsupported backup storage providers
The following AWS S3 compatible object storage providers are known to work with Velero through the AWS plugin for use as backup storage locations. However, they are unsupported and have not been tested by Red Hat:
- Oracle Cloud
- DigitalOcean
- NooBaa, unless installed using Multicloud Object Gateway (MCG)
- Tencent Cloud
- Quobyte
- Cloudian HyperStore
Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa.
For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications.
4.6.1.1.3. Backup storage providers with known limitations
The following AWS S3 compatible object storage providers are known to work with Velero through the AWS plugin with a limited feature set:
- Swift - It works as a backup storage location, but it is not compatible with Restic for file system-based volume backup and restore.
If you use cluster storage for your MCG bucket backupStorageLocation on OpenShift Data Foundation, configure MCG as an external object store.
Failure to configure MCG as an external object store might lead to backups not being available.
Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa.
For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications.
Procedure
- Configure MCG as an external object store as described in Adding storage resources for hybrid or Multicloud.
4.6.1.3. About OADP update channels
When you install an OADP Operator, you choose an update channel. This channel determines which upgrades to the OADP Operator and to Velero you receive. You can switch channels at any time.
The following update channels are available:
- The stable channel is now deprecated. The stable channel contains the patches (z-stream updates) of the OADP ClusterServiceVersion for OADP.v1.1.z and older versions from OADP.v1.0.z.
- The stable-1.0 channel is deprecated and is not supported.
- The stable-1.1 channel is deprecated and is not supported.
- The stable-1.2 channel is deprecated and is not supported.
- The stable-1.3 channel contains OADP.v1.3.z, the most recent OADP 1.3 ClusterServiceVersion.
- The stable-1.4 channel contains OADP.v1.4.z, the most recent OADP 1.4 ClusterServiceVersion.
For more information, see OpenShift Operator Life Cycles.
Which update channel is right for you?
- The stable channel is now deprecated. If you are already using the stable channel, you will continue to get updates from OADP.v1.1.z.
- Choose the stable-1.y update channel to install OADP 1.y and to continue receiving patches for it. If you choose this channel, you will receive all z-stream patches for version 1.y.z.
When must you switch update channels?
- If you have OADP 1.y installed, and you want to receive patches only for that y-stream, you must switch from the stable update channel to the stable-1.y update channel. You will then receive all z-stream patches for version 1.y.z.
- If you have OADP 1.0 installed, want to upgrade to OADP 1.1, and then receive patches only for OADP 1.1, you must switch from the stable-1.0 update channel to the stable-1.1 update channel. You will then receive all z-stream patches for version 1.1.z.
- If you have OADP 1.y installed, with y greater than 0, and want to switch to OADP 1.0, you must uninstall your OADP Operator and then reinstall it using the stable-1.0 update channel. You will then receive all z-stream patches for version 1.0.z.
You cannot switch from OADP 1.y to OADP 1.0 by switching update channels. You must uninstall the Operator and then reinstall it.
4.6.1.4. Installation of OADP on multiple namespaces
You can install OpenShift API for Data Protection into multiple namespaces on the same cluster so that multiple project owners can manage their own OADP instance. This use case has been validated with File System Backup (FSB) and Container Storage Interface (CSI).
You install each instance of OADP as specified by the per-platform procedures contained in this document with the following additional requirements:
- All deployments of OADP on the same cluster must be the same version, for example, 1.4.0. Installing different versions of OADP on the same cluster is not supported.
- Each individual deployment of OADP must have a unique set of credentials and at least one BackupStorageLocation configuration. You can also use multiple BackupStorageLocation configurations within the same namespace.
- By default, each OADP deployment has cluster-level access across namespaces. OpenShift Container Platform administrators need to carefully review potential impacts, such as not backing up and restoring to and from the same namespace concurrently.
4.6.1.5. OADP support for backup data immutability
Starting with OADP 1.4, you can store OADP backups in an AWS S3 bucket with enabled versioning. The versioning support is only for AWS S3 buckets and not for S3-compatible buckets.
See the following list for specific cloud provider limitations:
- AWS S3 service supports backups because an S3 object lock applies only to versioned buckets. You can still update the object data for the new version. However, when backups are deleted, old versions of the objects are not deleted.
- OADP backups are not supported and might not work as expected when you enable immutability on Azure Storage Blob.
- GCP Cloud storage policy only supports bucket-level immutability. Therefore, it is not feasible to implement it in the GCP environment.
Depending on your storage provider, the immutability options are called differently:
- S3 object lock
- Object retention
- Bucket versioning
- Write Once Read Many (WORM) buckets
The primary reason for the absence of support for other S3-compatible object storage is that OADP initially saves the state of a backup as finalizing and then verifies whether any asynchronous operations are in progress.
4.6.1.6. Velero CPU and memory requirements based on collected data
The following recommendations are based on observations of performance made in the scale and performance lab. The backup and restore resources can be impacted by the type of plugin, the amount of resources required by that backup or restore, and the respective data contained in the persistent volumes (PVs) related to those resources.
4.6.1.6.1. CPU and memory requirement for configurations
| Configuration types | [1] Average usage | [2] Large usage | resourceTimeouts |
|---|---|---|---|
| CSI | Velero: CPU- Request 200m, Limits 1000m Memory - Request 256Mi, Limits 1024Mi | Velero: CPU- Request 200m, Limits 2000m Memory- Request 256Mi, Limits 2048Mi | N/A |
| Restic | [3] Restic: CPU- Request 1000m, Limits 2000m Memory - Request 16Gi, Limits 32Gi | [4] Restic: CPU - Request 2000m, Limits 8000m Memory - Request 16Gi, Limits 40Gi | 900m |
| [5] Data Mover | N/A | N/A | 10m - average usage 60m - large usage |
1. Average usage - use these settings for most usage situations.
2. Large usage - use these settings for large usage situations, such as a large PV (500GB usage), multiple namespaces (100+), or many pods within a single namespace (2000+ pods), and for optimal performance for backup and restore involving large datasets.
3. Restic resource usage corresponds to the amount and type of data. For example, many small files or large amounts of data can cause Restic to use large amounts of resources. The Velero documentation references 500m as a supplied default; for most of our testing, we found a 200m request suitable with a 1000m limit. As cited in the Velero documentation, exact CPU and memory usage depends on the scale of files and directories, in addition to environmental limitations.
4. Increasing the CPU has a significant impact on improving backup and restore times.
5. Data Mover - the Data Mover default resourceTimeout is 10m. Our tests show that for restoring a large PV (500GB usage), you must increase the resourceTimeout to 60m.
The resource requirements listed throughout the guide are for average usage only. For large usage, adjust the settings as described in the table above.
4.6.1.6.2. NodeAgent CPU for large usage
Testing shows that increasing NodeAgent CPU can significantly improve backup and restore times when using OpenShift API for Data Protection (OADP).
You can tune your OpenShift Container Platform environment based on your performance analysis and preference. Use CPU limits in the workloads when you use Kopia for file system backups.
If you do not use CPU limits on the pods, the pods can use excess CPU when it is available. If you specify CPU limits, the pods might be throttled if they exceed their limits. Therefore, the use of CPU limits on the pods is considered an anti-pattern.
Ensure that you are accurately specifying CPU requests so that pods can take advantage of excess CPU. Resource allocation is guaranteed based on CPU requests rather than CPU limits.
Testing showed that running Kopia with 20 cores and 32 Gi memory supported backup and restore operations of over 100 GB of data, multiple namespaces, or over 2000 pods in a single namespace. Testing detected no CPU limiting or memory saturation with these resource specifications.
In some environments, you might need to adjust Ceph MDS pod resources to avoid pod restarts, which occur when default settings cause resource saturation.
For more information about how to set the pod resources limit in Ceph MDS pods, see Changing the CPU and memory resources on the rook-ceph pods.
4.6.2. Installing the OADP Operator
You can install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.15 by using Operator Lifecycle Manager (OLM).
The OADP Operator installs Velero 1.14.
Prerequisites
-
You must be logged in as a user with
cluster-adminprivileges.
Procedure
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Use the Filter by keyword field to find the OADP Operator.
- Select the OADP Operator and click Install.
- Click Install to install the Operator in the openshift-adp project.
- Click Operators → Installed Operators to verify the installation.
4.6.2.1. OADP-Velero-OpenShift Container Platform version relationship
4.7. Configuring OADP with AWS S3 compatible storage
4.7.1. Configuring the OpenShift API for Data Protection with AWS S3 compatible storage
You install the OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) S3 compatible storage by installing the OADP Operator. The Operator installs Velero 1.14.
Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.
You configure AWS for Velero, create a default Secret, and then install the Data Protection Application. For more details, see Installing the OADP Operator.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details.
4.7.1.1. About Amazon Simple Storage Service, Identity and Access Management, and GovCloud
Amazon Simple Storage Service (Amazon S3) is a storage solution of Amazon for the internet. As an authorized user, you can use this service to store and retrieve any amount of data whenever you want, from anywhere on the web.
You securely control access to Amazon S3 and other Amazon services by using the AWS Identity and Access Management (IAM) web service.
You can use IAM to manage permissions that control which AWS resources users can access. You use IAM to both authenticate, or verify that a user is who they claim to be, and to authorize, or grant permissions to use resources.
AWS GovCloud (US) is an Amazon storage solution developed to meet the stringent and specific data security requirements of the United States Federal Government. AWS GovCloud (US) works the same as Amazon S3 except for the following:
- You cannot copy the contents of an Amazon S3 bucket in the AWS GovCloud (US) regions directly to or from another AWS region.
If you use Amazon S3 policies, use the AWS GovCloud (US) Amazon Resource Name (ARN) identifier to unambiguously specify a resource across all of AWS, such as in IAM policies, Amazon S3 bucket names, and API calls.
In AWS GovCloud (US) regions, ARNs have an identifier that is different from the one in other standard AWS regions, arn:aws-us-gov. If you need to specify the US-West or US-East region, use one of the following ARNs:
- For US-West, use us-gov-west-1.
- For US-East, use us-gov-east-1.
For all other standard regions, ARNs begin with arn:aws.
- In AWS GovCloud (US) regions, use the endpoints listed in the AWS GovCloud (US-East) and AWS GovCloud (US-West) rows of the "Amazon S3 endpoints" table on Amazon Simple Storage Service endpoints and quotas. If you are processing export-controlled data, use one of the SSL/TLS endpoints. If you have FIPS requirements, use a FIPS 140-2 endpoint such as https://s3-fips.us-gov-west-1.amazonaws.com or https://s3-fips.us-gov-east-1.amazonaws.com.
- To find the other AWS-imposed restrictions, see How Amazon Simple Storage Service Differs for AWS GovCloud (US).
4.7.1.2. Configuring Amazon Web Services
You configure Amazon Web Services (AWS) for the OpenShift API for Data Protection (OADP).
Prerequisites
- You must have the AWS CLI installed.
Procedure
Set the BUCKET variable:

$ BUCKET=<your_bucket>

Set the REGION variable:

$ REGION=<your_region>

Create an AWS S3 bucket:

$ aws s3api create-bucket \
    --bucket $BUCKET \
    --region $REGION \
    --create-bucket-configuration LocationConstraint=$REGION

us-east-1 does not support a LocationConstraint. If your region is us-east-1, omit --create-bucket-configuration LocationConstraint=$REGION.

Create an IAM user:

$ aws iam create-user --user-name velero

If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster.
Create a velero-policy.json file:
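The contents of the policy file are not preserved in this extract. The following is a minimal sketch based on the permissions that the upstream velero-plugin-for-aws documentation lists (EC2 snapshot actions plus S3 object and bucket actions); treat the exact action list as an assumption and verify it against the OADP documentation for your version:

$ cat > velero-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeVolumes",
                "ec2:DescribeSnapshots",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:CreateSnapshot",
                "ec2:DeleteSnapshot"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObject",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": ["arn:aws:s3:::${BUCKET}/*"]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": ["arn:aws:s3:::${BUCKET}"]
        }
    ]
}
EOF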
Attach the policies to give the velero user the minimum necessary permissions:

$ aws iam put-user-policy \
    --user-name velero \
    --policy-name velero \
    --policy-document file://velero-policy.json

Create an access key for the velero user:

$ aws iam create-access-key --user-name velero

Example output
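The example output is missing from this extract. The aws iam create-access-key command returns a JSON document shaped approximately as follows; the values shown are placeholders:

{
    "AccessKey": {
        "UserName": "velero",
        "AccessKeyId": "<AWS_ACCESS_KEY_ID>",
        "Status": "Active",
        "SecretAccessKey": "<AWS_SECRET_ACCESS_KEY>",
        "CreateDate": "<creation_timestamp>"
    }
}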
Create a credentials-velero file:

$ cat << EOF > ./credentials-velero
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
EOF

You use the credentials-velero file to create a Secret object for AWS before you install the Data Protection Application.
4.7.1.3. About backup and snapshot locations and their secrets
You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR).
Backup locations
You can specify one of the following AWS S3-compatible object storage solutions as a backup location:
- Multicloud Object Gateway (MCG)
- Red Hat Container Storage
- Ceph RADOS Gateway; also known as Ceph Object Gateway
- Red Hat OpenShift Data Foundation
- MinIO
Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage.
Snapshot locations
If you use your cloud provider’s native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location.
If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver.
If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage.
Secrets
If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret.
If the backup and snapshot locations use different credentials, you create two Secret objects:
- Custom Secret for the backup location, which you specify in the DataProtectionApplication CR.
- Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR.
The Data Protection Application requires a default Secret. Otherwise, the installation will fail.
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file.
4.7.1.3.1. Creating a default Secret
You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location.
The default name of the Secret is cloud-credentials.
The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.
If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.
Prerequisites
- Your object storage and cloud storage, if any, must use the same credentials.
- You must configure object storage for Velero.
Procedure
Create a credentials-velero file for the backup storage location in the appropriate format for your cloud provider. See the following example:

[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>

Create a Secret custom resource (CR) with the default name:

$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
4.7.1.3.2. Creating profiles for different credentials
If your backup and snapshot locations use different credentials, you create separate profiles in the credentials-velero file.
Then, you create a Secret object and specify the profiles in the DataProtectionApplication custom resource (CR).
Procedure
Create a credentials-velero file with separate profiles for the backup and snapshot locations, as in the following example:
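The example file was dropped during extraction. A minimal sketch, assuming the profile names backupStorage and volumeSnapshot; any names work as long as the DataProtectionApplication CR references the same names:

[backupStorage]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>

[volumeSnapshot]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>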
Create a Secret object with the credentials-velero file:

$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero

Add the profiles to the DataProtectionApplication CR, as in the following example:
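The DPA example is also missing. A sketch of the relevant fragment, assuming the profile names from the credentials-velero sketch above; values are placeholders:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>
        config:
          region: <region_name>
          profile: "backupStorage"
        credential:
          key: cloud
          name: cloud-credentials
  snapshotLocations:
    - velero:
        provider: aws
        config:
          region: <region_name>
          profile: "volumeSnapshot"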
4.7.1.3.3. Configuring the backup storage location using AWS
You can configure the AWS backup storage location (BSL) as shown in the following example procedure.
Prerequisites
- You have created an object storage bucket using AWS.
- You have installed the OADP Operator.
Procedure
Configure the BSL custom resource (CR) with values as applicable to your use case.
Backup storage location
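The YAML example itself is missing from this extract. A sketch of a backupLocations entry that the numbered callouts below describe; the field names follow the velero-plugin-for-aws BackupStorageLocation configuration and the values are placeholders, so verify them against your OADP version:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
  namespace: openshift-adp
spec:
  backupLocations:
    - name: default
      velero:
        provider: aws                                       # 1
        default: true
        objectStorage:
          bucket: <bucket_name>                             # 2
          prefix: <prefix>                                  # 3
        credential:                                         # 4
          key: cloud                                        # 5
          name: cloud-credentials                           # 6
        config:
          region: <region>                                  # 7
          s3ForcePathStyle: "true"                          # 8
          s3Url: <s3_url>                                   # 9
          publicUrl: <public_s3_url>                        # 10
          serverSideEncryption: AES256                      # 11
          kmsKeyId: "alias/<KMS-key-alias-name>"            # 12
          customerKeyEncryptionFile: "/credentials/customer-key"   # 13
          signatureVersion: "1"                             # 14
          profile: "default"                                # 15
          insecureSkipTLSVerify: "true"                     # 16
          enableSharedConfig: "true"                        # 17
          tagging: ""                                       # 18
          checksumAlgorithm: "CRC32"                        # 19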
1. The name of the object store plugin. In this example, the plugin is aws. This field is required.
2. The name of the bucket in which to store backups. This field is required.
3. The prefix within the bucket in which to store backups. This field is optional.
4. The credentials for the backup storage location. You can set custom credentials. If custom credentials are not set, the default credentials' secret is used.
5. The key within the secret credentials' data.
6. The name of the secret containing the credentials.
7. The AWS region where the bucket is located. Optional if s3ForcePathStyle is false.
8. A Boolean flag to decide whether to use path-style addressing instead of virtual hosted bucket addressing. Set to true if using a storage service such as MinIO or NooBaa. This is an optional field. The default value is false.
9. You can specify the AWS S3 URL here for explicitness. This field is primarily for storage services such as MinIO or NooBaa. This is an optional field.
10. This field is primarily used for storage services such as MinIO or NooBaa. This is an optional field.
11. The name of the server-side encryption algorithm to use for uploading objects, for example, AES256. This is an optional field.
12. Specify an AWS KMS key ID. You can format it, as shown in the example, as an alias, such as alias/<KMS-key-alias-name>, or as the full ARN to enable encryption of the backups stored in S3. Note that kmsKeyId cannot be used with customerKeyEncryptionFile. This is an optional field.
13. Specify the file that has the SSE-C customer key to enable customer key encryption of the backups stored in S3. The file must contain a 32-byte string. The customerKeyEncryptionFile field points to a mounted secret within the velero container. Add the following key-value pair to the velero cloud-credentials secret: customer-key: <your_b64_encoded_32byte_string>. Note that the customerKeyEncryptionFile field cannot be used with the kmsKeyId field. The default value is an empty string (""), which means SSE-C is disabled. This is an optional field.
14. The version of the signature algorithm used to create signed URLs. You use signed URLs to download the backups or fetch the logs. Valid values are 1 and 4. The default version is 4. This is an optional field.
15. The name of the AWS profile in the credentials file. The default value is default. This is an optional field.
16. Set the insecureSkipTLSVerify field to true if you do not want to verify the TLS certificate when connecting to the object store, for example, for self-signed certificates with MinIO. Setting it to true is susceptible to man-in-the-middle attacks and is not recommended for production workloads. The default value is false. This is an optional field.
17. Set the enableSharedConfig field to true if you want to load the credentials file as a shared config file. The default value is false. This is an optional field.
18. Specify the tags to annotate the AWS S3 objects. Specify the tags in key-value pairs. The default value is an empty string (""). This is an optional field.
19. Specify the checksum algorithm to use for uploading objects to S3. The supported values are CRC32, CRC32C, SHA1, and SHA256. If you set the field as an empty string (""), the checksum check is skipped. The default value is CRC32. This is an optional field.
4.7.1.3.4. Creating an OADP SSE-C encryption key for additional data security
Amazon Web Services (AWS) S3 applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3.
OpenShift API for Data Protection (OADP) encrypts data by using SSL/TLS, HTTPS, and the velero-repo-credentials secret when transferring the data from a cluster to storage. To protect backup data in case of lost or stolen AWS credentials, apply an additional layer of encryption.
The velero-plugin-for-aws plugin provides several additional encryption methods. You should review its configuration options and consider implementing additional encryption.
You can store your own encryption keys by using server-side encryption with customer-provided keys (SSE-C). This feature provides additional security if your AWS credentials become exposed.
Be sure to store cryptographic keys in a secure and safe manner. Encrypted data and backups cannot be recovered if you do not have the encryption key.
Prerequisites
- To make OADP mount a secret that contains your SSE-C key to the Velero pod at /credentials, use the following default secret name for AWS: cloud-credentials, and leave at least one of the following labels empty:
  - dpa.spec.backupLocations[].velero.credential
  - dpa.spec.snapshotLocations[].velero.credential
  This is a workaround for a known issue: https://issues.redhat.com/browse/OADP-3971.
- The following procedure contains an example of a spec:backupLocations block that does not specify credentials. This example would trigger an OADP secret mounting.
- If you need the backup location to have credentials with a different name than cloud-credentials, you must add a snapshot location, such as the one in the following example, that does not contain a credential name. Because the example does not contain a credential name, the snapshot location will use cloud-credentials as its secret for taking snapshots.
Example snapshot location in a DPA without credentials specified
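The example itself did not survive extraction. A minimal sketch of a snapshotLocations entry that omits the credential name, so that cloud-credentials is used:

snapshotLocations:
  - velero:
      provider: aws
      config:
        region: <region>
        profile: "default"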
Procedure
Create an SSE-C encryption key:
Generate a random number and save it as a file named sse.key by running the following command:

$ dd if=/dev/urandom bs=1 count=32 > sse.key
Create an OpenShift Container Platform secret:
If you are initially installing and configuring OADP, create the AWS credential and encryption key secret at the same time by running the following command:
$ oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=<path>/openshift_aws_credentials,customer-key=<path>/sse.key

If you are updating an existing installation, edit the values of the cloud-credential secret block of the DataProtectionApplication CR manifest, as in the following example:
Edit the value of the customerKeyEncryptionFile attribute in the backupLocations block of the DataProtectionApplication CR manifest, as in the following example:

Warning: You must restart the Velero pod to remount the secret credentials properly on an existing installation.
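A sketch of the backupLocations entry referenced in the preceding step; the field names follow the AWS backup storage location configuration described earlier, and the mounted key path is an assumption:

backupLocations:
  - velero:
      provider: aws
      default: true
      objectStorage:
        bucket: <bucket_name>
        prefix: velero
      config:
        region: <region>
        profile: "default"
        customerKeyEncryptionFile: /credentials/customer-key
      credential:
        key: cloud
        name: cloud-credentials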
The installation is complete, and you can back up and restore OpenShift Container Platform resources. The data saved in AWS S3 storage is encrypted with the new key, and you cannot download it from the AWS S3 console or API without the additional encryption key.
Verification
To verify that you cannot download the encrypted files without the inclusion of an additional key, create a test file, upload it, and then try to download it.
Create a test file by running the following command:
echo "encrypt me please" > test.txt
$ echo "encrypt me please" > test.txtCopy to Clipboard Copied! Toggle word wrap Toggle overflow Upload the test file by running the following command:
Try to download the file. In either the Amazon web console or the terminal, run the following command:
$ s3cmd get s3://<bucket>/test.txt test.txt

The download fails because the file is encrypted with an additional key.
Download the file with the additional encryption key by running the following command:
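The command is missing here; a sketch using the AWS CLI, writing the object to downloaded.txt so that the next step can read it:

$ aws s3api get-object \
    --bucket <bucket> \
    --key test.txt \
    --sse-customer-key fileb://sse.key \
    --sse-customer-algorithm AES256 \
    downloaded.txt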
Read the file contents by running the following command:
$ cat downloaded.txt

Example output

encrypt me please
4.7.1.3.4.1. Downloading a file with an SSE-C encryption key for files backed up by Velero
When you are verifying an SSE-C encryption key, you can also download the file with the additional encryption key for files that were backed up with Velero.
Procedure
- Download the file with the additional encryption key for files backed up by Velero by running the following command:
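The command did not survive extraction. A sketch, assuming you know the object key of the backed-up file under the Velero prefix in your bucket:

$ aws s3api get-object \
    --bucket <bucket> \
    --key <prefix>/backups/<backup_name>/<object_key> \
    --sse-customer-key fileb://sse.key \
    --sse-customer-algorithm AES256 \
    <object_key>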
4.7.1.4. Configuring the Data Protection Application
You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates.
4.7.1.4.1. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example:
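The example manifest is missing in this extract. A minimal sketch of the block being edited; the request and limit values are illustrative only:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
spec:
  configuration:
    velero:
      podConfig:
        nodeSelector: <node_selector>
        resourceAllocations:
          limits:
            cpu: "1"
            memory: 1024Mi
          requests:
            cpu: 200m
            memory: 256Mi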
Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.
Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.
Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node.
For more details, see Configuring node agents and node labels.
4.7.1.4.2. Enabling self-signed CA certificates
You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest:
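The example is missing here. A sketch of the parameters being edited, with the certificate supplied as a Base64-encoded string:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
spec:
  backupLocations:
    - name: default
      velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket>
          prefix: <prefix>
          caCert: <base64_encoded_cert_string>
        config:
          insecureSkipTLSVerify: "false"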
4.7.1.4.2.1. Using CA certificates with the velero command aliased for Velero deployment
You might want to use the Velero CLI without installing it locally on your system by creating an alias for it.
Prerequisites
- You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role.
- You must have the OpenShift CLI (oc) installed.

To use an aliased Velero command, run the following command:
$ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'

Check that the alias is working by running the following command:
Example
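The example output was lost in extraction; a typical check is shown below, with the output shape approximate and the version numbers placeholders:

$ velero version
Client:
        Version: <velero_version>
Server:
        Version: <velero_version>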
To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands:
$ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')

$ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"
$ velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt

To fetch the backup logs, run the following command:
$ velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>

You can use these logs to view failures and warnings for the resources that you cannot back up.
- If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the previous step.
- You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command:

$ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt"

/tmp/your-cacert.txt
In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.
4.7.1.5. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
- If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.
- If the backup and snapshot locations use different credentials, you must create a Secret with the default name, cloud-credentials, which contains separate profiles for the backup and snapshot location credentials.

Note: If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
Click YAML View and update the parameters of the DataProtectionApplication manifest:
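The manifest example did not survive extraction. A sketch of a DataProtectionApplication manifest that the numbered callouts below refer to; all values are placeholders and should be checked against the OADP documentation for your version:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp              # 1
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift                     # 2
        - aws
      resourceTimeout: 10m              # 3
    nodeAgent:                          # 4
      enable: true                      # 5
      uploaderType: kopia               # 6
      podConfig:
        nodeSelector: <node_selector>   # 7
  backupLocations:
    - name: default
      velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>         # 8
          prefix: <prefix>              # 9
        config:
          region: <region>
          profile: "default"
          s3ForcePathStyle: "true"      # 10
          s3Url: <s3_url>               # 11
        credential:
          key: cloud
          name: cloud-credentials       # 12
  snapshotLocations:                    # 13
    - velero:
        provider: aws
        config:
          region: <region>              # 14
          profile: "default"
        credential:
          key: cloud
          name: cloud-credentials       # 15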
1. The default namespace for OADP is openshift-adp. The namespace is a variable and is configurable.
2. The openshift plugin is mandatory.
3. Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m.
4. The administrative agent that routes the administrative requests to servers.
5. Set this value to true if you want to enable nodeAgent and perform File System Backup.
6. Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR.
7. Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes.
8. Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
9. Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
10. Specify whether to force path style URLs for S3 objects (Boolean). Not required for AWS S3. Required only for S3 compatible storage.
11. Specify the URL of the object store that you are using to store backups. Not required for AWS S3. Required only for S3 compatible storage.
12. Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials, is used. If you specify a custom name, the custom name is used for the backup location.
13. Specify a snapshot location, unless you use CSI snapshots or a File System Backup (FSB) to back up PVs.
14. The snapshot location must be in the same region as the PVs.
15. Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials, is used. If you specify a custom name, the custom name is used for the snapshot location. If your backup and snapshot locations use different credentials, create separate profiles in the credentials-velero file.
- Click Create.
Verification
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp

The output lists the OADP resources, such as the Velero and node agent pods, services, deployments, and replica sets, in the openshift-adp namespace.

Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'

Example output

{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}

Verify the type is set to Reconciled.

Verify the backup storage location and confirm that the PHASE is Available by running the following command:

$ oc get backupstoragelocations.velero.io -n openshift-adp

Example output

NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true
4.7.1.5.1. Configuring node agents and node labels
The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint.
Any label specified must match the labels on each node.
The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label:
$ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""
Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector, which you used for labeling nodes. For example:
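The example fragment is missing here; a minimal sketch that uses the custom label applied above:

configuration:
  nodeAgent:
    enable: true
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/nodeAgent: ""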
The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""', are on the node:
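A sketch of the anti-pattern described above, in which both labels would have to be present on the same node for the node agent to be scheduled:

configuration:
  nodeAgent:
    enable: true
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
        node-role.kubernetes.io/worker: ""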
4.7.1.6. Configuring the backup storage location with an MD5 checksum algorithm
You can configure the Backup Storage Location (BSL) in the Data Protection Application (DPA) to use an MD5 checksum algorithm for both Amazon Simple Storage Service (Amazon S3) and S3-compatible storage providers. The checksum algorithm calculates the checksum for uploading and downloading objects to Amazon S3. You can use one of the following options to set the checksumAlgorithm field in the spec.backupLocations.velero.config.checksumAlgorithm section of the DPA.
- CRC32
- CRC32C
- SHA1
- SHA256
You can also set the checksumAlgorithm field to an empty value to skip the MD5 checksum check.
If you do not set a value for the checksumAlgorithm field, then the default value is set to CRC32.
Prerequisites
- You have installed the OADP Operator.
- You have configured Amazon S3, or S3-compatible object storage as a backup location.
Procedure
Configure the BSL in the DPA as shown in the following example:
Example Data Protection Application
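The example manifest is missing from this extract. A sketch of the relevant BSL fragment with checksumAlgorithm set to an empty value (callout 1 below); other values are placeholders:

spec:
  backupLocations:
    - name: default
      velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: velero
        config:
          region: <region>
          profile: "default"
          checksumAlgorithm: ""   # 1
        credential:
          key: cloud
          name: cloud-credentials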
1. Specify the checksumAlgorithm. In this example, the checksumAlgorithm field is set to an empty value. You can select an option from the following list: CRC32, CRC32C, SHA1, SHA256.
If you are using Noobaa as the object storage provider, and you do not set the spec.backupLocations.velero.config.checksumAlgorithm field in the DPA, an empty value of checksumAlgorithm is added to the BSL configuration.
The empty value is only added for BSLs that are created using the DPA. This value is not added if you create the BSL by using any other method.
4.7.1.7. Configuring the DPA with client burst and QPS settings
The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.
You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values.
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the client-burst and the client-qps fields in the DPA as shown in the following example:

Example Data Protection Application
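The example is missing from this extract. A sketch of the fields being configured; the numeric values are illustrative only:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
spec:
  configuration:
    velero:
      client-burst: 500
      client-qps: 300
      defaultPlugins:
        - openshift
        - aws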
4.7.1.8. Overriding the imagePullPolicy setting in the DPA
In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images.
In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly:
- If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent.
- If the image does not have the digest, the Operator sets imagePullPolicy to Always.
You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA).
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the spec.imagePullPolicy field in the DPA as shown in the following example:

Example Data Protection Application
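The example manifest is missing; a sketch of the field being set (callout 1 below):

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
spec:
  imagePullPolicy: Never   # 1
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws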
1. Specify the value for imagePullPolicy. In this example, the imagePullPolicy field is set to Never.
4.7.1.9. Configuring the DPA with more than one BSL
You can configure the DataProtectionApplication (DPA) custom resource (CR) with more than one BackupStorageLocation (BSL) CR and specify the credentials provided by the cloud provider.
For example, consider that you have configured the following two BSLs:
- Configured one BSL in the DPA and set it as the default BSL.
- Created another BSL independently by using the BackupStorageLocation CR.
As you have already set the BSL created through the DPA as the default, you cannot set the independently created BSL again as the default. This means, at any given time, you can set only one BSL as the default BSL.
Prerequisites
- You must install the OADP Operator.
- You must create the secrets by using the credentials provided by the cloud provider.
Procedure
Configure the DataProtectionApplication CR with more than one BackupStorageLocation CR. See the following example:

Example DPA
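The example DPA did not survive extraction. A sketch with two backup locations that the numbered callouts below describe; names, buckets, and endpoints are placeholders:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
spec:
  backupLocations:
    - name: aws                            # 1
      velero:
        provider: aws
        default: true                      # 2
        objectStorage:
          bucket: <bucket_name>            # 3
          prefix: velero                   # 4
        config:
          region: <region_name>            # 5
        credential:
          key: cloud
          name: cloud-credentials          # 6
    - name: odf                            # 7
      velero:
        provider: aws
        objectStorage:
          bucket: <bucket_name>
          prefix: velero
        config:
          profile: "default"
          region: <region_name>
          s3Url: <s3_url>                  # 8
          s3ForcePathStyle: "true"
        credential:
          key: cloud
          name: <custom_secret_name_odf>   # 9
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws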
1. Specify a name for the first BSL.
2. This parameter indicates that this BSL is the default BSL. If a BSL is not set in the Backup CR, the default BSL is used. You can set only one BSL as the default.
3. Specify the bucket name.
4. Specify a prefix for Velero backups; for example, velero.
5. Specify the AWS region for the bucket.
6. Specify the name of the default Secret object that you created.
7. Specify a name for the second BSL.
8. Specify the URL of the S3 endpoint.
9. Specify the correct name for the Secret; for example, custom_secret_name_odf. If you do not specify a Secret name, the default name is used.
Specify the BSL to be used in the backup CR. See the following example.

Example backup CR
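The backup CR example is missing; a minimal sketch that selects the second BSL by name:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
    - <application_namespace>
  storageLocation: odf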
4.7.1.9.1. Enabling CSI in the DataProtectionApplication CR
You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.
Prerequisites
- The cloud provider must support CSI snapshots.
Procedure
Edit the DataProtectionApplication CR, as in the following example:
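The example is missing; a sketch that adds the csi default plugin (callout 1 below):

spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
        - csi   # 1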
1. Add the csi default plugin.
4.7.1.9.2. Disabling the node agent in DataProtectionApplication
If you are not using Restic, Kopia, or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent, ensure the OADP Operator is idle and not running any backups.
Procedure
To disable the nodeAgent, set the enable flag to false. See the following example:

Example DataProtectionApplication CR
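The example CR is missing; a sketch of the relevant fragment (callout 1 below):

spec:
  configuration:
    nodeAgent:
      enable: false   # 1
      uploaderType: kopia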
1. Disables the node agent.
To enable the nodeAgent, set the enable flag to true. See the following example:

Example DataProtectionApplication CR
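Again, a sketch of the relevant fragment (callout 1 below):

spec:
  configuration:
    nodeAgent:
      enable: true   # 1
      uploaderType: kopia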
1. Enables the node agent.
You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs".
4.8. Configuring OADP with IBM Cloud
4.8.1. Configuring the OpenShift API for Data Protection with IBM Cloud
You install the OpenShift API for Data Protection (OADP) Operator on an IBM Cloud cluster to back up and restore applications on the cluster. You configure IBM Cloud Object Storage (COS) to store the backups.
4.8.1.1. Configuring the COS instance
You create an IBM Cloud Object Storage (COS) instance to store the OADP backup data. After you create the COS instance, configure the HMAC service credentials.
Prerequisites
- You have an IBM Cloud Platform account.
- You installed the IBM Cloud CLI.
- You are logged in to IBM Cloud.
Procedure
Install the IBM Cloud Object Storage (COS) plugin by running the following command:

$ ibmcloud plugin install cos -f

Set a bucket name by running the following command:

$ BUCKET=<bucket_name>

Set a bucket region by running the following command:

$ REGION=<bucket_region>

Specify the bucket region, for example, eu-gb.
Create a resource group by running the following command:

$ ibmcloud resource group-create <resource_group_name>

Set the target resource group by running the following command:

$ ibmcloud target -g <resource_group_name>

Verify that the target resource group is correctly set by running the following command:

$ ibmcloud target

Example output

API endpoint: https://cloud.ibm.com
Region:
User: test-user
Account: Test Account (fb6......e95) <-> 2...122
Resource group: Default

In the example output, the resource group is set to Default.

Set a resource group name by running the following command:

$ RESOURCE_GROUP=<resource_group>

Specify the resource group name, for example, "default".
Create an IBM Cloud service-instance resource by running the following command:

$ ibmcloud resource service-instance-create \
    <service_instance_name> \
    <service_name> \
    <service_plan> \
    <region_name>

Example command

$ ibmcloud resource service-instance-create test-service-instance cloud-object-storage \
    standard \
    global \
    -d premium-global-deployment

Extract the service instance ID by running the following command:

$ SERVICE_INSTANCE_ID=$(ibmcloud resource service-instance test-service-instance --output json | jq -r '.[0].id')

Create a COS bucket by running the following command:

$ ibmcloud cos bucket-create \
    --bucket $BUCKET \
    --ibm-service-instance-id $SERVICE_INSTANCE_ID \
    --region $REGION

Variables such as $BUCKET, $SERVICE_INSTANCE_ID, and $REGION are replaced by the values you set previously.

Create HMAC credentials by running the following command:

$ ibmcloud resource service-key-create test-key Writer --instance-name test-service-instance --parameters {\"HMAC\":true}

Extract the access key ID and the secret access key from the HMAC credentials and save them in the credentials-velero file. You can use the credentials-velero file to create a secret for the backup storage location. Run the following command:

$ cat > credentials-velero << __EOF__
[default]
aws_access_key_id=$(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.access_key_id')
aws_secret_access_key=$(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.secret_access_key')
__EOF__
4.8.1.2. Creating a default Secret
You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location.
The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.
If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.
Prerequisites
- Your object storage and cloud storage, if any, must use the same credentials.
- You must configure object storage for Velero.
Procedure
- Create a credentials-velero file for the backup storage location in the appropriate format for your cloud provider.
- Create a Secret custom resource (CR) with the default name:

$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
4.8.1.3. Creating secrets for different credentials
If your backup and snapshot locations use different credentials, you must create two Secret objects:
- Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR).
- Snapshot location Secret with the default name, cloud-credentials. This Secret is not specified in the DataProtectionApplication CR.
Procedure
- Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider.
- Create a Secret for the snapshot location with the default name:

$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero

- Create a credentials-velero file for the backup location in the appropriate format for your object storage.
- Create a Secret for the backup location with a custom name:

$ oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero

Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example:
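The example is missing from this extract; a sketch of the credential reference, reusing the AWS-style BSL fields used elsewhere in this chapter (callout 1 below):

spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: velero
        config:
          region: <bucket_region>
          s3Url: <cos_bucket_s3_url>
          s3ForcePathStyle: "true"
        credential:
          key: cloud
          name: <custom_secret>   # 1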
1. Backup location Secret with custom name.
4.8.1.4. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
- If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.

Note: If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
Click YAML View and update the parameters of the DataProtectionApplication manifest:
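The manifest example is missing from this extract. A sketch that the numbered callouts below refer to; the bucket, region, URL, and secret names are placeholders:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
  backupLocations:
    - velero:
        provider: aws                         # 1
        default: true
        objectStorage:
          bucket: <bucket_name>               # 2
          prefix: velero
        config:
          profile: "default"
          region: <bucket_region>             # 3
          s3ForcePathStyle: "true"
          s3Url: <cos_bucket_s3_url>          # 4
        credential:
          key: cloud
          name: <custom_secret_name>          # 5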
1. The provider is aws when you use IBM Cloud as a backup storage location.
2. Specify the IBM Cloud Object Storage (COS) bucket name.
3. Specify the COS region name, for example, eu-gb.
4. Specify the S3 URL of the COS bucket. For example, http://s3.eu-gb.cloud-object-storage.appdomain.cloud. Here, eu-gb is the region name. Replace the region name according to your bucket region.
5. Defines the name of the secret you created by using the access key and the secret access key from the HMAC credentials.
- Click Create.
Verification
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp

The output lists the OADP resources, such as the Velero and node agent pods, services, deployments, and replica sets, in the openshift-adp namespace.

Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'

Example output

{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}

Verify the type is set to Reconciled.

Verify the backup storage location and confirm that the PHASE is Available by running the following command:

$ oc get backupstoragelocations.velero.io -n openshift-adp

Example output

NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true
4.8.1.5. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example:
Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.
Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.
4.8.1.6. Configuring node agents and node labels
The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint.
Any label specified must match the labels on each node.
The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label:
$ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""
Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector, which you used for labeling nodes. For example:
The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""', are on the node:
4.8.1.7. Configuring the DPA with client burst and QPS settings
The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.
You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values.
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the client-burst and the client-qps fields in the DPA as shown in the following example:

Example Data Protection Application
4.8.1.8. Overriding the imagePullPolicy setting in the DPA
In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images.
In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly:
- If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent.
- If the image does not have the digest, the Operator sets imagePullPolicy to Always.
You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA).
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the spec.imagePullPolicy field in the DPA as shown in the following example:

Example Data Protection Application

1. Specify the value for imagePullPolicy. In this example, the imagePullPolicy field is set to Never.
4.8.1.9. Configuring the DPA with more than one BSL
You can configure the DataProtectionApplication (DPA) custom resource (CR) with more than one BackupStorageLocation (BSL) CR and specify the credentials provided by the cloud provider.
For example, consider that you have configured the following two BSLs:
- Configured one BSL in the DPA and set it as the default BSL.
- Created another BSL independently by using the BackupStorageLocation CR.
As you have already set the BSL created through the DPA as the default, you cannot set the independently created BSL again as the default. This means, at any given time, you can set only one BSL as the default BSL.
Prerequisites
- You must install the OADP Operator.
- You must create the secrets by using the credentials provided by the cloud provider.
Procedure
Configure the DataProtectionApplication CR with more than one BackupStorageLocation CR. See the following example:

Example DPA
1. Specify a name for the first BSL.
2. This parameter indicates that this BSL is the default BSL. If a BSL is not set in the Backup CR, the default BSL is used. You can set only one BSL as the default.
3. Specify the bucket name.
4. Specify a prefix for Velero backups; for example, velero.
5. Specify the AWS region for the bucket.
6. Specify the name of the default Secret object that you created.
7. Specify a name for the second BSL.
8. Specify the URL of the S3 endpoint.
9. Specify the correct name for the Secret; for example, custom_secret_name_odf. If you do not specify a Secret name, the default name is used.
Specify the BSL to be used in the backup CR. See the following example.

Example backup CR
4.8.1.10. Disabling the node agent in DataProtectionApplication
If you are not using Restic, Kopia, or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent, ensure the OADP Operator is idle and not running any backups.
Procedure
To disable the nodeAgent, set the enable flag to false. See the following example:

Example DataProtectionApplication CR

1. Disables the node agent.

To enable the nodeAgent, set the enable flag to true. See the following example:

Example DataProtectionApplication CR

1. Enables the node agent.
You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs".
4.9. Configuring OADP with Azure
4.9.1. Configuring the OpenShift API for Data Protection with Microsoft Azure
You install the OpenShift API for Data Protection (OADP) with Microsoft Azure by installing the OADP Operator. The Operator installs Velero 1.14.
Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.
You configure Azure for Velero, create a default Secret, and then install the Data Protection Application. For more details, see Installing the OADP Operator.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details.
4.9.1.1. Configuring Microsoft Azure
You configure Microsoft Azure for OpenShift API for Data Protection (OADP).
Prerequisites
- You must have the Azure CLI installed.
Tools that use Azure services should always have restricted permissions to make sure that Azure resources are safe. Therefore, instead of having applications sign in as a fully privileged user, Azure offers service principals. An Azure service principal is a name that can be used with applications, hosted services, or automated tools.
This identity is used for access to resources.
- Create a service principal
- Sign in using a service principal and password
- Sign in using a service principal and certificate
- Manage service principal roles
- Create an Azure resource using a service principal
- Reset service principal credentials
For more details, see Create an Azure service principal with Azure CLI.
4.9.1.2. About backup and snapshot locations and their secrets
You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR).
Backup locations
You can specify one of the following AWS S3-compatible object storage solutions as a backup location:
- Multicloud Object Gateway (MCG)
- Red Hat Container Storage
- Ceph RADOS Gateway; also known as Ceph Object Gateway
- Red Hat OpenShift Data Foundation
- MinIO
Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage.
Snapshot locations
If you use your cloud provider’s native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location.
If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver.
If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage.
Secrets
If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret.
If the backup and snapshot locations use different credentials, you create two Secret objects:
- Custom Secret for the backup location, which you specify in the DataProtectionApplication CR.
- Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR.
The Data Protection Application requires a default Secret. Otherwise, the installation will fail.
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file.
4.9.1.3. About authenticating OADP with Azure
You can authenticate OADP with Azure by using the following methods:
- A Velero-specific service principal with secret-based authentication.
- A Velero-specific storage account access key with secret-based authentication.
4.9.1.4. Using a service principal or a storage account access key
You create a default Secret object and reference it in the backup storage location custom resource. The credentials file for the Secret object can contain information about the Azure service principal or a storage account access key.
The default name of the Secret is cloud-credentials-azure.
The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.
If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.
Prerequisites
- You have access to the OpenShift cluster as a user with cluster-admin privileges.
- You have an Azure subscription with appropriate permissions.
- You have installed OADP.
- You have configured object storage for storing the backups.
Procedure
Create a credentials-velero file for the backup storage location in the appropriate format for your cloud provider. You can use one of the following two methods to authenticate OADP with Azure:
Use the service principal with secret-based authentication. See the following example:
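A minimal sketch of a service-principal credentials-velero file, assuming the environment-variable format read by the Velero plugin for Microsoft Azure; verify the variable names against your OADP version:
AZURE_SUBSCRIPTION_ID=<azure_subscription_id>
AZURE_TENANT_ID=<azure_tenant_id>
AZURE_CLIENT_ID=<azure_client_id>
AZURE_CLIENT_SECRET=<azure_client_secret>
AZURE_RESOURCE_GROUP=<azure_resource_group>
AZURE_CLOUD_NAME=<azure_cloud_name>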
Use a storage account access key. See the following example:
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=<azure_storage_account_access_key>
AZURE_SUBSCRIPTION_ID=<azure_subscription_id>
AZURE_RESOURCE_GROUP=<azure_resource_group>
AZURE_CLOUD_NAME=<azure_cloud_name>
Create a Secret custom resource (CR) with the default name:
$ oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero
Reference the Secret in the spec.backupLocations.velero.credential block of the DataProtectionApplication CR when you install the Data Protection Application, as shown in the following example:
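A minimal sketch of the relevant DataProtectionApplication fields, assuming the oadp.openshift.io/v1alpha1 API and a backup location Secret named <custom_secret>; adjust the storage values for your environment:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp
spec:
  backupLocations:
    - velero:
        provider: azure
        default: true
        config:
          resourceGroup: <azure_resource_group>
          storageAccount: <azure_storage_account_id>
          subscriptionId: <azure_subscription_id>
        credential:
          key: cloud
          name: <custom_secret> # 1
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>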
1 - Backup location Secret with custom name.
You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates.
4.9.1.5. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the values in the spec.configuration.velero.podConfig.resourceAllocations block of the DataProtectionApplication CR manifest, as in the following example:
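A minimal sketch of the podConfig block; the CPU and memory figures are placeholder values to adjust for your workload:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
spec:
  configuration:
    velero:
      podConfig:
        resourceAllocations:
          limits:
            cpu: "1"
            memory: 1024Mi
          requests:
            cpu: 200m
            memory: 256Mi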
Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.
Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.
Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node.
For more details, see Configuring node agents and node labels.
4.9.1.6. Enabling self-signed CA certificates
You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest:
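A minimal sketch showing where the caCert parameter sits relative to the other backup location settings; the certificate value is assumed to be the base64-encoded CA bundle for your object storage endpoint:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
spec:
  backupLocations:
    - velero:
        provider: azure
        default: true
        config:
          resourceGroup: <azure_resource_group>
          storageAccount: <azure_storage_account_id>
          subscriptionId: <azure_subscription_id>
        credential:
          key: cloud
          name: cloud-credentials-azure
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>
          caCert: <base64_encoded_cert_string>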
4.9.1.6.1. Using CA certificates with the velero command aliased for Velero deployment
You might want to use the Velero CLI without installing it locally on your system by creating an alias for it.
Prerequisites
- You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role.
- You must have the OpenShift CLI (oc) installed.
To use an aliased Velero command, run the following command:
$ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'
Check that the alias is working by running the following command:
Example
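A subcommand such as velero version is one way to confirm that the alias resolves to the Velero binary inside the pod; the output depends on your deployment:
$ velero version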
To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands:
$ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')
$ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"
$ velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt
To fetch the backup logs, run the following command:
$ velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>
You can use these logs to view failures and warnings for the resources that you cannot back up.
If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the previous step.
You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command:
$ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt"
/tmp/your-cacert.txt
In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.
4.9.1.7. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
- If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-azure.
- If the backup and snapshot locations use different credentials, you must create two Secrets:
  - Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR.
  - Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR.
Note: If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
- Click YAML View and update the parameters of the DataProtectionApplication manifest:
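A minimal sketch of an Azure DataProtectionApplication manifest, assuming the oadp.openshift.io/v1alpha1 API; the numbered comments correspond to the notes that follow, and the placeholder values are illustrations to replace with your own:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp # 1
spec:
  configuration:
    velero:
      defaultPlugins:
        - azure
        - openshift # 2
      resourceTimeout: 10m # 3
    nodeAgent: # 4
      enable: true # 5
      uploaderType: kopia # 6
      podConfig:
        nodeSelector: <node_selector> # 7
  backupLocations:
    - velero:
        config:
          resourceGroup: <azure_resource_group> # 8
          storageAccount: <azure_storage_account_id> # 9
          subscriptionId: <azure_subscription_id> # 10
        credential:
          key: cloud
          name: cloud-credentials-azure # 11
        provider: azure
        default: true
        objectStorage:
          bucket: <bucket_name> # 12
          prefix: <prefix> # 13
  snapshotLocations: # 14
    - velero:
        config:
          resourceGroup: <azure_resource_group>
          subscriptionId: <azure_subscription_id>
          incremental: "true"
        provider: azure
        name: default
        credential:
          key: cloud
          name: cloud-credentials-azure # 15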
1 - The default namespace for OADP is openshift-adp. The namespace is a variable and is configurable.
2 - The openshift plugin is mandatory.
3 - Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m.
4 - The administrative agent that routes the administrative requests to servers.
5 - Set this value to true if you want to enable nodeAgent and perform File System Backup.
6 - Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the built-in Data Mover, you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR.
7 - Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes.
8 - Specify the Azure resource group.
9 - Specify the Azure storage account ID.
10 - Specify the Azure subscription ID.
11 - If you do not specify this value, the default name, cloud-credentials-azure, is used. If you specify a custom name, the custom name is used for the backup location.
12 - Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
13 - Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
14 - You do not need to specify a snapshot location if you use CSI snapshots or Restic to back up PVs.
15 - Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials-azure, is used. If you specify a custom name, the custom name is used for the backup location.
- Click Create.
Verification
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp
Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:
$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
Example output
{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
Verify the type is set to Reconciled.
Verify the backup storage location and confirm that the PHASE is Available by running the following command:
$ oc get backupstoragelocations.velero.io -n openshift-adp
Example output
NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true
4.9.1.8. Configuring the DPA with client burst and QPS settings
The burst setting determines how many requests can be sent to the Velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.
You can set the burst and QPS values of the Velero server by configuring the Data Protection Application (DPA) with the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields.
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the client-burst and the client-qps fields in the DPA as shown in the following example:
Example Data Protection Application
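A minimal sketch showing where the client-burst and client-qps fields sit in the DPA; the numeric values are placeholders:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp
spec:
  configuration:
    velero:
      client-burst: 500
      client-qps: 300
      defaultPlugins:
        - azure
        - openshift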
4.9.1.9. Overriding the imagePullPolicy setting in the DPA
In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images.
In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly:
- If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent.
- If the image does not have the digest, the Operator sets imagePullPolicy to Always.
You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA).
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the spec.imagePullPolicy field in the DPA as shown in the following example:
Example Data Protection Application
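A minimal sketch of the spec.imagePullPolicy override in the DPA; the plugin list is illustrative:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp
spec:
  imagePullPolicy: Never # 1
  configuration:
    velero:
      defaultPlugins:
        - azure
        - openshift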
1 - Specify the value for imagePullPolicy. In this example, the imagePullPolicy field is set to Never.
4.9.1.9.1. Configuring node agents and node labels
The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint.
Any label specified must match the labels on each node.
The correct way to run the node agent on any node you choose is to label the nodes with a custom label:
$ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""
Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector that you used for labeling the nodes. For example:
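A minimal sketch of the matching DPA fragment, using the custom label from the previous command:
spec:
  configuration:
    nodeAgent:
      enable: true
      podConfig:
        nodeSelector:
          node-role.kubernetes.io/nodeAgent: ""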
The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""', are on the node:
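A sketch of the anti-pattern, in which both labels would have to be present on the same node for the selector to match:
spec:
  configuration:
    nodeAgent:
      enable: true
      podConfig:
        nodeSelector:
          node-role.kubernetes.io/infra: ""
          node-role.kubernetes.io/worker: ""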
4.9.1.9.2. Enabling CSI in the DataProtectionApplication CR
You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.
Prerequisites
- The cloud provider must support CSI snapshots.
Procedure
Edit the DataProtectionApplication CR, as in the following example:
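A minimal sketch of the relevant defaultPlugins list:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - csi # 1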
1 - Add the csi default plugin.
4.9.1.9.3. Disabling the node agent in DataProtectionApplication
If you are not using Restic, Kopia, or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent, ensure the OADP Operator is idle and not running any backups.
Procedure
To disable the nodeAgent, set the enable flag to false. See the following example:
Example DataProtectionApplication CR
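A minimal sketch of the relevant nodeAgent block; the uploaderType value is illustrative:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
spec:
  configuration:
    nodeAgent:
      enable: false # 1
      uploaderType: kopia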
1 - Disables the node agent.
To enable the nodeAgent, set the enable flag to true. See the following example:
Example DataProtectionApplication CR
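A minimal sketch of the relevant nodeAgent block with the agent enabled:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
spec:
  configuration:
    nodeAgent:
      enable: true # 1
      uploaderType: kopia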
1 - Enables the node agent.
You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs".
4.10. Configuring OADP with GCP
4.10.1. Configuring the OpenShift API for Data Protection with Google Cloud Platform
You install the OpenShift API for Data Protection (OADP) with Google Cloud Platform (GCP) by installing the OADP Operator. The Operator installs Velero 1.14.
Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.
You configure GCP for Velero, create a default Secret, and then install the Data Protection Application. For more details, see Installing the OADP Operator.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details.
4.10.1.1. Configuring Google Cloud Platform
You configure Google Cloud Platform (GCP) for the OpenShift API for Data Protection (OADP).
Prerequisites
- You must have the gcloud and gsutil CLI tools installed. See the Google Cloud documentation for details.
Procedure
Log in to GCP:
$ gcloud auth login
Set the BUCKET variable:
$ BUCKET=<bucket> 1
1 - Specify your bucket name.
Create the storage bucket:
$ gsutil mb gs://$BUCKET/
Set the PROJECT_ID variable to your active project:
$ PROJECT_ID=$(gcloud config get-value project)
Create a service account:
$ gcloud iam service-accounts create velero \
    --display-name "Velero service account"
List your service accounts:
$ gcloud iam service-accounts list
Set the SERVICE_ACCOUNT_EMAIL variable to match its email value:
$ SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
    --filter="displayName:Velero service account" \
    --format 'value(email)')
Attach the policies to give the velero user the minimum necessary permissions:
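A sketch of the ROLE_PERMISSIONS bash array that the next step expands into the custom role; the permission list is an assumption based on the permissions commonly required by the Velero plugin for GCP, so verify it against your OADP version:
ROLE_PERMISSIONS=(
    compute.disks.get
    compute.disks.create
    compute.disks.createSnapshot
    compute.snapshots.get
    compute.snapshots.create
    compute.snapshots.useReadOnly
    compute.snapshots.delete
    compute.zones.get
    storage.objects.create
    storage.objects.delete
    storage.objects.get
    storage.objects.list
    iam.serviceAccounts.signBlob
)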
Create the velero.server custom role:
$ gcloud iam roles create velero.server \
    --project $PROJECT_ID \
    --title "Velero Server" \
    --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
Add IAM policy binding to the project:
$ gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
    --role projects/$PROJECT_ID/roles/velero.server
Update the IAM service account:
$ gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
Save the IAM service account keys to the credentials-velero file in the current directory:
$ gcloud iam service-accounts keys create credentials-velero \
    --iam-account $SERVICE_ACCOUNT_EMAIL
You use the credentials-velero file to create a Secret object for GCP before you install the Data Protection Application.
4.10.1.2. About backup and snapshot locations and their secrets
You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR).
Backup locations
You can specify one of the following AWS S3-compatible object storage solutions as a backup location:
- Multicloud Object Gateway (MCG)
- Red Hat Container Storage
- Ceph RADOS Gateway; also known as Ceph Object Gateway
- Red Hat OpenShift Data Foundation
- MinIO
Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage.
Snapshot locations
If you use your cloud provider’s native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location.
If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver.
If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage.
Secrets
If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret.
If the backup and snapshot locations use different credentials, you create two Secret objects:
- Custom Secret for the backup location, which you specify in the DataProtectionApplication CR.
- Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR.
The Data Protection Application requires a default Secret. Otherwise, the installation will fail.
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file.
4.10.1.2.1. Creating a default Secret
You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location.
The default name of the Secret is cloud-credentials-gcp.
The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.
If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.
Prerequisites
- Your object storage and cloud storage, if any, must use the same credentials.
- You must configure object storage for Velero.
Procedure
- Create a credentials-velero file for the backup storage location in the appropriate format for your cloud provider.
- Create a Secret custom resource (CR) with the default name:
$ oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero
The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
4.10.1.2.2. Creating secrets for different credentials
If your backup and snapshot locations use different credentials, you must create two Secret objects:
- Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR).
- Snapshot location Secret with the default name, cloud-credentials-gcp. This Secret is not specified in the DataProtectionApplication CR.
Procedure
- Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider.
- Create a Secret for the snapshot location with the default name:
$ oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero
- Create a credentials-velero file for the backup location in the appropriate format for your object storage.
- Create a Secret for the backup location with a custom name:
$ oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero
- Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example:
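A minimal sketch of the relevant backup location fields, assuming the oadp.openshift.io/v1alpha1 API and a backup location Secret named <custom_secret>:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp
spec:
  backupLocations:
    - velero:
        provider: gcp
        default: true
        credential:
          key: cloud
          name: <custom_secret> # 1
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>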
1 - Backup location Secret with custom name.
4.10.1.3. Configuring the Data Protection Application
You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates.
4.10.1.3.1. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the values in the spec.configuration.velero.podConfig.resourceAllocations block of the DataProtectionApplication CR manifest, as in the following example:
Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.
Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.
Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node.
For more details, see Configuring node agents and node labels.
4.10.1.3.2. Enabling self-signed CA certificates
You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest:
4.10.1.3.2.1. Using CA certificates with the velero command aliased for Velero deployment
You might want to use the Velero CLI without installing it locally on your system by creating an alias for it.
Prerequisites
- You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role.
- You must have the OpenShift CLI (oc) installed.
To use an aliased Velero command, run the following command:
$ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'
Check that the alias is working by running the following command:
Example
To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands:
$ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')
$ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"
$ velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt
To fetch the backup logs, run the following command:
$ velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>
You can use these logs to view failures and warnings for the resources that you cannot back up.
If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the previous step.
You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command:
$ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt"
/tmp/your-cacert.txt
In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.
4.10.1.4. Google workload identity federation cloud authentication
Applications running outside Google Cloud use service account keys, which function like usernames and passwords, to gain access to Google Cloud resources. These service account keys might become a security risk if they are not properly managed.
With Google’s workload identity federation, you can use Identity and Access Management (IAM) to offer IAM roles, including the ability to impersonate service accounts, to external identities. This eliminates the maintenance and security risks associated with service account keys.
Workload identity federation handles encrypting and decrypting certificates, extracting user attributes, and validation. Identity federation externalizes authentication, passing it over to Security Token Services (STS), and reduces the demands on individual developers. Authorization and controlling access to resources remain the responsibility of the application.
Google workload identity federation is available for OADP 1.3.x and later.
When backing up volumes, OADP on GCP with Google workload identity federation authentication only supports CSI snapshots.
OADP on GCP with Google workload identity federation authentication does not support Volume Snapshot Locations (VSL) backups. For more details, see Google workload identity federation known issues.
If you do not use Google workload identity federation cloud authentication, continue to Installing the Data Protection Application.
Prerequisites
- You have installed a cluster in manual mode with GCP Workload Identity configured.
- You have access to the Cloud Credential Operator utility (ccoctl) and to the associated workload identity pool.
Procedure
Create an oadp-credrequest directory by running the following command:
$ mkdir -p oadp-credrequest
Create a CredentialsRequest.yaml file as follows:
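A sketch of a CredentialsRequest manifest in the shape that ccoctl processes for GCP, saved into the oadp-credrequest directory; the permissions, secret name, and namespaces shown are assumptions to adjust for your installation:
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: oadp-operator-credentials
  namespace: openshift-cloud-credential-operator
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: GCPProviderSpec
    permissions:
      - compute.disks.get
      - compute.disks.create
      - compute.disks.createSnapshot
      - iam.serviceAccounts.signBlob
    skipServiceCheck: true
  secretRef:
    name: cloud-credentials-gcp
    namespace: <OPERATOR_INSTALL_NS>
  serviceAccountNames:
    - velero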
Use the ccoctl utility to process the CredentialsRequest objects in the oadp-credrequest directory by running the following command:
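A sketch of the ccoctl invocation, assuming the gcp create-service-accounts subcommand and placeholder names for the workload identity pool and provider; check the flag names against your ccoctl version:
$ ccoctl gcp create-service-accounts \
    --name=<name> \
    --project=<gcp_project_id> \
    --credentials-requests-dir=oadp-credrequest \
    --workload-identity-pool=<pool_id> \
    --workload-identity-provider=<provider_id>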
The manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml file is now available to use in the following steps.
Create a namespace by running the following command:
$ oc create namespace <OPERATOR_INSTALL_NS>
Apply the credentials to the namespace by running the following command:
$ oc apply -f manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml
4.10.1.4.1. Google workload identity federation known issues
- Volume Snapshot Location (VSL) backups finish with a PartiallyFailed phase when GCP workload identity federation is configured. Google workload identity federation authentication does not support VSL backups.
4.10.1.5. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
- If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-gcp.
- If the backup and snapshot locations use different credentials, you must create two Secrets:
  - Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR.
  - Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR.
Note: If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
- Click YAML View and update the parameters of the DataProtectionApplication manifest:
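A minimal sketch of a GCP DataProtectionApplication manifest, assuming the oadp.openshift.io/v1alpha1 API; the numbered comments correspond to the notes that follow, and the placeholder values are illustrations to replace with your own:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp # 1
spec:
  configuration:
    velero:
      defaultPlugins:
        - gcp
        - openshift # 2
      resourceTimeout: 10m # 3
    nodeAgent: # 4
      enable: true # 5
      uploaderType: kopia # 6
      podConfig:
        nodeSelector: <node_selector> # 7
  backupLocations:
    - velero:
        provider: gcp
        default: true
        credential:
          key: cloud # 8
          name: cloud-credentials-gcp # 9
        objectStorage:
          bucket: <bucket_name> # 10
          prefix: <prefix> # 11
  snapshotLocations: # 12
    - velero:
        provider: gcp
        default: true
        config:
          project: <project>
          snapshotLocation: us-west1 # 13
        credential:
          key: cloud
          name: cloud-credentials-gcp # 14
  backupImages: true # 15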
1 - The default namespace for OADP is openshift-adp. The namespace is a variable and is configurable.
2 - The openshift plugin is mandatory.
3 - Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m.
4 - The administrative agent that routes the administrative requests to servers.
5 - Set this value to true if you want to enable nodeAgent and perform File System Backup.
6 - Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the built-in Data Mover, you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR.
7 - Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes.
8 - Secret key that contains credentials. For Google workload identity federation cloud authentication, use service_account.json.
9 - Secret name that contains credentials. If you do not specify this value, the default name, cloud-credentials-gcp, is used.
10 - Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
11 - Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
12 - Specify a snapshot location, unless you use CSI snapshots or Restic to back up PVs.
13 - The snapshot location must be in the same region as the PVs.
14 - Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials-gcp, is used. If you specify a custom name, the custom name is used for the backup location.
15 - Google workload identity federation supports internal image backup. Set this field to false if you do not want to use image backup.
- Click Create.
Verification
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp
Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:
$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
Example output
{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
Verify the type is set to Reconciled.
Verify the backup storage location and confirm that the PHASE is Available by running the following command:
$ oc get backupstoragelocations.velero.io -n openshift-adp
Example output
NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true
4.10.1.6. Configuring the DPA with client burst and QPS settings
The burst setting determines how many requests can be sent to the Velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.
You can set the burst and QPS values of the Velero server by configuring the Data Protection Application (DPA) with the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields.
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the client-burst and the client-qps fields in the DPA as shown in the following example:
Example Data Protection Application
4.10.1.7. Overriding the imagePullPolicy setting in the DPA
In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images.
In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly:
- If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent.
- If the image does not have the digest, the Operator sets imagePullPolicy to Always.
You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA).
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the spec.imagePullPolicy field in the DPA as shown in the following example:
Example Data Protection Application
1 - Specify the value for imagePullPolicy. In this example, the imagePullPolicy field is set to Never.
4.10.1.7.1. Configuring node agents and node labels
The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint.
Any label specified must match the labels on each node.
The correct way to run the node agent on any node you choose is to label the nodes with a custom label:
$ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""
Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector that you used for labeling the nodes. For example:
The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""', are on the node:
4.10.1.7.2. Enabling CSI in the DataProtectionApplication CR
You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.
Prerequisites
- The cloud provider must support CSI snapshots.
Procedure
Edit the DataProtectionApplication CR, as in the following example:
1 - Add the csi default plugin.
4.10.1.7.3. Disabling the node agent in DataProtectionApplication
If you are not using Restic, Kopia, or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent, ensure the OADP Operator is idle and not running any backups.
Procedure
To disable the nodeAgent, set the enable flag to false. See the following example:
Example DataProtectionApplication CR
1 - Disables the node agent.
To enable the nodeAgent, set the enable flag to true. See the following example:
Example DataProtectionApplication CR
1 - Enables the node agent.
You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs".
4.11. Configuring OADP with MCG
4.11.1. Configuring the OpenShift API for Data Protection with Multicloud Object Gateway
Multicloud Object Gateway (MCG) is a component of OpenShift Data Foundation, and you can configure it as a backup location in the DataProtectionApplication custom resource (CR). You can install the OpenShift API for Data Protection (OADP) with MCG by installing the OADP Operator. The Operator installs Velero 1.14.
Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.
The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can create a Secret CR for the backup location and install the Data Protection Application. For more details, see Installing the OADP Operator.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. For details, see Using Operator Lifecycle Manager on restricted networks.
4.11.1.1. Retrieving Multicloud Object Gateway credentials
You must retrieve the Multicloud Object Gateway (MCG) bucket credentials to create a Secret custom resource (CR) for OpenShift API for Data Protection (OADP).
Although the MCG Operator is deprecated, the MCG plugin is still available for OpenShift Data Foundation. To download the plugin, browse to Download Red Hat OpenShift Data Foundation and download the appropriate MCG plugin for your operating system.
Prerequisites
- You must deploy OpenShift Data Foundation by using the appropriate Red Hat OpenShift Data Foundation deployment guide.
Procedure
- Create an MCG bucket. For more information, see Managing hybrid and multicloud resources.
- Obtain the S3 endpoint, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and the bucket name by running the oc describe command on the bucket resource.
- Create a credentials-velero file:
$ cat << EOF > ./credentials-velero
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
EOF
You can use the credentials-velero file to create a Secret object when you install the Data Protection Application.
4.11.1.2. About backup and snapshot locations and their secrets
You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR).
Backup locations
You can specify one of the following AWS S3-compatible object storage solutions as a backup location:
- Multicloud Object Gateway (MCG)
- Red Hat Container Storage
- Ceph RADOS Gateway; also known as Ceph Object Gateway
- Red Hat OpenShift Data Foundation
- MinIO
Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage.
Snapshot locations
If you use your cloud provider’s native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location.
If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver.
If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage.
Secrets
If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret.
If the backup and snapshot locations use different credentials, you create two Secret objects:
- Custom Secret for the backup location, which you specify in the DataProtectionApplication CR.
- Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR.
The Data Protection Application requires a default Secret. Otherwise, the installation will fail.
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file.
4.11.1.2.1. Creating a default Secret
You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location.
The default name of the Secret is cloud-credentials.
The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.
If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.
Prerequisites
- Your object storage and cloud storage, if any, must use the same credentials.
- You must configure object storage for Velero.
Procedure
Create a credentials-velero file for the backup storage location in the appropriate format for your cloud provider. See the following example:
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
Create a Secret custom resource (CR) with the default name:
$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
4.11.1.2.2. Creating secrets for different credentials
If your backup and snapshot locations use different credentials, you must create two Secret objects:
- Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR).
- Snapshot location Secret with the default name, cloud-credentials. This Secret is not specified in the DataProtectionApplication CR.
Procedure
- Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider.
- Create a Secret for the snapshot location with the default name:
$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
- Create a credentials-velero file for the backup location in the appropriate format for your object storage.
- Create a Secret for the backup location with a custom name:
$ oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero
- Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example:
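A minimal sketch of the relevant backup location fields, assuming the aws object store plugin that MCG uses and a backup location Secret named <custom_secret>:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp
spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        config:
          profile: "default"
          region: <region_name>
          s3Url: <url>
          s3ForcePathStyle: "true"
        credential:
          key: cloud
          name: <custom_secret>
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>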
4.11.1.3. Configuring the Data Protection Application
You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates.
4.11.1.3.1. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the values in the spec.configuration.velero.podConfig.resourceAllocations block of the DataProtectionApplication CR manifest, as in the following example:
Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.
Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.
Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node.
For more details, see Configuring node agents and node labels.
4.11.1.3.2. Enabling self-signed CA certificates
You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest:
4.11.1.3.2.1. Using CA certificates with the velero command aliased for Velero deployment
You might want to use the Velero CLI without installing it locally on your system by creating an alias for it.
Prerequisites
- You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role.
- You must have the OpenShift CLI (oc) installed.
To use an aliased Velero command, run the following command:
$ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'
Check that the alias is working by running the following command:
Example
To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands:
$ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')
$ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"
$ velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt
To fetch the backup logs, run the following command:
$ velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>
You can use these logs to view failures and warnings for the resources that you cannot back up.
If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the previous step.
You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command:
$ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt"
/tmp/your-cacert.txt
In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.
4.11.1.4. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
- If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.
- If the backup and snapshot locations use different credentials, you must create two Secrets:
  - Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR.
  - Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR.
Note: If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
- Click YAML View and update the parameters of the DataProtectionApplication manifest:
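A minimal sketch of an MCG DataProtectionApplication manifest, assuming the oadp.openshift.io/v1alpha1 API and the aws object store plugin; the numbered comments correspond to the notes that follow, and the placeholder values are illustrations to replace with your own:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp # 1
spec:
  configuration:
    velero:
      defaultPlugins:
        - aws # 2
        - openshift # 3
      resourceTimeout: 10m # 4
    nodeAgent: # 5
      enable: true # 6
      uploaderType: kopia # 7
      podConfig:
        nodeSelector: <node_selector> # 8
  backupLocations:
    - velero:
        provider: aws
        default: true
        config:
          profile: "default"
          region: <region_name> # 9
          s3Url: <url> # 10
          s3ForcePathStyle: "true"
        credential:
          key: cloud
          name: cloud-credentials # 11
        objectStorage:
          bucket: <bucket_name> # 12
          prefix: <prefix> # 13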
1 - The default namespace for OADP is openshift-adp. The namespace is a variable and is configurable.
2 - An object store plugin corresponding to your storage locations is required. For all S3 providers, the required plugin is aws. For Azure and Google Cloud object stores, the azure or gcp plugin is required.
3 - The openshift plugin is mandatory.
4 - Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m.
5 - The administrative agent that routes the administrative requests to servers.
6 - Set this value to true if you want to enable nodeAgent and perform File System Backup.
7 - Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the built-in Data Mover, you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR.
8 - Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes.
9 - Specify the region, following the naming convention of the documentation of your object storage server.
10 - Specify the URL of the S3 endpoint.
11 - Specify the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials, is used. If you specify a custom name, the custom name is used for the backup location.
12 - Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
13 - Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
- Click Create.
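The original manifest did not survive this page's extraction, so the following is only a minimal sketch of a DataProtectionApplication manifest that lines up with the numbered notes above; the bucket, region, S3 URL, and Secret name are placeholder assumptions, not values from the original example.

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp          # 1: default, configurable namespace
spec:
  configuration:
    velero:
      defaultPlugins:
        - aws                       # 2: object store plugin for S3 providers
        - openshift                 # 3: mandatory plugin
      resourceTimeout: 10m          # 4: timeout for Velero resources
    nodeAgent:                      # 5: administrative agent
      enable: true                  # 6: enables File System Backup
      uploaderType: kopia           # 7: kopia or restic
      podConfig:
        nodeSelector: {}            # 8: nodes on which Kopia or Restic run
  backupLocations:
    - velero:
        provider: aws
        default: true
        config:
          region: <region>          # 9: region of your object storage server
          profile: "default"
          s3ForcePathStyle: "true"
          s3Url: <s3_url>           # 10: URL of the S3 endpoint
        credential:
          key: cloud
          name: cloud-credentials   # 11: name of the Secret that you created
        objectStorage:
          bucket: <bucket_name>     # 12: bucket used as the backup storage location
          prefix: velero            # 13: prefix for Velero backups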
Verification
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp

Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'

Example output

{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}

Verify the type is set to Reconciled.

Verify the backup storage location and confirm that the PHASE is Available by running the following command:

$ oc get backupstoragelocations.velero.io -n openshift-adp

Example output

NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true
4.11.1.5. Configuring the DPA with client burst and QPS settings
The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.
You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values.
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the client-burst and the client-qps fields in the DPA as shown in the following example.
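The example itself was stripped from this page; here is a minimal sketch, assuming illustrative burst and QPS values rather than the ones in the original example:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: test-dpa
  namespace: openshift-adp
spec:
  configuration:
    velero:
      client-burst: 500    # requests allowed before the limit applies
      client-qps: 300      # additional requests per second after the burst limit
      defaultPlugins:
        - openshift
        - aws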
4.11.1.6. Overriding the imagePullPolicy setting in the DPA
In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images.
In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly:
- If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent.
- If the image does not have the digest, the Operator sets imagePullPolicy to Always.
You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA).
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the spec.imagePullPolicy field in the DPA as shown in the following example, where the imagePullPolicy field is set to Never.
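A minimal sketch, assuming only that spec.imagePullPolicy is set at the top level of the DPA as the text describes:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: test-dpa
  namespace: openshift-adp
spec:
  imagePullPolicy: Never   # overrides the Operator's digest-based default
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws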
4.11.1.6.1. Configuring node agents and node labels
The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint.
Any label specified must match the labels on each node.
The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label:
$ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""
Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector field that you used for labeling the nodes, as in the first snippet of the following sketch.

A nodeSelector that requires both node-role.kubernetes.io/infra: "" and node-role.kubernetes.io/worker: "" is an anti-pattern and does not work unless both labels are on the node, as in the second snippet.
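The original snippets were stripped; the following sketch shows fragments of DPA.spec.configuration, assuming the custom node-role.kubernetes.io/nodeAgent label applied in the previous command. The anti-pattern is shown only to illustrate what to avoid.

# Recommended: match the custom label that you applied to the nodes.
configuration:
  nodeAgent:
    enable: true
    uploaderType: kopia
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/nodeAgent: ""

# Anti-pattern: matches only nodes that carry both labels.
configuration:
  nodeAgent:
    enable: true
    uploaderType: kopia
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
        node-role.kubernetes.io/worker: ""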
4.11.1.6.2. Enabling CSI in the DataProtectionApplication CR
You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.
Prerequisites
- The cloud provider must support CSI snapshots.
Procedure
Edit the DataProtectionApplication CR, as in the following example, and add the csi default plugin.
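A minimal sketch of the relevant part of the CR; only the csi entry in defaultPlugins matters here:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - csi        # enables backing up persistent volumes with CSI snapshots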
4.11.1.6.3. Disabling the node agent in DataProtectionApplication
If you are not using Restic, Kopia, or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent, ensure the OADP Operator is idle and not running any backups.
Procedure
To disable the nodeAgent, set the enable flag to false.

To enable the nodeAgent, set the enable flag to true.
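A minimal sketch of the nodeAgent section; setting enable to false disables the node agent, and setting it back to true enables it again:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    nodeAgent:
      enable: false        # set to true to enable the node agent
      uploaderType: kopia
    velero:
      defaultPlugins:
        - openshift
        - aws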
You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs".
4.12. Configuring OADP with ODF
4.12.1. Configuring the OpenShift API for Data Protection with OpenShift Data Foundation
You install the OpenShift API for Data Protection (OADP) with OpenShift Data Foundation by installing the OADP Operator and configuring a backup location and a snapshot location. Then, you install the Data Protection Application.
Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.
You can configure Multicloud Object Gateway or any AWS S3-compatible object storage as a backup location.
The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can create a Secret CR for the backup location and install the Data Protection Application. For more details, see Installing the OADP Operator.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. For details, see Using Operator Lifecycle Manager on restricted networks.
4.12.1.1. About backup and snapshot locations and their secrets
You specify backup and snapshot locations and their secrets in the DataProtectionApplication custom resource (CR).
Backup locations
You can specify one of the following AWS S3-compatible object storage solutions as a backup location:
- Multicloud Object Gateway (MCG)
- Red Hat Container Storage
- Ceph RADOS Gateway; also known as Ceph Object Gateway
- Red Hat OpenShift Data Foundation
- MinIO
Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage.
Snapshot locations
If you use your cloud provider’s native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location.
If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver.
If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage.
Secrets
If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret.
If the backup and snapshot locations use different credentials, you create two secret objects:
- Custom Secret for the backup location, which you specify in the DataProtectionApplication CR.
- Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR.
The Data Protection Application requires a default Secret. Otherwise, the installation will fail.
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file.
4.12.1.1.1. Creating a default Secret
You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location.
The default name of the Secret is cloud-credentials, unless your backup storage provider has a default plugin, such as aws, azure, or gcp. In that case, the default name is specified in the provider-specific OADP installation procedure.
The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.
If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.
Prerequisites
- Your object storage and cloud storage, if any, must use the same credentials.
- You must configure object storage for Velero.
Procedure
Create a credentials-velero file for the backup storage location in the appropriate format for your cloud provider. See the following example:

[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>

Create a Secret custom resource (CR) with the default name:

$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
4.12.1.1.2. Creating secrets for different credentials
If your backup and snapshot locations use different credentials, you must create two Secret objects:
- Backup location Secret with a custom name. The custom name is specified in the spec.backupLocations block of the DataProtectionApplication custom resource (CR).
- Snapshot location Secret with the default name, cloud-credentials. This Secret is not specified in the DataProtectionApplication CR.
Procedure
Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider.

Create a Secret for the snapshot location with the default name:

$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero

Create a credentials-velero file for the backup location in the appropriate format for your object storage.

Create a Secret for the backup location with a custom name:

$ oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero

Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example, where the callout marks the backup location Secret with the custom name.
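A minimal sketch of the backupLocations fragment of the DPA spec, assuming an S3-compatible provider; the region, endpoint, and bucket are placeholders:

backupLocations:
  - velero:
      provider: aws
      default: true
      config:
        profile: "default"
        region: <region>
        s3Url: <s3_endpoint_url>
      credential:
        key: cloud
        name: <custom_secret>     # backup location Secret with the custom name
      objectStorage:
        bucket: <bucket_name>
        prefix: velero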
4.12.1.2. Configuring the Data Protection Application
You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates.
4.12.1.2.1. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example.
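A minimal sketch of the podConfig block with illustrative values; the actual defaults and the values in the original example may differ:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      podConfig:
        resourceAllocations:
          requests:
            cpu: 500m       # CPU request for the Velero pod
            memory: 256Mi   # memory request for the Velero pod
          limits:
            cpu: "1"
            memory: 1024Mi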
Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.
Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.
Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node.
For more details, see Configuring node agents and node labels.
4.12.1.2.1.1. Adjusting Ceph CPU and memory requirements based on collected data
The following recommendations are based on observations of performance made in the scale and performance lab. The changes are specifically related to Red Hat OpenShift Data Foundation (ODF). If working with ODF, consult the appropriate tuning guides for official recommendations.
4.12.1.2.1.1.1. CPU and memory requirement for configurations
Backup and restore operations require large amounts of CephFS PersistentVolumes (PVs). To avoid Ceph MDS pods restarting with an out-of-memory (OOM) error, the following configuration is suggested:
| Configuration type | Request | Max limit |
|---|---|---|
| CPU | 3 | 3 |
| Memory | 8 Gi | 128 Gi |
4.12.1.2.2. Enabling self-signed CA certificates
You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest, as in the following example.
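A minimal sketch of the backupLocations fragment with the caCert and config parameters; the certificate value, endpoint, and bucket are placeholders:

spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        config:
          profile: "default"
          region: <region>
          s3Url: <s3_endpoint_url>
        credential:
          key: cloud
          name: cloud-credentials
        objectStorage:
          bucket: <bucket_name>
          prefix: velero
          caCert: <base64_encoded_cert_string>   # self-signed CA certificate, base64 encoded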
4.12.1.2.2.1. Using CA certificates with the velero command aliased for Velero deployment
You might want to use the Velero CLI without installing it locally on your system by creating an alias for it.
Prerequisites
- You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role.
- You must have the OpenShift CLI (oc) installed.

To use an aliased Velero command, run the following command:

$ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'

Check that the alias is working by running a command such as velero version.

To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands:
$ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')
$ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"

$ velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt

To fetch the backup logs, run the following command:
$ velero backup logs <backup_name> --cacert /tmp/<your_cacert>.txt

You can use these logs to view failures and warnings for the resources that you cannot back up.
If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the previous step.

You can check whether the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command:
$ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt"

Example output

/tmp/your-cacert.txt
In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.
4.12.1.3. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
- If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.
- If the backup and snapshot locations use different credentials, you must create two Secrets:
  - Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR.
  - Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR.

Note: If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
Click YAML View and update the parameters of the DataProtectionApplication manifest as described in the following notes:

1. The default namespace for OADP is openshift-adp. The namespace is a variable and is configurable.
2. An object store plugin corresponding to your storage locations is required. For all S3 providers, the required plugin is aws. For Azure and Google Cloud object stores, the azure or gcp plugin is required.
3. Optional: The kubevirt plugin is used with OpenShift Virtualization.
4. Specify the csi default plugin if you use CSI snapshots to back up PVs. The csi plugin uses the Velero CSI beta snapshot APIs. You do not need to configure a snapshot location.
5. The openshift plugin is mandatory.
6. Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m.
7. The administrative agent that routes the administrative requests to servers.
8. Set this value to true if you want to enable nodeAgent and perform File System Backup.
9. Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover, you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR.
10. Specify the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes.
11. Specify the backup provider.
12. Specify the correct default name for the Secret, for example, cloud-credentials-gcp, if you use a default plugin for the backup provider. If specifying a custom name, then the custom name is used for the backup location. If you do not specify a Secret name, the default name is used.
13. Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
14. Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
- Click Create.
Verification
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp

Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'

Example output

{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}

Verify the type is set to Reconciled.

Verify the backup storage location and confirm that the PHASE is Available by running the following command:

$ oc get backupstoragelocations.velero.io -n openshift-adp

Example output

NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true
4.12.1.4. Configuring the DPA with client burst and QPS settings
The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.
You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values.
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the client-burst and the client-qps fields in the DPA as shown in the following example.
4.12.1.5. Overriding the imagePullPolicy setting in the DPA
In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images.
In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly:
- If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent.
- If the image does not have the digest, the Operator sets imagePullPolicy to Always.
You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA).
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the spec.imagePullPolicy field in the DPA as shown in the following example, where the imagePullPolicy field is set to Never.
4.12.1.5.1. Configuring node agents and node labels
The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint.
Any label specified must match the labels on each node.
The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label:
$ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""
Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector field that you used for labeling the nodes.

A nodeSelector that requires both node-role.kubernetes.io/infra: "" and node-role.kubernetes.io/worker: "" is an anti-pattern and does not work unless both labels are on the node.
If you use cluster storage for your Multicloud Object Gateway (MCG) bucket backupStorageLocation on OpenShift Data Foundation, create an Object Bucket Claim (OBC) using the OpenShift Web Console.
Failure to configure an Object Bucket Claim (OBC) might lead to backups not being available.
Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa.
For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications.
Procedure
- Create an Object Bucket Claim (OBC) using the OpenShift web console as described in Creating an Object Bucket Claim using the OpenShift Web Console.
4.12.1.5.3. Enabling CSI in the DataProtectionApplication CR
You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.
Prerequisites
- The cloud provider must support CSI snapshots.
Procedure
Edit the DataProtectionApplication CR, as in the following example, and add the csi default plugin.
4.12.1.5.4. Disabling the node agent in DataProtectionApplication
If you are not using Restic, Kopia, or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent, ensure the OADP Operator is idle and not running any backups.
Procedure
To disable the nodeAgent, set the enable flag to false.

To enable the nodeAgent, set the enable flag to true.
You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs".
4.13. Configuring OADP with OpenShift Virtualization
4.13.1. Configuring the OpenShift API for Data Protection with OpenShift Virtualization
You can install the OpenShift API for Data Protection (OADP) with OpenShift Virtualization by installing the OADP Operator and configuring a backup location. Then, you can install the Data Protection Application.
Back up and restore virtual machines by using the OpenShift API for Data Protection.
OpenShift API for Data Protection with OpenShift Virtualization supports the following backup and restore storage options:
- Container Storage Interface (CSI) backups
- Container Storage Interface (CSI) backups with DataMover
The following storage options are excluded:
- File system backup and restore
- Volume snapshot backups and restores
For more information, see Backing up applications with File System Backup: Kopia or Restic.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details.
4.13.1.1. Installing and configuring OADP with OpenShift Virtualization
As a cluster administrator, you install OADP by installing the OADP Operator.
The latest version of the OADP Operator installs Velero 1.14.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
Procedure
- Install the OADP Operator according to the instructions for your storage provider.
- Install the Data Protection Application (DPA) with the kubevirt and openshift OADP plugins.
- Back up virtual machines by creating a Backup custom resource (CR).

Warning: Red Hat support is limited to only the following options:

- CSI backups
- CSI backups with DataMover
You restore the Backup CR by creating a Restore CR.
4.13.1.2. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.

Note: If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
Click YAML View and update the parameters of the DataProtectionApplication manifest as described in the following notes; a sample manifest sketch follows this procedure.

1. The default namespace for OADP is openshift-adp. The namespace is a variable and is configurable.
2. The kubevirt plugin is mandatory for OpenShift Virtualization.
3. Specify the plugin for the backup provider, for example, gcp, if it exists.
4. The csi plugin is mandatory for backing up PVs with CSI snapshots. The csi plugin uses the Velero CSI beta snapshot APIs. You do not need to configure a snapshot location.
5. The openshift plugin is mandatory.
6. Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m.
7. The administrative agent that routes the administrative requests to servers.
8. Set this value to true if you want to enable nodeAgent and perform File System Backup.
9. Enter kopia as your uploader to use the Built-in DataMover. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR.
10. Specify the nodes on which Kopia is available. By default, Kopia runs on all nodes.
11. Specify the backup provider.
12. Specify the correct default name for the Secret, for example, cloud-credentials-gcp, if you use a default plugin for the backup provider. If specifying a custom name, then the custom name is used for the backup location. If you do not specify a Secret name, the default name is used.
13. Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
14. Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
- Click Create.
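Again, only a minimal sketch that matches the notes above: a DataProtectionApplication for OpenShift Virtualization with the mandatory kubevirt, csi, and openshift plugins. The gcp provider, Secret name, and bucket are placeholder assumptions rather than values from the original example.

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp            # 1
spec:
  configuration:
    velero:
      defaultPlugins:
        - kubevirt                    # 2: mandatory for OpenShift Virtualization
        - gcp                         # 3: plugin for the backup provider
        - csi                         # 4: mandatory for CSI snapshots
        - openshift                   # 5: mandatory plugin
      resourceTimeout: 10m            # 6
    nodeAgent:                        # 7
      enable: true                    # 8
      uploaderType: kopia             # 9: Kopia for the Built-in DataMover
      podConfig:
        nodeSelector: {}              # 10: nodes on which Kopia runs
  backupLocations:
    - velero:
        provider: gcp                 # 11: backup provider
        default: true
        credential:
          key: cloud
          name: cloud-credentials-gcp # 12: default Secret name for the provider
        objectStorage:
          bucket: <bucket_name>       # 13
          prefix: velero              # 14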
Verification
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp

Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'

Example output

{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}

Verify the type is set to Reconciled.

Verify the backup storage location and confirm that the PHASE is Available by running the following command:

$ oc get backupstoragelocations.velero.io -n openshift-adp

Example output

NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true
If you run a backup of a Microsoft Windows virtual machine (VM) immediately after the VM reboots, the backup might fail with a PartiallyFailed error. This is because, immediately after a VM boots, the Microsoft Windows Volume Shadow Copy Service (VSS) and Guest Agent (GA) service are not ready. The VSS and GA service being unready causes the backup to fail. In such a case, retry the backup a few minutes after the VM boots.
4.13.1.3. Backing up a single VM
If you have a namespace with multiple virtual machines (VMs), and want to back up only one of them, you can use the label selector to filter the VM that needs to be included in the backup. You can filter the VM by using the app: vmname label.
Prerequisites
- You have installed the OADP Operator.
- You have multiple VMs running in a namespace.
- You have added the kubevirt plugin in the DataProtectionApplication (DPA) custom resource (CR).
- You have configured the BackupStorageLocation CR in the DataProtectionApplication CR and the BackupStorageLocation is available.
Procedure
Configure the Backup CR as shown in the following example.
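The example Backup CR did not survive extraction; the following is a hedged sketch that filters the VM by the app: <vm_name> label. The namespace, names, and storage location are placeholders.

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: vmbackupsingle
  namespace: openshift-adp
spec:
  includedNamespaces:
    - <vm_namespace>              # namespace that contains the VMs
  labelSelector:
    matchLabels:
      app: <vm_name>              # selects only the labeled VM
  storageLocation: <backup_storage_location_name>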
To create a Backup CR, run the following command, where <backup_cr_file_name> is the name of the Backup CR file:

$ oc apply -f <backup_cr_file_name>
4.13.1.4. Restoring a single VM
After you have backed up a single virtual machine (VM) by using the label selector in the Backup custom resource (CR), you can create a Restore CR and point it to the backup. This restore operation restores a single VM.
Prerequisites
- You have installed the OADP Operator.
- You have backed up a single VM by using the label selector.
Procedure
Configure the Restore CR as shown in the following example. The CR specifies the name of the backup of the single VM.
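A minimal sketch of such a Restore CR; the backup name is a placeholder:

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: vmrestoresingle
  namespace: openshift-adp
spec:
  backupName: vmbackupsingle   # name of the backup of the single VM
  restorePVs: true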
To restore the single VM, run the following command, where <restore_cr_file_name> is the name of the Restore CR file:

$ oc apply -f <restore_cr_file_name>
4.13.1.5. Restoring a single VM from a backup of multiple VMs
If you have a backup containing multiple virtual machines (VMs), and you want to restore only one VM, you can use the LabelSelectors section in the Restore CR to select the VM to restore. To ensure that the persistent volume claim (PVC) attached to the VM is correctly restored, and the restored VM is not stuck in a Provisioning status, use both the app: <vm_name> and the kubevirt.io/created-by labels. To match the kubevirt.io/created-by label, use the UID of DataVolume of the VM.
Prerequisites
- You have installed the OADP Operator.
- You have labeled the VMs that need to be backed up.
- You have a backup of multiple VMs.
Procedure
Before you take a backup of many VMs, ensure that the VMs are labeled by running the following command:

$ oc label vm <vm_name> app=<vm_name> -n openshift-adp

Configure the label selectors in the Restore CR as shown in the following example.
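A hedged sketch of such a Restore CR, assuming the Velero orLabelSelectors field is used so that objects carrying either label are restored; the backup name, DataVolume UID, and VM name are placeholders:

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: singlevmrestore
  namespace: openshift-adp
spec:
  backupName: multiplevmbackup
  restorePVs: true
  orLabelSelectors:
    - matchLabels:
        kubevirt.io/created-by: <datavolume_uid>   # UID of the DataVolume of the VM
    - matchLabels:
        app: <vm_name>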
To restore a VM, run the following command, where <restore_cr_file_name> is the name of the Restore CR file:

$ oc apply -f <restore_cr_file_name>
4.13.1.6. Configuring the DPA with client burst and QPS settings
The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.
You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values.
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the client-burst and the client-qps fields in the DPA as shown in the following example.
4.13.1.7. Overriding the imagePullPolicy setting in the DPA
In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images.
In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly:
- If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent.
- If the image does not have the digest, the Operator sets imagePullPolicy to Always.
You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA).
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the spec.imagePullPolicy field in the DPA as shown in the following example, where the imagePullPolicy field is set to Never.
4.13.1.7.1. Configuring node agents and node labels
The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint.
Any label specified must match the labels on each node.
The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label:
$ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""
Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector field that you used for labeling the nodes.

A nodeSelector that requires both node-role.kubernetes.io/infra: "" and node-role.kubernetes.io/worker: "" is an anti-pattern and does not work unless both labels are on the node.
4.13.1.8. About incremental back up support
OADP supports incremental backups of block and Filesystem persistent volumes for both containerized and OpenShift Virtualization workloads. The following tables summarize the support for File System Backup (FSB), Container Storage Interface (CSI), and CSI Data Mover:
| Volume mode | FSB - Restic | FSB - Kopia | CSI | CSI Data Mover |
|---|---|---|---|---|
| Filesystem | S [1], I [2] | S [1], I [2] | S [1] | S [1], I [2] |
| Block | N [3] | N [3] | S [1] | S [1], I [2] |
| Volume mode | FSB - Restic | FSB - Kopia | CSI | CSI Data Mover |
|---|---|---|---|---|
| Filesystem | N [3] | N [3] | S [1] | S [1], I [2] |
| Block | N [3] | N [3] | S [1] | S [1], I [2] |
1. Backup supported
2. Incremental backup supported
3. Not supported
The CSI Data Mover backups use Kopia regardless of uploaderType.
Red Hat only supports the combination of OADP versions 1.3.0 and later, and OpenShift Virtualization versions 4.14 and later.
OADP versions before 1.3.0 are not supported for back up and restore of OpenShift Virtualization.
4.14. Configuring OADP with multiple backup storage locations
You can configure one or more backup storage locations (BSLs) in the Data Protection Application (DPA). You can also select the location to store the backup in when you create the backup. With this configuration, you can store your backups in the following ways:
- To different regions
- To a different storage provider
OADP supports multiple credentials for configuring more than one BSL, so that you can specify the credentials to use with any BSL.
4.14.1.1. Configuring the DPA with more than one BSL
You can configure the DataProtectionApplication (DPA) custom resource (CR) with more than one BackupStorageLocation (BSL) CR and specify the credentials provided by the cloud provider.
For example, suppose you have configured the following two BSLs:

- Configured one BSL in the DPA and set it as the default BSL.
- Created another BSL independently by using the BackupStorageLocation CR.
As you have already set the BSL created through the DPA as the default, you cannot set the independently created BSL again as the default. This means, at any given time, you can set only one BSL as the default BSL.
Prerequisites
- You must install the OADP Operator.
- You must create the secrets by using the credentials provided by the cloud provider.
Procedure
Configure the DataProtectionApplication CR with more than one BackupStorageLocation CR, as described in the following notes; sample sketches follow this procedure.

1. Specify a name for the first BSL.
2. This parameter indicates that this BSL is the default BSL. If a BSL is not set in the Backup CR, the default BSL is used. You can set only one BSL as the default.
3. Specify the bucket name.
4. Specify a prefix for Velero backups; for example, velero.
5. Specify the AWS region for the bucket.
6. Specify the name of the default Secret object that you created.
7. Specify a name for the second BSL.
8. Specify the URL of the S3 endpoint.
9. Specify the correct name for the Secret; for example, custom_secret_name_odf. If you do not specify a Secret name, the default name is used.

Specify the BSL to be used in the backup CR by setting the storageLocation field.
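Both stripped examples are sketched below under stated assumptions: an AWS S3 BSL as the default and a second S3-compatible BSL with a custom Secret. The BSL names, buckets, region, and endpoint are placeholders.

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: two-bsl-dpa
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
  backupLocations:
    - name: aws                          # 1: first BSL
      velero:
        provider: aws
        default: true                    # 2: default BSL
        config:
          profile: "default"
          region: <aws_region>           # 5
        credential:
          key: cloud
          name: cloud-credentials        # 6: default Secret
        objectStorage:
          bucket: <bucket_name>          # 3
          prefix: velero                 # 4
    - name: odf                          # 7: second BSL
      velero:
        provider: aws
        config:
          profile: "default"
          region: <region>
          s3Url: <s3_endpoint_url>       # 8
          s3ForcePathStyle: "true"
        credential:
          key: cloud
          name: custom_secret_name_odf   # 9: custom Secret name
        objectStorage:
          bucket: <bucket_name>
          prefix: velero
---
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
    - <application_namespace>
  storageLocation: odf                   # BSL to use for this backup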
4.14.1.2. OADP use case for two BSLs
In this use case, you configure the DPA with two storage locations by using two cloud credentials. You back up an application with a database by using the default BSL. OADP stores the backup resources in the default BSL. You then back up the application again by using the second BSL.
Prerequisites
- You must install the OADP Operator.
- You must configure two backup storage locations: AWS S3 and Multicloud Object Gateway (MCG).
- You must have an application with a database deployed on a Red Hat OpenShift cluster.
Procedure
Create the first Secret for the AWS S3 storage provider with the default name by running the following command, where <aws_credentials_file_name> is the name of the cloud credentials file for AWS S3:

$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=<aws_credentials_file_name>

Create the second Secret for MCG with a custom name by running the following command, where <MCG_credentials_file_name> is the name of the cloud credentials file for MCG. Note the name of the mcg-secret custom secret.

$ oc create secret generic mcg-secret -n openshift-adp --from-file cloud=<MCG_credentials_file_name>
Configure the DPA with the two BSLs as shown in the following example.
Create the DPA by running the following command, where <dpa_file_name> is the file name of the DPA that you configured:

$ oc create -f <dpa_file_name>
Verify that the DPA has reconciled by running the following command:

$ oc get dpa -o yaml

Verify that the BSLs are available by running the following command:

$ oc get bsl

Example output
NAME   PHASE       LAST VALIDATED   AGE     DEFAULT
aws    Available   5s               3m28s   true
mcg    Available   5s               3m28s

Create a backup CR with the default BSL.
Note: In the following example, the storageLocation field is not specified in the backup CR, so the default BSL is used. In the backup CR, specify the namespace for the application installed in the cluster.
Create a backup by running the following command, where <backup_file_name> is the name of the backup CR file:

$ oc apply -f <backup_file_name>
Verify that the backup completed with the default BSL by running the following command, where <backup_name> is the name of the backup:

$ oc get backups.velero.io <backup_name> -o yaml
Create a backup CR by using MCG as the BSL. In the following example, note that the second storageLocation value is specified at the time of backup CR creation.

Create a second backup by running the following command, where <backup_file_name> is the name of the backup CR file:

$ oc apply -f <backup_file_name>
Verify that the backup completed with the storage location as MCG by running the following command, where <backup_name> is the name of the backup:

$ oc get backups.velero.io <backup_name> -o yaml
4.15. Configuring OADP with multiple Volume Snapshot Locations
You can configure one or more Volume Snapshot Locations (VSLs) to store the snapshots in different cloud provider regions.
4.15.1.1. Configuring the DPA with more than one VSL
You configure the DPA with more than one VSL and specify the credentials provided by the cloud provider. Make sure that you configure the snapshot location in the same region as the persistent volumes. See the following example.
Example DPA
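The example DPA was stripped from this page; the following is a minimal sketch, assuming two AWS snapshot locations in different regions and a shared credential Secret. The regions and Secret name are placeholders.

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
  snapshotLocations:
    - velero:
        provider: aws
        config:
          region: <region_1>      # must match the region of the persistent volumes
          profile: "default"
        credential:
          key: cloud
          name: cloud-credentials
    - velero:
        provider: aws
        config:
          region: <region_2>
          profile: "default"
        credential:
          key: cloud
          name: cloud-credentials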
4.16. Uninstalling OADP
4.16.1. Uninstalling the OpenShift API for Data Protection
You uninstall the OpenShift API for Data Protection (OADP) by deleting the OADP Operator. See Deleting Operators from a cluster for details.
4.17. OADP backing up
4.17.1. Backing up applications
Frequent backups might consume storage on the backup storage location. Check the frequency of backups, retention time, and the amount of data of the persistent volumes (PVs) if you use non-local backups, for example, S3 buckets. Because all taken backups remain until they expire, also check the time to live (TTL) setting of the schedule.
You can back up applications by creating a Backup custom resource (CR). For more information, see Creating a Backup CR. The following are the different backup types for a Backup CR:
- The Backup CR creates backup files for Kubernetes resources and internal images on S3 object storage.
- If you use Velero’s snapshot feature to back up data stored on the persistent volume, only snapshot-related information is stored in the S3 bucket along with the OpenShift object data.
- If your cloud provider has a native snapshot API or supports CSI snapshots, the Backup CR backs up persistent volumes (PVs) by creating snapshots. For more information about working with CSI snapshots, see Backing up persistent volumes with CSI snapshots.
If the underlying storage or the backup bucket is part of the same cluster, the data might be lost in case of a disaster.
For more information about CSI volume snapshots, see CSI volume snapshots.
The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The CloudStorage API is a Technology Preview feature when you use a CloudStorage object and want OADP to use the CloudStorage API to automatically create an S3 bucket for use as a BackupStorageLocation.
The CloudStorage API supports manually creating a BackupStorageLocation object by specifying an existing S3 bucket. The CloudStorage API that creates an S3 bucket automatically is currently only enabled for AWS S3 storage.
- If your cloud provider does not support snapshots or if your applications are on NFS data volumes, you can create backups by using Kopia or Restic. See Backing up applications with File System Backup: Kopia or Restic.
…/.snapshot: read-only file system error
The …/.snapshot directory is a snapshot copy directory, which is used by several NFS servers. This directory has read-only access by default, so Velero cannot restore to this directory.
Do not give Velero write access to the .snapshot directory, and disable client access to this directory.
The OpenShift API for Data Protection (OADP) does not support backing up volume snapshots that were created by other software.
4.17.1.1. Previewing resources before running backup and restore
OADP backs up application resources based on the type, namespace, or label. This means that you can view the resources after the backup is complete. Similarly, you can view the restored objects based on the namespace, persistent volume (PV), or label after a restore operation is complete. To preview the resources in advance, you can do a dry run of the backup and restore operations.
Prerequisites
- You have installed the OADP Operator.
Procedure
To preview the resources included in the backup before running the actual backup, run the following command:
$ velero backup create <backup-name> --snapshot-volumes false

Specify the value of the --snapshot-volumes parameter as false.
To know more details about the backup resources, run the following command, where <backup_name> is the name of the backup:

$ velero describe backup <backup_name> --details
To preview the resources included in the restore before running the actual restore, run the following command, where <backup-name> is the name of the backup created to review the backup resources:

$ velero restore create --from-backup <backup-name>
Important: The velero restore create command creates restore resources in the cluster. You must delete the resources created as part of the restore after you review them.

To know more details about the restore resources, run the following command, where <restore_name> is the name of the restore:

$ velero describe restore <restore_name> --details
You can create backup hooks to run commands before or after the backup operation. See Creating backup hooks.
You can schedule backups by creating a Schedule CR instead of a Backup CR. See Scheduling backups using Schedule CR.
4.17.1.2. Known issues
OpenShift Container Platform 4.15 enforces a pod security admission (PSA) policy that can hinder the readiness of pods during a Restic restore process.
This issue has been resolved in the OADP 1.1.6 and OADP 1.2.2 releases; therefore, it is recommended that users upgrade to these releases.
For more information, see Restic restore partially failing on OCP 4.15 due to changed PSA policy.
4.17.2. Creating a Backup CR
You back up Kubernetes images, internal images, and persistent volumes (PVs) by creating a Backup custom resource (CR).
Prerequisites
- You must install the OpenShift API for Data Protection (OADP) Operator.
- The DataProtectionApplication CR must be in a Ready state.

Backup location prerequisites:

- You must have S3 object storage configured for Velero.
- You must have a backup location configured in the DataProtectionApplication CR.

Snapshot location prerequisites:

- Your cloud provider must have a native snapshot API or support Container Storage Interface (CSI) snapshots.
- For CSI snapshots, you must create a VolumeSnapshotClass CR to register the CSI driver.
- You must have a volume location configured in the DataProtectionApplication CR.
Procedure
Retrieve the backupStorageLocations CRs by entering the following command:

$ oc get backupstoragelocations.velero.io -n openshift-adp

Example output

NAMESPACE       NAME              PHASE       LAST VALIDATED   AGE   DEFAULT
openshift-adp   velero-sample-1   Available   11s              31m

Create a Backup CR, as in the following example. The numbered notes describe its fields, and a sample sketch follows this procedure.
- Specify an array of namespaces to back up.
- 2
- Optional: Specify an array of resources to include in the backup. Resources might be shortcuts (for example, 'po' for 'pods') or fully-qualified. If unspecified, all resources are included.
- 3
- Optional: Specify an array of resources to exclude from the backup. Resources might be shortcuts (for example, 'po' for 'pods') or fully-qualified.
- 4
- Specify the name of the backupStorageLocations CR.
- 5
- The ttl field defines the retention time of the created backup and the backed up data. For example, if you are using Restic as the backup tool, the backed up data items and data contents of the persistent volumes (PVs) are stored until the backup expires. Storing this data consumes space in the target backup location, and frequent backups consume additional storage because new backups are created before other unexpired backups have timed out.
- Map of {key,value} pairs of backup resources that have all the specified labels.
- 7
- Map of {key,value} pairs of backup resources that have one or more of the specified labels.
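The following is a minimal sketch of what such a Backup CR might look like, keyed to the callouts above. Field names follow the Velero Backup API; the backup name, namespace, label, and storage location values are placeholders, not the exact values from the original example.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: <backup>
  namespace: openshift-adp
spec:
  includedNamespaces:
  - <namespace>                       # 1
  includedResources: []               # 2
  excludedResources: []               # 3
  storageLocation: <velero_sample_1>  # 4
  ttl: 720h0m0s                       # 5
  labelSelector:                      # 6
    matchLabels:
      app: <label_1>
  orLabelSelectors:                   # 7
  - matchLabels:
      app: <label_2>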
Verify that the status of the Backup CR is Completed:
$ oc get backups.velero.io -n openshift-adp <backup> -o jsonpath='{.status.phase}'
4.17.3. Backing up persistent volumes with CSI snapshots
You back up persistent volumes with Container Storage Interface (CSI) snapshots by editing the VolumeSnapshotClass custom resource (CR) of the cloud storage before you create the Backup CR. See CSI volume snapshots.
For more information, see Creating a Backup CR.
Prerequisites
- The cloud provider must support CSI snapshots.
- You must enable CSI in the DataProtectionApplication CR.
Procedure
Add the metadata.labels.velero.io/csi-volumesnapshot-class: "true" key-value pair to the VolumeSnapshotClass CR, as in the example configuration file sketched after the following callout descriptions:
- 1
- Must be set to true.
- 2
- If you are restoring this volume in another cluster with the same driver, make sure that you set the snapshot.storage.kubernetes.io/is-default-class parameter to false instead of setting it to true. Otherwise, the restore will partially fail.
- 3
- OADP supports the Retain and Delete deletion policy types for CSI and Data Mover backup and restore.
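A minimal sketch of such a VolumeSnapshotClass CR, keyed to the callouts above; the class name and CSI driver are placeholders for your environment.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: <volume_snapshot_class_name>
  labels:
    velero.io/csi-volumesnapshot-class: "true"              # 1
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true" # 2
driver: <csi_driver>
deletionPolicy: <deletion_policy_type>                      # 3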
Next steps
- You can now create a Backup CR.
4.17.4. Backing up applications with File System Backup: Kopia or Restic
You can use OADP to back up and restore Kubernetes volumes attached to pods from the file system of the volumes. This process is called File System Backup (FSB) or Pod Volume Backup (PVB). It is accomplished by using modules from the open source backup tools Restic or Kopia.
If your cloud provider does not support snapshots or if your applications are on NFS data volumes, you can create backups by using FSB.
FSB integration with OADP provides a solution for backing up and restoring almost any type of Kubernetes volumes. This integration is an additional capability of OADP and is not a replacement for existing functionality.
You back up Kubernetes resources, internal images, and persistent volumes with Kopia or Restic by editing the Backup custom resource (CR).
You do not need to specify a snapshot location in the DataProtectionApplication CR.
In OADP version 1.3 and later, you can use either Kopia or Restic for backing up applications.
For the Built-in DataMover, you must use Kopia.
In OADP version 1.2 and earlier, you can only use Restic for backing up applications.
FSB does not support backing up hostPath volumes. For more information, see FSB limitations.
…/.snapshot: read-only file system error
The …/.snapshot directory is a snapshot copy directory, which is used by several NFS servers. This directory has read-only access by default, so Velero cannot restore to this directory.
Do not give Velero write access to the .snapshot directory, and disable client access to this directory.
Prerequisites
- You must install the OpenShift API for Data Protection (OADP) Operator.
- You must not disable the default nodeAgent installation by setting spec.configuration.nodeAgent.enable to false in the DataProtectionApplication CR.
- You must select Kopia or Restic as the uploader by setting spec.configuration.nodeAgent.uploaderType to kopia or restic in the DataProtectionApplication CR.
- The DataProtectionApplication CR must be in a Ready state.
Procedure
Create the Backup CR, as in the example sketched after the following callout description:
- 1
- In OADP version 1.2 and later, add the defaultVolumesToFsBackup: true setting within the spec block. In OADP version 1.1, add defaultVolumesToRestic: true.
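A minimal sketch of such a Backup CR, assuming OADP 1.2 or later; the backup name, namespace, and storage location are placeholders.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: <backup>
  namespace: openshift-adp
spec:
  defaultVolumesToFsBackup: true # 1
  includedNamespaces:
  - <namespace>
  storageLocation: <backup_storage_location>
  ttl: 720h0m0s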
4.17.5. Creating backup hooks
When performing a backup, you can specify one or more commands to execute in a container within a pod, based on the pod being backed up.
The commands can be configured to run before any custom action processing (Pre hooks), or after all custom actions have been completed and any additional items specified by the custom action have been backed up (Post hooks).
You create backup hooks to run commands in a container in a pod by editing the Backup custom resource (CR).
Procedure
Add a hook to the spec.hooks block of the Backup CR, as in the example sketched after the following callout descriptions:
- 1
- Optional: You can specify namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces.
- 2
- Optional: You can specify namespaces to which the hook does not apply.
- 3
- Currently, pods are the only supported resource that hooks can apply to.
- 4
- Optional: You can specify resources to which the hook does not apply.
- 5
- Optional: This hook only applies to objects matching the label. If this value is not specified, the hook applies to all objects.
- 6
- Array of hooks to run before the backup.
- 7
- Optional: If the container is not specified, the command runs in the first container in the pod.
- 8
- This is the entry point for the init container being added.
- 9
- Allowed values for error handling are Fail and Continue. The default is Fail.
- 10
- Optional: How long to wait for the commands to run. The default is 30s.
- 11
- This block defines an array of hooks to run after the backup, with the same parameters as the pre-backup hooks.
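A sketch of the spec.hooks block keyed to the callouts above; the hook name, namespaces, label, and sample command are placeholders rather than the exact values from the original example.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: <backup>
  namespace: openshift-adp
spec:
  hooks:
    resources:
    - name: <hook_name>
      includedNamespaces:
      - <namespace>              # 1
      excludedNamespaces:
      - <namespace>              # 2
      includedResources:
      - pods                     # 3
      excludedResources: []      # 4
      labelSelector:             # 5
        matchLabels:
          app: velero
      pre:                       # 6
      - exec:
          container: <container> # 7
          command:
          - /bin/uname           # 8
          - -a
          onError: Fail          # 9
          timeout: 30s           # 10
      post:                      # 11
      - exec:
          container: <container>
          command:
          - /bin/uname
          - -a
          onError: Fail
          timeout: 30s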
4.17.6. Scheduling backups using Schedule CR
The schedule operation allows you to create a backup of your data at a particular time, specified by a Cron expression.
You schedule backups by creating a Schedule custom resource (CR) instead of a Backup CR.
Leave enough time in your backup schedule for a backup to finish before another backup is created.
For example, if a backup of a namespace typically takes 10 minutes, do not schedule backups more frequently than every 15 minutes.
Prerequisites
- You must install the OpenShift API for Data Protection (OADP) Operator.
- The DataProtectionApplication CR must be in a Ready state.
Procedure
Retrieve the backupStorageLocations CRs:
$ oc get backupStorageLocations -n openshift-adp
Example output
NAMESPACE       NAME              PHASE       LAST VALIDATED   AGE   DEFAULT
openshift-adp   velero-sample-1   Available   11s              31m
Create a Schedule CR, as in the example sketched after the following callout descriptions:
Note
To schedule a backup at specific intervals, enter the <duration_in_minutes> in the following format:
schedule: "*/10 * * * *"
Enter the minutes value between quotation marks (" ").
- 1
- cron expression to schedule the backup, for example, 0 7 * * * to perform a backup every day at 7:00.
- 2
- Array of namespaces to back up.
- 3
- Name of the backupStorageLocations CR.
- 4
- Optional: In OADP version 1.2 and later, add the defaultVolumesToFsBackup: true key-value pair to your configuration when performing backups of volumes with Restic. In OADP version 1.1, add the defaultVolumesToRestic: true key-value pair when you back up volumes with Restic.
- 5
- The ttl field defines the retention time of the created backup and the backed up data. For example, if you are using Restic as the backup tool, the backed up data items and data contents of the persistent volumes (PVs) are stored until the backup expires. Storing this data consumes space in the target backup location, and frequent backups consume additional storage because new backups are created before other unexpired backups have timed out.
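A minimal sketch of such a Schedule CR, keyed to the callouts above; the template block accepts the same fields as a Backup CR spec, and the names shown are placeholders.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: <schedule>
  namespace: openshift-adp
spec:
  schedule: 0 7 * * *                  # 1
  template:
    hooks: {}
    includedNamespaces:
    - <namespace>                      # 2
    storageLocation: <velero_sample_1> # 3
    defaultVolumesToFsBackup: true     # 4
    ttl: 720h0m0s                      # 5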
Verify that the status of the Schedule CR is Completed after the scheduled backup runs:
$ oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}'
4.17.7. Deleting backups
You can delete a backup by creating the DeleteBackupRequest custom resource (CR) or by running the velero backup delete command as explained in the following procedures.
The volume backup artifacts are deleted at different times depending on the backup method:
- Restic: The artifacts are deleted in the next full maintenance cycle, after the backup is deleted.
- Container Storage Interface (CSI): The artifacts are deleted immediately when the backup is deleted.
- Kopia: The artifacts are deleted after three full maintenance cycles of the Kopia repository, after the backup is deleted.
4.17.7.1. Deleting a backup by creating a DeleteBackupRequest CR
You can delete a backup by creating a DeleteBackupRequest custom resource (CR).
Prerequisites
- You have run a backup of your application.
Procedure
Create a DeleteBackupRequest CR manifest file, as in the example sketched after the following callout description:
- 1
- Specify the name of the backup.
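A minimal sketch of such a DeleteBackupRequest CR; only the backup name needs to be set, and the CR name is a placeholder.
apiVersion: velero.io/v1
kind: DeleteBackupRequest
metadata:
  name: deletebackuprequest
  namespace: openshift-adp
spec:
  backupName: <backup> # 1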
Apply the DeleteBackupRequest CR to delete the backup:
$ oc apply -f <deletebackuprequest_cr_filename>
4.17.7.2. Deleting a backup by using the Velero CLI
You can delete a backup by using the Velero CLI.
Prerequisites
- You have run a backup of your application.
- You downloaded the Velero CLI and can access the Velero binary in your cluster.
Procedure
To delete the backup, run the following Velero command:
$ velero backup delete <backup_name> -n openshift-adp
- Specify the name of the backup.
4.17.7.3. About Kopia repository maintenance
There are two types of Kopia repository maintenance:
- Quick maintenance
- Runs every hour to keep the number of index blobs (n) low. A high number of indexes negatively affects the performance of Kopia operations.
- Does not delete any metadata from the repository without ensuring that another copy of the same metadata exists.
- Full maintenance
- Runs every 24 hours to perform garbage collection of repository contents that are no longer needed.
- snapshot-gc, a full maintenance task, finds all files and directory listings that are no longer accessible from snapshot manifests and marks them as deleted.
4.17.7.3.1. Kopia maintenance in OADP
The repo-maintain-job jobs are executed in the namespace where OADP is installed, as shown in the following example:
pod/repo-maintain-job-173...2527-2nbls 0/1 Completed 0 168m
pod/repo-maintain-job-173....536-fl9tm 0/1 Completed 0 108m
pod/repo-maintain-job-173...2545-55ggx 0/1 Completed 0 48m
You can check the logs of the repo-maintain-job for more details about the cleanup and the removal of artifacts in the backup object storage. You can find a note, as shown in the following example, in the repo-maintain-job when the next full cycle maintenance is due:
not due for full maintenance cycle until 2024-00-00 18:29:4
Three successful executions of a full maintenance cycle are required for the objects to be deleted from the backup object storage. This means you can expect up to 72 hours for all the artifacts in the backup object storage to be deleted.
4.17.7.4. Deleting a backup repository
After you delete the backup, and after the Kopia repository maintenance cycles to delete the related artifacts are complete, the backup is no longer referenced by any metadata or manifest objects. You can then delete the backuprepository custom resource (CR) to complete the backup deletion process.
Prerequisites
- You have deleted the backup of your application.
- You have waited up to 72 hours after the backup is deleted. This time frame allows Kopia to run the repository maintenance cycles.
Procedure
To get the name of the backup repository CR for a backup, run the following command:
$ oc get backuprepositories.velero.io -n openshift-adp
To delete the backup repository CR, run the following command:
$ oc delete backuprepository <backup_repository_name> -n openshift-adp
- Specify the name of the backup repository from the earlier step.
4.17.8. About Kopia
Kopia is a fast and secure open-source backup and restore tool that allows you to create encrypted snapshots of your data and save the snapshots to remote or cloud storage of your choice.
Kopia supports network and local storage locations, and many cloud or remote storage locations, including:
- Amazon S3 and any cloud storage that is compatible with S3
- Azure Blob Storage
- Google Cloud Storage platform
Kopia uses content-addressable storage for snapshots:
- Snapshots are always incremental; data that is already included in previous snapshots is not re-uploaded to the repository. A file is only uploaded to the repository again if it is modified.
- Stored data is deduplicated; if multiple copies of the same file exist, only one of them is stored.
- If files are moved or renamed, Kopia can recognize that they have the same content and does not upload them again.
4.17.8.1. OADP integration with Kopia
OADP 1.3 supports Kopia as the backup mechanism for pod volume backup in addition to Restic. You must choose one or the other at installation by setting the uploaderType field in the DataProtectionApplication custom resource (CR). The possible values are restic or kopia. If you do not specify an uploaderType, OADP 1.3 defaults to using Kopia as the backup mechanism. The data is written to and read from a unified repository.
Using the Kopia client to modify the Kopia backup repositories is not supported and can affect the integrity of Kopia backups. OADP does not support directly connecting to the Kopia repository and can offer support only on a best-effort basis.
The following example shows a DataProtectionApplication CR configured for using Kopia:
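A minimal sketch of what such a DataProtectionApplication CR might look like; the DPA name and plugin list are placeholders, and backup locations are omitted for brevity.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
    velero:
      defaultPlugins:
      - openshift
      - aws
      - csi
  # backupLocations and other fields omitted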
4.18. OADP restoring
4.18.1. Restoring applications
You restore application backups by creating a Restore custom resource (CR). See Creating a Restore CR.
You can create restore hooks to run commands in a container in a pod by editing the Restore CR. See Creating restore hooks.
4.18.1.1. Previewing resources before running backup and restore
OADP backs up application resources based on the type, namespace, or label. This means that you can view the resources after the backup is complete. Similarly, you can view the restored objects based on the namespace, persistent volume (PV), or label after a restore operation is complete. To preview the resources in advance, you can do a dry run of the backup and restore operations.
Prerequisites
- You have installed the OADP Operator.
Procedure
To preview the resources included in the backup before running the actual backup, run the following command:
$ velero backup create <backup-name> --snapshot-volumes false
- Specify the value of the --snapshot-volumes parameter as false.
To know more details about the backup resources, run the following command:
$ velero describe backup <backup_name> --details
- Specify the name of the backup.
To preview the resources included in the restore before running the actual restore, run the following command:
$ velero restore create --from-backup <backup-name>
- Specify the name of the backup created to review the backup resources.
Important
The velero restore create command creates restore resources in the cluster. After you review the resources, you must delete the resources that were created as part of the restore.
To know more details about the restore resources, run the following command:
$ velero describe restore <restore_name> --details
- Specify the name of the restore.
4.18.1.2. Creating a Restore CR
You restore a Backup custom resource (CR) by creating a Restore CR.
When you restore a stateful application that uses the azurefile-csi storage class, the restore operation remains in the Finalizing phase.
Prerequisites
- You must install the OpenShift API for Data Protection (OADP) Operator.
- The DataProtectionApplication CR must be in a Ready state.
- You must have a Velero Backup CR.
- The persistent volume (PV) capacity must match the requested size at backup time. Adjust the requested size if needed.
Procedure
Create a Restore CR, as in the example sketched after the following callout descriptions:
- 1
- Name of the Backup CR.
- 2
- Optional: Specify an array of resources to include in the restore process. Resources might be shortcuts (for example, po for pods) or fully-qualified. If unspecified, all resources are included.
- 3
- Optional: The restorePVs parameter can be set to false to turn off restore of PersistentVolumes from VolumeSnapshot of Container Storage Interface (CSI) snapshots, or from native snapshots when VolumeSnapshotLocation is configured.
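A minimal sketch of such a Restore CR, keyed to the callouts above; the restore and backup names are placeholders.
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: <restore>
  namespace: openshift-adp
spec:
  backupName: <backup>  # 1
  includedResources: [] # 2
  restorePVs: true      # 3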
Verify that the status of the Restore CR is Completed by entering the following command:
$ oc get restores.velero.io -n openshift-adp <restore> -o jsonpath='{.status.phase}'
Verify that the backup resources have been restored by entering the following command:
$ oc get all -n <namespace>
- Namespace that you backed up.
If you restore DeploymentConfig objects with volumes or if you use post-restore hooks, run the dc-post-restore.sh cleanup script (named dc-restic-post-restore.sh in earlier OADP releases) by entering the following command:
$ bash dc-post-restore.sh
Note
During the restore process, the OADP Velero plug-ins scale down the DeploymentConfig objects and restore the pods as standalone pods. This is done to prevent the cluster from deleting the restored DeploymentConfig pods immediately on restore and to allow the restore and post-restore hooks to complete their actions on the restored pods. The cleanup script removes these disconnected pods and scales any DeploymentConfig objects back up to the appropriate number of replicas.
Example 4.1. dc-post-restore.sh cleanup script
4.18.1.3. Creating restore hooks
You create restore hooks to run commands in a container in a pod by editing the Restore custom resource (CR).
You can create two types of restore hooks:
- An init hook adds an init container to a pod to perform setup tasks before the application container starts.
If you restore a Restic backup, the restic-wait init container is added before the restore hook init container.
- An exec hook runs commands or scripts in a container of a restored pod.
Procedure
Add a hook to the spec.hooks block of the Restore CR, as in the example sketched after the following callout descriptions:
- 1
- Optional: Array of namespaces to which the hook applies. If this value is not specified, the hook applies to all namespaces.
- 2
- Currently, pods are the only supported resource that hooks can apply to.
- 3
- Optional: This hook only applies to objects matching the label selector.
- 4
- Optional: Timeout specifies the maximum length of time Velero waits for initContainers to complete.
- 5
- Optional: If the container is not specified, the command runs in the first container in the pod.
- 6
- This is the entrypoint for the init container being added.
- 7
- Optional: How long to wait for a container to become ready. This should be long enough for the container to start and for any preceding hooks in the same container to complete. If not set, the restore process waits indefinitely.
- 8
- Optional: How long to wait for the commands to run. The default is 30s.
- 9
- Allowed values for error handling are Fail and Continue:
- Continue: Only command failures are logged.
- Fail: No more restore hooks run in any container in any pod. The status of the Restore CR will be PartiallyFailed.
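A sketch of the spec.hooks block of a Restore CR, keyed to the callouts above; the image, commands, and timeout values are illustrative placeholders, not the exact values from the original example.
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: <restore>
  namespace: openshift-adp
spec:
  hooks:
    resources:
    - name: <hook_name>
      includedNamespaces:
      - <namespace>                 # 1
      includedResources:
      - pods                        # 2
      excludedResources: []
      labelSelector:                # 3
        matchLabels:
          app: velero
      postHooks:
      - init:
          timeout: 600s             # 4
          initContainers:
          - name: restore-hook-init
            image: alpine:latest
            command:
            - /bin/ash
            - -c
            - touch /restores/ready
      - exec:
          container: <container>    # 5
          command:
          - /bin/bash               # 6
          - -c
          - id
          waitTimeout: 5m           # 7
          execTimeout: 1m           # 8
          onError: Continue         # 9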
During a File System Backup (FSB) restore operation, a Deployment resource referencing an ImageStream is not restored properly. The restored pod that runs the FSB and the postHook is terminated prematurely.
This happens because, during the restore operation, the OpenShift controller updates the spec.template.spec.containers[0].image field in the Deployment resource with an updated ImageStreamTag hash. The update triggers the rollout of a new pod, terminating the pod on which Velero runs the FSB and the post-restore hook. For more information about image stream triggers, see "Triggering updates on image stream changes".
The workaround for this behavior is a two-step restore process:
First, perform a restore excluding the Deployment resources, for example:
$ velero restore create <RESTORE_NAME> \
    --from-backup <BACKUP_NAME> \
    --exclude-resources=deployment.apps
After the first restore is successful, perform a second restore by including these resources, for example:
$ velero restore create <RESTORE_NAME> \
    --from-backup <BACKUP_NAME> \
    --include-resources=deployment.apps
4.19. OADP and ROSA
4.19.1. Backing up applications on ROSA clusters using OADP
You can use OpenShift API for Data Protection (OADP) with Red Hat OpenShift Service on AWS (ROSA) clusters to back up and restore application data.
ROSA is a fully-managed, turnkey application platform that allows you to deliver value to your customers by building and deploying applications.
ROSA provides seamless integration with a wide range of Amazon Web Services (AWS) compute, database, analytics, machine learning, networking, mobile, and other services to speed up the building and delivery of differentiating experiences to your customers.
You can subscribe to the service directly from your AWS account.
After you create your clusters, you can operate your clusters with the OpenShift Container Platform web console or through Red Hat OpenShift Cluster Manager. You can also use ROSA with OpenShift APIs and command-line interface (CLI) tools.
For additional information about ROSA installation, see Installing Red Hat OpenShift Service on AWS (ROSA) interactive walkthrough.
Before installing OpenShift API for Data Protection (OADP), you must set up role and policy credentials for OADP so that it can use the Amazon Web Services API.
This process is performed in the following two stages:
- Prepare AWS credentials
- Install the OADP Operator and give it an IAM role
4.19.1.1. Preparing AWS credentials for OADP
An Amazon Web Services account must be prepared and configured to accept an OpenShift API for Data Protection (OADP) installation.
Procedure
Create the following environment variables by running the following commands:
Important
Change the cluster name to match your ROSA cluster, and ensure that you are logged in to the cluster as an administrator. Ensure that all fields are output correctly before continuing.
- 1
- Replace my-cluster with your ROSA cluster name.
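The following sketch shows one way to set the variables that the rest of this procedure references (ROSA_CLUSTER_ID, REGION, OIDC_ENDPOINT, AWS_ACCOUNT_ID, CLUSTER_VERSION, ROLE_NAME, and SCRATCH); the exact commands and the role name format are assumptions, not the original export block.
$ export CLUSTER_NAME=my-cluster  # 1
$ export ROSA_CLUSTER_ID=$(rosa describe cluster -c ${CLUSTER_NAME} --output json | jq -r .id)
$ export REGION=$(rosa describe cluster -c ${CLUSTER_NAME} --output json | jq -r .region.id)
$ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
$ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
$ export CLUSTER_VERSION=$(rosa describe cluster -c ${CLUSTER_NAME} --output json | jq -r .version.raw_id | cut -f -2 -d '.')
$ export ROLE_NAME="${CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials"
$ export SCRATCH="/tmp/${CLUSTER_NAME}/oadp"
$ mkdir -p ${SCRATCH}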
On the AWS account, create an IAM policy to allow access to AWS S3:
Check to see if the policy exists by running the following command:
$ POLICY_ARN=$(aws iam list-policies --query "Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}" --output text)
- Replace RosaOadpVer1 with your policy name.
Enter the following command to create the policy JSON file and then create the policy in ROSA:
Note
If the policy ARN is not found, the command creates the policy. If the policy ARN already exists, the if statement intentionally skips the policy creation.
- 1
- SCRATCH is a name for a temporary directory created for the environment variables.
View the policy ARN by running the following command:
$ echo ${POLICY_ARN}
Create an IAM role trust policy for the cluster:
Create the trust policy file by running the following command:
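A sketch of the trust policy, assuming the role is assumed through the cluster's OIDC provider by the OADP service accounts; the service account names are assumptions.
$ cat <<EOF > ${SCRATCH}/trust-policy.json
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
        },
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            "StringEquals": {
                "${OIDC_ENDPOINT}:sub": [
                    "system:serviceaccount:openshift-adp:openshift-adp-controller-manager",
                    "system:serviceaccount:openshift-adp:velero"
                ]
            }
        }
    }]
}
EOF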
Create the role by running the following command:
$ ROLE_ARN=$(aws iam create-role --role-name \
    "${ROLE_NAME}" \
    --assume-role-policy-document file://${SCRATCH}/trust-policy.json \
    --tags Key=rosa_cluster_id,Value=${ROSA_CLUSTER_ID} Key=rosa_openshift_version,Value=${CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=openshift-oadp \
    --query Role.Arn --output text)
View the role ARN by running the following command:
$ echo ${ROLE_ARN}
Attach the IAM policy to the IAM role by running the following command:
$ aws iam attach-role-policy --role-name "${ROLE_NAME}" \
    --policy-arn ${POLICY_ARN}
4.19.1.2. Installing the OADP Operator and providing the IAM role
AWS Security Token Service (AWS STS) is a global web service that provides short-term credentials for IAM or federated users. ROSA with STS is the recommended credential mode for ROSA clusters. This document describes how to install OpenShift API for Data Protection (OADP) on ROSA with AWS STS.
Restic is unsupported.
Kopia file system backup (FSB) is supported when backing up file systems that do not have Container Storage Interface (CSI) snapshotting support.
Example file systems include the following:
- Amazon Elastic File System (EFS)
- Network File System (NFS)
-
emptyDirvolumes - Local volumes
For backing up volumes, OADP on ROSA with AWS STS supports only native snapshots and Container Storage Interface (CSI) snapshots.
In an Amazon ROSA cluster that uses STS authentication, restoring backed-up data in a different AWS region is not supported.
The Data Mover feature is not currently supported in ROSA clusters. You can use native AWS S3 tools for moving data.
Prerequisites
- An OpenShift Container Platform ROSA cluster with the required access and tokens. For instructions, see the previous procedure Preparing AWS credentials for OADP. If you plan to use two different clusters for backing up and restoring, you must prepare AWS credentials, including ROLE_ARN, for each cluster.
Procedure
Create an OpenShift Container Platform secret from your AWS token file by entering the following commands:
Create the credentials file:
- 1
- Replace <aws_region> with the AWS region to use for the STS endpoint.
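A sketch of the credentials file, based on the example output shown later in this section; the region value corresponds to callout 1 above.
$ cat <<EOF > ${SCRATCH}/credentials
[default]
sts_regional_endpoints = regional
role_arn = ${ROLE_ARN}
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
region = <aws_region>
EOF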
Create a namespace for OADP:
$ oc create namespace openshift-adp
Create the OpenShift Container Platform secret:
$ oc -n openshift-adp create secret generic cloud-credentials \
    --from-file=${SCRATCH}/credentials
Note
In OpenShift Container Platform versions 4.14 and later, the OADP Operator supports a new standardized STS workflow through the Operator Lifecycle Manager (OLM) and Cloud Credentials Operator (CCO). In this workflow, you do not need to create the preceding secret; you only need to supply the role ARN during the installation of OLM-managed Operators by using the OpenShift Container Platform web console. For more information, see Installing from OperatorHub using the web console.
The preceding secret is created automatically by CCO.
Install the OADP Operator:
- In the OpenShift Container Platform web console, browse to Operators → OperatorHub.
- Search for the OADP Operator.
- In the role_ARN field, paste the role ARN that you created previously, and then click Install.
Create AWS cloud storage using your AWS credentials by entering the following command:
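A sketch of creating a CloudStorage CR for an S3 bucket, under the assumption that the bucket is named after the cluster; field names follow the OADP CloudStorage API, which is a Technology Preview feature.
$ cat << EOF | oc create -f -
apiVersion: oadp.openshift.io/v1alpha1
kind: CloudStorage
metadata:
  name: ${CLUSTER_NAME}-oadp
  namespace: openshift-adp
spec:
  creationSecret:
    key: credentials
    name: cloud-credentials
  enableSharedConfig: true
  name: ${CLUSTER_NAME}-oadp
  provider: aws
  region: ${REGION}
EOF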
Check your application’s default storage class by entering the following command:
$ oc get pvc -n <namespace>
Example output
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
applog   Bound    pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8   1Gi        RWO            gp3-csi        4d19h
mysql    Bound    pvc-16b8e009-a20a-4379-accc-bc81fedd0621   1Gi        RWO            gp3-csi        4d19h
Get the storage class by running the following command:
$ oc get storageclass
Example output
NAME                PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2                 kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   true                   4d21h
gp2-csi             ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   4d21h
gp3                 ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   4d21h
gp3-csi (default)   ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   4d21h
Note
The following storage classes will work:
- gp3-csi
- gp2-csi
- gp3
- gp2
If the application or applications that are being backed up are all using persistent volumes (PVs) with Container Storage Interface (CSI), it is advisable to include the CSI plugin in the OADP DPA configuration.
Create the DataProtectionApplication resource to configure the connection to the storage where the backups and volume snapshots are stored:
If you are using only CSI volumes, deploy a Data Protection Application by entering the following command (a sketch follows the callout descriptions):
- 1
- ROSA supports internal image backup. Set this field to false if you do not want to use image backup.
- 2
- See the important note regarding the nodeAgent attribute.
- 3
- The type of uploader. The possible values are restic or kopia. The built-in Data Mover uses Kopia as the default uploader mechanism regardless of the value of the uploaderType field.
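A sketch keyed to the callouts above, assuming the CloudStorage CR created earlier and the notes in this section (node agent disabled, Kopia uploader); adjust names and plugins to your environment.
$ cat << EOF | oc create -f -
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: ${CLUSTER_NAME}-dpa
  namespace: openshift-adp
spec:
  backupImages: true      # 1
  configuration:
    nodeAgent:            # 2
      enable: false
      uploaderType: kopia # 3
    velero:
      defaultPlugins:
      - openshift
      - aws
      - csi
  backupLocations:
  - bucket:
      cloudStorageRef:
        name: ${CLUSTER_NAME}-oadp
      credential:
        key: credentials
        name: cloud-credentials
      prefix: velero
      default: true
      config:
        region: ${REGION}
EOF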
If you are using CSI or non-CSI volumes, deploy a Data Protection Application by entering the following command:
- 1
- ROSA supports internal image backup. Set this field to false if you do not want to use image backup.
- 2
- See the important note regarding the nodeAgent attribute.
- 3
- The credentialsFile field is the mounted location of the bucket credential on the pod.
- 4
- The enableSharedConfig field allows the snapshotLocations to share or reuse the credential defined for the bucket.
- 5
- Use the profile name set in the AWS credentials file.
- 6
- Specify region as your AWS region. This must be the same as the cluster region.
You are now ready to back up and restore OpenShift Container Platform applications, as described in Backing up applications.
The enable parameter of restic is set to false in this configuration, because OADP does not support Restic in ROSA environments.
If you use OADP 1.2, replace this configuration:
nodeAgent:
  enable: false
  uploaderType: restic
with the following configuration:
restic:
  enable: false
If you want to use two different clusters for backing up and restoring, the two clusters must have the same AWS S3 storage names in both the cloud storage CR and the OADP DataProtectionApplication configuration.
4.19.1.3. Updating the IAM role ARN in the OADP Operator subscription
While installing the OADP Operator on a ROSA Security Token Service (STS) cluster, if you provide an incorrect IAM role Amazon Resource Name (ARN), the openshift-adp-controller pod gives an error. The credential requests that are generated contain the wrong IAM role ARN. To update the credential requests object with the correct IAM role ARN, you can edit the OADP Operator subscription and patch the IAM role ARN with the correct value. By editing the OADP Operator subscription, you do not have to uninstall and reinstall OADP to update the IAM role ARN.
Prerequisites
- You have a Red Hat OpenShift Service on AWS STS cluster with the required access and tokens.
- You have installed OADP on the ROSA STS cluster.
Procedure
To verify that the OADP subscription has the wrong IAM role ARN environment variable set, run the following command:
$ oc get sub -o yaml redhat-oadp-operator
Example subscription (a sketch follows the callout description)
- 1
- Verify the value of ROLEARN you want to update.
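A sketch of such a Subscription, showing where the ROLEARN environment variable appears; the channel and source values are placeholders, not the original example.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: redhat-oadp-operator
  namespace: openshift-adp
spec:
  channel: stable-1.x
  config:
    env:
    - name: ROLEARN
      value: <role_arn> # 1
  installPlanApproval: Automatic
  name: redhat-oadp-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace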
Update the ROLEARN field of the subscription with the correct role ARN by running the following command:
$ oc patch subscription redhat-oadp-operator -p '{"spec": {"config": {"env": [{"name": "ROLEARN", "value": "<role_arn>"}]}}}' --type='merge'
where:
<role_arn>
Specifies the IAM role ARN to be updated. For example, arn:aws:iam::160…..6956:role/oadprosa…..8wlf.
Verify that the secret object is updated with the correct role ARN value by running the following command:
$ oc get secret cloud-credentials -o jsonpath='{.data.credentials}' | base64 -d
Example output
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::160.....6956:role/oadprosa.....8wlf
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
Configure the DataProtectionApplication custom resource (CR) manifest file as shown in the following example:
- 1
- Specify the CloudStorage CR.
Create the DataProtectionApplication CR by running the following command:
$ oc create -f <dpa_manifest_file>
Verify that the DataProtectionApplication CR is reconciled and the status is set to "True" by running the following command:
$ oc get dpa -n openshift-adp -o yaml
Example DataProtectionApplication
Verify that the BackupStorageLocation CR is in an available state by running the following command:
$ oc get backupstoragelocations.velero.io -n openshift-adp
Example BackupStorageLocation
NAME       PHASE       LAST VALIDATED   AGE   DEFAULT
ts-dpa-1   Available   3s               6s    true
4.19.1.4. Example: Backing up workload on OADP ROSA STS, with an optional cleanup
4.19.1.4.1. Performing a backup with OADP and ROSA STS
The following example hello-world application has no persistent volumes (PVs) attached. Perform a backup with OpenShift API for Data Protection (OADP) with Red Hat OpenShift Service on AWS (ROSA) STS.
Either Data Protection Application (DPA) configuration will work.
Create a workload to back up by running the following commands:
$ oc create namespace hello-world
$ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
Expose the route by running the following command:
$ oc expose service/hello-openshift -n hello-world
Check that the application is working by running the following command:
$ curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`
Example output
Hello OpenShift!
Back up the workload by running the following command:
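A sketch of creating the Backup CR for the hello-world namespace; the storage location name is an assumption based on the DPA name used earlier in this procedure.
$ cat << EOF | oc create -f -
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: hello-world
  namespace: openshift-adp
spec:
  includedNamespaces:
  - hello-world
  storageLocation: ${CLUSTER_NAME}-dpa-1
  ttl: 720h0m0s
EOF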
Wait until the backup is completed and then run the following command:
watch "oc -n openshift-adp get backup hello-world -o json | jq .status"
$ watch "oc -n openshift-adp get backup hello-world -o json | jq .status"Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Delete the demo workload by running the following command:
$ oc delete ns hello-world
Restore the workload from the backup by running the following command:
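A sketch of creating the corresponding Restore CR from the hello-world backup.
$ cat << EOF | oc create -f -
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: hello-world
  namespace: openshift-adp
spec:
  backupName: hello-world
EOF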
Wait for the Restore to finish by running the following command:
watch "oc -n openshift-adp get restore hello-world -o json | jq .status"
$ watch "oc -n openshift-adp get restore hello-world -o json | jq .status"Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Check that the workload is restored by running the following command:
$ oc -n hello-world get pods
Example output
NAME                              READY   STATUS    RESTARTS   AGE
hello-openshift-9f885f7c6-kdjpj   1/1     Running   0          90s
Check the JSONPath by running the following command:
$ curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`
Example output
Hello OpenShift!
For troubleshooting tips, see the OADP team’s troubleshooting documentation.
4.19.1.4.2. Cleaning up a cluster after a backup with OADP and ROSA STS
If you need to uninstall the OpenShift API for Data Protection (OADP) Operator together with the backups and the S3 bucket from this example, follow these instructions.
Procedure
Delete the workload by running the following command:
$ oc delete ns hello-world
Delete the Data Protection Application (DPA) by running the following command:
$ oc -n openshift-adp delete dpa ${CLUSTER_NAME}-dpa
Delete the cloud storage by running the following command:
$ oc -n openshift-adp delete cloudstorage ${CLUSTER_NAME}-oadp
Warning
If this command hangs, you might need to delete the finalizer by running the following command:
$ oc -n openshift-adp patch cloudstorage ${CLUSTER_NAME}-oadp -p '{"metadata":{"finalizers":null}}' --type=merge
If the Operator is no longer required, remove it by running the following command:
$ oc -n openshift-adp delete subscription oadp-operator
Remove the namespace for the Operator:
$ oc delete ns openshift-adp
If the backup and restore resources are no longer required, remove them from the cluster by running the following command:
$ oc delete backups.velero.io hello-world
To delete the backup, restore, and remote objects in AWS S3, run the following command:
$ velero backup delete hello-world
If you no longer need the custom resource definitions (CRDs), remove them from the cluster by running the following command:
$ for CRD in `oc get crds | grep velero | awk '{print $1}'`; do oc delete crd $CRD; done
Delete the AWS S3 bucket by running the following commands:
$ aws s3 rm s3://${CLUSTER_NAME}-oadp --recursive
$ aws s3api delete-bucket --bucket ${CLUSTER_NAME}-oadp
Detach the policy from the role by running the following command:
$ aws iam detach-role-policy --role-name "${ROLE_NAME}" --policy-arn "${POLICY_ARN}"
Delete the role by running the following command:
$ aws iam delete-role --role-name "${ROLE_NAME}"
4.20. OADP and AWS STS
4.20.1. Backing up applications on AWS STS using OADP
You install the OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) by installing the OADP Operator. The Operator installs Velero 1.14.
Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.
You configure AWS for Velero, create a default Secret, and then install the Data Protection Application. For more details, see Installing the OADP Operator.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details.
You can install OADP on an AWS Security Token Service (STS) (AWS STS) cluster manually. Amazon AWS provides AWS STS as a web service that enables you to request temporary, limited-privilege credentials for users. You use STS to provide trusted users with temporary access to resources via API calls, your AWS console, or the AWS command-line interface (CLI).
Before installing OpenShift API for Data Protection (OADP), you must set up role and policy credentials for OADP so that it can use the Amazon Web Services API.
This process is performed in the following two stages:
- Prepare AWS credentials.
- Install the OADP Operator and give it an IAM role.
4.20.1.1. Preparing AWS STS credentials for OADP
An Amazon Web Services account must be prepared and configured to accept an OpenShift API for Data Protection (OADP) installation. Prepare the AWS credentials by using the following procedure.
Procedure
Define the CLUSTER_NAME environment variable by running the following command:
$ export CLUSTER_NAME=<AWS_cluster_name>
- 1
- The variable can be set to any value.
Retrieve all of the details of the cluster, such as the AWS_ACCOUNT_ID and OIDC_ENDPOINT, by running the following command:
Create a temporary directory to store all of the files by running the following command:
$ export SCRATCH="/tmp/${CLUSTER_NAME}/oadp"
$ mkdir -p ${SCRATCH}
Display all of the gathered details by running the following command:
echo "Cluster ID: ${AWS_CLUSTER_ID}, Region: ${REGION}, OIDC Endpoint:$ echo "Cluster ID: ${AWS_CLUSTER_ID}, Region: ${REGION}, OIDC Endpoint: ${OIDC_ENDPOINT}, AWS Account ID: ${AWS_ACCOUNT_ID}"Copy to Clipboard Copied! Toggle word wrap Toggle overflow On the AWS account, create an IAM policy to allow access to AWS S3:
Check to see if the policy exists by running the following commands:
$ export POLICY_NAME="OadpVer1"
- 1
- The variable can be set to any value.
$ POLICY_ARN=$(aws iam list-policies --query "Policies[?PolicyName=='$POLICY_NAME'].{ARN:Arn}" --output text)
NoteIf the policy ARN is not found, the command creates the policy. If the policy ARN already exists, the
ifstatement intentionally skips the policy creation.Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
SCRATCHis a name for a temporary directory created for storing the files.
View the policy ARN by running the following command:
echo ${POLICY_ARN}$ echo ${POLICY_ARN}Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Create an IAM role trust policy for the cluster:
Create the trust policy file by running the following command:
Create an IAM role trust policy for the cluster by running the following command:
$ ROLE_ARN=$(aws iam create-role --role-name \
    "${ROLE_NAME}" \
    --assume-role-policy-document file://${SCRATCH}/trust-policy.json \
    --tags Key=cluster_id,Value=${AWS_CLUSTER_ID} Key=openshift_version,Value=${CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --query Role.Arn --output text)
View the role ARN by running the following command:
$ echo ${ROLE_ARN}
Attach the IAM policy to the IAM role by running the following command:
$ aws iam attach-role-policy --role-name "${ROLE_NAME}" --policy-arn ${POLICY_ARN}
4.20.1.1.1. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the values in the spec.configuration.velero.podConfig.resourceAllocations block of the DataProtectionApplication CR manifest, as in the following example:
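A sketch of the relevant block; the CPU and memory values shown are illustrative examples, not required values.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
spec:
  configuration:
    velero:
      podConfig:
        resourceAllocations:
          limits:
            cpu: "1"
            memory: 1024Mi
          requests:
            cpu: 200m
            memory: 256Mi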
Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.
Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.
4.20.1.2. Installing the OADP Operator and providing the IAM role
AWS Security Token Service (AWS STS) is a global web service that provides short-term credentials for IAM or federated users. This document describes how to install OpenShift API for Data Protection (OADP) on an AWS STS cluster manually.
Restic and Kopia are not supported in the OADP AWS STS environment. Verify that the Restic and Kopia node agent is disabled. For backing up volumes, OADP on AWS STS supports only native snapshots and Container Storage Interface (CSI) snapshots.
In an AWS cluster that uses STS authentication, restoring backed-up data in a different AWS region is not supported.
The Data Mover feature is not currently supported in AWS STS clusters. You can use native AWS S3 tools for moving data.
Prerequisites
- An OpenShift Container Platform AWS STS cluster with the required access and tokens. For instructions, see the previous procedure Preparing AWS STS credentials for OADP. If you plan to use two different clusters for backing up and restoring, you must prepare AWS credentials, including ROLE_ARN, for each cluster.
Procedure
Create an OpenShift Container Platform secret from your AWS token file by entering the following commands:
Create the credentials file:
$ cat <<EOF > ${SCRATCH}/credentials
[default]
role_arn = ${ROLE_ARN}
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
EOF
Create a namespace for OADP:
$ oc create namespace openshift-adp
Create the OpenShift Container Platform secret:
$ oc -n openshift-adp create secret generic cloud-credentials \
    --from-file=${SCRATCH}/credentials
Note
In OpenShift Container Platform versions 4.14 and later, the OADP Operator supports a new standardized STS workflow through the Operator Lifecycle Manager (OLM) and Cloud Credentials Operator (CCO). In this workflow, you do not need to create the preceding secret; you only need to supply the role ARN during the installation of OLM-managed Operators by using the OpenShift Container Platform web console. For more information, see Installing from OperatorHub using the web console.
The preceding secret is created automatically by CCO.
Install the OADP Operator:
- In the OpenShift Container Platform web console, browse to Operators → OperatorHub.
- Search for the OADP Operator.
- In the role_ARN field, paste the role ARN that you created previously, and then click Install.
Create AWS cloud storage using your AWS credentials by entering the following command:
Check your application’s default storage class by entering the following command:
$ oc get pvc -n <namespace>
Example output
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
applog   Bound    pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8   1Gi        RWO            gp3-csi        4d19h
mysql    Bound    pvc-16b8e009-a20a-4379-accc-bc81fedd0621   1Gi        RWO            gp3-csi        4d19h
Get the storage class by running the following command:
$ oc get storageclass
Example output
NAME                PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2                 kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   true                   4d21h
gp2-csi             ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   4d21h
gp3                 ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   4d21h
gp3-csi (default)   ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   4d21h
Note
The following storage classes will work:
- gp3-csi
- gp2-csi
- gp3
- gp2
If the application or applications that are being backed up are all using persistent volumes (PVs) with Container Storage Interface (CSI), it is advisable to include the CSI plugin in the OADP DPA configuration.
Create the DataProtectionApplication resource to configure the connection to the storage where the backups and volume snapshots are stored:
If you are using only CSI volumes, deploy a Data Protection Application by entering the following command:
- 1
- Set this field to false if you do not want to use image backup.
If you are using CSI or non-CSI volumes, deploy a Data Protection Application by entering the following command:
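Again, the original manifest is not reproduced here. The following sketch extends the previous one with snapshot locations and the node agent; the markers # 1 to # 6 correspond to the notes that follow, and the credentialsFile mount path, profile, and ${REGION} value are assumptions to adapt to your cluster:

$ cat << EOF | oc create -f -
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: ${CLUSTER_NAME}-dpa
  namespace: openshift-adp
spec:
  backupImages: true    # 1
  backupLocations:
  - bucket:
      cloudStorageRef:
        name: ${CLUSTER_NAME}-oadp
      credential:
        key: credentials
        name: cloud-credentials
      default: true
      prefix: velero
  snapshotLocations:
  - velero:
      provider: aws
      config:
        credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials    # 3
        enableSharedConfig: "true"    # 4
        profile: default    # 5
        region: ${REGION}    # 6
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws
      - csi
    nodeAgent:    # 2
      enable: false
      uploaderType: kopia
EOF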
1. Set this field to false if you do not want to use image backup.
2. See the important note regarding the nodeAgent attribute.
3. The credentialsFile field is the mounted location of the bucket credential on the pod.
4. The enableSharedConfig field allows the snapshotLocations to share or reuse the credential defined for the bucket.
5. Use the profile name set in the AWS credentials file.
6. Specify region as your AWS region. This must be the same as the cluster region.
You are now ready to back up and restore OpenShift Container Platform applications, as described in Backing up applications.
If you use OADP 1.2, replace this configuration:
nodeAgent:
  enable: false
  uploaderType: restic
with the following configuration:
restic:
  enable: false
If you want to use two different clusters for backing up and restoring, the two clusters must have the same AWS S3 storage names in both the cloud storage CR and the OADP DataProtectionApplication configuration.
4.20.1.3. Backing up workload on OADP AWS STS, with an optional cleanup
4.20.1.3.1. Performing a backup with OADP and AWS STS
The following example hello-world application has no persistent volumes (PVs) attached. Perform a backup with OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) (AWS STS).
Either Data Protection Application (DPA) configuration will work.
Create a workload to back up by running the following commands:
$ oc create namespace hello-world

$ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift

Expose the route by running the following command:

$ oc expose service/hello-openshift -n hello-world

Check that the application is working by running the following command:

$ curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`

Example output
Hello OpenShift!

Back up the workload by running the following command:
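The backup manifest is not shown in this extract. A minimal sketch of such a command, assuming a Backup CR named hello-world (the name used by the watch command that follows) that backs up only the hello-world namespace, might look like this:

$ cat << EOF | oc create -f -
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: hello-world
  namespace: openshift-adp
spec:
  includedNamespaces:
  - hello-world
  ttl: 720h0m0s
EOF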
Wait until the backup has completed and then run the following command:
watch "oc -n openshift-adp get backup hello-world -o json | jq .status"
$ watch "oc -n openshift-adp get backup hello-world -o json | jq .status"Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the demo workload by running the following command:
$ oc delete ns hello-world

Restore the workload from the backup by running the following command:
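The restore manifest is not shown in this extract. A minimal sketch of such a command, assuming a Restore CR named hello-world (the name used by the watch command that follows) that restores the hello-world backup, might look like this:

$ cat << EOF | oc create -f -
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: hello-world
  namespace: openshift-adp
spec:
  backupName: hello-world
EOF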
Wait for the Restore to finish by running the following command:
watch "oc -n openshift-adp get restore hello-world -o json | jq .status"
$ watch "oc -n openshift-adp get restore hello-world -o json | jq .status"Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Check that the workload is restored by running the following command:
$ oc -n hello-world get pods

Example output
NAME                              READY   STATUS    RESTARTS   AGE
hello-openshift-9f885f7c6-kdjpj   1/1     Running   0          90s

Check that the application responds by running the following command:
$ curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`

Example output
Hello OpenShift!
For troubleshooting tips, see the OADP team’s troubleshooting documentation.
4.20.1.3.2. Cleaning up a cluster after a backup with OADP and AWS STS
If you need to uninstall the OpenShift API for Data Protection (OADP) Operator together with the backups and the S3 bucket from this example, follow these instructions.
Procedure
Delete the workload by running the following command:
$ oc delete ns hello-world

Delete the Data Protection Application (DPA) by running the following command:
$ oc -n openshift-adp delete dpa ${CLUSTER_NAME}-dpa

Delete the cloud storage by running the following command:
$ oc -n openshift-adp delete cloudstorage ${CLUSTER_NAME}-oadp

Important: If this command hangs, you might need to delete the finalizer by running the following command:
$ oc -n openshift-adp patch cloudstorage ${CLUSTER_NAME}-oadp -p '{"metadata":{"finalizers":null}}' --type=merge

If the Operator is no longer required, remove it by running the following command:
$ oc -n openshift-adp delete subscription oadp-operator

Remove the Operator namespace by running the following command:
$ oc delete ns openshift-adp

If the backup and restore resources are no longer required, remove them from the cluster by running the following command:
$ oc delete backups.velero.io hello-world

To delete backup, restore, and remote objects in AWS S3, run the following command:
$ velero backup delete hello-world

If you no longer need the Custom Resource Definitions (CRD), remove them from the cluster by running the following command:
$ for CRD in `oc get crds | grep velero | awk '{print $1}'`; do oc delete crd $CRD; done

Delete the AWS S3 bucket by running the following commands:
$ aws s3 rm s3://${CLUSTER_NAME}-oadp --recursive

$ aws s3api delete-bucket --bucket ${CLUSTER_NAME}-oadp

Detach the policy from the role by running the following command:
$ aws iam detach-role-policy --role-name "${ROLE_NAME}" --policy-arn "${POLICY_ARN}"

Delete the role by running the following command:
$ aws iam delete-role --role-name "${ROLE_NAME}"
4.21. OADP and 3scale
4.21.1. Backing up and restoring 3scale API Management by using OADP
With Red Hat 3scale API Management, you can manage your APIs for internal or external users. You can deploy 3scale components on-premise, in the cloud, as a managed service, or in any combination based on your requirements.
With OpenShift API for Data Protection (OADP), you can safeguard 3scale API Management deployments by backing up application resources, persistent volumes, and configurations.
You can use the OpenShift API for Data Protection (OADP) Operator to back up and restore your 3scale API Management on-cluster storage databases without affecting your running services.
You can configure OADP to perform the following operations with 3scale API Management:
- Create a backup of 3scale components by following the steps in Backing up 3scale API Management.
- Restore the components to scale up the 3scale operator and deployment by following the steps in Restoring 3scale API Management.
4.21.2. Backing up 3scale API Management by using OADP
You can back up Red Hat 3scale API Management components by backing up the 3scale operator, and databases such as MySQL and Redis.
Prerequisites
- You installed and configured Red Hat 3scale API Management. For more information, see Installing 3scale API Management on OpenShift and Red Hat 3scale API Management.
4.21.2.1. Creating the Data Protection Application
You can create a Data Protection Application (DPA) custom resource (CR) for Red Hat 3scale API Management.
Procedure
Create a YAML file with the following configuration:
Example dpa.yaml file
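The dpa.yaml content is not reproduced in this extract. The following is a sketch of what such a DPA might look like; the markers # 1 to # 4 correspond to the notes that follow, and the plugin list, uploader type, and credential key are assumptions to adapt to your setup:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws
      - csi
    nodeAgent:
      enable: true
      uploaderType: kopia
  backupLocations:
  - name: default
    velero:
      provider: aws
      default: true
      objectStorage:
        bucket: <bucket_name>    # 1
        prefix: <prefix>         # 2
      config:
        region: <region>         # 3
        profile: "default"
        s3Url: <s3_url>          # 4
      credential:
        key: cloud
        name: cloud-credentials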
1. Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
2. Specify a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
3. Specify a region for the backup storage location.
4. Specify the URL of the object store that you are using to store backups.
Create the DPA CR by running the following command:
$ oc create -f dpa.yaml
4.21.2.2. Backing up the 3scale API Management operator, secret, and APIManager
You can back up the Red Hat 3scale API Management operator resources, and both the Secret and APIManager custom resource (CR).
Prerequisites
- You created the Data Protection Application (DPA).
Procedure
Back up your 3scale operator CRs, such as operatorgroups, namespaces, and subscriptions, by creating a YAML file with the following configuration:

Example backup.yaml file
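The backup.yaml content is not reproduced in this extract. A minimal sketch of such a Backup CR, assuming the name operator-install-backup (the name shown in the example output below) and the threescale namespace, might look like this:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: operator-install-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
  - threescale
  includedResources:
  - operatorgroups
  - subscriptions
  - namespaces
  ttl: 720h0m0s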
Note: You can also back up and restore ReplicationController, Deployment, and Pod objects to ensure that all manually set environments are backed up and restored. This does not affect the flow of restoration.

Create a backup CR by running the following command:
$ oc create -f backup.yaml

Example output
backup.velero.io/operator-install-backup created

Back up the Secret CR by creating a YAML file with the following configuration:

Example backup-secret.yaml file
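The backup-secret.yaml content is not reproduced in this extract. A minimal sketch, assuming the backup name operator-resources-secrets (the name shown in the example output below), might look like this; the marker # 1 corresponds to the note that follows:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: operator-resources-secrets    # 1
  namespace: openshift-adp
spec:
  includedNamespaces:
  - threescale
  includedResources:
  - secrets
  ttl: 720h0m0s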
1. The value of the metadata.name parameter in the backup is the same value used in the metadata.backupName parameter when restoring the Secret.
Create the Secret backup CR by running the following command:

$ oc create -f backup-secret.yaml

Example output
backup.velero.io/operator-resources-secrets created

Back up the APIManager CR by creating a YAML file with the following configuration:
Example backup-apimanager.yaml file
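The backup-apimanager.yaml content is not reproduced in this extract. A minimal sketch, assuming the backup name operator-resources-apim (the name shown in the example output below), might look like this; the marker # 1 corresponds to the note that follows:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: operator-resources-apim    # 1
  namespace: openshift-adp
spec:
  includedNamespaces:
  - threescale
  includedResources:
  - apimanagers
  ttl: 720h0m0s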
1. The value of the metadata.name parameter in the backup is the same value used in the metadata.backupName parameter when restoring the APIManager.
Create the APIManager backup CR by running the following command:
$ oc create -f backup-apimanager.yaml

Example output
backup.velero.io/operator-resources-apim created
4.21.2.3. Backing up a MySQL database
You can back up a MySQL database by creating and attaching a persistent volume claim (PVC) to include the dumped data in the specified path.
Prerequisites
- You have backed up the Red Hat 3scale API Management operator.
Procedure
Create a YAML file with the following configuration for adding an additional PVC:
Example ts_pvc.yaml file
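The ts_pvc.yaml content is not reproduced in this extract. A minimal sketch, assuming a PVC named example-claim (the PVC referenced later in the restore section), with a storage class and size that you adjust to your environment, might look like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
  namespace: threescale
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: gp3-csi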
Create the additional PVC by running the following command:

$ oc create -f ts_pvc.yaml

Attach the PVC to the system database pod by editing the system-mysql deployment to use the MySQL dump:

$ oc edit deployment system-mysql -n threescale
1. The PVC that contains the dumped data.
Create a YAML file with the following configuration to back up the MySQL database:

Example mysql.yaml file
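The mysql.yaml content is not reproduced in this extract. A minimal sketch, assuming the backup name mysql-backup (the name shown in the example output below) and a label selector for the MySQL component, might look like this; the label is an assumption to adjust to your deployment:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: mysql-backup
  namespace: openshift-adp
spec:
  defaultVolumesToFsBackup: true
  includedNamespaces:
  - threescale
  includedResources:
  - deployments
  - pods
  - replicationcontrollers
  - secrets
  - services
  - persistentvolumeclaims
  - persistentvolumes
  labelSelector:
    matchLabels:
      threescale_component_element: mysql   # assumed label; verify on your cluster
  ttl: 720h0m0s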
Back up the MySQL database by running the following command:

$ oc create -f mysql.yaml

Example output
backup.velero.io/mysql-backup created
Verification
Verify that the MySQL backup is completed by running the following command:
$ oc get backups.velero.io mysql-backup -o yaml
4.21.2.4. Backing up the back-end Redis database
You can back up the Redis database by adding the required annotations and by listing which resources to back up using the includedResources parameter.
Prerequisites
- You backed up the Red Hat 3scale API Management operator.
- You backed up your MySQL database.
- The Redis queues have been drained before performing the backup.
Procedure
Edit the annotations on the backend-redis deployment by running the following command:

$ oc edit deployment backend-redis -n threescale
Create a YAML file with the following configuration to back up the Redis database:

Example redis-backup.yaml file
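The redis-backup.yaml content is not reproduced in this extract. A minimal sketch, assuming the backup name redis-backup (the name shown in the example output below) and an assumed Redis component label, might look like this; the marker # 1 corresponds to the note that follows:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: redis-backup    # 1
  namespace: openshift-adp
spec:
  defaultVolumesToFsBackup: true
  includedNamespaces:
  - threescale
  includedResources:
  - deployments
  - pods
  - persistentvolumeclaims
  - persistentvolumes
  labelSelector:
    matchLabels:
      threescale_component_element: redis   # assumed label; verify on your cluster
  ttl: 720h0m0s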
1. The value of the metadata.name parameter in the backup is the same value used in the metadata.backupName parameter when restoring the Redis database.
Back up the Redis database by running the following command:
$ oc create -f redis-backup.yaml

Example output
backup.velero.io/redis-backup created
Verification
Verify that the Redis backup is completed by running the following command:
$ oc get backups.velero.io redis-backup -o yaml
4.21.3. Restoring 3scale API Management by using OADP
You can restore Red Hat 3scale API Management components by restoring the backed up 3scale operator resources. You can also restore databases such as MySQL and Redis.
After the data has been restored, you can scale up the 3scale operator and deployment.
Prerequisites
- You installed and configured Red Hat 3scale API Management. For more information, see Installing 3scale API Management on OpenShift and Red Hat 3scale API Management.
- You backed up the 3scale operator, and databases such as MySQL and Redis.
- Ensure that you are restoring 3scale on the same cluster where it was backed up.
- If you want to restore 3scale on a different cluster, ensure that the original backed-up cluster and the cluster you want to restore the operator on are using the same custom domain.
4.21.3.1. Restoring the 3scale API Management operator, secrets, and APIManager
You can restore the Red Hat 3scale API Management operator resources, and both the Secret and APIManager custom resources (CRs) by using the following procedure.
Prerequisites
- You backed up the 3scale operator.
- You backed up the MySQL and Redis databases.
You are restoring the database on the same cluster, where it was backed up.
If you are restoring the operator to a different cluster from the one that you backed up, install and configure OADP with nodeAgent enabled on the destination cluster. Ensure that the OADP configuration is the same as it was on the source cluster.
Procedure
Delete the 3scale operator custom resource definitions (CRDs) along with the threescale namespace by running the following command:

$ oc delete project threescale

Example output
"threescale" project deleted successfully
"threescale" project deleted successfullyCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create a YAML file with the following configuration to restore the 3scale operator:
Example restore.yaml file
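The restore.yaml content is not reproduced in this extract. A minimal sketch, assuming the restore name operator-installation-restore (the name shown in the example output below) and referencing the operator-install-backup created earlier, might look like this; the marker # 1 corresponds to the note that follows:

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: operator-installation-restore
  namespace: openshift-adp
spec:
  backupName: operator-install-backup    # 1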
1. Restoring the 3scale operator's backup.
Restore the 3scale operator by running the following command:
$ oc create -f restore.yaml

Example output
restore.velero.io/operator-installation-restore created

Manually create the s3-credentials Secret object by running the following command:
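The Secret manifest is not reproduced in this extract. A minimal sketch of such a command, with key names assumed from the S3 settings that 3scale expects, might look like this; replace the placeholder values with your own object storage details:

$ oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials
  namespace: threescale
stringData:
  AWS_ACCESS_KEY_ID: <access_key_id>
  AWS_SECRET_ACCESS_KEY: <secret_access_key>
  AWS_BUCKET: <bucket_name>
  AWS_REGION: <region>
type: Opaque
EOF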
Scale down the 3scale operator by running the following command:

$ oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale

Example output
deployment.apps/threescale-operator-controller-manager-v2 scaled

Create a YAML file with the following configuration to restore the Secret:

Example restore-secret.yaml file
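The restore-secret.yaml content is not reproduced in this extract. A minimal sketch, referencing the operator-resources-secrets backup created earlier, might look like this; the marker # 1 corresponds to the note that follows:

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: operator-resources-secrets
  namespace: openshift-adp
spec:
  backupName: operator-resources-secrets    # 1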
1. Restoring the Secret backup.

Restore the Secret by running the following command:

$ oc create -f restore-secret.yaml

Example output
restore.velero.io/operator-resources-secrets created

Create a YAML file with the following configuration to restore APIManager:

Example restore-apimanager.yaml file
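The restore-apimanager.yaml content is not reproduced in this extract. A minimal sketch, referencing the operator-resources-apim backup created earlier, might look like this:

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: operator-resources-apim
  namespace: openshift-adp
spec:
  backupName: operator-resources-apim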
Restore the APIManager by running the following command:

$ oc create -f restore-apimanager.yaml

Example output
restore.velero.io/operator-resources-apim created

Scale up the 3scale operator by running the following command:
$ oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale

Example output
deployment.apps/threescale-operator-controller-manager-v2 scaled
4.21.3.2. Restoring a MySQL database
Restoring a MySQL database re-creates the following resources:
- The Pod, ReplicationController, and Deployment objects.
- The additional persistent volumes (PVs) and associated persistent volume claims (PVCs).
- The MySQL dump, which the example-claim PVC contains.
Do not delete the default PV and PVC associated with the database. If you do, your backups are deleted.
Prerequisites
-
You restored the
Secretand APIManager custom resources (CRs).
Procedure
Scale down the Red Hat 3scale API Management operator by running the following command:
$ oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale

Example output
deployment.apps/threescale-operator-controller-manager-v2 scaled

Create the following script to scale down the 3scale operator:
$ vi ./scaledowndeployment.sh

Example script:

for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do
    oc scale deployment/$deployment --replicas=0 -n threescale
done

Scale down all the 3scale component deployments by running the following script:
$ ./scaledowndeployment.sh

Delete the system-mysql Deployment object by running the following command:

$ oc delete deployment system-mysql -n threescale

Example output
Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+
deployment.apps.openshift.io "system-mysql" deleted

Create the following YAML file to restore the MySQL database:
Example restore-mysql.yaml file
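The restore-mysql.yaml content is not reproduced in this extract. A minimal sketch, assuming the restore name restore-mysql (the name shown in the example output below) and referencing the mysql-backup created earlier, might look like this; the excluded resources are an assumption that keeps cluster-scoped and Velero objects out of the restore:

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-mysql
  namespace: openshift-adp
spec:
  backupName: mysql-backup
  restorePVs: true
  excludedResources:
  - nodes
  - events
  - events.events.k8s.io
  - backups.velero.io
  - restores.velero.io
  - csinodes
  - volumeattachments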
Restore the MySQL database by running the following command:

$ oc create -f restore-mysql.yaml

Example output
restore.velero.io/restore-mysql created
Verification
Verify that the PodVolumeRestore restore is completed by running the following command:

$ oc get podvolumerestores.velero.io -n openshift-adp

Example output
NAME                  NAMESPACE    POD                    UPLOADER TYPE   VOLUME          STATUS      TOTALBYTES   BYTESDONE   AGE
restore-mysql-rbzvm   threescale   system-mysql-2-kjkhl   kopia           mysql-storage   Completed   771879108    771879108   40m
restore-mysql-z7x7l   threescale   system-mysql-2-kjkhl   kopia           example-claim   Completed   380415       380415      40m

Verify that the additional PVC has been restored by running the following command:
$ oc get pvc -n threescale
4.21.3.3. Restoring the back-end Redis database
You can restore the back-end Redis database by deleting the deployment and specifying which resources you do not want to restore.
Prerequisites
- You restored the Red Hat 3scale API Management operator resources, Secret, and APIManager custom resources.
- You restored the MySQL database.
Procedure
Delete the backend-redis deployment by running the following command:

$ oc delete deployment backend-redis -n threescale

Example output
Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+
deployment.apps.openshift.io "backend-redis" deleted

Create a YAML file with the following configuration to restore the Redis database:
Example restore-backend.yaml file
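The restore-backend.yaml content is not reproduced in this extract. A minimal sketch, assuming the restore name restore-backend (the name shown in the example output below) and referencing the redis-backup created earlier, might look like this; the marker # 1 corresponds to the note that follows:

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-backend
  namespace: openshift-adp
spec:
  backupName: redis-backup    # 1
  restorePVs: true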
1. Restoring the Redis backup.
Restore the Redis database by running the following command:
$ oc create -f restore-backend.yaml

Example output
restore.velero.io/restore-backend created
Verification
Verify that the PodVolumeRestore restore is completed by running the following command:

$ oc get podvolumerestores.velero.io -n openshift-adp

Example output
NAME                    NAMESPACE    POD                     UPLOADER TYPE   VOLUME                  STATUS      TOTALBYTES   BYTESDONE   AGE
restore-backend-jmrwx   threescale   backend-redis-1-bsfmv   kopia           backend-redis-storage   Completed   76123        76123       21m
4.21.3.4. Scaling up the 3scale API Management operator and deployment
You can scale up the Red Hat 3scale API Management operator and any deployment that was manually scaled down. After a few minutes, the 3scale installation should be fully functional, and its state should match the backed-up state.
Prerequisites
- You restored the 3scale operator resources, and both the Secret and APIManager custom resources (CRs).
- You restored the MySQL and back-end Redis databases.
- Ensure that there are no scaled-up deployments or extra pods running. Some system-mysql or backend-redis pods might be running detached from deployments after restoration; they can be removed after the restoration is successful.
Procedure
Scale up the 3scale operator by running the following command:
$ oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale

Example output
deployment.apps/threescale-operator-controller-manager-v2 scaled

Verify that the 3scale operator was deployed by checking that the operator pod is running:
$ oc get pods -n threescale

Example output
NAME                                                         READY   STATUS    RESTARTS   AGE
threescale-operator-controller-manager-v2-79546bd8c-b4qbh    1/1     Running   0          2m5s

Create the following script to scale up the deployments:
$ vi ./scaledeployment.sh

Example script file:

for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do
    oc scale deployment/$deployment --replicas=1 -n threescale
done

Scale up the deployments by running the following script:
$ ./scaledeployment.sh

Get the 3scale-admin route to log in to the 3scale UI by running the following command:

$ oc get routes -n threescale

Example output
In this example, 3scale-admin.apps.custom-cluster-name.openshift.com is the 3scale-admin URL.

- Use the URL from this output to log in to the 3scale Admin Portal as an administrator, and verify that the data that was present when you took the backup is available.
4.22. OADP Data Mover
4.22.1. About the OADP Data Mover
OpenShift API for Data Protection (OADP) includes a built-in Data Mover that you can use to move Container Storage Interface (CSI) volume snapshots to a remote object store. The built-in Data Mover allows you to restore stateful applications from the remote object store if a failure, accidental deletion, or corruption of the cluster occurs. It uses Kopia as the uploader mechanism to read the snapshot data and write to the unified repository.
OADP supports CSI snapshots on the following:
- Red Hat OpenShift Data Foundation
- Any other cloud storage provider with the Container Storage Interface (CSI) driver that supports the Kubernetes Volume Snapshot API
4.22.1.1. Data Mover support
The OADP built-in Data Mover, which was introduced in OADP 1.3 as a Technology Preview, is now fully supported for both containerized and virtual machine workloads.
Supported
The Data Mover backups taken with OADP 1.3 can be restored using OADP 1.3, 1.4, and later. This is supported.
Not supported
Backups taken with OADP 1.1 or OADP 1.2 using the Data Mover feature cannot be restored using OADP 1.3 and later. Therefore, it is not supported.
OADP 1.1 and OADP 1.2 are no longer supported. The DataMover feature in OADP 1.1 or OADP 1.2 was a Technology Preview and was never supported. DataMover backups taken with OADP 1.1 or OADP 1.2 cannot be restored on later versions of OADP.
4.22.1.2. Enabling the built-in Data Mover
To enable the built-in Data Mover, you must include the CSI plugin and enable the node agent in the DataProtectionApplication custom resource (CR). The node agent is a Kubernetes daemonset that hosts data movement modules. These include the Data Mover controller, uploader, and the repository.
Example DataProtectionApplication manifest
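The manifest itself is not reproduced in this extract. The following is a minimal sketch of what such a DataProtectionApplication might look like; the markers # 1 to # 4 correspond to the notes that follow, and the DPA name and plugin list are assumptions:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    nodeAgent:
      enable: true          # 1
      uploaderType: kopia   # 2
    velero:
      defaultPlugins:
      - openshift
      - aws
      - csi                 # 3
      defaultSnapshotMoveData: true
      defaultVolumesToFSBackup: false   # 4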
1. The flag to enable the node agent.
2. The type of uploader. The possible values are restic or kopia. The built-in Data Mover uses Kopia as the default uploader mechanism regardless of the value of the uploaderType field.
3. The CSI plugin included in the list of default plugins.
4. In OADP 1.3.1 and later, set to true if you use Data Mover only for volumes that opt out of fs-backup. Set to false if you use Data Mover by default for volumes.
4.22.1.3. Built-in Data Mover controller and custom resource definitions (CRDs)
The built-in Data Mover feature introduces three new API objects defined as CRDs for managing backup and restore:
-
DataDownload: Represents a data download of a volume snapshot. The CSI plugin creates oneDataDownloadobject per volume to be restored. TheDataDownloadCR includes information about the target volume, the specified Data Mover, the progress of the current data download, the specified backup repository, and the result of the current data download after the process is complete. -
DataUpload: Represents a data upload of a volume snapshot. The CSI plugin creates oneDataUploadobject per CSI snapshot. TheDataUploadCR includes information about the specified snapshot, the specified Data Mover, the specified backup repository, the progress of the current data upload, and the result of the current data upload after the process is complete. -
BackupRepository: Represents and manages the lifecycle of the backup repositories. OADP creates a backup repository per namespace when the first CSI snapshot backup or restore for a namespace is requested.
4.22.1.4. About incremental backup support
OADP supports incremental backups of block and Filesystem persistent volumes for both containerized and OpenShift Virtualization workloads. The following tables summarize the support for File System Backup (FSB), Container Storage Interface (CSI), and CSI Data Mover:
Incremental backup support for containerized workloads:

| Volume mode | FSB - Restic | FSB - Kopia | CSI | CSI Data Mover |
|---|---|---|---|---|
| Filesystem | S [1], I [2] | S [1], I [2] | S [1] | S [1], I [2] |
| Block | N [3] | N [3] | S [1] | S [1], I [2] |
Incremental backup support for OpenShift Virtualization workloads:

| Volume mode | FSB - Restic | FSB - Kopia | CSI | CSI Data Mover |
|---|---|---|---|---|
| Filesystem | N [3] | N [3] | S [1] | S [1], I [2] |
| Block | N [3] | N [3] | S [1] | S [1], I [2] |
[1] S: Backup supported
[2] I: Incremental backup supported
[3] N: Not supported
The CSI Data Mover backups use Kopia regardless of uploaderType.
4.22.2. Backing up and restoring CSI snapshots data movement
You can back up and restore persistent volumes by using the OADP 1.3 Data Mover.
4.22.2.1. Backing up persistent volumes with CSI snapshots
You can use the OADP Data Mover to back up Container Storage Interface (CSI) volume snapshots to a remote object store.
Prerequisites
- You have access to the cluster with the cluster-admin role.
- You have installed the OADP Operator.
- You have included the CSI plugin and enabled the node agent in the DataProtectionApplication custom resource (CR).
- You have an application with persistent volumes running in a separate namespace.
- You have added the metadata.labels.velero.io/csi-volumesnapshot-class: "true" key-value pair to the VolumeSnapshotClass CR.
Procedure
Create a YAML file for the Backup object, as in the following example:

Example Backup CR
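The Backup CR itself is not reproduced in this extract. A minimal sketch, with the application namespace as a placeholder and the markers # 1 to # 3 corresponding to the notes that follow, might look like this:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup
  namespace: openshift-adp
spec:
  csiSnapshotTimeout: 10m0s
  defaultVolumesToFsBackup: false    # 1
  snapshotMoveData: true             # 2
  includedNamespaces:
  - <application_namespace>
  storageLocation: default
  ttl: 720h0m0s                      # 3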
1. Set to true if you use Data Mover only for volumes that opt out of fs-backup. Set to false if you use Data Mover by default for volumes.
2. Set to true to enable movement of CSI snapshots to remote object storage.
3. The ttl field defines the retention time of the created backup and the backed up data. For example, if you are using Restic as the backup tool, the backed up data items and data contents of the persistent volumes (PVs) are stored until the backup expires. But storing this data consumes more space in the target backup locations. Additional storage is consumed with frequent backups, which are created even before other unexpired completed backups might have timed out.
Note: If you format the volume by using the XFS filesystem and the volume is at 100% capacity, the backup fails with a no space left on device error. For example:

Error: relabel failed /var/lib/kubelet/pods/3ac..34/volumes/ \
  kubernetes.io~csi/pvc-684..12c/mount: lsetxattr /var/lib/kubelet/ \
  pods/3ac..34/volumes/kubernetes.io~csi/pvc-68..2c/mount/data-xfs-103: \
  no space left on device

In this scenario, consider resizing the volume or using a different filesystem type, for example, ext4, so that the backup completes successfully.

Apply the manifest:
$ oc create -f backup.yaml

A DataUpload CR is created after the snapshot creation is complete.
Verification
Verify that the snapshot data is successfully transferred to the remote object store by monitoring the status.phase field of the DataUpload CR. Possible values are In Progress, Completed, Failed, or Canceled. The object store is configured in the backupLocations stanza of the DataProtectionApplication CR.

Run the following command to get a list of all DataUpload objects:

$ oc get datauploads -A

Example output
NAMESPACE       NAME                  STATUS      STARTED   BYTES DONE   TOTAL BYTES   STORAGE LOCATION   AGE     NODE
openshift-adp   backup-test-1-sw76b   Completed   9m47s     108104082    108104082     dpa-sample-1       9m47s   ip-10-0-150-57.us-west-2.compute.internal
openshift-adp   mongo-block-7dtpf     Completed   14m       1073741824   1073741824    dpa-sample-1       14m     ip-10-0-150-57.us-west-2.compute.internal

Check the value of the status.phase field of the specific DataUpload object by running the following command:

$ oc get datauploads <dataupload_name> -o yaml
In the output, a status.phase value of Completed indicates that the snapshot data is successfully transferred to the remote object store.
4.22.2.2. Restoring CSI volume snapshots
You can restore a volume snapshot by creating a Restore CR.
You cannot restore Volsync backups from OADP 1.2 with the OADP 1.3 built-in Data Mover. It is recommended to do a file system backup of all of your workloads with Restic prior to upgrading to OADP 1.3.
Prerequisites
- You have access to the cluster with the cluster-admin role.
- You have an OADP Backup CR from which to restore the data.
Procedure
Create a YAML file for the Restore CR, as in the following example:

Example Restore CR
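The Restore CR itself is not reproduced in this extract. A minimal sketch, with the backup name as a placeholder, might look like this:

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore
  namespace: openshift-adp
spec:
  backupName: <backup>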
Apply the manifest:

$ oc create -f restore.yaml

A DataDownload CR is created when the restore starts.
Verification
You can monitor the status of the restore process by checking the status.phase field of the DataDownload CR. Possible values are In Progress, Completed, Failed, or Canceled.

To get a list of all DataDownload objects, run the following command:

$ oc get datadownloads -A

Example output
NAMESPACE       NAME                   STATUS      STARTED   BYTES DONE   TOTAL BYTES   STORAGE LOCATION   AGE     NODE
openshift-adp   restore-test-1-sk7lg   Completed   7m11s     108104082    108104082     dpa-sample-1       7m11s   ip-10-0-150-57.us-west-2.compute.internal

Enter the following command to check the value of the status.phase field of the specific DataDownload object:

$ oc get datadownloads <datadownload_name> -o yaml
In the output, a status.phase value of Completed indicates that the CSI snapshot data is successfully restored.
4.22.2.3. Deletion policy for OADP 1.3
The deletion policy determines rules for removing data from a system, specifying when and how deletion occurs based on factors such as retention periods, data sensitivity, and compliance requirements. It manages data removal effectively while meeting regulations and preserving valuable information.
4.22.2.3.1. Deletion policy guidelines for OADP 1.3
Review the following deletion policy guidelines for OADP 1.3:
- In OADP 1.3.x, when using any type of backup and restore methods, you can set the deletionPolicy field to Retain or Delete in the VolumeSnapshotClass custom resource (CR).
4.22.3. Overriding Kopia hashing, encryption, and splitter algorithms
You can override the default values of Kopia hashing, encryption, and splitter algorithms by using specific environment variables in the Data Protection Application (DPA).
4.22.3.1. Configuring the DPA to override Kopia hashing, encryption, and splitter algorithms
You can use an OpenShift API for Data Protection (OADP) option to override the default Kopia algorithms for hashing, encryption, and splitter to improve Kopia performance or to compare performance metrics. You can set the following environment variables in the spec.configuration.velero.podConfig.env section of the DPA:
- KOPIA_HASHING_ALGORITHM
- KOPIA_ENCRYPTION_ALGORITHM
- KOPIA_SPLITTER_ALGORITHM
Prerequisites
- You have installed the OADP Operator.
- You have created the secret by using the credentials provided by the cloud provider.
The configuration of the Kopia algorithms for splitting, hashing, and encryption in the Data Protection Application (DPA) applies only during the initial Kopia repository creation, and cannot be changed later.
To use different Kopia algorithms, ensure that the object storage does not contain any previous Kopia repositories of backups. Configure a new object storage in the Backup Storage Location (BSL) or specify a unique prefix for the object storage in the BSL configuration.
Procedure
Configure the DPA with the environment variables for hashing, encryption, and splitter as shown in the following example.
Example DPA
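The example DPA itself is not reproduced in this extract. A minimal sketch, with the algorithm names as placeholders and the backup location configuration omitted for brevity, might look like this:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws
      - csi
      podConfig:
        env:
        - name: KOPIA_HASHING_ALGORITHM
          value: <hashing_algorithm_name>
        - name: KOPIA_ENCRYPTION_ALGORITHM
          value: <encryption_algorithm_name>
        - name: KOPIA_SPLITTER_ALGORITHM
          value: <splitter_algorithm_name>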
4.22.3.2. Use case for overriding Kopia hashing, encryption, and splitter algorithms
The use case example demonstrates taking a backup of an application by using Kopia environment variables for hashing, encryption, and splitter. You store the backup in an AWS S3 bucket. You then verify the environment variables by connecting to the Kopia repository.
Prerequisites
- You have installed the OADP Operator.
- You have an AWS S3 bucket configured as the backup storage location.
- You have created the secret by using the credentials provided by the cloud provider.
- You have installed the Kopia client.
- You have an application with persistent volumes running in a separate namespace.
Procedure
Configure the Data Protection Application (DPA) as shown in the following example:
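The DPA manifest itself is not reproduced in this extract. The following sketch shows what such a DPA might look like; the markers # 1 to # 8 correspond to the notes that follow, and the prefix, credential key, and plugin list are assumptions to adapt to your environment:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>    # 1
  namespace: openshift-adp
spec:
  backupLocations:
  - name: default
    velero:
      provider: aws
      default: true
      config:
        region: <region_name>    # 2
        profile: "default"
      credential:
        key: cloud
        name: cloud-credentials    # 3
      objectStorage:
        bucket: <bucket_name>    # 4
        prefix: velero
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws
      - csi    # 5
      podConfig:
        env:
        - name: KOPIA_HASHING_ALGORITHM
          value: BLAKE3-256    # 6
        - name: KOPIA_ENCRYPTION_ALGORITHM
          value: CHACHA20-POLY1305-HMAC-SHA256    # 7
        - name: KOPIA_SPLITTER_ALGORITHM
          value: DYNAMIC-8M-RABINKARP    # 8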
1. Specify a name for the DPA.
2. Specify the region for the backup storage location.
3. Specify the name of the default Secret object.
4. Specify the AWS S3 bucket name.
5. Include the csi plugin.
6. Specify the hashing algorithm as BLAKE3-256.
7. Specify the encryption algorithm as CHACHA20-POLY1305-HMAC-SHA256.
8. Specify the splitter algorithm as DYNAMIC-8M-RABINKARP.
Create the DPA by running the following command:
$ oc create -f <dpa_file_name> 1

1. Specify the file name of the DPA you configured.
Verify that the DPA has reconciled by running the following command:
$ oc get dpa -o yaml

Create a backup CR as shown in the following example:
Example backup CR
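The backup CR itself is not reproduced in this extract. A minimal sketch, with the marker # 1 corresponding to the note that follows and the file-system backup option assumed so that data is written to the Kopia repository, might look like this:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
  - <application_namespace>    # 1
  defaultVolumesToFsBackup: true   # assumption: back up volumes with the Kopia file-system uploader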
1. Specify the namespace for the application installed in the cluster.
Create a backup by running the following command:
$ oc apply -f <backup_file_name> 1

1. Specify the name of the backup CR file.
Verify that the backup completed by running the following command:
$ oc get backups.velero.io <backup_name> -o yaml 1

1. Specify the name of the backup.
Verification
Connect to the Kopia repository by running the following command:
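The command itself is not reproduced in this extract. A sketch of such a command, assuming the velero/kopia/<application_namespace> prefix that OADP uses for its repositories and the default repository password static-passw0rd (change it if you configured a different one), might look like this:

$ kopia repository connect s3 \
  --bucket=<bucket_name> \
  --prefix=velero/kopia/<application_namespace> \
  --password=static-passw0rd \
  --access-key="<aws_s3_access_key>" \
  --secret-access-key="<aws_s3_secret_access_key>"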
Note: If you are using a storage provider other than AWS S3, you must add --endpoint, the bucket endpoint URL parameter, to the command.

Verify that Kopia uses the environment variables that are configured in the DPA for the backup by running the following command:
$ kopia repository status
4.22.3.3. Benchmarking Kopia hashing, encryption, and splitter algorithms
You can run Kopia commands to benchmark the hashing, encryption, and splitter algorithms. Based on the benchmarking results, you can select the most suitable algorithm for your workload. In this procedure, you run the Kopia benchmarking commands from a pod on the cluster. The benchmarking results can vary depending on CPU speed, available RAM, disk speed, current I/O load, and so on.
Prerequisites
- You have installed the OADP Operator.
- You have an application with persistent volumes running in a separate namespace.
- You have run a backup of the application with Container Storage Interface (CSI) snapshots.
The configuration of the Kopia algorithms for splitting, hashing, and encryption in the Data Protection Application (DPA) applies only during the initial Kopia repository creation, and cannot be changed later.
To use different Kopia algorithms, ensure that the object storage does not contain any previous Kopia repositories of backups. Configure a new object storage in the Backup Storage Location (BSL) or specify a unique prefix for the object storage in the BSL configuration.
Procedure
Configure the must-gather pod as shown in the following example. Make sure you are using the oadp-mustgather image for OADP version 1.3 and later.

Example pod configuration
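The pod configuration itself is not reproduced in this extract. A minimal sketch, assuming the pod name oadp-mustgather-pod (the name used in the commands that follow) and an image path that you should replace with the oadp-mustgather image matching your OADP version, might look like this:

apiVersion: v1
kind: Pod
metadata:
  name: oadp-mustgather-pod
  namespace: openshift-adp
spec:
  containers:
  - name: oadp-mustgather-container
    image: registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3   # assumed image path; use the tag for your OADP version
    command: ["sleep"]
    args: ["infinity"]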
Note: The Kopia client is available in the oadp-mustgather image.

Create the pod by running the following command:
$ oc apply -f <pod_config_file_name> 1

1. Specify the name of the YAML file for the pod configuration.
Verify that the Security Context Constraint (SCC) on the pod is anyuid, so that Kopia can connect to the repository:

$ oc describe pod/oadp-mustgather-pod | grep scc
Example output
openshift.io/scc: anyuid
Connect to the pod via SSH by running the following command:
$ oc -n openshift-adp rsh pod/oadp-mustgather-pod

Connect to the Kopia repository by running the following command:
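The command itself is not reproduced in this extract. A sketch of such a command, whose flags correspond in order to callouts 1 to 6 below (the velero/kopia prefix and the default password static-passw0rd are assumptions), might look like this:

sh-5.1# kopia repository connect s3 \
          --bucket=<bucket_name> \
          --prefix=velero/kopia/<application_namespace> \
          --password=static-passw0rd \
          --access-key="<access_key>" \
          --secret-access-key="<secret_access_key>" \
          --endpoint=<bucket_endpoint>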
1. Specify the object storage provider bucket name.
2. Specify the namespace for the application.
3. The Kopia password to connect to the repository.
4. Specify the object storage provider access key.
5. Specify the object storage provider secret access key.
6. Specify the bucket endpoint. You do not need to specify the bucket endpoint if you are using AWS S3 as the storage provider.
Note: This is an example command. The command can vary based on the object storage provider.
To benchmark the hashing algorithm, run the following command:
sh-5.1# kopia benchmark hashing

To benchmark the encryption algorithm, run the following command:
sh-5.1# kopia benchmark encryption

To benchmark the splitter algorithm, run the following command:
sh-5.1# kopia benchmark splitter
4.23. APIs used with OADP
The document provides information about the following APIs that you can use with OADP:
- Velero API
- OADP API
4.23.1. Velero API
Velero API documentation is maintained by Velero, not by Red Hat. It can be found at Velero API types.
4.23.2. OADP API
The following tables provide the structure of the OADP API:
| Property | Type | Description |
|---|---|---|
|
|
Defines the list of configurations to use for | |
|
|
Defines the list of configurations to use for | |
|
| map [ UnsupportedImageKey ] string |
Can be used to override the deployed dependent images for development. Options are |
|
| Used to add annotations to pods deployed by Operators. | |
|
| Defines the configuration of the DNS of a pod. | |
|
|
Defines the DNS parameters of a pod in addition to those generated from | |
|
| *bool | Used to specify whether or not you want to deploy a registry for enabling backup and restore of images. |
|
| Used to define the data protection application’s server configuration. | |
|
| Defines the configuration for the DPA to enable the Technology Preview features. |
Complete schema definitions for the OADP API.
| Property | Type | Description |
|---|---|---|
|
| Location to store volume snapshots, as described in Backup Storage Location. | |
|
| [Technology Preview] Automates creation of a bucket at some cloud storage providers for use as a backup storage location. |
The bucket parameter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Complete schema definitions for the type BackupLocation.
| Property | Type | Description |
|---|---|---|
|
| Location to store volume snapshots, as described in Volume Snapshot Location. |
Complete schema definitions for the type SnapshotLocation.
| Property | Type | Description |
|---|---|---|
|
| Defines the configuration for the Velero server. | |
|
| Defines the configuration for the Restic server. |
Complete schema definitions for the type ApplicationConfig.
| Property | Type | Description |
|---|---|---|
|
| [] string | Defines the list of features to enable for the Velero instance. |
|
| [] string |
The following types of default Velero plugins can be installed: |
|
| Used for installation of custom Velero plugins. Default and custom plugins are described in OADP plugins | |
|
|
Represents a config map that is created if defined for use in conjunction with the | |
|
|
To install Velero without a default backup storage location, you must set the | |
|
|
Defines the configuration of the | |
|
|
Velero server’s log level (use |
Complete schema definitions for the type VeleroConfig.
| Property | Type | Description |
|---|---|---|
|
| Name of custom plugin. | |
|
| Image of custom plugin. |
Complete schema definitions for the type CustomPlugin.
| Property | Type | Description |
|---|---|---|
|
| *bool |
If set to |
|
| []int64 |
Defines the Linux groups to be applied to the |
|
|
A user-supplied duration string that defines the Restic timeout. Default value is | |
|
|
Defines the configuration of the |
Complete schema definitions for the type ResticConfig.
| Property | Type | Description |
|---|---|---|
|
|
Defines the | |
|
|
Defines the list of tolerations to be applied to a Velero deployment or a Restic | |
|
|
Set specific resource | |
|
| Labels to add to pods. |
4.23.2.1. Configuring node agents and node labels
The DPA of OADP uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint.
Any label specified must match the labels on each node.
The correct way to run the node agent on any node you choose is for you to label the nodes with a custom label:
$ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""
Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector, which you used for labeling nodes. For example:
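The example itself is not reproduced in this extract. A minimal sketch of the relevant DPA fragment, assuming the node-role.kubernetes.io/nodeAgent label applied in the previous command, might look like this:

configuration:
  nodeAgent:
    enable: true
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/nodeAgent: ""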
The following example is an anti-pattern of nodeSelector and does not work unless both labels, 'node-role.kubernetes.io/infra: ""' and 'node-role.kubernetes.io/worker: ""', are on the node:
Complete schema definitions for the type PodConfig.
| Property | Type | Description |
|---|---|---|
|
| Defines the configuration of the Data Mover. |
Complete schema definitions for the type Features.
| Property | Type | Description |
|---|---|---|
|
|
If set to | |
|
|
User-supplied Restic | |
|
|
A user-supplied duration string for |
The OADP API is more fully detailed in OADP Operator.
4.24. Advanced OADP features and functionalities
This document provides information about advanced features and functionalities of OpenShift API for Data Protection (OADP).
4.24.1. Working with different Kubernetes API versions on the same cluster
4.24.1.1. Listing the Kubernetes API group versions on a cluster
A source cluster might offer multiple versions of an API, where one of these versions is the preferred API version. For example, a source cluster with an API named Example might be available in the example.com/v1 and example.com/v1beta2 API groups.
If you use Velero to back up and restore such a source cluster, Velero backs up only the version of that resource that uses the preferred version of its Kubernetes API.
To return to the above example, if example.com/v1 is the preferred API, then Velero only backs up the version of a resource that uses example.com/v1. Moreover, the target cluster needs to have example.com/v1 registered in its set of available API resources in order for Velero to restore the resource on the target cluster.
Therefore, you need to generate a list of the Kubernetes API group versions on your target cluster to be sure the preferred API version is registered in its set of available API resources.
Procedure
- Enter the following command:
$ oc api-resources
4.24.1.2. About Enable API Group Versions
By default, Velero only backs up resources that use the preferred version of the Kubernetes API. However, Velero also includes a feature, Enable API Group Versions, that overcomes this limitation. When enabled on the source cluster, this feature causes Velero to back up all Kubernetes API group versions that are supported on the cluster, not only the preferred one. After the versions are stored in the backup .tar file, they are available to be restored on the destination cluster.
For example, a source cluster with an API named Example might be available in the example.com/v1 and example.com/v1beta2 API groups, with example.com/v1 being the preferred API.
Without the Enable API Group Versions feature enabled, Velero backs up only the preferred API group version for Example, which is example.com/v1. With the feature enabled, Velero also backs up example.com/v1beta2.
When the Enable API Group Versions feature is enabled on the destination cluster, Velero selects the version to restore on the basis of the order of priority of API group versions.
Enable API Group Versions is still in beta.
Velero uses the following algorithm to assign priorities to API versions, with 1 as the top priority:
- Preferred version of the destination cluster
- Preferred version of the source cluster
- Common non-preferred supported version with the highest Kubernetes version priority
4.24.1.3. Using Enable API Group Versions
You can use Velero’s Enable API Group Versions feature to back up all Kubernetes API group versions that are supported on a cluster, not only the preferred one.
Enable API Group Versions is still in beta.
Procedure
- Configure the EnableAPIGroupVersions feature flag:
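A minimal sketch of a DataProtectionApplication manifest that sets this feature flag (the DPA name is a placeholder):
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
spec:
  configuration:
    velero:
      featureFlags:
      - EnableAPIGroupVersions    # back up all supported API group versions, not only the preferred one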
4.24.2. Backing up data from one cluster and restoring it to another cluster
4.24.2.1. About backing up data from one cluster and restoring it on another cluster
OpenShift API for Data Protection (OADP) is designed to back up and restore application data in the same OpenShift Container Platform cluster. Migration Toolkit for Containers (MTC) is designed to migrate containers, including application data, from one OpenShift Container Platform cluster to another cluster.
You can use OADP to back up application data from one OpenShift Container Platform cluster and restore it on another cluster. However, doing so is more complicated than using MTC or using OADP to back up and restore on the same cluster.
To successfully use OADP to back up data from one cluster and restore it to another cluster, you must take into account the following factors, in addition to the prerequisites and procedures that apply to using OADP to back up and restore data on the same cluster:
- Operators
- Use of Velero
- UID and GID ranges
4.24.2.1.1. Operators
You must exclude Operators from the backup of an application for backup and restore to succeed.
4.24.2.1.2. Use of Velero
Velero, which OADP is built upon, does not natively support migrating persistent volume snapshots across cloud providers. To migrate volume snapshot data between cloud platforms, you must either enable the Velero Restic file system backup option, which backs up volume contents at the file system level, or use the OADP Data Mover for CSI snapshots.
In OADP 1.1 and earlier, the Velero Restic file system backup option is called restic. In OADP 1.2 and later, the Velero Restic file system backup option is called file-system-backup.
- You must also use Velero’s File System Backup to migrate data between AWS regions or between Microsoft Azure regions.
- Velero does not support restoring data to a cluster with an earlier Kubernetes version than the source cluster.
- It is theoretically possible to migrate workloads to a destination with a later Kubernetes version than the source, but you must consider the compatibility of API groups between clusters for each custom resource. If a Kubernetes version upgrade breaks the compatibility of core or native API groups, you must first update the impacted custom resources.
4.24.2.2. About determining which pod volumes to back up
Before you start a backup operation by using File System Backup (FSB), you must specify which pods contain a volume that you want to back up. Velero refers to this process as "discovering" the appropriate pod volumes.
Velero supports two approaches for determining pod volumes. Use the opt-in or the opt-out approach to allow Velero to decide between an FSB, a volume snapshot, or a Data Mover backup.
- Opt-in approach: With the opt-in approach, volumes are backed up using snapshot or Data Mover by default. FSB is used on specific volumes that are opted-in by annotations.
- Opt-out approach: With the opt-out approach, volumes are backed up using FSB by default. Snapshots or Data Mover is used on specific volumes that are opted-out by annotations.
4.24.2.2.1. Limitations
- FSB does not support backing up and restoring hostPath volumes. However, FSB does support backing up and restoring local volumes.
- Velero uses a static, common encryption key for all backup repositories it creates. This static key means that anyone who can access your backup storage can also decrypt your backup data. It is essential that you limit access to backup storage.
- For PVCs, every incremental backup chain is maintained across pod reschedules.
- For pod volumes that are not PVCs, such as emptyDir volumes, if a pod is deleted or recreated, for example, by a ReplicaSet or a deployment, the next backup of those volumes will be a full backup and not an incremental backup. It is assumed that the lifecycle of a pod volume is defined by its pod.
- Even though backup data can be kept incrementally, backing up large files, such as a database, can take a long time. This is because FSB uses deduplication to find the difference that needs to be backed up.
- FSB reads and writes data from volumes by accessing the file system of the node on which the pod is running. For this reason, FSB can only back up volumes that are mounted from a pod and not directly from a PVC. Some Velero users have overcome this limitation by running a staging pod, such as a BusyBox or Alpine container with an infinite sleep, to mount these PVC and PV pairs before performing a Velero backup.
- FSB expects volumes to be mounted under <hostPath>/<pod UID>, with <hostPath> being configurable. Some Kubernetes systems, for example, vCluster, do not mount volumes under the <pod UID> subdirectory, and FSB does not work with them as expected.
4.24.2.2.2. Backing up pod volumes by using the opt-in method
You can use the opt-in method to specify which volumes need to be backed up by File System Backup (FSB). You can do this by applying the backup.velero.io/backup-volumes annotation to the pod.
Procedure
On each pod that contains one or more volumes that you want to back up, enter the following command:
$ oc -n <your_pod_namespace> annotate pod/<your_pod_name> \
  backup.velero.io/backup-volumes=<your_volume_name_1>,<your_volume_name_2>,...,<your_volume_name_n>
where:
<your_volume_name_x> - specifies the name of the xth volume in the pod specification.
4.24.2.2.3. Backing up pod volumes by using the opt-out method
When using the opt-out approach, all pod volumes are backed up by using File System Backup (FSB), although there are some exceptions:
- Volumes that mount the default service account token, secrets, and configuration maps.
- hostPath volumes
You can use the opt-out method to specify which volumes not to back up. You can do this by applying the backup.velero.io/backup-volumes-excludes annotation to the pod.
Procedure
On each pod that contains one or more volumes that you do not want to back up, run the following command:
$ oc -n <your_pod_namespace> annotate pod/<your_pod_name> \
  backup.velero.io/backup-volumes-excludes=<your_volume_name_1>,<your_volume_name_2>,...,<your_volume_name_n>
where:
<your_volume_name_x> - specifies the name of the xth volume in the pod specification.
You can enable this behavior for all Velero backups by running the velero install command with the --default-volumes-to-fs-backup flag.
4.24.2.3. UID and GID ranges
If you back up data from one cluster and restore it to another cluster, problems might occur with UID (User ID) and GID (Group ID) ranges. The following section explains these potential issues and mitigations:
- Summary of the issues
- The namespace UID and GID ranges might change depending on the destination cluster. OADP does not back up and restore OpenShift UID range metadata. If the backed up application requires a specific UID, ensure the range is available upon restore. For more information about OpenShift's UID and GID ranges, see A Guide to OpenShift and UIDs.
- Detailed description of the issues
When you create a namespace in OpenShift Container Platform by using the shell command oc create namespace, OpenShift Container Platform assigns the namespace a unique User ID (UID) range from its available pool of UIDs, a Supplemental Group (GID) range, and unique SELinux MCS labels. This information is stored in the metadata.annotations field of the cluster. This information is part of the Security Context Constraints (SCC) annotations, which comprise the following components:
- openshift.io/sa.scc.mcs
- openshift.io/sa.scc.supplemental-groups
- openshift.io/sa.scc.uid-range
When you use OADP to restore the namespace, it automatically uses the information in metadata.annotations without resetting it for the destination cluster. As a result, the workload might not have access to the backed up data if any of the following is true:
- There is an existing namespace with other SCC annotations, for example, on another cluster. In this case, OADP uses the existing namespace during the backup instead of the namespace you want to restore.
A label selector was used during the backup, but the namespace in which the workloads are executed does not have the label. In this case, OADP does not back up the namespace, but creates a new namespace during the restore that does not contain the annotations of the backed up namespace. This results in a new UID range being assigned to the namespace.
This can be an issue for customer workloads if OpenShift Container Platform assigns a securityContext UID to a pod based on namespace annotations that have changed since the persistent volume data was backed up.
- The UID of the container no longer matches the UID of the file owner.
An error occurs because OpenShift Container Platform has not changed the UID range of the destination cluster to match the backup cluster data. As a result, the backup cluster has a different UID than the destination cluster, which means that the application cannot read or write data on the destination cluster.
- Mitigations
- You can use one or more of the following mitigations to resolve the UID and GID range issues:
Simple mitigations:
- If you use a label selector in the Backup CR to filter the objects to include in the backup, be sure to add this label selector to the namespace that contains the workspace.
- Remove any pre-existing version of a namespace on the destination cluster before attempting to restore a namespace with the same name.
Advanced mitigations:
- Fix UID ranges after migration by Resolving overlapping UID ranges in OpenShift namespaces after migration. Step 1 is optional.
For an in-depth discussion of UID and GID ranges in OpenShift Container Platform with an emphasis on overcoming issues in backing up data on one cluster and restoring it on another, see A Guide to OpenShift and UIDs.
4.24.2.4. Backing up data from one cluster and restoring it to another cluster
In general, you back up data from one OpenShift Container Platform cluster and restore it on another OpenShift Container Platform cluster in the same way that you back up and restore data to the same cluster. However, there are some additional prerequisites and differences in the procedure when backing up data from one OpenShift Container Platform cluster and restoring it on another.
Prerequisites
- All relevant prerequisites for backing up and restoring on your platform (for example, AWS, Microsoft Azure, GCP, and so on), especially the prerequisites for the Data Protection Application (DPA), are described in the relevant sections of this guide.
Procedure
Make the following additions to the procedures given for your platform:
- Ensure that the backup storage location (BSL) and volume snapshot location have the same names and paths to restore resources to another cluster.
- Share the same object storage location credentials across the clusters.
- For best results, use OADP to create the namespace on the destination cluster.
- If you use the Velero file-system-backup option, enable the --default-volumes-to-fs-backup flag for use during backup by running the following command:
$ velero backup create <backup_name> --default-volumes-to-fs-backup <any_other_options>
Note: In OADP 1.2 and later, the Velero Restic option is called file-system-backup.
Before restoring a CSI backup, edit the VolumeSnapshotClass custom resource (CR), and set the snapshot.storage.kubernetes.io/is-default-class parameter to false. Otherwise, the restore partially fails because the VolumeSnapshotClass in the target cluster has the same default value for the same driver.
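For example, a VolumeSnapshotClass CR with the parameter set to false might look like the following sketch (the class name and driver are placeholders):
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: <volume_snapshot_class_name>
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "false"   # avoid a default-class conflict on the target cluster
driver: <csi_driver>
deletionPolicy: Retain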
4.24.3. OADP storage class mapping
4.24.3.1. Storage class mapping
Storage class mapping allows you to define rules or policies specifying which storage class should be applied to different types of data. This feature automates the process of determining storage classes based on access frequency, data importance, and cost considerations. It optimizes storage efficiency and cost-effectiveness by ensuring that data is stored in the most suitable storage class for its characteristics and usage patterns.
You can use the change-storage-class-config field to change the storage class of your data objects, which lets you optimize costs and performance by moving data between different storage tiers, such as from standard to archival storage, based on your needs and access patterns.
4.24.3.1.1. Storage class mapping with Migration Toolkit for Containers
You can use the Migration Toolkit for Containers (MTC) to migrate containers, including application data, from one OpenShift Container Platform cluster to another cluster and for storage class mapping and conversion. You can convert the storage class of a persistent volume (PV) by migrating it within the same cluster. To do so, you must create and run a migration plan in the MTC web console.
4.24.3.1.2. Mapping storage classes with OADP
You can use OpenShift API for Data Protection (OADP) with the Velero plugin v1.1.0 and later to change the storage class of a persistent volume (PV) during restores, by configuring a storage class mapping in the config map in the Velero namespace.
To deploy ConfigMap with OADP, use the change-storage-class-config field. You must change the storage class mapping based on your cloud provider.
Procedure
Change the storage class mapping by running the following command:
$ cat change-storageclass.yaml
Create a config map in the Velero namespace as shown in the following example:
Example
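A sketch of such a config map, assuming the Velero namespace is openshift-adp and using placeholder storage class names that map the source class (key) to the destination class (value):
apiVersion: v1
kind: ConfigMap
metadata:
  name: change-storage-class-config
  namespace: openshift-adp
  labels:
    velero.io/plugin-config: ""
    velero.io/change-storage-class: RestoreItemAction
data:
  standard-csi: ssd-csi    # <source_storage_class>: <destination_storage_class>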
Save your storage class mapping preferences by running the following command:
$ oc create -f change-storage-class-config
4.25. OADP troubleshooting
4.25.1. Troubleshooting
You can troubleshoot OADP issues by using the following methods:
- Debug Velero custom resources (CRs) by using the OpenShift CLI tool or the Velero CLI tool. The Velero CLI tool provides more detailed logs and information.
- Debug Velero or Restic pod crashes that are caused by a lack of memory or CPU by using Pods crash or restart due to lack of memory or CPU.
- Debug issues with Velero and admission webhooks by using Issues with Velero and admission webhooks.
- Check OADP installation issues, OADP Operator issues, backup and restore CR issues, and Restic issues.
- Use the available OADP timeouts to reduce errors, retries, or failures.
- Collect logs and CR information by using the must-gather tool.
- Monitor and analyze the workload performance with the help of OADP monitoring.
4.25.2. Velero CLI tool
You can obtain the velero CLI tool by using the following options:
- Downloading the velero CLI tool
- Accessing the velero binary in the Velero deployment in the cluster
4.25.2.1. Downloading the Velero CLI tool
You can download and install the Velero CLI tool by following the instructions on the Velero documentation page.
The page includes instructions for:
- macOS by using Homebrew
- GitHub
- Windows by using Chocolatey
Prerequisites
- You have access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled.
- You have installed kubectl locally.
Procedure
- Open a browser and navigate to "Install the CLI" on the Velero website.
- Follow the appropriate procedure for macOS, GitHub, or Windows.
- Download the Velero version appropriate for your version of OADP and OpenShift Container Platform.
4.25.2.1.1. OADP-Velero-OpenShift Container Platform version relationship
4.25.2.2. Accessing the Velero binary in the Velero deployment in the cluster
You can use a shell command to access the Velero binary in the Velero deployment in the cluster.
Prerequisites
- Your DataProtectionApplication custom resource has a status of Reconcile complete.
Procedure
Enter the following command to set the needed alias:
$ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'
4.25.2.3. Debugging Velero resources with the OpenShift CLI tool
You can debug a failed backup or restore by checking Velero custom resources (CRs) and the Velero pod log with the OpenShift CLI tool.
Velero CRs
Use the oc describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR:
$ oc describe <velero_cr> <cr_name>
Velero pod logs
Use the oc logs command to retrieve the Velero pod logs:
$ oc logs pod/<velero>
Velero pod debug logs
You can specify the Velero log level in the DataProtectionApplication resource as shown in the following example.
This option is available starting from OADP 1.0.3.
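A minimal sketch of the relevant DPA block (the DPA name is a placeholder):
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
spec:
  configuration:
    velero:
      logLevel: warning    # one of trace, debug, info, warning, error, fatal, panic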
The following logLevel values are available:
- trace
- debug
- info
- warning
- error
- fatal
- panic
It is recommended to use the info logLevel value for most logs.
4.25.2.4. Debugging Velero resources with the Velero CLI tool
You can debug Backup and Restore custom resources (CRs) and retrieve logs with the Velero CLI tool.
The Velero CLI tool provides more detailed information than the OpenShift CLI tool.
Syntax
Use the oc exec command to run a Velero CLI command:
$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
<backup_restore_cr> <command> <cr_name>
Example
$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql
Help option
Use the velero --help option to list all Velero CLI commands:
$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
--help
Describe command
Use the velero describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR:
$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
<backup_restore_cr> describe <cr_name>
Example
$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql
The following types of restore errors and warnings are shown in the output of a velero describe request:
- Velero: A list of messages related to the operation of Velero itself, for example, messages related to connecting to the cloud, reading a backup file, and so on
- Cluster: A list of messages related to backing up or restoring cluster-scoped resources
- Namespaces: A list of messages related to backing up or restoring resources stored in namespaces
One or more errors in one of these categories results in a Restore operation receiving the status of PartiallyFailed and not Completed. Warnings do not lead to a change in the completion status.
- For resource-specific errors, that is, Cluster and Namespaces errors, the restore describe --details output includes a resource list that lists all resources that Velero succeeded in restoring. For any resource that has such an error, check to see if the resource is actually in the cluster.
- If there are Velero errors, but no resource-specific errors, in the output of a describe command, it is possible that the restore completed without any actual problems in restoring workloads, but carefully validate post-restore applications. For example, if the output contains PodVolumeRestore or node agent-related errors, check the status of PodVolumeRestores and DataDownloads. If none of these are failed or still running, then volume data might have been fully restored.
Logs command
Use the velero logs command to retrieve the logs of a Backup or Restore CR:
$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
<backup_restore_cr> logs <cr_name>
Example
$ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf
4.25.3. Pods crash or restart due to lack of memory or CPU
If a Velero or Restic pod crashes due to a lack of memory or CPU, you can set specific resource requests for either of those resources.
The values for the resource request fields must follow the same format as Kubernetes resource requirements. If you do not specify configuration.velero.podConfig.resourceAllocations or configuration.restic.podConfig.resourceAllocations, see the following default resources specification configuration for a Velero or Restic pod:
requests:
cpu: 500m
memory: 128Mi
4.25.3.1. Setting resource requests for a Velero pod
You can use the configuration.velero.podConfig.resourceAllocations specification field in the oadp_v1alpha1_dpa.yaml file to set specific resource requests for a Velero pod.
Procedure
Set the cpu and memory resource requests in the YAML file:
Example Velero file
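A sketch of the relevant DPA block with illustrative request values:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
spec:
  configuration:
    velero:
      podConfig:
        resourceAllocations:
          requests:
            cpu: 200m
            memory: 256Mi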
The resourceAllocations listed are for average usage.
4.25.3.2. Setting resource requests for a Restic pod
You can use the configuration.restic.podConfig.resourceAllocations specification field to set specific resource requests for a Restic pod.
Procedure
Set the cpu and memory resource requests in the YAML file:
Example Restic file
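A sketch of the relevant DPA block with illustrative request values:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
spec:
  configuration:
    restic:
      podConfig:
        resourceAllocations:
          requests:
            cpu: 1000m
            memory: 16Gi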
The resourceAllocations listed are for average usage.
4.25.4. Issues with Velero and admission webhooks
Velero has limited abilities to resolve admission webhook issues during a restore. If you have workloads with admission webhooks, you might need to use an additional Velero plugin or make changes to how you restore the workload.
Typically, workloads with admission webhooks require you to create a resource of a specific kind first. This is especially true if your workload has child resources because admission webhooks typically block child resources.
For example, creating or restoring a top-level object such as service.serving.knative.dev typically creates child resources automatically. If you do this first, you will not need to use Velero to create and restore these resources. This avoids the problem of child resources being blocked by an admission webhook that Velero might use.
4.25.4.1. Restoring workarounds for Velero backups that use admission webhooks
You need additional steps to restore resources for several types of Velero backups that use admission webhooks.
4.25.4.1.1. Restoring Knative resources
You might encounter problems using Velero to back up Knative resources that use admission webhooks.
You can avoid such problems by restoring the top level Service resource first whenever you back up and restore Knative resources that use admission webhooks.
Procedure
Restore the top level service.serving.knative.dev Service resource:
$ velero restore <restore_name> \
  --from-backup=<backup_name> --include-resources \
  service.serving.knative.dev
4.25.4.1.2. Restoring IBM AppConnect resources
If you experience issues when you use Velero to restore an IBM® AppConnect resource that has an admission webhook, you can run the checks in this procedure.
Procedure
- Check if you have any mutating admission plugins of kind: MutatingWebhookConfiguration in the cluster:
$ oc get mutatingwebhookconfigurations
- Examine the YAML file of each kind: MutatingWebhookConfiguration to ensure that none of its rules block creation of the objects that are experiencing issues. For more information, see the official Kubernetes documentation.
- Check that any spec.version in type: Configuration.appconnect.ibm.com/v1beta1 used at backup time is supported by the installed Operator.
4.25.4.2. OADP plugins known issues
The following section describes known issues in OpenShift API for Data Protection (OADP) plugins:
4.25.4.2.1. Velero plugin panics during imagestream backups due to a missing secret
When the backup and the Backup Storage Location (BSL) are managed outside the scope of the Data Protection Application (DPA), the OADP controller, meaning the DPA reconciliation, does not create the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret.
When the backup is run, the OpenShift Velero plugin panics on the imagestream backup, with the following panic error:
024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item"
backup=openshift-adp/<backup name> error="error executing custom action (groupResource=imagestreams.image.openshift.io,
namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked:
runtime error: index out of range with length 1, stack trace: goroutine 94…
4.25.4.2.1.1. Workaround to avoid the panic error
To avoid the Velero plugin panic error, perform the following steps:
Label the custom BSL with the relevant label:
$ oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl
After the BSL is labeled, wait until the DPA reconciles.
Note: You can force the reconciliation by making any minor change to the DPA itself.
When the DPA reconciles, confirm that the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret has been created and that the correct registry data has been populated into it:
$ oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'
4.25.4.2.2. OpenShift ADP Controller segmentation fault
If you configure a DPA with both cloudstorage and restic enabled, the openshift-adp-controller-manager pod crashes and restarts indefinitely until the pod fails with a crash loop segmentation fault.
You can have either velero or cloudstorage defined, because they are mutually exclusive fields.
- If you have both velero and cloudstorage defined, the openshift-adp-controller-manager fails.
- If you have neither velero nor cloudstorage defined, the openshift-adp-controller-manager fails.
For more information about this issue, see OADP-1054.
4.25.4.2.2.1. OpenShift ADP Controller segmentation fault workaround
You must define either velero or cloudstorage when you configure a DPA. If you define both APIs in your DPA, the openshift-adp-controller-manager pod fails with a crash loop segmentation fault.
4.25.4.3. Velero plugins returning "received EOF, stopping recv loop" message
Velero plugins are started as separate processes. After the Velero operation has completed, either successfully or not, they exit. Receiving a received EOF, stopping recv loop message in the debug logs indicates that a plugin operation has completed. It does not mean that an error has occurred.
4.25.5. OADP installation issues
You might encounter issues caused by using invalid directories or incorrect credentials when you install the Data Protection Application.
4.25.5.1. Backup storage contains invalid directories
The Velero pod log displays the following error message: Backup storage contains invalid top-level directories.
Cause
The object storage contains top-level directories that are not Velero directories.
Solution
If the object storage is not dedicated to Velero, you must specify a prefix for the bucket by setting the spec.backupLocations.velero.objectStorage.prefix parameter in the DataProtectionApplication manifest.
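For example, the prefix can be set in the DPA as in the following sketch (the provider, bucket, and prefix values are placeholders):
spec:
  backupLocations:
    - velero:
        provider: aws             # illustrative provider
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>        # Velero writes its data under this top-level directory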
4.25.5.2. Incorrect AWS credentials
The oadp-aws-registry pod log displays the following error message: InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
The Velero pod log displays the following error message: NoCredentialProviders: no valid providers in chain.
Cause
The credentials-velero file used to create the Secret object is incorrectly formatted.
Solution
Ensure that the credentials-velero file is correctly formatted, as in the following example:
Example credentials-velero file
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
4.25.6. OADP Operator issues
The OpenShift API for Data Protection (OADP) Operator might encounter issues caused by problems it is not able to resolve.
4.25.6.1. OADP Operator fails silently
The S3 buckets of an OADP Operator might be empty, but when you run the command oc get po -n <oadp_operator_namespace>, you see that the Operator has a status of Running. In such a case, the Operator is said to have failed silently because it incorrectly reports that it is running.
Cause
The problem is caused when cloud credentials provide insufficient permissions.
Solution
Retrieve a list of backup storage locations (BSLs) and check the manifest of each BSL for credential issues.
Procedure
Retrieve a list of BSLs by using either the OpenShift or Velero command-line interface (CLI):
Retrieve a list of BSLs by using the OpenShift CLI (
oc):oc get backupstoragelocations.velero.io -A
$ oc get backupstoragelocations.velero.io -ACopy to Clipboard Copied! Toggle word wrap Toggle overflow Retrieve a list of BSLs by using the
veleroCLI:velero backup-location get -n <oadp_operator_namespace>
$ velero backup-location get -n <oadp_operator_namespace>Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Use the list of BSLs from the previous step and run the following command to examine the manifest of each BSL for an error:
oc get backupstoragelocations.velero.io -n <namespace> -o yaml
$ oc get backupstoragelocations.velero.io -n <namespace> -o yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example result
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
4.25.7. OADP timeouts
Extending a timeout allows complex or resource-intensive processes to complete successfully without premature termination. This configuration can reduce errors, retries, or failures.
Ensure that you balance timeout extensions in a logical manner so that you do not configure excessively long timeouts that might hide underlying issues in the process. Consider and monitor an appropriate timeout value that meets the needs of the process and the overall system performance.
The following sections show how and when to implement each of the OADP timeout parameters:
4.25.7.1. Restic timeout
The spec.configuration.nodeAgent.timeout parameter defines the Restic timeout. The default value is 1h.
Use the Restic timeout parameter in the nodeAgent section for the following scenarios:
- For Restic backups with total PV data usage that is greater than 500GB.
If backups are timing out with the following error:
level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete"
Procedure
Edit the values in the spec.configuration.nodeAgent.timeout block of the DataProtectionApplication custom resource (CR) manifest, as shown in the following example:
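A sketch of the relevant DPA block (the DPA name and uploaderType are placeholders):
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: restic
      timeout: 1h        # Restic timeout; default is 1h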
4.25.7.2. Velero resource timeout
resourceTimeout defines how long to wait for several Velero resources before timeout occurs, such as Velero custom resource definition (CRD) availability, volumeSnapshot deletion, and repository availability. The default is 10m.
Use the resourceTimeout for the following scenarios:
For backups with total PV data usage that is greater than 1TB. This parameter is used as a timeout value when Velero tries to clean up or delete the Container Storage Interface (CSI) snapshots, before marking the backup as complete.
- A sub-task of this cleanup tries to patch VSC and this timeout can be used for that task.
- To create or ensure a backup repository is ready for filesystem based backups for Restic or Kopia.
- To check if the Velero CRD is available in the cluster before restoring the custom resource (CR) or resource from the backup.
Procedure
Edit the values in the spec.configuration.velero.resourceTimeout block of the DataProtectionApplication CR manifest, as in the following example:
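A minimal sketch of the relevant DPA block:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
spec:
  configuration:
    velero:
      resourceTimeout: 10m    # default is 10m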
4.25.7.2.1. Velero default item operation timeout
defaultItemOperationTimeout defines how long to wait on asynchronous BackupItemActions and RestoreItemActions to complete before timing out. The default value is 1h.
Use the defaultItemOperationTimeout for the following scenarios:
- Only with Data Mover 1.2.x.
- To specify the amount of time a particular backup or restore should wait for the Asynchronous actions to complete. In the context of OADP features, this value is used for the Asynchronous actions involved in the Container Storage Interface (CSI) Data Mover feature.
- When defaultItemOperationTimeout is defined in the Data Protection Application (DPA), it applies to both backup and restore operations. You can use itemOperationTimeout to define only the backup or only the restore of those CRs, as described in the following "Item operation timeout - restore" and "Item operation timeout - backup" sections.
Procedure
Edit the values in the spec.configuration.velero.defaultItemOperationTimeout block of the DataProtectionApplication CR manifest, as in the following example:
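A minimal sketch of the relevant DPA block:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
spec:
  configuration:
    velero:
      defaultItemOperationTimeout: 1h    # applies to both backup and restore asynchronous operations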
4.25.7.3. Data Mover timeout
timeout is a user-supplied timeout to complete VolumeSnapshotBackup and VolumeSnapshotRestore. The default value is 10m.
Use the Data Mover timeout for the following scenarios:
- If creation of VolumeSnapshotBackups (VSBs) and VolumeSnapshotRestores (VSRs) times out after 10 minutes.
- For large scale environments with total PV data usage that is greater than 500GB. Set the timeout to 1h.
- With the VolumeSnapshotMover (VSM) plugin.
- Only with OADP 1.1.x.
Procedure
Edit the values in the spec.features.dataMover.timeout block of the DataProtectionApplication CR manifest, as in the following example:
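A minimal sketch of the relevant DPA block:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
spec:
  features:
    dataMover:
      timeout: 10m    # time allowed for VolumeSnapshotBackup and VolumeSnapshotRestore to complete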
4.25.7.4. CSI snapshot timeout
CSISnapshotTimeout specifies the time during creation to wait until the CSI VolumeSnapshot status becomes ReadyToUse, before returning error as timeout. The default value is 10m.
Use the CSISnapshotTimeout for the following scenarios:
- With the CSI plugin.
- For very large storage volumes that may take longer than 10 minutes to snapshot. Adjust this timeout if timeouts are found in the logs.
Typically, the default value for CSISnapshotTimeout does not require adjustment, because the default setting can accommodate large storage volumes.
Procedure
Edit the values in the spec.csiSnapshotTimeout block of the Backup CR manifest, as in the following example:
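A sketch of the relevant Backup CR (the backup name is a placeholder):
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: <backup_name>
spec:
  csiSnapshotTimeout: 10m    # wait this long for the CSI VolumeSnapshot to become ReadyToUse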
4.25.7.5. Item operation timeout - restore
ItemOperationTimeout specifies the time that is used to wait for RestoreItemAction operations. The default value is 1h.
Use the restore ItemOperationTimeout for the following scenarios:
- Only with Data Mover 1.2.x.
- For Data Mover uploads and downloads to or from the BackupStorageLocation. If the restore action is not completed when the timeout is reached, it is marked as failed. If Data Mover operations are failing due to timeout issues because of large storage volume sizes, this timeout setting might need to be increased.
Procedure
Edit the values in the Restore.spec.itemOperationTimeout block of the Restore CR manifest, as in the following example:
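A sketch of the relevant Restore CR (the restore name is a placeholder):
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: <restore_name>
spec:
  itemOperationTimeout: 1h    # time allowed for asynchronous RestoreItemAction operations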
4.25.7.6. Item operation timeout - backup
ItemOperationTimeout specifies the time used to wait for asynchronous BackupItemAction operations. The default value is 1h.
Use the backup ItemOperationTimeout for the following scenarios:
- Only with Data Mover 1.2.x.
- For Data Mover uploads and downloads to or from the BackupStorageLocation. If the backup action is not completed when the timeout is reached, it is marked as failed. If Data Mover operations are failing due to timeout issues because of large storage volume sizes, this timeout setting might need to be increased.
Procedure
Edit the values in the Backup.spec.itemOperationTimeout block of the Backup CR manifest, as in the following example:
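A sketch of the relevant Backup CR (the backup name is a placeholder):
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: <backup_name>
spec:
  itemOperationTimeout: 1h    # time allowed for asynchronous BackupItemAction operations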
4.25.8. Backup and Restore CR issues
You might encounter these common issues with Backup and Restore custom resources (CRs).
4.25.8.1. Backup CR cannot retrieve volume
The Backup CR displays the following error message: InvalidVolume.NotFound: The volume ‘vol-xxxx’ does not exist.
Cause
The persistent volume (PV) and the snapshot locations are in different regions.
Solution
- Edit the value of the spec.snapshotLocations.velero.config.region key in the DataProtectionApplication manifest so that the snapshot location is in the same region as the PV.
- Create a new Backup CR.
4.25.8.2. Backup CR status remains in progress
The status of a Backup CR remains in the InProgress phase and does not complete.
Cause
If a backup is interrupted, it cannot be resumed.
Solution
Retrieve the details of the Backup CR by running the following command:
$ oc -n {namespace} exec deployment/velero -c velero -- ./velero \
  backup describe <backup>
Delete the Backup CR by running the following command:
$ oc delete backups.velero.io <backup> -n openshift-adp
You do not need to clean up the backup location because an in progress Backup CR has not uploaded files to object storage.
- Create a new Backup CR.
View the Velero backup details by running the following command:
$ velero backup describe <backup-name> --details
4.25.8.3. Backup CR status remains in PartiallyFailed
The status of a Backup CR without Restic in use remains in the PartiallyFailed phase and is not completed. A snapshot of the affiliated PVC is not created.
Cause
If the backup created based on the CSI snapshot class is missing a label, the CSI snapshot plugin fails to create a snapshot. As a result, the Velero pod logs an error similar to the following message:
time="2023-02-17T16:33:13Z" level=error msg="Error backing up item" backup=openshift-adp/user1-backup-check5 error="error executing custom action (groupResource=persistentvolumeclaims, namespace=busy1, name=pvc1-user1): rpc error: code = Unknown desc = failed to get volumesnapshotclass for storageclass ocs-storagecluster-ceph-rbd: failed to get volumesnapshotclass for provisioner openshift-storage.rbd.csi.ceph.com, ensure that the desired volumesnapshot class has the velero.io/csi-volumesnapshot-class label" logSource="/remote-source/velero/app/pkg/backup/backup.go:417" name=busybox-79799557b5-vprq
Solution
Delete the Backup CR by running the following command:
$ oc delete backups.velero.io <backup> -n openshift-adp
If required, clean up the stored data on the BackupStorageLocation to free up space.
Apply the label velero.io/csi-volumesnapshot-class=true to the VolumeSnapshotClass object by running the following command:
$ oc label volumesnapshotclass/<snapclass_name> velero.io/csi-volumesnapshot-class=true
Create a new Backup CR.
4.25.9. Restic issues
You might encounter these issues when you back up applications with Restic.
4.25.9.1. Restic permission error for NFS data volumes with root_squash enabled
The Restic pod log displays the following error message: controller=pod-volume-backup error="fork/exec/usr/bin/restic: permission denied".
Cause
If your NFS data volumes have root_squash enabled, Restic maps to nfsnobody and does not have permission to create backups.
Solution
You can resolve this issue by creating a supplemental group for Restic and adding the group ID to the DataProtectionApplication manifest:
- Create a supplemental group for Restic on the NFS data volume.
- Set the setgid bit on the NFS directories so that group ownership is inherited.
- Add the spec.configuration.nodeAgent.supplementalGroups parameter and the group ID to the DataProtectionApplication manifest, as shown in the following example:
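A sketch of the relevant DPA block (the DPA name and group ID are placeholders):
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: restic
      supplementalGroups:
      - <group_id>    # supplemental group ID with access to the NFS data volume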
- Specify the supplemental group ID.
- Wait for the Restic pods to restart so that the changes are applied.
4.25.9.2. Restic Backup CR cannot be recreated after bucket is emptied
If you create a Restic Backup CR for a namespace, empty the object storage bucket, and then recreate the Backup CR for the same namespace, the recreated Backup CR fails.
The velero pod log displays the following error message: stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\nIs there a repository at the following location?.
Cause
Velero does not recreate or update the Restic repository from the ResticRepository manifest if the Restic directories are deleted from object storage. See Velero issue 4421 for more information.
Solution
Remove the related Restic repository from the namespace by running the following command:
$ oc delete resticrepository openshift-adp <name_of_the_restic_repository>
In the following error log, mysql-persistent is the problematic Restic repository. The name of the repository appears in italics for clarity.
4.25.9.3. Restic restore partially failing on OCP 4.14 due to changed PSA policy
OpenShift Container Platform 4.14 enforces a Pod Security Admission (PSA) policy that can hinder the readiness of pods during a Restic restore process.
If a SecurityContextConstraints (SCC) resource is not found when a pod is created, and the PSA policy on the pod is not set up to meet the required standards, pod admission is denied.
This issue arises due to the resource restore order of Velero.
Sample error
Solution
In your DPA custom resource (CR), check or set the restore-resource-priorities field on the Velero server to ensure that securitycontextconstraints is listed in order before pods in the list of resources:
$ oc get dpa -o yaml
Example DPA CR
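A sketch of such a DPA, assuming your OADP version supports the velero.args field; the priority list shown here is abbreviated and illustrative, the key point being that securitycontextconstraints appears before pods:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
spec:
  configuration:
    restic:
      enable: true
    velero:
      args:
        restore-resource-priorities: 'securitycontextconstraints,customresourcedefinitions,namespaces,storageclasses,volumesnapshotclass.snapshot.storage.k8s.io,volumesnapshotcontents.snapshot.storage.k8s.io,volumesnapshots.snapshot.storage.k8s.io,persistentvolumes,persistentvolumeclaims,serviceaccounts,secrets,configmaps,limitranges,pods,replicasets.apps'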
- If you have an existing restore resource priority list, ensure you combine that existing list with the complete list.
- Ensure that the security standards for the application pods are aligned, as provided in Fixing PodSecurity Admission warnings for deployments, to prevent deployment warnings. If the application is not aligned with security standards, an error can occur regardless of the SCC.
This solution is temporary, and ongoing discussions are in progress to address it.
4.25.10. Using the must-gather tool
You can collect logs and information about OADP custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases.
The must-gather tool is a container and does not run all the time. The tool runs for a few minutes only after you invoke the tool by running the must-gather command.
4.25.10.1. Using the must-gather tool
You can run the must-gather tool with the following options. To use an option, you can add a flag corresponding to that option in the must-gather command.
- Default configuration: This configuration collects pod logs, OADP and Velero custom resource (CR) information for all namespaces where the OADP Operator is installed.
- Timeout: Data collection can take a long time if there are many failed Backup CRs. You can improve performance by setting a timeout value.
- Insecure TLS connections: If a custom CA certificate is used, use the must-gather tool with insecure TLS connections.
The must-gather tool generates a Markdown output file with the collected information. The Markdown file is located in a cluster directory.
Prerequisites
- You have logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role.
- You have installed the OpenShift CLI (oc).
- You are using OADP 1.3 or 1.4.
Procedure
- Navigate to the directory where you want to store the must-gather data.
- Run the oc adm must-gather command for one of the following data collection options:
must-gathertool, run one of the following commands:For OADP 1.3, run the following command:
oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3
$ oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3Copy to Clipboard Copied! Toggle word wrap Toggle overflow For OADP 1.4, run the following command:
oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4
$ oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4Copy to Clipboard Copied! Toggle word wrap Toggle overflow
To use the timeout flag with the
must-gathertool, run one of the following commands:For OADP 1.3, run the following command:
oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather --request-timeout <timeout>
$ oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather --request-timeout <timeout>1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- Specify a timeout value.
For OADP 1.4, run the following command:
oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather --request-timeout 1m
$ oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather --request-timeout 1m1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- In this example, the timeout is 1 minute.
To use the insecure TLS connection flag with the
must-gathertool, run one of the following commands:For OADP 1.3, run the following command:
oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather --skip-tls
$ oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather --skip-tlsCopy to Clipboard Copied! Toggle word wrap Toggle overflow For OADP 1.4, run the following command:
oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather --skip-tls
$ oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather --skip-tlsCopy to Clipboard Copied! Toggle word wrap Toggle overflow
To use a combination of the insecure TLS connection, and the timeout flags with the
must-gathertool, run one of the following commands:For OADP 1.3, run the following command:
oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather --request-timeout 15s --skip-tls=true
$ oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather --request-timeout 15s --skip-tls=true1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- By default, the
--skip-tlsflag value isfalse. Set the value totrueto allow insecure TLS connections. Specify a timeout value.
For OADP 1.4, run the following command:
oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather --request-timeout 15s --skip-tls
$ oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather --request-timeout 15s --skip-tls1 Copy to Clipboard Copied! Toggle word wrap Toggle overflow - 1
- In this example, the timeout is 15 seconds. By default, the
--skip-tlsflag value isfalse. Set the value totrueto allow insecure TLS connections.
Verification
- Verify that the Markdown output file is generated at the following location: must-gather.local.89…054550/registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.5-sha256-0…84/clusters/a4…86/oadp-must-gather-summary.md
- Review the must-gather data in the Markdown file by opening the file in a Markdown previewer. For an example output, refer to the following image. You can upload this output file to a support case on the Red Hat Customer Portal.
Figure 4.1. Example markdown output of must-gather tool
4.25.11. OADP monitoring
By using the OpenShift Container Platform monitoring stack, users and administrators can effectively perform the following tasks:
- Monitor and manage clusters
- Analyze the workload performance of user applications
- Monitor services running on the clusters
- Receive alerts if an event occurs
4.25.11.1. OADP monitoring setup
The OADP Operator leverages the User Workload Monitoring provided by the OpenShift Monitoring Stack for retrieving metrics from the Velero service endpoint. The monitoring stack allows creating user-defined Alerting Rules or querying metrics by using the OpenShift Metrics query front end.
With User Workload Monitoring enabled, it is possible to configure and use any Prometheus-compatible third-party UI, such as Grafana, to visualize Velero metrics.
Monitoring metrics requires enabling monitoring for the user-defined projects and creating a ServiceMonitor resource to scrape those metrics from the already enabled OADP service endpoint that resides in the openshift-adp namespace.
The OADP support for Prometheus metrics is offered on a best-effort basis and is not fully supported.
For more information about setting up the monitoring stack, see Configuring user workload monitoring.
Prerequisites
- You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions.
- You have created a cluster monitoring config map.
Procedure
Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace:
$ oc edit configmap cluster-monitoring-config -n openshift-monitoring
Add or enable the enableUserWorkload option in the data section's config.yaml field:
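A sketch of the resulting ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true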
Add this option or set it to true.
Wait a short period of time to verify the User Workload Monitoring Setup by checking if the following components are up and running in the openshift-user-workload-monitoring namespace:
$ oc get pods -n openshift-user-workload-monitoring
Example output
Verify the existence of the user-workload-monitoring-config ConfigMap in the openshift-user-workload-monitoring namespace. If it exists, skip the remaining steps in this procedure.
$ oc get configmap user-workload-monitoring-config -n openshift-user-workload-monitoring
Example output
Error from server (NotFound): configmaps "user-workload-monitoring-config" not found
Error from server (NotFound): configmaps "user-workload-monitoring-config" not foundCopy to Clipboard Copied! Toggle word wrap Toggle overflow Create a
user-workload-monitoring-configConfigMapobject for the User Workload Monitoring, and save it under the2_configure_user_workload_monitoring.yamlfile name:Example output
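A minimal sketch of such a ConfigMap follows; leaving the config.yaml field empty, as shown here, is an assumption that accepts the default user workload monitoring settings:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |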
Apply the 2_configure_user_workload_monitoring.yaml file:

$ oc apply -f 2_configure_user_workload_monitoring.yaml

Example output

configmap/user-workload-monitoring-config created
4.25.11.2. Creating OADP service monitor
OADP provides an openshift-adp-velero-metrics-svc service which is created when the DPA is configured. The service monitor used by the user workload monitoring must point to the defined service.
Get details about the service by running the following commands:
Procedure
Ensure that the openshift-adp-velero-metrics-svc service exists. It must contain the app.kubernetes.io/name=velero label, which is used as the selector for the ServiceMonitor object.

$ oc get svc -n openshift-adp -l app.kubernetes.io/name=velero

Example output

NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
openshift-adp-velero-metrics-svc   ClusterIP   172.30.38.244   <none>        8085/TCP   1h

Create a ServiceMonitor YAML file that matches the existing service label, and save the file as 3_create_oadp_service_monitor.yaml. The service monitor is created in the openshift-adp namespace where the openshift-adp-velero-metrics-svc service resides.
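A sketch of such a ServiceMonitor object follows; the object name matches the example output in the next step, while the endpoint settings (interval, path, port, and scheme) are illustrative assumptions:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app: oadp-service-monitor
  name: oadp-service-monitor
  namespace: openshift-adp
spec:
  endpoints:
  - interval: 30s
    path: /metrics
    targetPort: 8085
    scheme: http
  selector:
    matchLabels:
      app.kubernetes.io/name: "velero"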
Apply the 3_create_oadp_service_monitor.yaml file:

$ oc apply -f 3_create_oadp_service_monitor.yaml

Example output

servicemonitor.monitoring.coreos.com/oadp-service-monitor created
Verification
Confirm that the new service monitor is in an Up state by using the Administrator perspective of the OpenShift Container Platform web console. Wait a few minutes for the service monitor to reach the Up state.
- Navigate to the Observe → Targets page.
- Ensure that the Filter is unselected, or that the User source is selected, and type openshift-adp in the Text search field.
- Verify that the status for the service monitor is Up.
Figure 4.2. OADP metrics targets
4.25.11.3. Creating an alerting rule
The OpenShift Container Platform monitoring stack allows you to receive alerts that are configured by using alerting rules. To create an alerting rule for the OADP project, use one of the metrics that are scraped by the user workload monitoring.
Procedure
Create a PrometheusRule YAML file with the sample OADPBackupFailing alert and save it as 4_create_oadp_alert_rule.yaml. A sample alert is shown after the following conditions list. In this sample, the alert is displayed under the following conditions:
- There is an increase of new failing backups during the last 2 hours that is greater than 0, and the state persists for at least 5 minutes.
- If the time of the first increase is less than 5 minutes, the alert is in a Pending state, after which it turns into a Firing state.
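Sample OADPBackupFailing alert, as a sketch; the rule name matches the example output in the next step, and the exact expression, labels, and annotations are illustrative assumptions based on the conditions described above:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: sample-oadp-alert
  namespace: openshift-adp
spec:
  groups:
  - name: sample-oadp-backup-alert
    rules:
    - alert: OADPBackupFailing
      annotations:
        description: 'OADP had {{ $value | humanize }} backup failures over the last 2 hours.'
        summary: OADP has issues creating backups
      expr: |
        increase(velero_backup_failure_total{job="openshift-adp-velero-metrics-svc"}[2h]) > 0
      for: 5m
      labels:
        severity: warning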
Apply the 4_create_oadp_alert_rule.yaml file, which creates the PrometheusRule object in the openshift-adp namespace:

$ oc apply -f 4_create_oadp_alert_rule.yaml

Example output

prometheusrule.monitoring.coreos.com/sample-oadp-alert created
Verification
After the alert is triggered, you can view it in the following ways:
- In the Developer perspective, select the Observe menu.
- In the Administrator perspective, under the Observe → Alerting menu, select User in the Filter box. Otherwise, by default only the Platform alerts are displayed.
Figure 4.3. OADP backup failing alert
4.25.11.4. List of available metrics
Refer to the following table for a list of Velero metrics provided by OADP together with their Types:
| Metric name | Description | Type |
|---|---|---|
| velero_backup_tarball_size_bytes | Size, in bytes, of a backup | Gauge |
| velero_backup_total | Current number of existent backups | Gauge |
| velero_backup_attempt_total | Total number of attempted backups | Counter |
| velero_backup_success_total | Total number of successful backups | Counter |
| velero_backup_partial_failure_total | Total number of partially failed backups | Counter |
| velero_backup_failure_total | Total number of failed backups | Counter |
| velero_backup_validation_failure_total | Total number of validation failed backups | Counter |
| velero_backup_duration_seconds | Time taken to complete backup, in seconds | Histogram |
| velero_backup_duration_seconds_bucket | Total count of observations for a bucket in the histogram for the metric | Counter |
| velero_backup_duration_seconds_count | Total count of observations for the metric | Counter |
| velero_backup_duration_seconds_sum | Total sum of observations for the metric | Counter |
| velero_backup_deletion_attempt_total | Total number of attempted backup deletions | Counter |
| velero_backup_deletion_success_total | Total number of successful backup deletions | Counter |
| velero_backup_deletion_failure_total | Total number of failed backup deletions | Counter |
| velero_backup_last_successful_timestamp | Last time a backup ran successfully, Unix timestamp in seconds | Gauge |
| velero_backup_items_total | Total number of items backed up | Gauge |
| velero_backup_items_errors | Total number of errors encountered during backup | Gauge |
| velero_backup_warning_total | Total number of warned backups | Counter |
| velero_backup_last_status | Last status of the backup. A value of 1 is success, 0 is failure | Gauge |
| velero_restore_total | Current number of existent restores | Gauge |
| velero_restore_attempt_total | Total number of attempted restores | Counter |
| velero_restore_validation_failed_total | Total number of failed restores failing validations | Counter |
| velero_restore_success_total | Total number of successful restores | Counter |
| velero_restore_partial_failure_total | Total number of partially failed restores | Counter |
| velero_restore_failed_total | Total number of failed restores | Counter |
| velero_volume_snapshot_attempt_total | Total number of attempted volume snapshots | Counter |
| velero_volume_snapshot_success_total | Total number of successful volume snapshots | Counter |
| velero_volume_snapshot_failure_total | Total number of failed volume snapshots | Counter |
| velero_csi_snapshot_attempt_total | Total number of CSI attempted volume snapshots | Counter |
| velero_csi_snapshot_success_total | Total number of CSI successful volume snapshots | Counter |
| velero_csi_snapshot_failure_total | Total number of CSI failed volume snapshots | Counter |
4.25.11.5. Viewing metrics using the Observe UI
You can view metrics in the OpenShift Container Platform web console from the Administrator or Developer perspective; in either case, you must have access to the openshift-adp project.
Procedure
Navigate to the Observe → Metrics page:
If you are using the Developer perspective, follow these steps:
- Select Custom query, or click on the Show PromQL link.
- Type the query and click Enter.
If you are using the Administrator perspective, type the expression in the text field and select Run Queries.
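For example, to chart the number of successful backups, you might enter an expression such as the following; the metric name assumes the default Velero metrics listed in the previous table:

velero_backup_success_total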
Figure 4.4. OADP metrics query
Chapter 5. Control plane backup and restore
5.1. Backing up etcd
etcd is the key-value store for OpenShift Container Platform, which persists the state of all resource objects.
Back up your cluster’s etcd data regularly and store it in a secure location, ideally outside the OpenShift Container Platform environment. Do not take an etcd backup before the first certificate rotation completes, which occurs 24 hours after installation; otherwise, the backup will contain expired certificates. It is also recommended to take etcd backups during non-peak usage hours because the etcd snapshot has a high I/O cost.
Be sure to take an etcd backup before you update your cluster. Taking a backup before you update is important because when you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.17.5 cluster must use an etcd backup that was taken from 4.17.5.
Back up your cluster’s etcd data by performing a single invocation of the backup script on a control plane host. Do not take a backup for each control plane host.
After you have an etcd backup, you can restore to a previous cluster state.
5.1.1. Backing up etcd data
Follow these steps to back up etcd data by creating an etcd snapshot and backing up the resources for the static pods. This backup can be saved and used at a later time if you need to restore etcd.
Only save a backup from a single control plane host. Do not take a backup from each control plane host in the cluster.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have checked whether the cluster-wide proxy is enabled.
Tip: You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
Procedure
Start a debug session as root for a control plane node:
$ oc debug --as-root node/<node_name>

Change your root directory to /host in the debug shell:

sh-4.4# chroot /host

If the cluster-wide proxy is enabled, export the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables by running the following commands:

$ export HTTP_PROXY=http://<your_proxy.example.com>:8080

$ export HTTPS_PROXY=https://<your_proxy.example.com>:8080

$ export NO_PROXY=<example.com>
Run the cluster-backup.sh script in the debug shell and pass in the location to save the backup to.

Tip: The cluster-backup.sh script is maintained as a component of the etcd Cluster Operator and is a wrapper around the etcdctl snapshot save command.

sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup

In this example, two files are created in the /home/core/assets/backup/ directory on the control plane host:

- snapshot_<datetimestamp>.db: This file is the etcd snapshot. The cluster-backup.sh script confirms its validity.
- static_kuberesources_<datetimestamp>.tar.gz: This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot.

Note: If etcd encryption is enabled, it is recommended to store this second file separately from the etcd snapshot for security reasons. However, this file is required to restore from the etcd snapshot.

Keep in mind that etcd encryption only encrypts values, not keys. This means that resource types, namespaces, and object names are unencrypted.
5.1.3. Creating automated etcd backups
The automated backup feature for etcd supports both recurring and single backups. Recurring backups create a cron job that starts a single backup each time the job triggers.
Automating etcd backups is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Follow these steps to enable automated backups for etcd.
Enabling the TechPreviewNoUpgrade feature set on your cluster prevents minor version updates. The TechPreviewNoUpgrade feature set cannot be disabled. Do not enable this feature set on production clusters.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have access to the OpenShift CLI (oc).
Procedure
Create a FeatureGate custom resource (CR) file named enable-tech-preview-no-upgrade.yaml with the following contents:
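A minimal sketch of such a FeatureGate CR follows, assuming the standard cluster-scoped FeatureGate object:

apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade   # Enables the Technology Preview feature set; cannot be disabled later.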
Apply the CR and enable automated backups:

$ oc apply -f enable-tech-preview-no-upgrade.yaml

It takes time to enable the related APIs. Verify the creation of the custom resource definitions (CRDs) by running the following command:

$ oc get crd | grep backup

Example output

backups.config.openshift.io           2023-10-25T13:32:43Z
etcdbackups.operator.openshift.io     2023-10-25T13:32:04Z
5.1.3.1. Creating a single etcd backup
Follow these steps to create a single etcd backup by creating and applying a custom resource (CR).
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have access to the OpenShift CLI (oc).
Procedure
If dynamically-provisioned storage is available, complete the following steps to create a single automated etcd backup:
Create a persistent volume claim (PVC) named etcd-backup-pvc.yaml with contents such as the following example:
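A sketch of such a PVC follows; the namespace, access mode, and the 200Gi storage request are illustrative assumptions that you can adjust:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: etcd-backup-pvc
  namespace: openshift-etcd
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi   # The amount of storage available to the PVC; adjust for your requirements.
  volumeMode: Filesystem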
- The amount of storage available to the PVC. Adjust this value for your requirements.
Apply the PVC by running the following command:

$ oc apply -f etcd-backup-pvc.yaml

Verify the creation of the PVC by running the following command:

$ oc get pvc

Example output

NAME              STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
etcd-backup-pvc   Bound                                                      51s

Note: Dynamic PVCs stay in the Pending state until they are mounted.

Create a CR file named etcd-single-backup.yaml with contents such as the following example:
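A sketch of such a CR follows; it assumes the EtcdBackup API provided by the etcdbackups.operator.openshift.io CRD shown earlier:

apiVersion: operator.openshift.io/v1alpha1
kind: EtcdBackup
metadata:
  name: etcd-single-backup
  namespace: openshift-etcd
spec:
  pvcName: etcd-backup-pvc   # The name of the PVC to save the backup to; adjust for your environment.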
- The name of the PVC to save the backup to. Adjust this value according to your environment.
Apply the CR to start a single backup:

$ oc apply -f etcd-single-backup.yaml
If dynamically-provisioned storage is not available, complete the following steps to create a single automated etcd backup:
Create a StorageClass CR file named etcd-backup-local-storage.yaml with the following contents:
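A sketch of such a StorageClass follows; the no-provisioner setting is an assumption that matches a manually created local PV:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: etcd-backup-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate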
Apply the StorageClass CR by running the following command:

$ oc apply -f etcd-backup-local-storage.yaml

Create a PV named etcd-backup-pv-fs.yaml with contents such as the following example:
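A sketch of such a PV follows; the local path and node name are illustrative assumptions, while the capacity, access mode, and reclaim policy shown match the example output in the next step:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: etcd-backup-pv-fs
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: etcd-backup-local-storage
  local:
    path: /mnt/   # Local path on the selected node where backup data is stored.
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <example_control_plane_node>   # Replace with the name of the node that stores the backups.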
Verify the creation of the PV by running the following command:

$ oc get pv

Example output

NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS                REASON   AGE
etcd-backup-pv-fs   100Gi      RWO            Retain           Available           etcd-backup-local-storage            10s

Create a PVC named etcd-backup-pvc.yaml with contents such as the following example:
- The amount of storage available to the PVC. Adjust this value for your requirements.
Apply the PVC by running the following command:

$ oc apply -f etcd-backup-pvc.yaml

Create a CR file named etcd-single-backup.yaml with contents such as the following example:
- The name of the persistent volume claim (PVC) to save the backup to. Adjust this value according to your environment.
Apply the CR to start a single backup:

$ oc apply -f etcd-single-backup.yaml
5.1.3.2. Creating recurring etcd backups
Follow these steps to create automated recurring backups of etcd.
Use dynamically-provisioned storage to keep the created etcd backup data in a safe, external location if possible. If dynamically-provisioned storage is not available, consider storing the backup data on an NFS share to make backup recovery more accessible.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have access to the OpenShift CLI (oc).
Procedure
If dynamically-provisioned storage is available, complete the following steps to create automated recurring backups:
Create a persistent volume claim (PVC) named etcd-backup-pvc.yaml with contents such as the following example:
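A sketch of such a PVC follows; the accessModes and storageClassName keys shown are the keys that the provider table in the note below refers to, and the specific values are illustrative assumptions:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: etcd-backup-pvc
  namespace: openshift-etcd
spec:
  accessModes:
  - ReadWriteOnce   # Change to ReadWriteMany for the providers listed in the following note.
  storageClassName: <storage_class>   # Use the provider-specific value from the following note.
  resources:
    requests:
      storage: 200Gi   # The amount of storage available to the PVC; adjust for your requirements.
  volumeMode: Filesystem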
- The amount of storage available to the PVC. Adjust this value for your requirements.
Note: Each of the following providers requires changes to the accessModes and storageClassName keys:

| Provider | accessModes value | storageClassName value |
|---|---|---|
| AWS with the versioned-installer-efc_operator-ci profile | - ReadWriteMany | efs-sc |
| Google Cloud Platform | - ReadWriteMany | filestore-csi |
| Microsoft Azure | - ReadWriteMany | azurefile-csi |

Apply the PVC by running the following command:
$ oc apply -f etcd-backup-pvc.yaml

Verify the creation of the PVC by running the following command:

$ oc get pvc

Example output

NAME              STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
etcd-backup-pvc   Bound                                                      51s

Note: Dynamic PVCs stay in the Pending state until they are mounted.
If dynamically-provisioned storage is unavailable, create a local storage PVC by completing the following steps:
Warning: If you delete or otherwise lose access to the node that contains the stored backup data, you can lose data.
Create a StorageClass CR file named etcd-backup-local-storage.yaml with the following contents:

Apply the StorageClass CR by running the following command:

$ oc apply -f etcd-backup-local-storage.yaml

Create a PV named etcd-backup-pv-fs.yaml from the applied StorageClass with contents such as the following example:

Tip: Run the following command to list the available nodes:

$ oc get nodes

Verify the creation of the PV by running the following command:

$ oc get pv

Example output

NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS                REASON   AGE
etcd-backup-pv-fs   100Gi      RWX            Delete           Available           etcd-backup-local-storage            10s

Create a PVC named etcd-backup-pvc.yaml with contents such as the following example:

- The amount of storage available to the PVC. Adjust this value for your requirements.

Apply the PVC by running the following command:

$ oc apply -f etcd-backup-pvc.yaml
Create a custom resource (CR) file named etcd-recurring-backups.yaml. The contents of the created CR define the schedule and retention type of automated backups.

For the default retention type of RetentionNumber with 15 retained backups, use contents such as the following example. Adjust the CronTab schedule for recurring backups to your needs.
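A sketch of such a CR follows; it assumes the Backup API provided by the backups.config.openshift.io CRD created earlier, and the schedule, time zone, and PVC name are illustrative values:

apiVersion: config.openshift.io/v1alpha1
kind: Backup
metadata:
  name: etcd-recurring-backup
spec:
  etcd:
    schedule: "20 4 * * *"   # CronTab schedule for recurring backups; adjust for your needs.
    timeZone: "UTC"
    pvcName: etcd-backup-pvc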
To use retention based on the maximum number of backups, add the corresponding key-value pairs to the etcd key, as shown in the combined retention sketch after the size-based example below.

Warning: A known issue causes the number of retained backups to be one greater than the configured value.
For retention based on the file size of backups, use a retention policy that sets the maximum file size of the retained backups in gigabytes. Adjust this value for your needs; it defaults to 10 GB if unspecified.
Warning: A known issue causes the maximum size of retained backups to be up to 10 GB greater than the configured value.
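Sketches of the two retention variants follow; the retentionPolicy field names are assumptions based on the retention types named above, and the numeric values are illustrative:

# Retention based on the maximum number of backups (RetentionNumber), added under the etcd key:
spec:
  etcd:
    retentionPolicy:
      retentionType: RetentionNumber
      retentionNumber:
        maxNumberOfBackups: 5   # Maximum number of backups to retain; adjust for your needs.

# Retention based on the total file size of backups (RetentionSize):
spec:
  etcd:
    retentionPolicy:
      retentionType: RetentionSize
      retentionSize:
        maxSizeOfBackupsGb: 20   # Maximum file size of retained backups in gigabytes; defaults to 10 GB if unspecified.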
Create the cron job defined by the CR by running the following command:

$ oc create -f etcd-recurring-backups.yaml

To find the created cron job, run the following command:

$ oc get cronjob -n openshift-etcd
5.2. Replacing an unhealthy etcd member
This document describes the process to replace a single unhealthy etcd member.
This process depends on whether the etcd member is unhealthy because the machine is not running or the node is not ready, or whether it is unhealthy because the etcd pod is crashlooping.
If you have lost the majority of your control plane hosts, follow the disaster recovery procedure to restore to a previous cluster state instead of this procedure.
If the control plane certificates are not valid on the member being replaced, then you must follow the procedure to recover from expired control plane certificates instead of this procedure.
If a control plane node is lost and a new one is created, the etcd cluster Operator handles generating the new TLS certificates and adding the node as an etcd member.
5.2.1. Prerequisites
- Take an etcd backup prior to replacing an unhealthy etcd member.
5.2.2. Identifying an unhealthy etcd member
You can identify if your cluster has an unhealthy etcd member.
Prerequisites
- Access to the cluster as a user with the cluster-admin role.
Procedure
Check the status of the EtcdMembersAvailable status condition by using the following command:

$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="EtcdMembersAvailable")]}{.message}{"\n"}{end}'

Review the output:

2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy

This example output shows that the ip-10-0-131-183.ec2.internal etcd member is unhealthy.
5.2.3. Determining the state of the unhealthy etcd member
The steps to replace an unhealthy etcd member depend on which of the following states your etcd member is in:
- The machine is not running or the node is not ready
- The etcd pod is crashlooping
This procedure determines which state your etcd member is in. This enables you to know which procedure to follow to replace the unhealthy etcd member.
If you are aware that the machine is not running or the node is not ready, but you expect it to return to a healthy state soon, then you do not need to perform a procedure to replace the etcd member. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
Prerequisites
- You have access to the cluster as a user with the cluster-admin role.
- You have identified an unhealthy etcd member.
Procedure
Determine if the machine is not running:
$ oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{"\t"}{@.status.providerStatus.instanceState}{"\n"}' | grep -v running

Example output

ip-10-0-131-183.ec2.internal  stopped

This output lists the node and the status of the node’s machine. If the status is anything other than running, then the machine is not running.
If the machine is not running, then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure.
Determine if the node is not ready.
If either of the following scenarios is true, then the node is not ready.
If the machine is running, then check whether the node is unreachable:
$ oc get nodes -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{"\t"}{range .spec.taints[*]}{.key}{" "}' | grep unreachable

Example output

ip-10-0-131-183.ec2.internal  node-role.kubernetes.io/master  node.kubernetes.io/unreachable  node.kubernetes.io/unreachable

If the node is listed with an unreachable taint, then the node is not ready.
If the node is still reachable, then check whether the node is listed as NotReady:

$ oc get nodes -l node-role.kubernetes.io/master | grep "NotReady"

Example output

ip-10-0-131-183.ec2.internal   NotReady   master   122m   v1.28.5

If the node is listed as NotReady, then the node is not ready.
If the node is not ready, then follow the Replacing an unhealthy etcd member whose machine is not running or whose node is not ready procedure.
Determine if the etcd pod is crashlooping.
If the machine is running and the node is ready, then check whether the etcd pod is crashlooping.
Verify that all control plane nodes are listed as Ready:

$ oc get nodes -l node-role.kubernetes.io/master

Example output

NAME                           STATUS   ROLES    AGE     VERSION
ip-10-0-131-183.ec2.internal   Ready    master   6h13m   v1.28.5
ip-10-0-164-97.ec2.internal    Ready    master   6h13m   v1.28.5
ip-10-0-154-204.ec2.internal   Ready    master   6h13m   v1.28.5

Check whether the status of an etcd pod is either Error or CrashLoopBackOff:

$ oc -n openshift-etcd get pods -l k8s-app=etcd

Example output

etcd-ip-10-0-131-183.ec2.internal   2/3   Error     7   6h9m
etcd-ip-10-0-164-97.ec2.internal    3/3   Running   0   6h6m
etcd-ip-10-0-154-204.ec2.internal   3/3   Running   0   6h6m

Because the status of this pod is Error, the etcd pod is crashlooping.
If the etcd pod is crashlooping, then follow the Replacing an unhealthy etcd member whose etcd pod is crashlooping procedure.
5.2.4. Replacing the unhealthy etcd member
Depending on the state of your unhealthy etcd member, use one of the following procedures:
This procedure details the steps to replace an etcd member that is unhealthy either because the machine is not running or because the node is not ready.
If your cluster uses a control plane machine set, see "Recovering a degraded etcd Operator" in "Troubleshooting the control plane machine set" for an etcd recovery procedure.
Prerequisites
- You have identified the unhealthy etcd member.
- You have verified that either the machine is not running or the node is not ready.

Important: You must wait if the other control plane nodes are powered off. The control plane nodes must remain powered off until the replacement of an unhealthy etcd member is complete.

- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.

Important: Before you perform this procedure, take an etcd backup so that you can restore your cluster if you experience any issues.
Procedure
Remove the unhealthy member.
Choose a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc -n openshift-etcd get pods -l k8s-app=etcd

Example output

etcd-ip-10-0-131-183.ec2.internal   3/3   Running   0   123m
etcd-ip-10-0-164-97.ec2.internal    3/3   Running   0   123m
etcd-ip-10-0-154-204.ec2.internal   3/3   Running   0   124m

Connect to the running etcd container, passing in the name of a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal

View the member list:

sh-4.2# etcdctl member list -w table

Take note of the ID and the name of the unhealthy etcd member because these values are needed later in the procedure. The etcdctl endpoint health command lists the removed member until the replacement procedure is finished and a new member is added.

Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:

sh-4.2# etcdctl member remove 6fc1e7c9db35841d

Example output

Member 6fc1e7c9db35841d removed from cluster ead669ce1fbfb346

View the member list again and verify that the member was removed:

sh-4.2# etcdctl member list -w table

You can now exit the node shell.
Turn off the quorum guard by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'

This command ensures that you can successfully re-create secrets and roll out the static pods.

Important: After you turn off the quorum guard, the cluster might be unreachable for a short time while the remaining etcd instances reboot to reflect the configuration change.

Note: etcd cannot tolerate any additional member failure when running with two members. Restarting either remaining member breaks the quorum and causes downtime in your cluster. The quorum guard protects etcd from restarts due to configuration changes that could cause downtime, so it must be disabled to complete this procedure.
Delete the affected node by running the following command:
$ oc delete node <node_name>

Example command

$ oc delete node ip-10-0-131-183.ec2.internal
List the secrets for the unhealthy etcd member that was removed.
$ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal

Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.

There is a peer, serving, and metrics secret as shown in the following output:

Example output

etcd-peer-ip-10-0-131-183.ec2.internal              kubernetes.io/tls   2   47m
etcd-serving-ip-10-0-131-183.ec2.internal           kubernetes.io/tls   2   47m
etcd-serving-metrics-ip-10-0-131-183.ec2.internal   kubernetes.io/tls   2   47m

Delete the secrets for the unhealthy etcd member that was removed.

Delete the peer secret:

$ oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal

Delete the serving secret:

$ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal

Delete the metrics secret:

$ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal
Check whether a control plane machine set exists by entering the following command:
$ oc -n openshift-machine-api get controlplanemachineset

If the control plane machine set exists, delete and re-create the control plane machine. After this machine is re-created, a new revision is forced and etcd scales up automatically. For more information, see "Replacing an unhealthy etcd member whose machine is not running or whose node is not ready".
If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new control plane by using the same method that was used to originally create it.
Obtain the machine for the unhealthy member.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc get machines -n openshift-machine-api -o wide

In the output, identify the control plane machine for the unhealthy node, ip-10-0-131-183.ec2.internal.
Delete the machine of the unhealthy member:
$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0

Specify the name of the control plane machine for the unhealthy node.
A new machine is automatically provisioned after deleting the machine of the unhealthy member.
Verify that a new machine was created:
$ oc get machines -n openshift-machine-api -o wide

The new machine, clustername-8qw5l-master-3, is being created and is ready once the phase changes from Provisioning to Running.
It might take a few minutes for the new machine to be created. The etcd cluster Operator automatically syncs when the machine or node returns to a healthy state.
Note: Verify the subnet IDs that you are using for your machine sets to ensure that they end up in the correct availability zone.
If the control plane machine set does not exist, delete and re-create the control plane machine. After this machine is re-created, a new revision is forced and etcd scales up automatically.
If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps. Otherwise, you must create the new control plane by using the same method that was used to originally create it.
Obtain the machine for the unhealthy member.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc get machines -n openshift-machine-api -o wide

In the output, identify the control plane machine for the unhealthy node, ip-10-0-131-183.ec2.internal.
Save the machine configuration to a file on your file system:
$ oc get machine clustername-8qw5l-master-0 \
    -n openshift-machine-api \
    -o yaml \
    > new-master-machine.yaml

Specify the name of the control plane machine for the unhealthy node.
Edit the new-master-machine.yaml file that was created in the previous step to assign a new name and remove unnecessary fields.

Remove the entire status section.

Change the metadata.name field to a new name. Keep the same base name as the old machine and change the ending number to the next available number. In this example, clustername-8qw5l-master-0 is changed to clustername-8qw5l-master-3. For example:
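A sketch of the renamed metadata follows; only the name field is shown, and the other fields from the saved file, such as labels, should be kept as they are:

apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
  name: clustername-8qw5l-master-3   # New name; keep the remaining metadata fields from the saved file.
  namespace: openshift-machine-api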
Remove the spec.providerID field:

providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f
Delete the machine of the unhealthy member:
$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0

Specify the name of the control plane machine for the unhealthy node.
Verify that the machine was deleted:
$ oc get machines -n openshift-machine-api -o wide

Create the new machine by using the new-master-machine.yaml file:

$ oc apply -f new-master-machine.yaml

Verify that the new machine was created:

$ oc get machines -n openshift-machine-api -o wide

The new machine, clustername-8qw5l-master-3, is being created and is ready once the phase changes from Provisioning to Running.
It might take a few minutes for the new machine to be created. The etcd cluster Operator automatically syncs when the machine or node returns to a healthy state.
Turn the quorum guard back on by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'

You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command:

$ oc get etcd/cluster -oyaml

If you are using single-node OpenShift, restart the node. Otherwise, you might encounter the following error in the etcd cluster Operator:

Example output

EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again]
Verification
Verify that all etcd pods are running properly.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc -n openshift-etcd get pods -l k8s-app=etcd

Example output

etcd-ip-10-0-133-53.ec2.internal    3/3   Running   0   7m49s
etcd-ip-10-0-164-97.ec2.internal    3/3   Running   0   123m
etcd-ip-10-0-154-204.ec2.internal   3/3   Running   0   124m

If the output from the previous command lists only two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
Verify that there are exactly three etcd members.
Connect to the running etcd container, passing in the name of a pod that was not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal

View the member list:

sh-4.2# etcdctl member list -w table

If the output from the previous command lists more than three etcd members, you must carefully remove the unwanted member.
Warning: Be sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss.
5.2.4.2. Replacing an unhealthy etcd member whose etcd pod is crashlooping
This procedure details the steps to replace an etcd member that is unhealthy because the etcd pod is crashlooping.
Prerequisites
- You have identified the unhealthy etcd member.
- You have verified that the etcd pod is crashlooping.
- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.

Important: It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues.
Procedure
Stop the crashlooping etcd pod.
Debug the node that is crashlooping.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc debug node/ip-10-0-131-183.ec2.internal

Replace this with the name of the unhealthy node.
Change your root directory to /host:

sh-4.2# chroot /host

Move the existing etcd pod file out of the kubelet manifest directory:

sh-4.2# mkdir /var/lib/etcd-backup

sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/

Move the etcd data directory to a different location:

sh-4.2# mv /var/lib/etcd/ /tmp

You can now exit the node shell.
Remove the unhealthy member.
Choose a pod that is not on the affected node.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc -n openshift-etcd get pods -l k8s-app=etcd

Example output

etcd-ip-10-0-131-183.ec2.internal   2/3   Error     7   6h9m
etcd-ip-10-0-164-97.ec2.internal    3/3   Running   0   6h6m
etcd-ip-10-0-154-204.ec2.internal   3/3   Running   0   6h6m

Connect to the running etcd container, passing in the name of a pod that is not on the affected node.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal

View the member list:

sh-4.2# etcdctl member list -w table

Take note of the ID and the name of the unhealthy etcd member, because these values are needed later in the procedure.

Remove the unhealthy etcd member by providing the ID to the etcdctl member remove command:

sh-4.2# etcdctl member remove 62bcf33650a7170a

Example output

Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346

View the member list again and verify that the member was removed:

sh-4.2# etcdctl member list -w table

You can now exit the node shell.
Turn off the quorum guard by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'

This command ensures that you can successfully re-create secrets and roll out the static pods.
Remove the old secrets for the unhealthy etcd member that was removed.
List the secrets for the unhealthy etcd member that was removed.
$ oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal

Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.

There is a peer, serving, and metrics secret as shown in the following output:

Example output

etcd-peer-ip-10-0-131-183.ec2.internal              kubernetes.io/tls   2   47m
etcd-serving-ip-10-0-131-183.ec2.internal           kubernetes.io/tls   2   47m
etcd-serving-metrics-ip-10-0-131-183.ec2.internal   kubernetes.io/tls   2   47m

Delete the secrets for the unhealthy etcd member that was removed.

Delete the peer secret:

$ oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal

Delete the serving secret:

$ oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal

Delete the metrics secret:

$ oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal
Force etcd redeployment.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "single-master-recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
When the etcd cluster Operator performs a redeployment, it ensures that all control plane nodes have a functioning etcd pod.
Turn the quorum guard back on by entering the following command:
$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'

You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command:

$ oc get etcd/cluster -oyaml

If you are using single-node OpenShift, restart the node. Otherwise, you might encounter the following error in the etcd cluster Operator:

Example output

EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again]
Verification
Verify that the new member is available and healthy.
Connect to the running etcd container again.
In a terminal that has access to the cluster as a cluster-admin user, run the following command:
$ oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal

Verify that all members are healthy:

sh-4.2# etcdctl endpoint health

Example output

https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms
https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms
https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms
This procedure details the steps to replace a bare metal etcd member that is unhealthy either because the machine is not running or because the node is not ready.
If you are running installer-provisioned infrastructure or you used the Machine API to create your machines, follow these steps. Otherwise you must create the new control plane node using the same method that was used to originally create it.
Prerequisites
- You have identified the unhealthy bare metal etcd member.
- You have verified that either the machine is not running or the node is not ready.
- You have access to the cluster as a user with the cluster-admin role.
- You have taken an etcd backup.

Important: You must take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues.
Procedure
Verify and remove the unhealthy member.
Choose a pod that is not on the affected node:
In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc -n openshift-etcd get pods -l k8s-app=etcd -o wide

Example output

etcd-openshift-control-plane-0   5/5   Running   11   3h56m   192.168.10.9    openshift-control-plane-0   <none>   <none>
etcd-openshift-control-plane-1   5/5   Running   0    3h54m   192.168.10.10   openshift-control-plane-1   <none>   <none>
etcd-openshift-control-plane-2   5/5   Running   0    3h58m   192.168.10.11   openshift-control-plane-2   <none>   <none>

Connect to the running etcd container, passing in the name of a pod that is not on the affected node:
In a terminal that has access to the cluster as a
cluster-adminuser, run the following command:oc rsh -n openshift-etcd etcd-openshift-control-plane-0
$ oc rsh -n openshift-etcd etcd-openshift-control-plane-0Copy to Clipboard Copied! Toggle word wrap Toggle overflow View the member list:
etcdctl member list -w table
sh-4.2# etcdctl member list -w tableCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Take note of the ID and the name of the unhealthy etcd member, because these values are required later in the procedure. The
etcdctl endpoint healthcommand will list the removed member until the replacement procedure is completed and the new member is added.Remove the unhealthy etcd member by providing the ID to the
etcdctl member removecommand:WarningBe sure to remove the correct etcd member; removing a good etcd member might lead to quorum loss.
etcdctl member remove 7a8197040a5126c8
sh-4.2# etcdctl member remove 7a8197040a5126c8Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Member 7a8197040a5126c8 removed from cluster b23536c33f2cdd1b
Member 7a8197040a5126c8 removed from cluster b23536c33f2cdd1bCopy to Clipboard Copied! Toggle word wrap Toggle overflow View the member list again and verify that the member was removed:
etcdctl member list -w table
sh-4.2# etcdctl member list -w tableCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow You can now exit the node shell.
Important
After you remove the member, the cluster might be unreachable for a short time while the remaining etcd instances reboot.

Turn off the quorum guard by entering the following command:

$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'

This command ensures that you can successfully re-create secrets and roll out the static pods.
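As a quick, optional check (not part of the original procedure), you can read the override back with a JSONPath query to confirm that the patch was applied; the field name matches the patch above:

$ oc get etcd/cluster -o jsonpath='{.spec.unsupportedConfigOverrides}'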
Remove the old secrets for the unhealthy etcd member that was removed by running the following commands.

List the secrets for the unhealthy etcd member that was removed.

$ oc get secrets -n openshift-etcd | grep openshift-control-plane-2

Pass in the name of the unhealthy etcd member that you took note of earlier in this procedure.

There is a peer, serving, and metrics secret, as shown in the following output:

etcd-peer-openshift-control-plane-2              kubernetes.io/tls   2   134m
etcd-serving-metrics-openshift-control-plane-2   kubernetes.io/tls   2   134m
etcd-serving-openshift-control-plane-2           kubernetes.io/tls   2   134m

Delete the secrets for the unhealthy etcd member that was removed.

Delete the peer secret:

$ oc delete secret etcd-peer-openshift-control-plane-2 -n openshift-etcd
secret "etcd-peer-openshift-control-plane-2" deleted

Delete the metrics secret:

$ oc delete secret etcd-serving-metrics-openshift-control-plane-2 -n openshift-etcd
secret "etcd-serving-metrics-openshift-control-plane-2" deleted

Delete the serving secret:

$ oc delete secret etcd-serving-openshift-control-plane-2 -n openshift-etcd
secret "etcd-serving-openshift-control-plane-2" deleted
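If you prefer a single pass, the following hypothetical convenience loop deletes the same three secrets shown above; openshift-control-plane-2 is the example node name used in this procedure, so substitute the name of your removed member:

for prefix in etcd-peer etcd-serving etcd-serving-metrics; do
  # Delete the peer, serving, and metrics secrets for the removed member.
  oc delete secret "${prefix}-openshift-control-plane-2" -n openshift-etcd
done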
Obtain the machine for the unhealthy member.

In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc get machines -n openshift-machine-api -o wide

In the output, identify the control plane machine for the unhealthy node, examplecluster-control-plane-2.
Ensure that the Bare Metal Operator is available by running the following command:

$ oc get clusteroperator baremetal

Example output

NAME        VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
baremetal   4.15.0    True        False         False      3d15h

Remove the old BareMetalHost object by running the following command:

$ oc delete bmh openshift-control-plane-2 -n openshift-machine-api

Example output

baremetalhost.metal3.io "openshift-control-plane-2" deleted

Delete the machine of the unhealthy member by running the following command:

$ oc delete machine -n openshift-machine-api examplecluster-control-plane-2
After you remove the BareMetalHost and Machine objects, the Machine controller automatically deletes the Node object.

If deletion of the machine is delayed for any reason, or the command is obstructed and delayed, you can force deletion by removing the machine object finalizer field.

Important
Do not interrupt machine deletion by pressing Ctrl+C. You must allow the command to proceed to completion. Open a new terminal window to edit and delete the finalizer fields.

A new machine is automatically provisioned after you delete the machine of the unhealthy member.

Edit the machine configuration by running the following command:

$ oc edit machine -n openshift-machine-api examplecluster-control-plane-2

Delete the following fields in the Machine custom resource, and then save the updated file:

finalizers:
- machine.machine.openshift.io

Example output

machine.machine.openshift.io/examplecluster-control-plane-2 edited
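As an alternative to the interactive edit above, the following is a sketch of clearing the finalizers non-interactively with a merge patch; the machine name is the example used in this procedure, so substitute your own:

$ oc patch machine examplecluster-control-plane-2 -n openshift-machine-api \
    --type=merge -p '{"metadata":{"finalizers":null}}'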
Verify that the machine was deleted by running the following command:

$ oc get machines -n openshift-machine-api -o wide

Example output

NAME                             PHASE     TYPE   REGION   ZONE   AGE     NODE                        PROVIDERID                                                                                              STATE
examplecluster-control-plane-0   Running                          3h11m   openshift-control-plane-0   baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e   externally provisioned
examplecluster-control-plane-1   Running                          3h11m   openshift-control-plane-1   baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1   externally provisioned
examplecluster-compute-0         Running                          165m    openshift-compute-0         baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f       provisioned
examplecluster-compute-1         Running                          165m    openshift-compute-1         baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9       provisioned
Verify that the node has been deleted by running the following command:

$ oc get nodes

Create the new BareMetalHost object and the secret to store the BMC credentials, as shown in the sketch after the following notes.

Note
The username and password can be found from the other bare metal host's secrets. The protocol to use in bmc:address can be taken from other bmh objects.

Important
If you reuse the BareMetalHost object definition from an existing control plane host, do not leave the externallyProvisioned field set to true.

Existing control plane BareMetalHost objects may have the externallyProvisioned flag set to true if they were provisioned by the OpenShift Container Platform installation program.
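The original example manifest is not reproduced here; the following is a minimal sketch of creating the BMC credentials secret and the replacement BareMetalHost object in one pass. The object names match the examples in this procedure, but the BMC address, credentials, boot MAC address, and root device hint are placeholders that you must replace with values for your environment:

$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: openshift-control-plane-2-bmc-secret
  namespace: openshift-machine-api
type: Opaque
stringData:
  username: <bmc_username>
  password: <bmc_password>
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: openshift-control-plane-2
  namespace: openshift-machine-api
spec:
  online: true
  externallyProvisioned: false
  bootMACAddress: <boot_mac_address>
  bmc:
    address: <protocol>://<bmc_address>
    credentialsName: openshift-control-plane-2-bmc-secret
    disableCertificateVerification: true
  rootDeviceHints:
    deviceName: <root_device>
EOF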
After the inspection is complete, the BareMetalHost object is created and available to be provisioned.

Verify the creation process by listing the available BareMetalHost objects, for example with oc get bmh -n openshift-machine-api.

Verify that a new machine has been created:
$ oc get machines -n openshift-machine-api -o wide

The new machine, clustername-8qw5l-master-3, is being created and is ready after the phase changes from Provisioning to Running.

It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.
Verify that the bare metal host becomes provisioned and that no errors are reported by running the following command:

$ oc get bmh -n openshift-machine-api

Verify that the new node is added and in a ready state by running this command:

$ oc get nodes
Turn the quorum guard back on by entering the following command:

$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'

You can verify that the unsupportedConfigOverrides section is removed from the object by entering this command:

$ oc get etcd/cluster -oyaml

If you are using single-node OpenShift, restart the node. Otherwise, you might encounter the following error in the etcd cluster Operator:

Example output

EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets "etcd-peer-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-sno-0": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets "etcd-serving-metrics-sno-0": the object has been modified; please apply your changes to the latest version and try again]
Verification

Verify that all etcd pods are running properly.

In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc -n openshift-etcd get pods -l k8s-app=etcd

Example output

etcd-openshift-control-plane-0   5/5   Running   0   105m
etcd-openshift-control-plane-1   5/5   Running   0   107m
etcd-openshift-control-plane-2   5/5   Running   0   103m

If the output from the previous command lists only two pods, you can manually force an etcd redeployment. In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

The forceRedeploymentReason value must be unique, which is why a timestamp is appended.
To verify that there are exactly three etcd members, connect to the running etcd container, passing in the name of a pod that was not on the affected node. In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc rsh -n openshift-etcd etcd-openshift-control-plane-0

View the member list:

sh-4.2# etcdctl member list -w table

Note
If the output from the previous command lists more than three etcd members, you must carefully remove the unwanted member.

Verify that all etcd members are healthy by running the following command:

sh-4.2# etcdctl endpoint health --cluster

Example output

https://192.168.10.10:2379 is healthy: successfully committed proposal: took = 8.973065ms
https://192.168.10.9:2379 is healthy: successfully committed proposal: took = 11.559829ms
https://192.168.10.11:2379 is healthy: successfully committed proposal: took = 11.665203ms

Validate that all nodes are at the latest revision by running the following command:

$ oc get etcd -o=jsonpath='{range.items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

Example output

AllNodesAtLatestRevision
5.3. Disaster recovery

5.3.1. About disaster recovery
The disaster recovery documentation provides information for administrators on how to recover from several disaster situations that might occur with their OpenShift Container Platform cluster. As an administrator, you might need to follow one or more of the following procedures to return your cluster to a working state.
Disaster recovery requires you to have at least one healthy control plane host.
- Restoring to a previous cluster state
This solution handles situations where you want to restore your cluster to a previous state, for example, if an administrator deletes something critical. This also includes situations where you have lost the majority of your control plane hosts, leading to etcd quorum loss and the cluster going offline. As long as you have taken an etcd backup, you can follow this procedure to restore your cluster to a previous state.
If applicable, you might also need to recover from expired control plane certificates.
Warning
Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. Use this procedure only as a last resort.

Before performing a restore, see "About restoring cluster state" for more information about the impact to the cluster.

Note
If you have a majority of your control plane hosts still available and have an etcd quorum, follow the procedure to replace a single unhealthy etcd member instead.
- Recovering from expired control plane certificates
- This solution handles situations where your control plane certificates have expired. For example, if you shut down your cluster before the first certificate rotation, which occurs 24 hours after installation, your certificates will not be rotated and will expire. You can follow this procedure to recover from expired control plane certificates.
5.3.2. Restoring to a previous cluster state
To restore the cluster to a previous state, you must have previously backed up the etcd data by creating a snapshot. You will use this snapshot to restore the cluster state. For more information, see "Backing up etcd data".
5.3.2.1. About restoring cluster state
You can use an etcd backup to restore your cluster to a previous state. This can be used to recover from the following situations:
- The cluster has lost the majority of control plane hosts (quorum loss).
- An administrator has deleted something critical and must restore to recover the cluster.
Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. Use it only as a last resort.
If you are able to retrieve data using the Kubernetes API server, then etcd is available and you should not restore using an etcd backup.
Restoring etcd effectively takes a cluster back in time and all clients will experience a conflicting, parallel history. This can impact the behavior of watching components like kubelets, Kubernetes controller managers, SDN controllers, and persistent volume controllers.
Restoring can also cause Operator churn when the content in etcd does not match the actual content on disk. The Operators for the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, and etcd can get stuck when files on disk conflict with the content in etcd, and resolving these issues can require manual intervention.
In extreme cases, the cluster can lose track of persistent volumes, delete critical workloads that no longer exist, reimage machines, and rewrite CA bundles with expired certificates.
5.3.2.2. Restoring to a previous cluster state
You can use a saved etcd backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts.
If your cluster uses a control plane machine set, see "Recovering a degraded etcd Operator" in "Troubleshooting the control plane machine set" for an etcd recovery procedure.
When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.7.2 cluster must use an etcd backup that was taken from 4.7.2.
Prerequisites

- Access to the cluster as a user with the cluster-admin role through a certificate-based kubeconfig file, like the one that was used during installation.
- A healthy control plane host to use as the recovery host.
- SSH access to control plane hosts.
- A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz.

For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and re-create the other non-recovery control plane machines, one by one.
Procedure
- Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on.
Establish SSH connectivity to each of the control plane nodes, including the recovery host.

kube-apiserver becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended that you establish SSH connectivity to each control plane host in a separate terminal.

Important
If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.
Copy the etcd backup directory to the recovery control plane host.

This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host.
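For example, a minimal sketch of copying the backup from the host where it was taken to the recovery host; <recovery_host> and the local ./backup path are assumptions for illustration:

$ scp -i <ssh-key-path> -r ./backup core@<recovery_host>:/home/core/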
Stop the static pods on any other control plane nodes.

Note
You do not need to stop the static pods on the recovery host.

- Access a control plane host that is not the recovery host.
Move the existing etcd pod file out of the kubelet manifest directory by running:

$ sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp

Verify that the etcd pods are stopped by using:

$ sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard"

If the output of this command is not empty, wait a few minutes and check again.

Move the existing kube-apiserver file out of the kubelet manifest directory by running:

$ sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp

Verify that the kube-apiserver containers are stopped by running:

$ sudo crictl ps | grep kube-apiserver | egrep -v "operator|guard"

If the output of this command is not empty, wait a few minutes and check again.

Move the existing kube-controller-manager file out of the kubelet manifest directory by using:

$ sudo mv -v /etc/kubernetes/manifests/kube-controller-manager-pod.yaml /tmp

Verify that the kube-controller-manager containers are stopped by running:

$ sudo crictl ps | grep kube-controller-manager | egrep -v "operator|guard"

If the output of this command is not empty, wait a few minutes and check again.

Move the existing kube-scheduler file out of the kubelet manifest directory by using:

$ sudo mv -v /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp

Verify that the kube-scheduler containers are stopped by using:

$ sudo crictl ps | grep kube-scheduler | egrep -v "operator|guard"

If the output of this command is not empty, wait a few minutes and check again.

Move the etcd data directory to a different location, as in the following example:

$ sudo mv -v /var/lib/etcd/ /tmp

If the /etc/kubernetes/manifests/keepalived.yaml file exists and the node is deleted, follow these steps:

Move the /etc/kubernetes/manifests/keepalived.yaml file out of the kubelet manifest directory:

$ sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp

Verify that any containers managed by the keepalived daemon are stopped:

$ sudo crictl ps --name keepalived

The output of this command should be empty. If it is not empty, wait a few minutes and check again.

Check whether the control plane has any Virtual IPs (VIPs) assigned to it:

$ ip -o address | egrep '<api_vip>|<ingress_vip>'

For each reported VIP, run the following command to remove it:

$ sudo ip address del <reported_vip> dev <reported_vip_device>
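If several VIPs are assigned, the following hypothetical loop removes every address that matches the API or ingress VIP placeholders used above; review the matched addresses before running it:

ip -o address | awk -v vips='<api_vip>|<ingress_vip>' '$4 ~ vips {print $4, $2}' | \
  while read -r vip dev; do
    # Remove each matching address from its device.
    sudo ip address del "$vip" dev "$dev"
  done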
- Repeat this step on each of the other control plane hosts that are not the recovery host.
- Access the recovery control plane host.

If the keepalived daemon is in use, verify that the recovery control plane node owns the VIP:

$ ip -o address | grep <api_vip>

The address of the VIP is highlighted in the output if it exists. This command returns an empty string if the VIP is not set or is configured incorrectly.

If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables.

Tip
You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
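A minimal sketch of exporting those variables from the cluster Proxy object before you run the restore script; this assumes the proxy is already configured and that its effective settings are published in the object's status fields:

export HTTP_PROXY="$(oc get proxy cluster -o jsonpath='{.status.httpProxy}')"
export HTTPS_PROXY="$(oc get proxy cluster -o jsonpath='{.status.httpsProxy}')"
export NO_PROXY="$(oc get proxy cluster -o jsonpath='{.status.noProxy}')"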
Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory:

$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup

The cluster-restore.sh script must show that the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler pods are stopped and then started at the end of the restore process.

Note
The restore process can cause nodes to enter the NotReady state if the node certificates were updated after the last etcd backup.

Check the nodes to ensure that they are in the Ready state.

Run the following command:

$ oc get nodes -w

It can take several minutes for all nodes to report their state.
If any nodes are in the NotReady state, log in to the nodes and remove all of the PEM files from the /var/lib/kubelet/pki directory on each node. You can SSH into the nodes or use the terminal window in the web console.

$ ssh -i <ssh-key-path> core@<master-hostname>

Sample pki directory

sh-4.4# pwd
/var/lib/kubelet/pki
sh-4.4# ls
kubelet-client-2022-04-28-11-24-09.pem  kubelet-server-2022-04-28-11-24-15.pem
kubelet-client-current.pem              kubelet-server-current.pem
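A minimal sketch of clearing those PEM files on an affected node while keeping a copy in case you need to inspect them later; the backup location under /tmp is an assumption:

sh-4.4# mkdir -p /tmp/kubelet-pki-backup
sh-4.4# mv /var/lib/kubelet/pki/*.pem /tmp/kubelet-pki-backup/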
Restart the kubelet service on all control plane hosts.
From the recovery host, run:
$ sudo systemctl restart kubelet.service

- Repeat this step on all other control plane hosts.

Approve the pending Certificate Signing Requests (CSRs):

Note
Clusters with no worker nodes, such as single-node clusters or clusters consisting of three schedulable control plane nodes, will not have any pending CSRs to approve. You can skip all the commands listed in this step.
Get the list of current CSRs by running:

$ oc get csr

Review the details of a CSR to verify that it is valid by running:

$ oc describe csr <csr_name>

<csr_name> is the name of a CSR from the list of current CSRs.

Approve each valid node-bootstrapper CSR by running:

$ oc adm certificate approve <csr_name>

For user-provisioned installations, approve each valid kubelet service CSR by running:

$ oc adm certificate approve <csr_name>
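If many CSRs are pending, the following hypothetical one-liner approves every CSR that does not yet have a status (that is, every pending request); review the output of oc get csr first so that you only approve requests you expect:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs -r oc adm certificate approve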
Verify that the single member control plane has started successfully.

From the recovery host, verify that the etcd container is running by using:

$ sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard"

Example output

3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0

From the recovery host, verify that the etcd pod is running by using:

$ oc -n openshift-etcd get pods -l k8s-app=etcd

Example output

NAME                                READY   STATUS    RESTARTS   AGE
etcd-ip-10-0-143-125.ec2.internal   1/1     Running   1          2m47s

If the status is Pending, or the output lists more than one running etcd pod, wait a few minutes and check again.
If you are using the OVN-Kubernetes network plugin, you must restart the ovnkube-control-plane pods.

Delete all of the ovnkube-control-plane pods by running:

$ oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-control-plane

Verify that all of the ovnkube-control-plane pods were redeployed by using:

$ oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-control-plane
If you are using the OVN-Kubernetes network plugin, restart the Open Virtual Network (OVN) Kubernetes pods on all the nodes, one by one. Use the following steps to restart the OVN-Kubernetes pods on each node:

Important
Restart the OVN-Kubernetes pods in the following order:

- The recovery control plane host
- The other control plane hosts (if available)
- The other nodes

Note
Validating and mutating admission webhooks can reject pods. If you add any additional webhooks with the failurePolicy set to Fail, they can reject pods and the restoration process can fail. You can avoid this by saving and deleting webhooks while restoring the cluster state. After the cluster state is restored successfully, you can enable the webhooks again.

Alternatively, you can temporarily set the failurePolicy to Ignore while restoring the cluster state. After the cluster state is restored successfully, you can set the failurePolicy back to Fail.
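For example, a hypothetical sketch of temporarily relaxing one validating webhook; <webhook_configuration_name> and the webhook index 0 are placeholders for the webhook that you identified as blocking pod creation, and you should restore the original failurePolicy after the restore completes:

$ oc patch validatingwebhookconfiguration <webhook_configuration_name> \
    --type=json \
    -p='[{"op": "replace", "path": "/webhooks/0/failurePolicy", "value": "Ignore"}]'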
Remove the northbound database (nbdb) and southbound database (sbdb). Access the recovery host and the remaining control plane nodes by using Secure Shell (SSH) and run:

$ sudo rm -f /var/lib/ovn-ic/etc/*.db

Restart the Open vSwitch services. Access the node by using Secure Shell (SSH) and run the following command:

$ sudo systemctl restart ovs-vswitchd ovsdb-server

Delete the ovnkube-node pod on the node by running the following command, replacing <node> with the name of the node that you are restarting:

$ oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>

Check the status of the OVN pods by running the following command:

$ oc get po -n openshift-ovn-kubernetes

If any OVN pods are in the Terminating status, delete the node that is running that OVN pod by running the following command. Replace <node> with the name of the node you are deleting:

$ oc delete node <node>

Use SSH to log in to the OVN pod node with the Terminating status by running the following command:

$ ssh -i <ssh-key-path> core@<node>

Move all PEM files from the /var/lib/kubelet/pki directory by running the following command:

$ sudo mv /var/lib/kubelet/pki/* /tmp

Restart the kubelet service by running the following command:

$ sudo systemctl restart kubelet.service

Return to a terminal that has access to the recovery etcd machines and list the CSRs by running the following command:

$ oc get csr

Example output

NAME         AGE    SIGNERNAME                      REQUESTOR                 CONDITION
csr-<uuid>   8m3s   kubernetes.io/kubelet-serving   system:node:<node_name>   Pending

Approve all new CSRs by running the following command, replacing csr-<uuid> with the name of the CSR:

$ oc adm certificate approve csr-<uuid>

Verify that the node is back by running the following command:

$ oc get nodes

Verify that the ovnkube-node pod is running again with:

$ oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>

Note
It might take several minutes for the pods to restart.
Delete and re-create the other non-recovery control plane machines, one by one. After the machines are re-created, a new revision is forced and etcd automatically scales up.

If you use a user-provisioned bare metal installation, you can re-create a control plane machine by using the same method that you used to originally create it. For more information, see "Installing a user-provisioned cluster on bare metal".

Warning
Do not delete and re-create the machine for the recovery host.

If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps:

Warning
Do not delete and re-create the machine for the recovery host.

For bare metal installations on installer-provisioned infrastructure, control plane machines are not re-created. For more information, see "Replacing a bare-metal control plane node".
Obtain the machine for one of the lost control plane hosts.

In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc get machines -n openshift-machine-api -o wide

In the output, identify the control plane machine for the lost control plane host, ip-10-0-131-183.ec2.internal.

Delete the machine of the lost control plane host by running:

$ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0

Specify the name of the control plane machine for the lost control plane host.

A new machine is automatically provisioned after you delete the machine of the lost control plane host.

Verify that a new machine has been created by running:

$ oc get machines -n openshift-machine-api -o wide

The new machine, clustername-8qw5l-master-3, is being created and is ready after the phase changes from Provisioning to Running.

It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.

- Repeat these steps for each lost control plane host that is not the recovery host.
Turn off the quorum guard by entering:

$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'

This command ensures that you can successfully re-create secrets and roll out the static pods.

In a separate terminal window within the recovery host, export the recovery kubeconfig file by running:

$ export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig

Force etcd redeployment.

In the same terminal window where you exported the recovery kubeconfig file, run:

$ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

The forceRedeploymentReason value must be unique, which is why a timestamp is appended.

When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up.

Turn the quorum guard back on by entering:

$ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'

You can verify that the unsupportedConfigOverrides section is removed from the object by running:

$ oc get etcd/cluster -oyaml

Verify all nodes are updated to the latest revision.

In a terminal that has access to the cluster as a cluster-admin user, run:

$ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

AllNodesAtLatestRevision
3 nodes are at revision 7

In this example, the latest revision number is 7.

If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

After etcd is redeployed, force new rollouts for the control plane. kube-apiserver will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer.

In a terminal that has access to the cluster as a cluster-admin user, run:

Force a new rollout for kube-apiserver:

$ oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

Verify all nodes are updated to the latest revision:

$ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

AllNodesAtLatestRevision
3 nodes are at revision 7

In this example, the latest revision number is 7.

If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

Force a new rollout for the Kubernetes controller manager by running the following command:

$ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

Verify all nodes are updated to the latest revision by running:

$ oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

AllNodesAtLatestRevision
3 nodes are at revision 7

In this example, the latest revision number is 7.

If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

Force a new rollout for the kube-scheduler by running:

$ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

Verify all nodes are updated to the latest revision by using:

$ oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

AllNodesAtLatestRevision
3 nodes are at revision 7

In this example, the latest revision number is 7.

If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
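The three rollouts above follow the same pattern, so a hypothetical convenience loop such as the following issues the same patch to each operand in turn; note that it does not wait for AllNodesAtLatestRevision between operands, so you still need to verify each rollout as described above:

for operand in kubeapiserver kubecontrollermanager kubescheduler; do
  # Force a new rollout for each control plane operand with a unique reason.
  oc patch "${operand}" cluster --type=merge \
    -p '{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}'
done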
Verify that all control plane hosts have started and joined the cluster.

In a terminal that has access to the cluster as a cluster-admin user, run the following command:

$ oc -n openshift-etcd get pods -l k8s-app=etcd

Example output

etcd-ip-10-0-143-125.ec2.internal   2/2   Running   0   9h
etcd-ip-10-0-154-194.ec2.internal   2/2   Running   0   9h
etcd-ip-10-0-173-171.ec2.internal   2/2   Running   0   9h
To ensure that all workloads return to normal operation following a recovery procedure, restart all control plane nodes.

After you complete the previous procedural steps, you might need to wait a few minutes for all services to return to their restored state. For example, authentication by using oc login might not work immediately until the OAuth server pods are restarted.

Consider using the system:admin kubeconfig file for immediate authentication. This method bases its authentication on SSL/TLS client certificates instead of OAuth tokens. You can authenticate with this file by issuing the following command:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

Issue the following command to display your authenticated user name:

$ oc whoami
5.3.2.3. Restoring a cluster manually from an etcd backup
The restore procedure described in the section "Restoring to a previous cluster state":

- Requires the complete re-creation of two control plane nodes, which might be a complex procedure for clusters installed with the UPI installation method, because a UPI installation does not create any Machine or ControlPlaneMachineSet for the control plane nodes.
- Uses the script /usr/local/bin/cluster-restore.sh, which starts a new single-member etcd cluster and then scales it to three members.

In contrast, this procedure:

- Does not require re-creating any control plane nodes.
- Directly starts a three-member etcd cluster.

If the cluster uses a MachineSet for the control plane, it is recommended that you use "Restoring to a previous cluster state" for a simpler etcd recovery procedure.
When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OpenShift Container Platform 4.7.2 cluster must use an etcd backup that was taken from 4.7.2.
Prerequisites

- Access to the cluster as a user with the cluster-admin role; for example, the kubeadmin user.
- SSH access to all control plane hosts, with a host user allowed to become root; for example, the default core host user.
- A backup directory containing both a previous etcd snapshot and the resources for the static pods from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz.
Procedure
Use SSH to connect to each of the control plane nodes.
The Kubernetes API server becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended that you use an SSH connection for each control plane host that you are accessing, in a separate terminal.

Important
If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.

Copy the etcd backup directory to each control plane host.

This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/assets directory of each control plane host. You might need to create the assets directory if it does not exist yet.

Stop the static pods on all the control plane nodes, one host at a time.
Move the existing Kubernetes API Server static pod manifest out of the kubelet manifest directory:

$ mkdir -p /root/manifests-backup
$ mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /root/manifests-backup/

Verify that the Kubernetes API Server containers have stopped with the command:

$ crictl ps | grep kube-apiserver | grep -E -v "operator|guard"

The output of this command should be empty. If it is not empty, wait a few minutes and check again.

If the Kubernetes API Server containers are still running, terminate them manually with the following command:

$ crictl stop <container_id>

Repeat the same steps for kube-controller-manager-pod.yaml, kube-scheduler-pod.yaml and, finally, etcd-pod.yaml.

Stop the kube-controller-manager pod with the following command:

$ mv /etc/kubernetes/manifests/kube-controller-manager-pod.yaml /root/manifests-backup/

Check whether the containers are stopped by using the following command:

$ crictl ps | grep kube-controller-manager | grep -E -v "operator|guard"

Stop the kube-scheduler pod by using the following command:

$ mv /etc/kubernetes/manifests/kube-scheduler-pod.yaml /root/manifests-backup/

Check whether the containers are stopped by using the following command:

$ crictl ps | grep kube-scheduler | grep -E -v "operator|guard"

Stop the etcd pod by using the following command:

$ mv /etc/kubernetes/manifests/etcd-pod.yaml /root/manifests-backup/

Check whether the containers are stopped by using the following command:

$ crictl ps | grep etcd | grep -E -v "operator|guard"
On each control plane host, save the current etcd data by moving it into the backup folder:

$ mkdir /home/core/assets/old-member-data
$ mv /var/lib/etcd/member /home/core/assets/old-member-data

This data will be useful in case the etcd backup restore does not work and the etcd cluster must be restored to the current state.

Find the correct etcd parameters for each control plane host.

The value for <ETCD_NAME> is unique for each control plane host, and it is equal to the value of the ETCD_NAME variable in the manifest file /etc/kubernetes/static-pod-resources/etcd-certs/configmaps/restore-etcd-pod/pod.yaml on the specific control plane host. It can be found with the command:

RESTORE_ETCD_POD_YAML="/etc/kubernetes/static-pod-resources/etcd-certs/configmaps/restore-etcd-pod/pod.yaml"
cat $RESTORE_ETCD_POD_YAML | \
  grep -A 1 $(cat $RESTORE_ETCD_POD_YAML | grep 'export ETCD_NAME' | grep -Eo 'NODE_.+_ETCD_NAME') | \
  grep -Po '(?<=value: ").+(?=")'

The value for <UUID> can be generated on a control plane host with the command:

$ uuidgen

Note
The value for <UUID> must be generated only once. After generating the UUID on one control plane host, do not generate it again on the others. The same UUID will be used in the next steps on all control plane hosts.

The value for ETCD_NODE_PEER_URL should be set like the following example:

https://<IP_CURRENT_HOST>:2380

The correct IP can be found from the <ETCD_NAME> of the specific control plane host, with the command:

$ echo <ETCD_NAME> | \
  sed -E 's/[.-]/_/g' | \
  xargs -I {} grep {} /etc/kubernetes/static-pod-resources/etcd-certs/configmaps/etcd-scripts/etcd.env | \
  grep "IP" | grep -Po '(?<=").+(?=")'

The value for <ETCD_INITIAL_CLUSTER> should be set like the following, where <ETCD_NAME_n> is the <ETCD_NAME> of each control plane host.

Note
The port used must be 2380, not 2379. Port 2379 is used for etcd database management and is configured directly in the etcd start command in the container.

Example output

<ETCD_NAME_0>=<ETCD_NODE_PEER_URL_0>,<ETCD_NAME_1>=<ETCD_NODE_PEER_URL_1>,<ETCD_NAME_2>=<ETCD_NODE_PEER_URL_2>

Specify the ETCD_NODE_PEER_URL values from each control plane host.

The <ETCD_INITIAL_CLUSTER> value remains the same across all control plane hosts. The same value is required in the next steps on every control plane host.
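A minimal sketch of assembling these values into shell variables, assuming you have already looked up <ETCD_NAME> and the peer IP for each control plane host with the commands above; the host names and IP addresses here are illustrative placeholders:

# Illustrative placeholders; substitute the values collected for your hosts.
ETCD_NAME_0="<etcd_name_host_0>";  ETCD_NODE_PEER_URL_0="https://<ip_host_0>:2380"
ETCD_NAME_1="<etcd_name_host_1>";  ETCD_NODE_PEER_URL_1="https://<ip_host_1>:2380"
ETCD_NAME_2="<etcd_name_host_2>";  ETCD_NODE_PEER_URL_2="https://<ip_host_2>:2380"

ETCD_INITIAL_CLUSTER="${ETCD_NAME_0}=${ETCD_NODE_PEER_URL_0},${ETCD_NAME_1}=${ETCD_NODE_PEER_URL_1},${ETCD_NAME_2}=${ETCD_NODE_PEER_URL_2}"
echo "${ETCD_INITIAL_CLUSTER}"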
Regenerate the etcd database from the backup.
Such operation must be executed on each control plane host.
Copy the etcd backup to the /var/lib/etcd directory with the command:

$ cp /home/core/assets/backup/<snapshot_yyyy-mm-dd_hhmmss>.db /var/lib/etcd

Identify the correct etcdctl image before proceeding. Use the following command to retrieve the image from the backup of the pod manifest:

$ jq -r '.spec.containers[]|select(.name=="etcdctl")|.image' /root/manifests-backup/etcd-pod.yaml

Start a container from that image with the /var/lib/etcd directory mounted:

$ podman run --rm -it --entrypoint="/bin/bash" -v /var/lib/etcd:/var/lib/etcd:z <image-hash>

Check that the version of the etcdctl tool is the version of the etcd server where the backup was created:

$ etcdctl version

Run the following command to regenerate the etcd database, using the correct values for the current host.

Note: The quotes are mandatory when regenerating the etcd database.
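The exact restore invocation depends on your environment; the following is a minimal sketch that uses the standard etcdctl snapshot restore flags together with the values collected above. The cluster token openshift-etcd-<UUID> is an assumption, and the data directory matches the /var/lib/etcd/restore-<UUID> path that a later step moves into place:

# Run inside the etcdctl container; all values in angle brackets come from the earlier steps.
$ etcdctl snapshot restore /var/lib/etcd/<snapshot_yyyy-mm-dd_hhmmss>.db \
  --name "<ETCD_NAME>" \
  --initial-cluster "<ETCD_INITIAL_CLUSTER>" \
  --initial-cluster-token "openshift-etcd-<UUID>" \
  --initial-advertise-peer-urls "<ETCD_NODE_PEER_URL>" \
  --data-dir="/var/lib/etcd/restore-<UUID>"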
Record the values printed in the added member logs; for example:

Example output

2022-06-28T19:52:43Z info membership/cluster.go:421 added member {"cluster-id": "c5996b7c11c30d6b", "local-member-id": "0", "added-peer-id": "56cd73b614699e7", "added-peer-peer-urls": ["https://10.0.91.5:2380"], "added-peer-is-learner": false}
2022-06-28T19:52:43Z info membership/cluster.go:421 added member {"cluster-id": "c5996b7c11c30d6b", "local-member-id": "0", "added-peer-id": "1f63d01b31bb9a9e", "added-peer-peer-urls": ["https://10.0.90.221:2380"], "added-peer-is-learner": false}
2022-06-28T19:52:43Z info membership/cluster.go:421 added member {"cluster-id": "c5996b7c11c30d6b", "local-member-id": "0", "added-peer-id": "fdc2725b3b70127c", "added-peer-peer-urls": ["https://10.0.94.214:2380"], "added-peer-is-learner": false}

Exit from the container.

Repeat these steps on the other control plane hosts, checking that the values printed in the added member logs are the same for all control plane hosts.
Move the regenerated etcd database to the default location.

This operation must be executed on each control plane host.
Move the regenerated database (the member folder created by the previous etcdctl snapshot restore command) to the default etcd location /var/lib/etcd:

$ mv /var/lib/etcd/restore-<UUID>/member /var/lib/etcd

Restore the SELinux context for the /var/lib/etcd/member folder in the /var/lib/etcd directory:

$ restorecon -vR /var/lib/etcd/

Remove the leftover files and directories:

$ rm -rf /var/lib/etcd/restore-<UUID>

$ rm /var/lib/etcd/<snapshot_yyyy-mm-dd_hhmmss>.db

Important: When you are finished, the /var/lib/etcd directory must contain only the member folder.

Repeat these steps on the other control plane hosts.
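On each host, a quick verification (a sketch, not part of the original procedure) is to list the directory and confirm that only the member folder remains:

# Expected to print only "member" after the cleanup above.
$ ls -A /var/lib/etcd
member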
Restart the etcd cluster.
- The following steps must be executed on all control plane hosts, but one host at a time.
Move the etcd static pod manifest back to the kubelet manifest directory, in order to make kubelet start the related containers:

$ mv /tmp/etcd-pod.yaml /etc/kubernetes/manifests

Verify that all the etcd containers have started:

$ crictl ps | grep etcd | grep -v operator

Example output

38c814767ad983 f79db5a8799fd2c08960ad9ee22f784b9fbe23babe008e8a3bf68323f004c840 28 seconds ago Running etcd-health-monitor 2 fe4b9c3d6483c
e1646b15207c6 9d28c15860870e85c91d0e36b45f7a6edd3da757b113ec4abb4507df88b17f06 About a minute ago Running etcd-metrics 0 fe4b9c3d6483c
08ba29b1f58a7 9d28c15860870e85c91d0e36b45f7a6edd3da757b113ec4abb4507df88b17f06 About a minute ago Running etcd 0 fe4b9c3d6483c
2ddc9eda16f53 9d28c15860870e85c91d0e36b45f7a6edd3da757b113ec4abb4507df88b17f06 About a minute ago Running etcdctl

If the output of this command is empty, wait a few minutes and check again.
Check the status of the etcd cluster.

On any of the control plane hosts, check the status of the etcd cluster with the following command:

$ crictl exec -it $(crictl ps | grep etcdctl | awk '{print $1}') etcdctl endpoint status -w table

Example output
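The values below are an illustrative sketch only; the endpoint IPs and member IDs reuse the earlier added member example, and the version, size, and raft figures are placeholders. Expect one healthy row per control plane endpoint, with exactly one member reporting IS LEADER as true:

+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.0.91.5:2379   |  56cd73b614699e7 |   3.5.x |  100 MB |      true |      false |         7 |        150 |                150 |        |
| https://10.0.90.221:2379 | 1f63d01b31bb9a9e |   3.5.x |  100 MB |     false |      false |         7 |        150 |                150 |        |
| https://10.0.94.214:2379 | fdc2725b3b70127c |   3.5.x |  100 MB |     false |      false |         7 |        150 |                150 |        |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+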
Restart the other static pods.
The following steps must be executed on all control plane hosts, but one host at a time.
Move the Kubernetes API Server static pod manifest back to the kubelet manifest directory to make kubelet start the related containers with the command:
$ mv /root/manifests-backup/kube-apiserver-pod.yaml /etc/kubernetes/manifests

Verify that all the Kubernetes API Server containers have started:

$ crictl ps | grep kube-apiserver | grep -v operator

Note: If the output of this command is empty, wait a few minutes and check again.
Repeat the same steps for the kube-controller-manager-pod.yaml and kube-scheduler-pod.yaml files.

Restart the kubelets in all nodes using the following command:
$ systemctl restart kubelet

Start the remaining control plane pods using the following command:
$ mv /root/manifests-backup/kube-* /etc/kubernetes/manifests/

Check if the kube-apiserver, kube-scheduler, and kube-controller-manager pods start correctly:

$ crictl ps | grep -E 'kube-(apiserver|scheduler|controller-manager)' | grep -v -E 'operator|guard'

Wipe the OVN databases using the following commands:
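As a rough sketch only, and assuming an OVN-Kubernetes cluster that keeps its local database files under /var/lib/ovn-ic/etc/ (the location varies between releases, so verify the path on your nodes first), wiping the databases amounts to removing those files on every node and then recreating the ovnkube-node pods so that the databases are rebuilt:

# On each node (for example through: oc debug node/<node_name> -- chroot /host).
# The database path is an assumption; confirm it for your release before deleting anything.
$ rm -f /var/lib/ovn-ic/etc/ovn*.db

# Then recreate the OVN node pods so that the databases are regenerated.
$ oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node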
5.3.2.5. Issues and workarounds for restoring a persistent storage state
If your OpenShift Container Platform cluster uses persistent storage of any form, some of the cluster's state is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object. When you restore from an etcd backup, the status of the workloads in OpenShift Container Platform is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated.
The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OpenShift Container Platform cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice-versa.
The following are some example scenarios that produce an out-of-date status:
- A MySQL database is running in a pod backed by a PV object. Restoring OpenShift Container Platform from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume.
- Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OpenShift Container Platform is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start.
- Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to stop working. You might have to manually update the credentials required by those drivers or Operators.
A device is removed or renamed from OpenShift Container Platform nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from the /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist.

To fix this problem, an administrator must:
- Manually remove the PVs with invalid devices.
- Remove symlinks from the respective nodes, as shown in the sketch after this list.
- Delete LocalVolume or LocalVolumeSet objects (see Storage → Configuring persistent storage → Persistent storage using local volumes → Deleting the Local Storage Operator Resources).
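A sketch of the first two cleanup steps, assuming the Local Storage Operator's usual symlink location of /mnt/local-storage/<storage_class_name>/ on the node; the PV, node, storage class, and device names are placeholders:

# Remove the PV whose backing device no longer exists (hypothetical PV name).
$ oc delete pv <local_pv_name>

# Remove the stale symlink on the affected node; the symlink path is an assumption, verify it on your nodes.
$ oc debug node/<node_name> -- chroot /host rm /mnt/local-storage/<storage_class_name>/<device_name>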
5.3.3. Recovering from expired control plane certificates
5.3.3.1. Recovering from expired control plane certificates
The cluster can automatically recover from expired control plane certificates.
However, you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. For user-provisioned installations, you might also need to approve pending kubelet serving CSRs.
Use the following steps to approve the pending CSRs:
Procedure
Get the list of current CSRs:
$ oc get csr

Example output
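The listing is illustrative only; on a cluster recovering from expired certificates you can expect pending node-bootstrapper client CSRs and, for user-provisioned installations, pending kubelet serving CSRs. Names and ages are placeholders:

NAME        AGE   SIGNERNAME                                    REQUESTOR                                                                   CONDITION
csr-2s94x   8m    kubernetes.io/kubelet-serving                 system:node:<node_name>                                                     Pending
csr-4bd6t   8m    kubernetes.io/kubelet-serving                 system:node:<node_name>                                                     Pending
csr-4hl85   13m   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending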
Review the details of a CSR to verify that it is valid:
$ oc describe csr <csr_name> 1

1 <csr_name> is the name of a CSR from the list of current CSRs.
Approve each valid node-bootstrapper CSR:

$ oc adm certificate approve <csr_name>

For user-provisioned installations, approve each valid kubelet serving CSR:

$ oc adm certificate approve <csr_name>
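If many CSRs are pending, a convenience one-liner can approve every pending CSR in a single pass; this is a sketch rather than part of the original procedure, so review the pending list before approving in bulk:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve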
Legal Notice
Copyright © 2025 Red Hat
OpenShift documentation is licensed under the Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0).
Modified versions must remove all Red Hat trademarks.
Portions adapted from https://github.com/kubernetes-incubator/service-catalog/ with modifications by Red Hat.
Red Hat, Red Hat Enterprise Linux, the Red Hat logo, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.