Chapter 4. OADP Application backup and restore
4.1. Introduction to OpenShift API for Data Protection
The OpenShift API for Data Protection (OADP) product safeguards customer applications on OpenShift Container Platform. It offers comprehensive disaster recovery protection, covering OpenShift Container Platform applications, application-related cluster resources, persistent volumes, and internal images. OADP is also capable of backing up both containerized applications and virtual machines (VMs).
However, OADP does not serve as a disaster recovery solution for etcd or OpenShift Container Platform Operators.
OADP support is provided for customer workload namespaces and cluster-scoped resources.
Full cluster backup and restore are not supported.
4.1.1. OpenShift API for Data Protection APIs
OpenShift API for Data Protection (OADP) provides APIs that enable multiple approaches to customizing backups and preventing the inclusion of unnecessary or inappropriate resources.
OADP provides the following APIs:
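For example, backups are requested through the Backup API. The following is a minimal sketch of a Backup custom resource; the name, namespace selection, and label selector are illustrative:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: example-app-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
    - example-app          # back up only this application namespace
  labelSelector:
    matchLabels:
      app: example-app     # optionally filter resources by label
  ttl: 720h0m0s            # how long the backup is retained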
4.1.1.1. Support for OpenShift API for Data Protection
| Version | OCP version | General availability | Full support ends | Maintenance ends | Extended Update Support (EUS) | Extended Update Support Term 2 (EUS Term 2) |
|---|---|---|---|---|---|---|
| 1.4 | | 10 Jul 2024 | Release of 1.5 | Release of 1.6 | 27 Jun 2026; EUS must be on OCP 4.16 | 27 Jun 2027; EUS Term 2 must be on OCP 4.16 |
| 1.3 | | 29 Nov 2023 | 10 Jul 2024 | Release of 1.5 | 31 Oct 2025; EUS must be on OCP 4.14 | 31 Oct 2026; EUS Term 2 must be on OCP 4.14 |
4.1.1.1.1. Unsupported versions of the OADP Operator
| Version | General availability | Full support ended | Maintenance ended |
|---|---|---|---|
| 1.2 | 14 Jun 2023 | 29 Nov 2023 | 10 Jul 2024 |
| 1.1 | 01 Sep 2022 | 14 Jun 2023 | 29 Nov 2023 |
| 1.0 | 09 Feb 2022 | 01 Sep 2022 | 14 Jun 2023 |
For more details about EUS, see Extended Update Support.
For more details about EUS Term 2, see Extended Update Support Term 2.
4.2. OADP release notes
4.2.1. OADP 1.4 release notes
The release notes for OpenShift API for Data Protection (OADP) describe new features and enhancements, deprecated features, product recommendations, known issues, and resolved issues.
For additional information about OADP, see OpenShift API for Data Protection (OADP) FAQs
4.2.1.1. OADP 1.4.7 release notes
OpenShift API for Data Protection (OADP) 1.4.7 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.4.6.
4.2.1.2. OADP 1.4.6 release notes
OpenShift API for Data Protection (OADP) 1.4.6 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.4.5.
4.2.1.3. OADP 1.4.5 release notes
The OpenShift API for Data Protection (OADP) 1.4.5 release notes list new features and resolved issues.
4.2.1.3.1. New features
Collecting logs with the must-gather tool has been improved with a Markdown summary
You can collect logs and information about OpenShift API for Data Protection (OADP) custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. This tool generates a Markdown output file with the collected information, which is located in the clusters directory of the must-gather logs. (OADP-5904)
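For example, a typical invocation looks like the following; the must-gather image tag shown here is illustrative, so use the image documented for your OADP version:

$ oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4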
4.2.1.3.2. Resolved issues
- OADP 1.4.5 fixes the following CVEs
4.2.1.4. OADP 1.4.4 release notes
OpenShift API for Data Protection (OADP) 1.4.4 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.4.3.
4.2.1.4.1. Known issues
Issue with restoring stateful applications
When you restore a stateful application that uses the azurefile-csi storage class, the restore operation remains in the Finalizing phase. (OADP-5508)
4.2.1.5. OADP 1.4.3 release notes
The OpenShift API for Data Protection (OADP) 1.4.3 release notes list the following new feature.
4.2.1.5.1. New features
Notable changes in the kubevirt velero plugin in version 0.7.1
With this release, the kubevirt velero plugin has been updated to version 0.7.1. Notable improvements include the following bug fix and new features:
- Virtual machine instances (VMIs) are no longer ignored from backup when the owner VM is excluded.
- Object graphs now include all extra objects during backup and restore operations.
- Optionally generated labels are now added to new firmware Universally Unique Identifiers (UUIDs) during restore operations.
- Switching VM run strategies during restore operations is now possible.
- Clearing a MAC address by label is now supported.
- The restore-specific checks during the backup operation are now skipped.
- The VirtualMachineClusterInstancetype and VirtualMachineClusterPreference custom resource definitions (CRDs) are now supported.
4.2.1.6. OADP 1.4.2 release notes
The OpenShift API for Data Protection (OADP) 1.4.2 release notes list new features, resolved issues and bugs, and known issues.
4.2.1.6.1. New features
Backing up different volumes in the same namespace by using the VolumePolicy feature is now possible
With this release, Velero provides resource policies to back up different volumes in the same namespace by using the VolumePolicy feature. The VolumePolicy feature supports the skip, snapshot, and fs-backup actions for the different volumes. OADP-1071
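The policies are defined in a ConfigMap that a Backup references through its spec.resourcePolicy field. The following is a sketch of the ConfigMap data; the storage class name and capacity range are illustrative:

version: v1
volumePolicies:
  - conditions:
      storageClass:
        - gp3-csi            # snapshot volumes that use this storage class
    action:
      type: snapshot
  - conditions:
      nfs: {}                # use file system backup for NFS volumes
    action:
      type: fs-backup
  - conditions:
      capacity: "0,10Gi"     # skip volumes in this capacity range
    action:
      type: skip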
File system backup and data mover can now use short-term credentials
File system backup and data mover can now use short-term credentials such as AWS Security Token Service (STS) and Google Cloud WIF. With this support, backup is successfully completed without any PartiallyFailed status. OADP-5095
4.2.1.6.2. Resolved issues
DPA now reports errors if VSL contains an incorrect provider value
Previously, if the provider of a Volume Snapshot Location (VSL) spec was incorrect, the Data Protection Application (DPA) reconciled successfully. With this update, DPA reports errors and requests for a valid provider value. OADP-5044
Data Mover restore is successful irrespective of using different OADP namespaces for backup and restore
Previously, when a backup operation was executed by using OADP installed in one namespace but was restored by using OADP installed in a different namespace, the Data Mover restore failed. With this update, the Data Mover restore is now successful. OADP-5460
SSE-C backup works with the calculated MD5 of the secret key
Previously, backup failed with the following error:
Requests specifying Server Side Encryption with Customer provided keys must provide the client calculated MD5 of the secret key.
With this update, the missing Server-Side Encryption with Customer-Provided Keys (SSE-C) base64 and MD5 hash are now fixed. As a result, SSE-C backup works with the calculated MD5 of the secret key. In addition, incorrect error handling for the customerKey size is also fixed. OADP-5388
For a complete list of all issues resolved in this release, see the list of OADP 1.4.2 resolved issues in Jira.
4.2.1.6.3. Known issues
The nodeSelector spec is not supported for the Data Mover restore action
When a Data Protection Application (DPA) is created with the nodeSelector field set in the nodeAgent parameter, Data Mover restore partially fails instead of completing the restore operation. OADP-5260
The S3 storage does not use proxy environment when TLS skip verify is specified
In the image registry backup, the S3 storage does not use the proxy environment when the insecureSkipTLSVerify parameter is set to true. OADP-3143
Kopia does not delete artifacts after backup expiration
Even after you delete a backup, Kopia does not delete the volume artifacts from ${bucket_name}/kopia/$openshift-adp on the S3 location after the backup expires. For more information, see "About Kopia repository maintenance". OADP-5131
4.2.1.7. OADP 1.4.1 release notes
The OpenShift API for Data Protection (OADP) 1.4.1 release notes list new features, resolved issues and bugs, and known issues.
4.2.1.7.1. New features
New DPA fields to update client qps and burst
You can now change Velero Server Kubernetes API queries per second and burst values by using the new Data Protection Application (DPA) fields. The new DPA fields are spec.configuration.velero.client-qps and spec.configuration.velero.client-burst, which both default to 100. OADP-4076
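For example, a DPA fragment that raises both values; the numbers shown are illustrative:

spec:
  configuration:
    velero:
      client-qps: 300      # Kubernetes API queries per second for the Velero client
      client-burst: 400    # burst limit for the Velero client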
Enabling non-default algorithms with Kopia
With this update, you can now configure the hash, encryption, and splitter algorithms in Kopia to select non-default options to optimize performance for different backup workloads.
To configure these algorithms, set the environment variables of the Velero pod in the podConfig section of the DataProtectionApplication (DPA) configuration. If these variables are not set, or an unsupported algorithm is chosen, Kopia defaults to its standard algorithms. OADP-4640
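The following sketch sets the algorithms through Velero pod environment variables in podConfig. The variable names and the algorithm values are assumptions based on Kopia's configuration options, so verify them against the OADP documentation for your version:

spec:
  configuration:
    velero:
      podConfig:
        env:
          - name: KOPIA_HASHING_ALGORITHM        # assumed variable name
            value: BLAKE3-256
          - name: KOPIA_ENCRYPTION_ALGORITHM     # assumed variable name
            value: CHACHA20-POLY1305-HMAC-SHA256
          - name: KOPIA_SPLITTER_ALGORITHM       # assumed variable name
            value: DYNAMIC-8M-RABINKARP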
4.2.1.7.2. Resolved issues
Restoring a backup without pods is now successful
Previously, restoring a backup without pods and having StorageClass VolumeBindingMode set as WaitForFirstConsumer, resulted in the PartiallyFailed status with an error: fail to patch dynamic PV, err: context deadline exceeded. With this update, patching dynamic PV is skipped and restoring a backup is successful without any PartiallyFailed status. OADP-4231
PodVolumeBackup CR now displays correct message
Previously, the PodVolumeBackup custom resource (CR) generated an incorrect message, which was: get a podvolumebackup with status "InProgress" during the server starting, mark it as "Failed". With this update, the message produced is now:
found a podvolumebackup with status "InProgress" during the server starting, mark it as "Failed".
Overriding imagePullPolicy is now possible with DPA
Previously, OADP set the imagePullPolicy parameter to Always for all images. With this update, OADP checks if each image contains sha256 or sha512 digest, then it sets imagePullPolicy to IfNotPresent; otherwise imagePullPolicy is set to Always. You can now override this policy by using the new spec.containerImagePullPolicy DPA field. OADP-4172
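For example, a DPA fragment that overrides the computed policy; the value shown is illustrative:

spec:
  containerImagePullPolicy: Never   # valid values: Always, IfNotPresent, Never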
OADP Velero can now retry updating the restore status if initial update fails
Previously, OADP Velero failed to update the restored CR status. This left the status at InProgress indefinitely. Components which relied on the backup and restore CR status to determine the completion would fail. With this update, the restore CR status for a restore correctly proceeds to the Completed or Failed status. OADP-3227
Restoring BuildConfig Build from a different cluster is successful without any errors
Previously, when performing a restore of the BuildConfig Build resource from a different cluster, the application generated an error on TLS verification to the internal image registry. The resulting error was failed to verify certificate: x509: certificate signed by unknown authority error. With this update, the restore of the BuildConfig build resources to a different cluster can proceed successfully without generating the failed to verify certificate error. OADP-4692
Restoring an empty PVC is successful
Previously, downloading data failed while restoring an empty persistent volume claim (PVC). It failed with the following error:
data path restore failed: Failed to run kopia restore: Unable to load
snapshot : snapshot not found
With this update, the downloading of data proceeds to the correct conclusion when restoring an empty PVC, and the error message is not generated. OADP-3106
There is no Velero memory leak in CSI and DataMover plugins
Previously, a Velero memory leak was caused by using the CSI and DataMover plugins. When the backup ended, the Velero plugin instance was not deleted and the memory leak consumed memory until an Out of Memory (OOM) condition was generated in the Velero pod. With this update, there is no resulting Velero memory leak when using the CSI and DataMover plugins. OADP-4448
Post-hook operation does not start before the related PVs are released
Previously, due to the asynchronous nature of the Data Mover operation, a post-hook might be attempted before the Data Mover persistent volume claim (PVC) releases the persistent volumes (PVs) of the related pods. This problem would cause the backup to fail with a PartiallyFailed status. With this update, the post-hook operation is not started until the related PVs are released by the Data Mover PVC, eliminating the PartiallyFailed backup status. OADP-3140
Deploying a DPA works as expected in namespaces with more than 37 characters
When you install the OADP Operator in a namespace with more than 37 characters to create a new DPA, labeling the "cloud-credentials" Secret fails and the DPA reports the following error:
The generated label name is too long.
With this update, creating a DPA does not fail in namespaces with more than 37 characters in the name. OADP-3960
Restore is successfully completed by overriding the timeout error
Previously, in a large scale environment, the restore operation would result in a PartiallyFailed status with the error: fail to patch dynamic PV, err: context deadline exceeded. With this update, the resourceTimeout Velero server argument is used to override this timeout error, resulting in a successful restore. OADP-4344
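For example, the timeout can be raised in the DPA; the 10m value is illustrative:

spec:
  configuration:
    velero:
      resourceTimeout: 10m   # extends the timeout Velero uses when waiting on resources such as dynamic PVs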
For a complete list of all issues resolved in this release, see the list of OADP 1.4.1 resolved issues in Jira.
4.2.1.7.3. Known issues
Cassandra application pods enter into the CrashLoopBackoff status after restoring OADP
After OADP restores, the Cassandra application pods might enter the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are in the CrashLoopBackoff state after restoring OADP, for example with the command after this note. The StatefulSet controller then recreates these pods and they run normally. OADP-4407
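A sketch of the workaround; the pod name and namespace are placeholders for your deployment:

$ oc delete pod <cassandra_statefulset_pod> -n <application_namespace>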
Deployment referencing ImageStream is not restored properly leading to corrupted pod and volume contents
During a File System Backup (FSB) restore operation, a Deployment resource referencing an ImageStream is not restored properly. The restored pod that runs the FSB and the post-hook is terminated prematurely.
During the restore operation, the OpenShift Container Platform controller updates the spec.template.spec.containers[0].image field in the Deployment resource with an updated ImageStreamTag hash. The update triggers the rollout of a new pod, terminating the pod on which velero runs the FSB along with the post-hook. For more information about image stream trigger, see Triggering updates on image stream changes.
The workaround for this behavior is a two-step restore process:
Perform a restore excluding the Deployment resources, for example:

$ velero restore create <RESTORE_NAME> \
    --from-backup <BACKUP_NAME> \
    --exclude-resources=deployment.apps

Once the first restore is successful, perform a second restore by including these resources, for example:

$ velero restore create <RESTORE_NAME> \
    --from-backup <BACKUP_NAME> \
    --include-resources=deployment.apps
4.2.1.8. OADP 1.4.0 release notes
The OpenShift API for Data Protection (OADP) 1.4.0 release notes list resolved issues and known issues.
4.2.1.8.1. Resolved issues
Restore works correctly in OpenShift Container Platform 4.16
Previously, while restoring the deleted application namespace, the restore operation partially failed with the resource name may not be empty error in OpenShift Container Platform 4.16. With this update, restore works as expected in OpenShift Container Platform 4.16. OADP-4075
Data Mover backups work properly in the OpenShift Container Platform 4.16 cluster
Previously, Velero was using the earlier version of SDK where the Spec.SourceVolumeMode field did not exist. As a consequence, Data Mover backups failed in the OpenShift Container Platform 4.16 cluster on the external snapshotter with version 4.2. With this update, external snapshotter is upgraded to version 7.0 and later. As a result, backups do not fail in the OpenShift Container Platform 4.16 cluster. OADP-3922
For a complete list of all issues resolved in this release, see the list of OADP 1.4.0 resolved issues in Jira.
4.2.1.8.2. Known issues
Backup fails when checksumAlgorithm is not set for MCG
While performing a backup of any application with Noobaa as the backup location, if the checksumAlgorithm configuration parameter is not set, the backup fails. To work around this problem, an empty value is added for checksumAlgorithm if you do not provide a value in the Backup Storage Location (BSL) configuration. The empty value is only added for BSLs that are created by using the Data Protection Application (DPA) custom resource (CR); this value is not added if BSLs are created by using any other method. OADP-4274
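The following is a sketch of a DPA-managed BSL that sets an empty checksum algorithm explicitly; the bucket, region, and S3 endpoint values are illustrative:

spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>        # illustrative
          prefix: velero
        config:
          profile: "default"
          region: <region>             # illustrative
          s3Url: <s3_endpoint_url>     # MCG S3 endpoint, illustrative
          checksumAlgorithm: ""        # disable checksum calculation
        credential:
          name: cloud-credentials
          key: cloud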
For a complete list of all known issues in this release, see the list of OADP 1.4.0 known issues in Jira.
4.2.1.8.3. Upgrade notes
Always upgrade to the next minor version. Do not skip versions. To update to a later version, upgrade only one channel at a time. For example, to upgrade from OpenShift API for Data Protection (OADP) 1.1 to 1.3, upgrade first to 1.2, and then to 1.3.
4.2.1.8.3.1. Changes from OADP 1.3 to 1.4
The Velero server has been updated from version 1.12 to 1.14. Note that there are no changes in the Data Protection Application (DPA).
This changes the following:
- The velero-plugin-for-csi code is now available in the Velero code, which means an init container is no longer required for the plugin.
- Velero changed the client Burst and QPS defaults from 30 and 20 to 100 and 100, respectively.
- The velero-plugin-for-aws plugin updated the default value of the spec.config.checksumAlgorithm field in BackupStorageLocation objects (BSLs) from "" (no checksum calculation) to the CRC32 algorithm. For more information, see Velero plugins for AWS Backup Storage Location. The checksum algorithm types are known to work only with AWS. Several S3 providers require the md5sum to be disabled by setting the checksum algorithm to "". Confirm md5sum algorithm support and configuration with your storage provider.
  In OADP 1.4, the default value for BSLs created within the DPA for this configuration is "". This default value means that the md5sum is not checked, which is consistent with OADP 1.3. For BSLs created within the DPA, update it by using the spec.backupLocations[].velero.config.checksumAlgorithm field in the DPA. If your BSLs are created outside the DPA, you can update this configuration by using spec.config.checksumAlgorithm in the BSLs.
4.2.1.8.3.2. Backing up the DPA configuration
You must back up your current DataProtectionApplication (DPA) configuration.
Procedure
Save your current DPA configuration by running the following command:
Example command
$ oc get dpa -n openshift-adp -o yaml > dpa.orig.backup
4.2.1.8.3.3. Upgrading the OADP Operator
Use the following procedure when upgrading the OpenShift API for Data Protection (OADP) Operator.
Procedure
- Change your subscription channel for the OADP Operator from stable-1.3 to stable-1.4.
- Wait for the Operator and containers to update and restart.
4.2.1.8.4. Converting DPA to the new version
To upgrade from OADP 1.3 to 1.4, no Data Protection Application (DPA) changes are required.
4.2.1.8.5. Verifying the upgrade
Use the following procedure to verify the upgrade.
Procedure
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp

Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'

Example output

{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}

- Verify the type is set to Reconciled.

Verify the backup storage location and confirm that the PHASE is Available by running the following command:

$ oc get backupstoragelocations.velero.io -n openshift-adp

Example output

NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true
4.2.2. OADP 1.3 release notes
The release notes for OpenShift API for Data Protection (OADP) 1.3 describe new features and enhancements, deprecated features, product recommendations, known issues, and resolved issues.
4.2.2.1. OADP 1.3.8 release notes
OpenShift API for Data Protection (OADP) 1.3.8 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.3.7.
4.2.2.2. OADP 1.3.7 release notes
OpenShift API for Data Protection (OADP) 1.3.7 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.3.6.
The following Common Vulnerabilities and Exposures (CVEs) have been fixed in OADP 1.3.7
4.2.2.2.1. New features
Collecting logs with the must-gather tool has been improved with a Markdown summary
You can collect logs and information about OADP custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. This tool generates a Markdown output file with the collected information, which is located in the must-gather logs clusters directory. OADP-5384
4.2.2.3. OADP 1.3.6 release notes
OpenShift API for Data Protection (OADP) 1.3.6 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.3.5.
4.2.2.4. OADP 1.3.5 release notes
OpenShift API for Data Protection (OADP) 1.3.5 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.3.4.
4.2.2.5. OADP 1.3.4 release notes
The OpenShift API for Data Protection (OADP) 1.3.4 release notes list resolved issues and known issues.
4.2.2.5.1. Resolved issues
The backup spec.resourcepolicy.kind parameter is now case-insensitive
Previously, the backup spec.resourcepolicy.kind parameter was only supported with a lowercase string. With this fix, it is now case-insensitive. OADP-2944
Use olm.maxOpenShiftVersion to prevent cluster upgrade to OCP 4.16 version
The cluster operator-lifecycle-manager operator must not be upgraded between minor OpenShift Container Platform versions. Using the olm.maxOpenShiftVersion parameter prevents upgrading to OpenShift Container Platform 4.16 version when OADP 1.3 is installed. To upgrade to OpenShift Container Platform 4.16 version, upgrade OADP 1.3 on OCP 4.15 version to OADP 1.4. OADP-4803
BSL and VSL are removed from the cluster
Previously, when any Data Protection Application (DPA) was modified to remove the Backup Storage Locations (BSL) or Volume Snapshot Locations (VSL) from the backupLocations or snapshotLocations section, BSL or VSL were not removed from the cluster until the DPA was deleted. With this update, BSL/VSL are removed from the cluster. OADP-3050
DPA reconciles and validates the secret key
Previously, the Data Protection Application (DPA) reconciled successfully on the wrong Volume Snapshot Locations (VSL) secret key name. With this update, DPA validates the secret key name before reconciling on any VSL. OADP-3052
Velero’s cloud credential permissions are now restrictive
Previously, Velero’s cloud credential permissions were mounted with 0644 permissions. As a consequence, anyone apart from the owner and group could read the /credentials/cloud file, making it easier to access sensitive information such as storage access keys. With this update, the permissions of this file are updated to 0640, and this file cannot be accessed by users other than the owner and group.
Warning is displayed when ArgoCD managed namespace is included in the backup
A warning is displayed during the backup operation when ArgoCD and Velero manage the same namespace. OADP-4736
The list of security fixes that are included in this release is documented in the RHSA-2024:9960 advisory.
For a complete list of all issues resolved in this release, see the list of OADP 1.3.4 resolved issues in Jira.
4.2.2.5.2. Known issues
Cassandra application pods enter into the CrashLoopBackoff status after restore
After OADP restores, the Cassandra application pods might enter the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are in an error or CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and they run normally. OADP-3767
defaultVolumesToFSBackup and defaultVolumesToFsBackup flags are not identical
The dpa.spec.configuration.velero.defaultVolumesToFSBackup flag is not identical to the backup.spec.defaultVolumesToFsBackup flag, which can lead to confusion. OADP-3692
PodVolumeRestore works even though the restore is marked as failed
The PodVolumeRestore process continues the data transfer even though the restore is marked as failed. OADP-3039
Velero is unable to skip restoring of initContainer spec
Velero might restore the restore-wait init container even though it is not required. OADP-3759
4.2.2.6. OADP 1.3.3 release notes
The OpenShift API for Data Protection (OADP) 1.3.3 release notes list resolved issues and known issues.
4.2.2.6.1. Resolved issues
OADP fails when its namespace name is longer than 37 characters
When installing the OADP Operator in a namespace with more than 37 characters and when creating a new DPA, labeling the cloud-credentials secret fails. With this release, the issue has been fixed. OADP-4211
OADP image PullPolicy set to Always
In previous versions of OADP, the image PullPolicy of the adp-controller-manager and Velero pods was set to Always. This was problematic in edge scenarios where there could be limited network bandwidth to the registry, resulting in slow recovery time following a pod restart. In OADP 1.3.3, the image PullPolicy of the openshift-adp-controller-manager and Velero pods is set to IfNotPresent.
The list of security fixes that are included in this release is documented in the RHSA-2024:4982 advisory.
For a complete list of all issues resolved in this release, see the list of OADP 1.3.3 resolved issues in Jira.
4.2.2.6.2. Known issues
Cassandra application pods enter into the CrashLoopBackoff status after restoring OADP
After OADP restores, the Cassandra application pods might enter the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are in an error or CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and they run normally.
4.2.2.7. OADP 1.3.2 release notes
The OpenShift API for Data Protection (OADP) 1.3.2 release notes list resolved issues and known issues.
4.2.2.7.1. Resolved issues
DPA fails to reconcile if a valid custom secret is used for BSL
The DPA fails to reconcile if a valid custom secret is used for the Backup Storage Location (BSL) but the default secret is missing. The workaround is to create the required default cloud-credentials secret initially. When the custom secret is re-created, it can be used and checked for its existence.
CVE-2023-45290: oadp-velero-container: Golang net/http: Memory exhaustion in Request.ParseMultipartForm
A flaw was found in the net/http Golang standard library package, which impacts previous versions of OADP. When parsing a multipart form, either explicitly with Request.ParseMultipartForm or implicitly with Request.FormValue, Request.PostFormValue, or Request.FormFile, limits on the total size of the parsed form are not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing long lines to cause the allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion. This flaw has been resolved in OADP 1.3.2.
For more details, see CVE-2023-45290.
CVE-2023-45289: oadp-velero-container: Golang net/http/cookiejar: Incorrect forwarding of sensitive headers and cookies on HTTP redirect
A flaw was found in the net/http/cookiejar Golang standard library package, which impacts previous versions of OADP. When following an HTTP redirect to a domain that is not a subdomain match or exact match of the initial domain, an http.Client does not forward sensitive headers such as Authorization or Cookie. A maliciously crafted HTTP redirect could cause sensitive headers to be unexpectedly forwarded. This flaw has been resolved in OADP 1.3.2.
For more details, see CVE-2023-45289.
CVE-2024-24783: oadp-velero-container: Golang crypto/x509: Verify panics on certificates with an unknown public key algorithm
A flaw was found in the crypto/x509 Golang standard library package, which impacts previous versions of OADP. Verifying a certificate chain that contains a certificate with an unknown public key algorithm causes Certificate.Verify to panic. This affects all crypto/tls clients and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert. The default behavior is for TLS servers to not verify client certificates. This flaw has been resolved in OADP 1.3.2.
For more details, see CVE-2024-24783.
CVE-2024-24784: oadp-velero-plugin-container: Golang net/mail: Comments in display names are incorrectly handled
A flaw was found in the net/mail Golang standard library package, which impacts previous versions of OADP. The ParseAddressList function incorrectly handles comments, text in parentheses, and display names. Because this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers. This flaw has been resolved in OADP 1.3.2.
For more details, see CVE-2024-24784.
CVE-2024-24785: oadp-velero-container: Golang: html/template: errors returned from MarshalJSON methods may break template escaping
A flaw was found in the html/template Golang standard library package, which impacts previous versions of OADP. If errors returned from MarshalJSON methods contain user-controlled data, they may be used to break the contextual auto-escaping behavior of the HTML/template package, allowing subsequent actions to inject unexpected content into the templates. This flaw has been resolved in OADP 1.3.2.
For more details, see CVE-2024-24785.
For a complete list of all issues resolved in this release, see the list of OADP 1.3.2 resolved issues in Jira.
4.2.2.7.2. Known issues
Cassandra application pods enter into the CrashLoopBackoff status after restoring OADP
After OADP restores, the Cassandra application pods might enter the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are in an error or CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and they run normally.
4.2.2.8. OADP 1.3.1 release notes
The OpenShift API for Data Protection (OADP) 1.3.1 release notes list new features and resolved issues.
4.2.2.8.1. New features
OADP 1.3.0 Data Mover is now fully supported
The OADP built-in Data Mover, introduced in OADP 1.3.0 as a Technology Preview, is now fully supported for both containerized and virtual machine workloads.
4.2.2.8.2. Resolved issues
IBM Cloud(R) Object Storage is now supported as a backup storage provider
IBM Cloud® Object Storage is an AWS S3 compatible backup storage provider that was previously unsupported. With this update, IBM Cloud® Object Storage is now supported as an AWS S3 compatible backup storage provider.
OADP operator now correctly reports the missing region error
Previously, when you specified profile:default without specifying the region in the AWS Backup Storage Location (BSL) configuration, the OADP operator failed to report the missing region error on the Data Protection Application (DPA) custom resource (CR). This update corrects validation of DPA BSL specification for AWS. As a result, the OADP Operator reports the missing region error.
Custom labels are not removed from the openshift-adp namespace
Previously, the openshift-adp-controller-manager pod would reset the labels attached to the openshift-adp namespace. This caused synchronization issues for applications requiring custom labels such as Argo CD, leading to improper functionality. With this update, this issue is fixed and custom labels are not removed from the openshift-adp namespace.
OADP must-gather image collects CRDs
Previously, the OADP must-gather image did not collect the custom resource definitions (CRDs) shipped by OADP. Consequently, you could not use the omg tool to extract data in the support shell. With this fix, the must-gather image now collects the CRDs shipped by OADP, and you can use the omg tool to extract data.
Garbage collection has the correct description for the default frequency value
Previously, the garbage-collection-frequency field had a wrong description for the default frequency value. With this update, garbage-collection-frequency has a correct value of one hour for the gc-controller reconciliation default frequency.
FIPS Mode flag is available in OperatorHub
By setting the fips-compliant flag to true, the FIPS mode flag is now added to the OADP Operator listing in OperatorHub. This feature was enabled in OADP 1.3.0 but did not show up in the Red Hat Container catalog as being FIPS enabled.
CSI plugin does not panic with a nil pointer when csiSnapshotTimeout is set to a short duration
Previously, when the csiSnapshotTimeout parameter was set to a short duration, the CSI plugin encountered the following error: plugin panicked: runtime error: invalid memory address or nil pointer dereference.
With this fix, the backup fails with the following error: Timed out awaiting reconciliation of volumesnapshot.
For a complete list of all issues resolved in this release, see the list of OADP 1.3.1 resolved issues in Jira.
4.2.2.8.3. Known issues
Backup and storage restrictions for Single-node OpenShift clusters deployed on IBM Power(R) and IBM Z(R) platforms
Review the following backup and storage related restrictions for Single-node OpenShift clusters that are deployed on IBM Power® and IBM Z® platforms:
- Storage
- Only NFS storage is currently compatible with single-node OpenShift clusters deployed on IBM Power® and IBM Z® platforms.
- Backup
- Only backing up applications with File System Backup, such as kopia or restic, is supported for backup and restore operations.
Cassandra application pods enter the CrashLoopBackoff status after restoring OADP
After OADP restores, the Cassandra application pods might enter the CrashLoopBackoff status. To work around this problem, delete the StatefulSet pods that are in an error or CrashLoopBackoff state after restoring OADP. The StatefulSet controller recreates these pods and they run normally.
4.2.2.9. OADP 1.3.0 release notes
The OpenShift API for Data Protection (OADP) 1.3.0 release notes list new features, resolved issues and bugs, and known issues.
4.2.2.9.1. New features
Velero built-in DataMover is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
OADP 1.3 includes a built-in Data Mover that you can use to move Container Storage Interface (CSI) volume snapshots to a remote object store. The built-in Data Mover allows you to restore stateful applications from the remote object store if a failure, accidental deletion, or corruption of the cluster occurs. It uses Kopia as the uploader mechanism to read the snapshot data and to write to the Unified Repository.
Backing up applications with File System Backup: Kopia or Restic
Velero’s File System Backup (FSB) supports two backup libraries: the Restic path and the Kopia path.
Velero allows users to select between the two paths.
For backup, specify the path during the installation through the uploader-type flag. The valid value is either restic or kopia. This field defaults to kopia if the value is not specified. The selection cannot be changed after the installation.
Google Cloud authentication
Google Cloud authentication enables you to use short-lived Google credentials.
Google Cloud with Workload Identity Federation enables you to use Identity and Access Management (IAM) to grant external identities IAM roles, including the ability to impersonate service accounts. This eliminates the maintenance and security risks associated with service account keys.
AWS ROSA STS authentication
You can use OpenShift API for Data Protection (OADP) with Red Hat OpenShift Service on AWS (ROSA) clusters to back up and restore application data.
ROSA provides seamless integration with a wide range of AWS compute, database, analytics, machine learning, networking, mobile, and other services to speed up the building and delivering of differentiating experiences to your customers.
You can subscribe to the service directly from your AWS account.
After the clusters are created, you can operate your clusters by using the OpenShift web console. The ROSA service also uses OpenShift APIs and command-line interface (CLI) tools.
4.2.2.9.2. Resolved issues
ACM applications were removed and re-created on managed clusters after restore
Applications on managed clusters were deleted and re-created upon restore activation. The OpenShift API for Data Protection (OADP) 1.2 backup and restore process is faster than in older versions. The OADP performance change caused this behavior when restoring ACM resources. Therefore, some resources were restored before others, which caused the removal of the applications from managed clusters. OADP-2686
Restic restore was partially failing due to Pod Security standard
During interoperability testing, OpenShift Container Platform 4.14 had the pod Security mode set to enforce, which caused the pod to be denied. This was caused by the restore order: the pod was created before the security context constraints (SCC) resource, and because the pod violated the podSecurity standard, the pod was denied. When the restore priority field is set on the Velero server, the restore is successful. OADP-2688
Possible pod volume backup failure if Velero is installed in several namespaces
There was a regression in Pod Volume Backup (PVB) functionality when Velero was installed in several namespaces. The PVB controller was not properly limiting itself to PVBs in its own namespace. OADP-2308
OADP Velero plugins returning "received EOF, stopping recv loop" message
In OADP, Velero plugins were started as separate processes. When the Velero operation completes, either successfully or not, they exit. Therefore, if you see a received EOF, stopping recv loop message in the debug logs, it does not mean an error occurred; it means that a plugin operation has completed. OADP-2176
CVE-2023-39325 Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack)
In previous releases of OADP, the HTTP/2 protocol was susceptible to a denial of service attack because request cancellation could reset multiple streams quickly. The server had to set up and tear down the streams while not hitting any server-side limit for the maximum number of active streams per connection. This resulted in a denial of service due to server resource consumption.
For more information, see CVE-2023-39325 (Rapid Reset Attack)
For a complete list of all issues resolved in this release, see the list of OADP 1.3.0 resolved issues in Jira.
4.2.2.9.3. Known issues
CSI plugin errors on nil pointer when csiSnapshotTimeout is set to a short duration
The CSI plugin errors on a nil pointer when csiSnapshotTimeout is set to a short duration. Sometimes it succeeds in completing the snapshot within the short duration, but it often panics with the backup PartiallyFailed with the following error: plugin panicked: runtime error: invalid memory address or nil pointer dereference.
Backup is marked as PartiallyFailed when volumeSnapshotContent CR has an error
If any of the VolumeSnapshotContent CRs have an error related to removing the VolumeSnapshotBeingCreated annotation, it moves the backup to the WaitingForPluginOperationsPartiallyFailed phase. OADP-2871
Performance issues when restoring 30,000 resources for the first time
When restoring 30,000 resources for the first time without an existing-resource-policy, it takes twice as long to restore them as it does during the second and third attempts with an existing-resource-policy set to update. OADP-3071
Post restore hooks might start running before Datadownload operation has released the related PV
Due to the asynchronous nature of the Data Mover operation, a post-hook might be attempted before the related pod's persistent volumes (PVs) are released by the Data Mover persistent volume claim (PVC).
Google Cloud Workload Identity Federation VSL backup PartiallyFailed
The Volume Snapshot Location (VSL) backup ends as PartiallyFailed when Google Cloud workload identity is configured on Google Cloud.
For a complete list of all known issues in this release, see the list of OADP 1.3.0 known issues in Jira.
4.2.2.9.4. Upgrade notes
Always upgrade to the next minor version. Do not skip versions. To update to a later version, upgrade only one channel at a time. For example, to upgrade from OpenShift API for Data Protection (OADP) 1.1 to 1.3, upgrade first to 1.2, and then to 1.3.
4.2.2.9.4.1. Changes from OADP 1.2 to 1.3
The Velero server has been updated from version 1.11 to 1.12.
OpenShift API for Data Protection (OADP) 1.3 uses the Velero built-in Data Mover instead of the VolumeSnapshotMover (VSM) or the Volsync Data Mover.
This changes the following:
- The spec.features.dataMover field and the VSM plugin are not compatible with OADP 1.3, and you must remove the configuration from the DataProtectionApplication (DPA) configuration.
- The Volsync Operator is no longer required for Data Mover functionality, and you can remove it.
- The custom resource definitions volumesnapshotbackups.datamover.oadp.openshift.io and volumesnapshotrestores.datamover.oadp.openshift.io are no longer required, and you can remove them.
- The secrets used for the OADP 1.2 Data Mover are no longer required, and you can remove them.
OADP 1.3 supports Kopia, which is an alternative file system backup tool to Restic.
To employ Kopia, use the new spec.configuration.nodeAgent field as shown in the following example:
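Example (a reconstructed sketch based on the field names in this note; the full DPA is abridged):

spec:
  configuration:
    nodeAgent:
      enable: true          # deploy the node agent daemon set
      uploaderType: kopia   # use Kopia for File System Backup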
The spec.configuration.restic field is deprecated in OADP 1.3 and will be removed in a future version of OADP. To avoid seeing deprecation warnings, remove the restic key and its values, and use the following new syntax:
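Example (a reconstructed sketch showing the nodeAgent equivalent of the deprecated restic configuration):

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: restic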
In a future OADP release, it is planned that the kopia tool will become the default uploaderType value.
4.2.2.9.4.2. Upgrading from OADP 1.2 Technology Preview Data Mover
OpenShift API for Data Protection (OADP) 1.2 Data Mover backups cannot be restored with OADP 1.3. To prevent a gap in the data protection of your applications, complete the following steps before upgrading to OADP 1.3:
Procedure
- If your cluster backups are sufficient and Container Storage Interface (CSI) storage is available, back up the applications with a CSI backup.
If you require off cluster backups:
- Back up the applications with a file system backup that uses the --default-volumes-to-fs-backup=true or backup.spec.defaultVolumesToFsBackup options.
- Back up the applications with your object storage plugins, for example, velero-plugin-for-aws.
The default timeout value for the Restic file system backup is one hour. In OADP 1.3.1 and later, the default timeout value for Restic and Kopia is four hours.
To restore an OADP 1.2 Data Mover backup, you must uninstall OADP, and then install and configure OADP 1.2.
4.2.2.9.4.3. Backing up the DPA configuration
You must back up your current DataProtectionApplication (DPA) configuration.
Procedure
Save your current DPA configuration by running the following command:
Example
$ oc get dpa -n openshift-adp -o yaml > dpa.orig.backup
4.2.2.9.4.4. Upgrading the OADP Operator
Use the following sequence when upgrading the OpenShift API for Data Protection (OADP) Operator.
Procedure
- Change your subscription channel for the OADP Operator from stable-1.2 to stable-1.3.
- Allow time for the Operator and containers to update and restart.
4.2.2.9.4.5. Converting DPA to the new version
If you need to move backups off cluster with the Data Mover, reconfigure the DataProtectionApplication (DPA) manifest as follows.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- In the Provided APIs section, click View more.
- Click Create instance in the DataProtectionApplication box.
- Click YAML View to display the current DPA parameters.
- Update the DPA parameters; example current and updated DPA sketches follow this procedure:
  - Remove the features.dataMover key and values from the DPA.
  - Remove the VolumeSnapshotMover (VSM) plugin.
  - Add the nodeAgent key and values.
- Wait for the DPA to reconcile successfully.
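Example current DPA and updated DPA (reconstructed sketches; the credential name and plugin lists are illustrative):

# Example current DPA (OADP 1.2 style)
spec:
  features:
    dataMover:
      enable: true
      credentialName: dm-credential   # illustrative
  configuration:
    velero:
      defaultPlugins:
        - vsm
        - csi
        - openshift

# Example updated DPA (OADP 1.3 style)
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
    velero:
      defaultPlugins:
        - csi
        - openshift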
4.2.2.9.4.6. Verifying the upgrade
Use the following procedure to verify the upgrade.
Procedure
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp

Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'

Example output

{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}

- Verify the type is set to Reconciled.

Verify the backup storage location and confirm that the PHASE is Available by running the following command:

$ oc get backupstoragelocations.velero.io -n openshift-adp

Example output

NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true
In OADP 1.3, you can start data movement off cluster per backup, as an alternative to configuring it in the DataProtectionApplication (DPA).
Example command
$ velero backup create example-backup --include-namespaces mysql-persistent --snapshot-move-data=true
Example configuration file
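(A reconstructed sketch of the equivalent Backup custom resource; the name and namespace values are illustrative.)

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: example-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
    - mysql-persistent
  snapshotMoveData: true    # move CSI snapshot data to the remote object store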
4.3. OADP performance
4.3.1. OADP recommended network settings
For a supported experience with OpenShift API for Data Protection (OADP), you should have a stable and resilient network across OpenShift Container Platform nodes, S3 storage, and supported cloud environments that meet OpenShift Container Platform network requirement recommendations.
To ensure successful backup and restore operations for deployments with remote S3 buckets located off cluster with suboptimal data paths, your network settings should meet the following minimum requirements in such conditions:
- Bandwidth (network upload speed to object storage): Greater than 2 Mbps for small backups and 10-100 Mbps depending on the data volume for larger backups.
- Packet loss: 1%
- Packet corruption: 1%
- Latency: 100ms
Ensure that your OpenShift Container Platform network performs optimally and meets OpenShift Container Platform network requirements.
Although Red Hat provides support for standard backup and restore failures, it does not provide support for failures caused by network settings that do not meet the recommended thresholds.
4.4. OADP features and plugins
OpenShift API for Data Protection (OADP) features provide options for backing up and restoring applications.
The default plugins enable Velero to integrate with certain cloud providers and to back up and restore OpenShift Container Platform resources.
4.4.1. OADP features
OpenShift API for Data Protection (OADP) supports the following features:
- Backup
You can use OADP to back up all applications on OpenShift Container Platform, or you can filter the resources by type, namespace, or label.
OADP backs up Kubernetes objects and internal images by saving them as an archive file on object storage. OADP backs up persistent volumes (PVs) by creating snapshots with the native cloud snapshot API or with the Container Storage Interface (CSI). For cloud providers that do not support snapshots, OADP backs up resources and PV data with Restic.
Note: You must exclude Operators from the backup of an application for backup and restore to succeed.
- Restore
You can restore resources and PVs from a backup. You can restore all objects in a backup or filter the objects by namespace, PV, or label.
Note: You must exclude Operators from the backup of an application for backup and restore to succeed.
- Schedule
- You can schedule backups at specified intervals.
- Hooks
- You can use hooks to run commands in a container on a pod, for example, fsfreeze to freeze a file system. You can configure a hook to run before or after a backup or restore. Restore hooks can run in an init container or in the application container. A sketch of a backup hook follows this list.
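The following is a sketch of a backup hook defined in a Backup custom resource; the namespace, container name, and file system path are illustrative:

spec:
  hooks:
    resources:
      - name: fsfreeze-hook
        includedNamespaces:
          - example-app
        pre:
          - exec:
              container: app
              command: ["/sbin/fsfreeze", "--freeze", "/var/lib/data"]
        post:
          - exec:
              container: app
              command: ["/sbin/fsfreeze", "--unfreeze", "/var/lib/data"]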
4.4.2. OADP plugins
The OpenShift API for Data Protection (OADP) provides default Velero plugins that are integrated with storage providers to support backup and snapshot operations. You can create custom plugins based on the Velero plugins.
OADP also provides plugins for OpenShift Container Platform resource backups, OpenShift Virtualization resource backups, and Container Storage Interface (CSI) snapshots.
| OADP plugin | Function | Storage location |
|---|---|---|
| aws | Backs up and restores Kubernetes objects. | AWS S3 |
| | Backs up and restores volumes with snapshots. | AWS EBS |
| azure | Backs up and restores Kubernetes objects. | Microsoft Azure Blob storage |
| | Backs up and restores volumes with snapshots. | Microsoft Azure Managed Disks |
| gcp | Backs up and restores Kubernetes objects. | Google Cloud Storage |
| | Backs up and restores volumes with snapshots. | Google Compute Engine Disks |
| openshift | Backs up and restores OpenShift Container Platform resources. [1] | Object store |
| kubevirt | Backs up and restores OpenShift Virtualization resources. [2] | Object store |
| csi | Backs up and restores volumes with CSI snapshots. [3] | Cloud storage that supports CSI snapshots |
| vsm | VolumeSnapshotMover relocates snapshots from the cluster into an object store to be used during a restore process to recover stateful applications, in situations such as cluster deletion. [4] | Object store |
- Mandatory.
- Virtual machine disks are backed up with CSI snapshots or Restic.
- The csi plugin uses the Kubernetes CSI snapshot API.
  - OADP 1.1 or later uses snapshot.storage.k8s.io/v1.
  - OADP 1.0 uses snapshot.storage.k8s.io/v1beta1.
- OADP 1.2 only.
4.4.3. About OADP Velero plugins
You can configure two types of plugins when you install Velero:
- Default cloud provider plugins
- Custom plugins
Both types of plugin are optional, but most users configure at least one cloud provider plugin.
4.4.3.1. Default Velero cloud provider plugins
You can install any of the following default Velero cloud provider plugins when you configure the oadp_v1alpha1_dpa.yaml file during deployment:
- aws (Amazon Web Services)
- gcp (Google Cloud)
- azure (Microsoft Azure)
- openshift (OpenShift Velero plugin)
- csi (Container Storage Interface)
- kubevirt (KubeVirt)
You specify the desired default plugins in the oadp_v1alpha1_dpa.yaml file during deployment.
Example file
The following .yaml file installs the openshift, aws, azure, and gcp plugins:
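A minimal sketch of such a manifest follows; the metadata values are illustrative placeholders rather than output from a tested configuration:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws
      - azure
      - gcp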
4.4.3.2. Custom Velero plugins
You can install a custom Velero plugin by specifying the plugin image and name when you configure the oadp_v1alpha1_dpa.yaml file during deployment.
You specify the desired custom plugins in the oadp_v1alpha1_dpa.yaml file during deployment.
Example file
The following .yaml file installs the default openshift, azure, and gcp plugins and a custom plugin that has the name custom-plugin-example and the image quay.io/example-repo/custom-velero-plugin:
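A minimal sketch of such a manifest follows; apart from the plugin name and image given above, the remaining values are illustrative placeholders:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - azure
      - gcp
      customPlugins:
      - name: custom-plugin-example
        image: quay.io/example-repo/custom-velero-plugin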
4.4.3.3. Velero plugins returning "received EOF, stopping recv loop" message
Velero plugins are started as separate processes. After the Velero operation has completed, either successfully or not, they exit. Receiving a received EOF, stopping recv loop message in the debug logs indicates that a plugin operation has completed. It does not mean that an error has occurred.
4.4.4. Supported architectures for OADP
OpenShift API for Data Protection (OADP) supports the following architectures:
- AMD64
- ARM64
- PPC64le
- s390x
OADP 1.2.0 and later versions support the ARM64 architecture.
4.4.5. OADP support for IBM Power and IBM Z
OpenShift API for Data Protection (OADP) is platform neutral. The information that follows relates only to IBM Power® and to IBM Z®.
- OADP 1.1.7 was tested successfully against OpenShift Container Platform 4.11 for both IBM Power® and IBM Z®. The sections that follow give testing and support information for OADP 1.1.7 in terms of backup locations for these systems.
- OADP 1.2.3 was tested successfully against OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15 for both IBM Power® and IBM Z®. The sections that follow give testing and support information for OADP 1.2.3 in terms of backup locations for these systems.
- OADP 1.3.8 was tested successfully against OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15 for both IBM Power® and IBM Z®. The sections that follow give testing and support information for OADP 1.3.8 in terms of backup locations for these systems.
- OADP 1.4.7 was tested successfully against OpenShift Container Platform 4.14, 4.15, and 4.16 for both IBM Power® and IBM Z®. The sections that follow give testing and support information for OADP 1.4.7 in terms of backup locations for these systems.
4.4.5.1. OADP support for target backup locations using IBM Power
- IBM Power® running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.3.8 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power® with OpenShift Container Platform 4.13, 4.14, and 4.15, and OADP 1.3.8 against all S3 backup location targets, including those that are not AWS.
- IBM Power® running with OpenShift Container Platform 4.14, 4.15, and 4.16, and OADP 1.4.7 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power® with OpenShift Container Platform 4.14, 4.15, and 4.16, and OADP 1.4.7 against all S3 backup location targets, including those that are not AWS.
4.4.5.2. OADP testing and support for target backup locations using IBM Z
- IBM Z® running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.3.8 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z® with OpenShift Container Platform 4.13, 4.14, and 4.15, and OADP 1.3.8 against all S3 backup location targets, including those that are not AWS.
- IBM Z® running with OpenShift Container Platform 4.14, 4.15, and 4.16, and OADP 1.4.7 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z® with OpenShift Container Platform 4.14, 4.15, and 4.16, and OADP 1.4.7 against all S3 backup location targets, including those that are not AWS.
4.4.5.2.1. Known issue of OADP using IBM Power(R) and IBM Z(R) platforms
- Currently, there are backup method restrictions for Single-node OpenShift clusters deployed on IBM Power® and IBM Z® platforms. Only NFS storage is currently compatible with Single-node OpenShift clusters on these platforms. In addition, only the File System Backup (FSB) methods such as Kopia and Restic are supported for backup and restore operations. There is currently no workaround for this issue.
4.4.6. OADP plugins known issues
The following section describes known issues in OpenShift API for Data Protection (OADP) plugins:
4.4.6.1. Velero plugin panics during imagestream backups due to a missing secret
When the backup and the Backup Storage Location (BSL) are managed outside the scope of the Data Protection Application (DPA), the OADP controller, that is, the DPA reconciliation, does not create the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret.
When the backup is run, the OpenShift Velero plugin panics on the imagestream backup, with the following panic error:
2024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item"
backup=openshift-adp/<backup name> error="error executing custom action (groupResource=imagestreams.image.openshift.io,
namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked:
runtime error: index out of range with length 1, stack trace: goroutine 94…
4.4.6.1.1. Workaround to avoid the panic error
To avoid the Velero plugin panic error, perform the following steps:
Label the custom BSL with the relevant label:
$ oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl
After the BSL is labeled, wait until the DPA reconciles.
Note: You can force the reconciliation by making any minor change to the DPA itself.
When the DPA reconciles, confirm that the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret has been created and that the correct registry data has been populated into it:
$ oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'
4.4.6.2. OpenShift ADP Controller segmentation fault
If you configure a DPA with both cloudstorage and restic enabled, the openshift-adp-controller-manager pod crashes and restarts indefinitely until the pod fails with a crash loop segmentation fault.
You can have either velero or cloudstorage defined, because they are mutually exclusive fields.
- If you have both velero and cloudstorage defined, the openshift-adp-controller-manager fails.
- If you have neither velero nor cloudstorage defined, the openshift-adp-controller-manager fails.
For more information about this issue, see OADP-1054.
4.4.6.2.1. OpenShift ADP Controller segmentation fault workaround
You must define either velero or cloudstorage when you configure a DPA. If you define both APIs in your DPA, the openshift-adp-controller-manager pod fails with a crash loop segmentation fault.
4.4.7. OADP and FIPS
Federal Information Processing Standards (FIPS) are a set of computer security standards developed by the United States federal government in line with the Federal Information Security Management Act (FISMA).
OpenShift API for Data Protection (OADP) has been tested and works on FIPS-enabled OpenShift Container Platform clusters.
4.5. OADP use cases
4.5.1. Backup using OpenShift API for Data Protection and Red Hat OpenShift Data Foundation (ODF)
Following is a use case for using OADP and ODF to back up an application.
4.5.1.1. Backing up an application using OADP and ODF
In this use case, you back up an application by using OADP and store the backup in an object storage provided by Red Hat OpenShift Data Foundation (ODF).
- You create an object bucket claim (OBC) to configure the backup storage location. You use ODF to configure an Amazon S3-compatible object storage bucket. ODF provides Multicloud Object Gateway (NooBaa MCG) and Ceph Object Gateway, also known as RADOS Gateway (RGW), object storage services. In this use case, you use NooBaa MCG as the backup storage location.
- You use the NooBaa MCG service with OADP by using the aws provider plugin.
- You configure the Data Protection Application (DPA) with the backup storage location (BSL).
- You create a backup custom resource (CR) and specify the application namespace to back up.
- You create and verify the backup.
Prerequisites
- You installed the OADP Operator.
- You installed the ODF Operator.
- You have an application with a database running in a separate namespace.
Procedure
Create an OBC manifest file to request a NooBaa MCG bucket as shown in the following example:
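A minimal sketch of such an OBC manifest follows; the storage class name assumes the default NooBaa object bucket class provided by ODF and might differ in your environment:
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: test-obc
  namespace: openshift-adp
spec:
  storageClassName: openshift-storage.noobaa.io
  generateBucketName: test-backup-bucket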
where:
test-obc - Specifies the name of the object bucket claim.
test-backup-bucket - Specifies the name of the bucket.
Create the OBC by running the following command:
$ oc create -f <obc_file_name>
where:
<obc_file_name> - Specifies the file name of the object bucket claim manifest.
When you create an OBC, ODF creates a secret and a config map with the same name as the object bucket claim. The secret has the bucket credentials, and the config map has information to access the bucket. To get the bucket name and bucket host from the generated config map, run the following command:
$ oc extract --to=- cm/test-obc
test-obc is the name of the OBC.
Example output
To get the bucket credentials from the generated secret, run the following command:
$ oc extract --to=- secret/test-obc
Example output
# AWS_ACCESS_KEY_ID
ebYR....xLNMc
# AWS_SECRET_ACCESS_KEY
YXf...+NaCkdyC3QPym
Get the public URL for the S3 endpoint from the s3 route in the openshift-storage namespace by running the following command:
$ oc get route s3 -n openshift-storage
Create a cloud-credentials file with the object bucket credentials as shown in the following example:
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
Create the cloud-credentials secret with the cloud-credentials file content by running the following command:
$ oc create secret generic \
    cloud-credentials \
    -n openshift-adp \
    --from-file cloud=cloud-credentials
Configure the Data Protection Application (DPA) as shown in the following example:
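A minimal sketch of such a DPA manifest follows; the name, prefix, and region values are illustrative placeholders, and you should adapt them to your environment:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
      - aws
      - openshift
      - csi
      defaultSnapshotMoveData: true
    nodeAgent:
      enable: true
      uploaderType: kopia
  backupLocations:
  - velero:
      provider: aws
      default: true
      credential:
        key: cloud
        name: cloud-credentials
      objectStorage:
        bucket: <bucket_name>
        prefix: oadp
      config:
        profile: "default"
        region: noobaa
        s3Url: https://s3.openshift-storage.svc
        s3ForcePathStyle: "true"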
where:
defaultSnapshotMoveData - Set to true to use the OADP Data Mover to enable movement of Container Storage Interface (CSI) snapshots to a remote object storage.
s3Url - Specifies the S3 URL of ODF storage.
<bucket_name> - Specifies the bucket name.
Create the DPA by running the following command:
$ oc apply -f <dpa_filename>
Verify that the DPA is created successfully by running the following command. In the example output, you can see that the status object has the type field set to Reconciled, which means that the DPA is successfully created.
$ oc get dpa -o yaml
Example output
Verify that the backup storage location (BSL) is available by running the following command:
$ oc get backupstoragelocations.velero.io -n openshift-adp
Example output
NAME           PHASE       LAST VALIDATED   AGE   DEFAULT
dpa-sample-1   Available   3s               15s   true
Configure a backup CR as shown in the following example:
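A minimal sketch of such a backup CR follows; the name test-backup matches the verification step later in this procedure:
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
  - <application_namespace>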
where:
<application_namespace> - Specifies the namespace for the application to back up.
Create the backup CR by running the following command:
$ oc apply -f <backup_cr_filename>
Verification
Verify that the backup object is in the Completed phase by running the following command. For more details, see the example output.
$ oc describe backup test-backup -n openshift-adp
Example output
4.5.2. OpenShift API for Data Protection (OADP) restore use case
Following is a use case for using OADP to restore a backup to a different namespace.
4.5.2.1. Restoring an application to a different namespace using OADP
Restore a backup of an application by using OADP to a new target namespace, test-restore-application. To restore a backup, you create a restore custom resource (CR) as shown in the following example. In the restore CR, the source namespace refers to the application namespace that you included in the backup. You then verify the restore by changing your project to the new restored namespace and verifying the resources.
Prerequisites
- You installed the OADP Operator.
- You have the backup of an application to be restored.
Procedure
Create a restore CR as shown in the following example:
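A minimal sketch of such a restore CR follows; the field values mirror the callouts that are described after the example:
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: test-restore
  namespace: openshift-adp
spec:
  backupName: <backup_name>
  restorePVs: true
  namespaceMapping:
    <application_namespace>: test-restore-application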
where:
test-restore - Specifies the name of the restore CR.
<backup_name> - Specifies the name of the backup.
<application_namespace> - Specifies the target namespace to restore to. namespaceMapping maps the source application namespace to the target application namespace. test-restore-application is the name of the target namespace where you want to restore the backup.
Apply the restore CR by running the following command:
$ oc apply -f <restore_cr_filename>
Verification
Verify that the restore is in the Completed phase by running the following command:
$ oc describe restores.velero.io <restore_name> -n openshift-adp
Change to the restored namespace test-restore-application by running the following command:
$ oc project test-restore-application
Verify the restored resources such as persistent volume claim (pvc), service (svc), deployment, secret, and config map by running the following command:
$ oc get pvc,svc,deployment,secret,configmap
Example output
4.5.3. Including a self-signed CA certificate during backup
You can include a self-signed Certificate Authority (CA) certificate in the Data Protection Application (DPA) and then back up an application. You store the backup in a NooBaa bucket provided by Red Hat OpenShift Data Foundation (ODF).
4.5.3.1. Backing up an application and its self-signed CA certificate
The s3.openshift-storage.svc service, provided by ODF, uses a Transport Layer Security (TLS) certificate that is signed with the self-signed service CA.
To prevent a certificate signed by unknown authority error, you must include the self-signed CA certificate in the backup storage location (BSL) section of the DataProtectionApplication custom resource (CR). For this situation, you must complete the following tasks:
- Request a NooBaa bucket by creating an object bucket claim (OBC).
- Extract the bucket details.
- Include a self-signed CA certificate in the DataProtectionApplication CR.
- Back up an application.
Prerequisites
- You installed the OADP Operator.
- You installed the ODF Operator.
- You have an application with a database running in a separate namespace.
Procedure
Create an OBC manifest to request a NooBaa bucket as shown in the following example:
where:
test-obc - Specifies the name of the object bucket claim.
test-backup-bucket - Specifies the name of the bucket.
Create the OBC by running the following command:
$ oc create -f <obc_file_name>
When you create an OBC, ODF creates a secret and a ConfigMap with the same name as the object bucket claim. The secret object contains the bucket credentials, and the ConfigMap object contains information to access the bucket. To get the bucket name and bucket host from the generated config map, run the following command:
$ oc extract --to=- cm/test-obc
test-obc is the name of the OBC.
Example output
To get the bucket credentials from the secret object, run the following command:
$ oc extract --to=- secret/test-obc
Example output
# AWS_ACCESS_KEY_ID
ebYR....xLNMc
# AWS_SECRET_ACCESS_KEY
YXf...+NaCkdyC3QPym
Create a cloud-credentials file with the object bucket credentials by using the following example configuration:
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
Create the cloud-credentials secret with the cloud-credentials file content by running the following command:
$ oc create secret generic \
    cloud-credentials \
    -n openshift-adp \
    --from-file cloud=cloud-credentials
Extract the service CA certificate from the openshift-service-ca.crt config map by running the following command. Ensure that you encode the certificate in Base64 format and note the value to use in the next step.
$ oc get cm/openshift-service-ca.crt \
    -o jsonpath='{.data.service-ca\.crt}' | base64 -w0; echo
Example output
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... ....gpwOHMwaG9CRmk5a3....FLS0tLS0K
Configure the DataProtectionApplication CR manifest file with the bucket name and CA certificate as shown in the following example:
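A minimal sketch of such a DPA manifest follows; the name, prefix, and region values are illustrative placeholders, and the caCert field carries the Base64-encoded certificate from the previous step:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
      - aws
      - openshift
      - csi
  backupLocations:
  - velero:
      provider: aws
      default: true
      credential:
        key: cloud
        name: cloud-credentials
      objectStorage:
        bucket: <bucket_name>
        prefix: oadp
        caCert: <ca_cert>
      config:
        profile: "default"
        region: noobaa
        s3Url: https://s3.openshift-storage.svc
        s3ForcePathStyle: "true"
        insecureSkipTLSVerify: "false"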
where:
insecureSkipTLSVerify - Specifies whether SSL/TLS security is enabled. If set to true, SSL/TLS security is disabled. If set to false, SSL/TLS security is enabled.
<bucket_name> - Specifies the name of the bucket extracted in an earlier step.
<ca_cert> - Specifies the Base64-encoded certificate from the previous step.
Create the DataProtectionApplication CR by running the following command:
$ oc apply -f <dpa_filename>
Verify that the DataProtectionApplication CR is created successfully by running the following command:
$ oc get dpa -o yaml
Example output
Verify that the backup storage location (BSL) is available by running the following command:
$ oc get backupstoragelocations.velero.io -n openshift-adp
Example output
NAME           PHASE       LAST VALIDATED   AGE   DEFAULT
dpa-sample-1   Available   3s               15s   true
Configure the Backup CR by using the following example:
where:
<application_namespace> - Specifies the namespace for the application to back up.
Create the Backup CR by running the following command:
$ oc apply -f <backup_cr_filename>
Verification
Verify that the Backup object is in the Completed phase by running the following command:
$ oc describe backup test-backup -n openshift-adp
Example output
4.5.4. Using the legacy-aws Velero plugin
If you are using an AWS S3-compatible backup storage location, you might get a SignatureDoesNotMatch error while backing up your application. This error occurs because some backup storage locations still use the older versions of the S3 APIs, which are incompatible with the newer AWS SDK for Go V2. To resolve this issue, you can use the legacy-aws Velero plugin in the DataProtectionApplication custom resource (CR). The legacy-aws Velero plugin uses the older AWS SDK for Go V1, which is compatible with the legacy S3 APIs, ensuring successful backups.
4.5.4.1. Using the legacy-aws Velero plugin in the DataProtectionApplication CR
In the following use case, you configure the DataProtectionApplication CR with the legacy-aws Velero plugin and then back up an application.
Depending on the backup storage location you choose, you can use either the legacy-aws or the aws plugin in your DataProtectionApplication CR. If you use both of the plugins in the DataProtectionApplication CR, the following error occurs: aws and legacy-aws can not be both specified in DPA spec.configuration.velero.defaultPlugins.
Prerequisites
- You have installed the OADP Operator.
- You have configured an AWS S3-compatible object storage as a backup location.
- You have an application with a database running in a separate namespace.
Procedure
Configure the DataProtectionApplication CR to use the legacy-aws Velero plugin as shown in the following example:
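A minimal sketch of such a DPA manifest follows; apart from the legacy-aws plugin, the bucket, region, and URL values are illustrative placeholders for your S3-compatible storage:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
      - legacy-aws
      - openshift
      - csi
  backupLocations:
  - velero:
      provider: aws
      default: true
      credential:
        key: cloud
        name: cloud-credentials
      objectStorage:
        bucket: <bucket_name>
        prefix: velero
      config:
        profile: "default"
        region: <region>
        s3Url: <s3_url>
        s3ForcePathStyle: "true"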
where:
legacy-aws - Specifies to use the legacy-aws plugin.
<bucket_name> - Specifies the bucket name.
Create the DataProtectionApplication CR by running the following command:
$ oc apply -f <dpa_filename>
Verify that the DataProtectionApplication CR is created successfully by running the following command. In the example output, you can see that the status object has the type field set to Reconciled and the status field set to "True". That status indicates that the DataProtectionApplication CR is successfully created.
$ oc get dpa -o yaml
Example output
Verify that the backup storage location (BSL) is available by running the following command:
$ oc get backupstoragelocations.velero.io -n openshift-adp
You should see an output similar to the following example:
NAME           PHASE       LAST VALIDATED   AGE   DEFAULT
dpa-sample-1   Available   3s               15s   true
Configure a Backup CR as shown in the following example:
where:
<application_namespace> - Specifies the namespace for the application to back up.
Create the Backup CR by running the following command:
$ oc apply -f <backup_cr_filename>
Verification
Verify that the backup object is in the Completed phase by running the following command. For more details, see the example output.
$ oc describe backups.velero.io test-backup -n openshift-adp
Example output
4.6. Installing OADP
4.6.1. About installing OADP
As a cluster administrator, you install the OpenShift API for Data Protection (OADP) by installing the OADP Operator. The OADP Operator installs Velero 1.14.
Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.
To back up Kubernetes resources and internal images, you must have object storage as a backup location, such as one of the following storage types:
- Amazon Web Services
- Microsoft Azure
- Google Cloud
- Multicloud Object Gateway
- IBM Cloud® Object Storage S3
- AWS S3 compatible object storage, such as Multicloud Object Gateway or MinIO
You can configure multiple backup storage locations within the same namespace for each individual OADP deployment.
Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa.
For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications.
The CloudStorage API, which automates the creation of a bucket for object storage, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
The CloudStorage API is a Technology Preview feature when you use a CloudStorage object and want OADP to use the CloudStorage API to automatically create an S3 bucket for use as a BackupStorageLocation.
The CloudStorage API supports manually creating a BackupStorageLocation object by specifying an existing S3 bucket. The CloudStorage API that creates an S3 bucket automatically is currently only enabled for AWS S3 storage.
You can back up persistent volumes (PVs) by using snapshots or a File System Backup (FSB).
To back up PVs with snapshots, you must have a cloud provider that supports either a native snapshot API or Container Storage Interface (CSI) snapshots, such as one of the following cloud providers:
- Amazon Web Services
- Microsoft Azure
- Google Cloud
- CSI snapshot-enabled cloud provider, such as OpenShift Data Foundation
If you want to use CSI backup on OCP 4.11 and later, install OADP 1.1.x.
OADP 1.0.x does not support CSI backup on OCP 4.11 and later. OADP 1.0.x includes Velero 1.7.x and expects the API group snapshot.storage.k8s.io/v1beta1, which is not present on OCP 4.11 and later.
If your cloud provider does not support snapshots, or if your storage is NFS, you can back up applications with File System Backup (FSB) by using Kopia or Restic on object storage. For more information, see Backing up applications with File System Backup: Kopia or Restic.
You create a default Secret and then you install the Data Protection Application.
4.6.1.1. AWS S3 compatible backup storage providers
OADP works with many S3-compatible object storage providers. Several object storage providers are certified and tested with every release of OADP. Various S3 providers are known to work with OADP but are not specifically tested and certified. These providers will be supported on a best-effort basis. Additionally, there are a few S3 object storage providers with known issues and limitations that are listed in this documentation.
Red Hat will provide support for OADP on any S3-compatible storage, but support will stop if the S3 endpoint is determined to be the root cause of an issue.
4.6.1.1.1. Certified backup storage providers
The following AWS S3 compatible object storage providers are fully supported by OADP through the AWS plugin for use as backup storage locations:
- MinIO
- Multicloud Object Gateway (MCG)
- Amazon Web Services (AWS) S3
- IBM Cloud® Object Storage S3
- Ceph RADOS Gateway (Ceph Object Gateway)
- Red Hat Container Storage
- Red Hat OpenShift Data Foundation
- NetApp ONTAP S3 Object Storage
Google Cloud and Microsoft Azure have their own Velero object store plugins.
4.6.1.1.2. Unsupported backup storage providers
The following AWS S3 compatible object storage providers are known to work with Velero through the AWS plugin for use as backup storage locations; however, they are unsupported and have not been tested by Red Hat:
- Oracle Cloud
- DigitalOcean
- NooBaa, unless installed using Multicloud Object Gateway (MCG)
- Tencent Cloud
- Quobyte
- Cloudian HyperStore
Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa.
For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications.
4.6.1.1.3. Backup storage providers with known limitations
The following AWS S3 compatible object storage providers are known to work with Velero through the AWS plugin with a limited feature set:
- Swift - Works as a backup storage location, but is not compatible with Restic for filesystem-based volume backup and restore.
4.6.1.2. Configuring Multicloud Object Gateway (MCG) for disaster recovery on OpenShift Data Foundation
If you use cluster storage for your MCG bucket backupStorageLocation on OpenShift Data Foundation, configure MCG as an external object store.
Failure to configure MCG as an external object store might lead to backups not being available.
Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa.
For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications.
Procedure
- Configure MCG as an external object store as described in Adding storage resources for hybrid or Multicloud.
4.6.1.3. About OADP update channels
When you install an OADP Operator, you choose an update channel. This channel determines which upgrades to the OADP Operator and to Velero you receive. You can switch channels at any time.
The following update channels are available:
- The stable channel is now deprecated. The stable channel contains the patches (z-stream updates) of the OADP ClusterServiceVersion for OADP.v1.1.z and older versions from OADP.v1.0.z.
- The stable-1.0 channel is deprecated and is not supported.
- The stable-1.1 channel is deprecated and is not supported.
- The stable-1.2 channel is deprecated and is not supported.
- The stable-1.3 channel contains OADP.v1.3.z, the most recent OADP 1.3 ClusterServiceVersion.
- The stable-1.4 channel contains OADP.v1.4.z, the most recent OADP 1.4 ClusterServiceVersion.
For more information, see OpenShift Operator Life Cycles.
Which update channel is right for you?
- The stable channel is now deprecated. If you are already using the stable channel, you will continue to get updates from OADP.v1.1.z.
- Choose the stable-1.y update channel to install OADP 1.y and to continue receiving patches for it. If you choose this channel, you will receive all z-stream patches for version 1.y.z.
When must you switch update channels?
- If you have OADP 1.y installed, and you want to receive patches only for that y-stream, you must switch from the stable update channel to the stable-1.y update channel. You will then receive all z-stream patches for version 1.y.z.
- If you have OADP 1.0 installed, want to upgrade to OADP 1.1, and then receive patches only for OADP 1.1, you must switch from the stable-1.0 update channel to the stable-1.1 update channel. You will then receive all z-stream patches for version 1.1.z.
- If you have OADP 1.y installed, with y greater than 0, and want to switch to OADP 1.0, you must uninstall your OADP Operator and then reinstall it using the stable-1.0 update channel. You will then receive all z-stream patches for version 1.0.z.
You cannot switch from OADP 1.y to OADP 1.0 by switching update channels. You must uninstall the Operator and then reinstall it.
4.6.1.4. Installation of OADP on multiple namespaces
You can install OpenShift API for Data Protection into multiple namespaces on the same cluster so that multiple project owners can manage their own OADP instance. This use case has been validated with File System Backup (FSB) and Container Storage Interface (CSI).
You install each instance of OADP as specified by the per-platform procedures contained in this document with the following additional requirements:
- All deployments of OADP on the same cluster must be the same version, for example, 1.4.0. Installing different versions of OADP on the same cluster is not supported.
- Each individual deployment of OADP must have a unique set of credentials and at least one BackupStorageLocation configuration. You can also use multiple BackupStorageLocation configurations within the same namespace.
- By default, each OADP deployment has cluster-level access across namespaces. OpenShift Container Platform administrators need to carefully review potential impacts, such as not backing up and restoring to and from the same namespace concurrently.
4.6.1.5. OADP support for backup data immutability
Starting with OADP 1.4, you can store OADP backups in an AWS S3 bucket with enabled versioning. The versioning support is only for AWS S3 buckets and not for S3-compatible buckets.
See the following list for specific cloud provider limitations:
- AWS S3 service supports backups because an S3 object lock applies only to versioned buckets. You can still update the object data for the new version. However, when backups are deleted, old versions of the objects are not deleted.
- OADP backups are not supported and might not work as expected when you enable immutability on Azure Storage Blob.
- Google Cloud storage policy only supports bucket-level immutability. Therefore, it is not feasible to implement it in the Google Cloud environment.
Depending on your storage provider, the immutability feature is referred to by different names:
- S3 object lock
- Object retention
- Bucket versioning
- Write Once Read Many (WORM) buckets
The primary reason for the absence of support for other S3-compatible object storage is that OADP initially saves the state of a backup as finalizing and then verifies whether any asynchronous operations are in progress.
4.6.1.6. Velero CPU and memory requirements based on collected data
The following recommendations are based on observations of performance made in the scale and performance lab. The backup and restore resources can be impacted by the type of plugin, the amount of resources required by that backup or restore, and the respective data contained in the persistent volumes (PVs) related to those resources.
4.6.1.6.1. CPU and memory requirement for configurations
| Configuration types | [1] Average usage | [2] Large usage | resourceTimeouts |
|---|---|---|---|
| CSI | Velero: CPU- Request 200m, Limits 1000m Memory - Request 256Mi, Limits 1024Mi | Velero: CPU- Request 200m, Limits 2000m Memory- Request 256Mi, Limits 2048Mi | N/A |
| Restic | [3] Restic: CPU- Request 1000m, Limits 2000m Memory - Request 16Gi, Limits 32Gi | [4] Restic: CPU - Request 2000m, Limits 8000m Memory - Request 16Gi, Limits 40Gi | 900m |
| [5] Data Mover | N/A | N/A | 10m - average usage 60m - large usage |
- Average usage - use these settings for most usage situations.
- Large usage - use these settings for large usage situations, such as a large PV (500GB Usage), multiple namespaces (100+), or many pods within a single namespace (2000 pods+), and for optimal performance for backup and restore involving large datasets.
- Restic resource usage corresponds to the amount and type of data. For example, many small files or large amounts of data can cause Restic to use large amounts of resources. The Velero documentation references 500m as a supplied default; for most of our testing, we found a 200m request suitable with a 1000m limit. As cited in the Velero documentation, exact CPU and memory usage depends on the scale of files and directories, in addition to environmental limitations.
- Increasing the CPU has a significant impact on improving backup and restore times.
- Data Mover - Data Mover default resourceTimeout is 10m. Our tests show that for restoring a large PV (500GB usage), it is required to increase the resourceTimeout to 60m.
The resource requirements listed throughout the guide are for average usage only. For large usage, adjust the settings as described in the table above.
4.6.1.6.2. NodeAgent CPU for large usage
Testing shows that increasing NodeAgent CPU can significantly improve backup and restore times when using OpenShift API for Data Protection (OADP).
You can tune your OpenShift Container Platform environment based on your performance analysis and preference. Use CPU limits in the workloads when you use Kopia for file system backups.
If you do not use CPU limits on the pods, the pods can use excess CPU when it is available. If you specify CPU limits, the pods might be throttled if they exceed their limits. Therefore, the use of CPU limits on the pods is considered an anti-pattern.
Ensure that you are accurately specifying CPU requests so that pods can take advantage of excess CPU. Resource allocation is guaranteed based on CPU requests rather than CPU limits.
Testing showed that running Kopia with 20 cores and 32 Gi memory supported backup and restore operations of over 100 GB of data, multiple namespaces, or over 2000 pods in a single namespace. Testing detected no CPU limiting or memory saturation with these resource specifications.
In some environments, you might need to adjust Ceph MDS pod resources to avoid pod restarts, which occur when default settings cause resource saturation.
For more information about how to set the pod resources limit in Ceph MDS pods, see Changing the CPU and memory resources on the rook-ceph pods.
4.6.2. Installing the OADP Operator
Install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.14 by using Operator Lifecycle Manager (OLM).
The OADP Operator installs Velero 1.14.
4.6.2.1. Installing the OADP Operator
Install the OADP Operator by using the OpenShift Container Platform web console.
Prerequisites
You must be logged in as a user with cluster-admin privileges.
Procedure
- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Use the Filter by keyword field to find the OADP Operator.
- Select the OADP Operator and click Install.
- Click Install to install the Operator in the openshift-adp project.
- Click Operators → Installed Operators to verify the installation.
4.6.2.2. OADP-Velero-OpenShift Container Platform version relationship
Review the version relationship between OADP, Velero, and OpenShift Container Platform to decide compatible version combinations. This helps you select the appropriate OADP version for your cluster environment.
4.7. Configuring OADP with AWS S3 compatible storage
4.7.1. Configuring the OpenShift API for Data Protection with AWS S3 compatible storage
You install the OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) S3 compatible storage by installing the OADP Operator. The Operator installs Velero 1.14.
Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.
You configure AWS for Velero, create a default Secret, and then install the Data Protection Application. For more details, see Installing the OADP Operator.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details.
4.7.1.1. About Amazon Simple Storage Service, Identity and Access Management, and GovCloud
Review Amazon Simple Storage Service (S3), Identity and Access Management (IAM), and AWS GovCloud requirements to configure backup storage with appropriate security controls. This helps you meet federal data security requirements and use correct endpoints.
AWS S3 is a storage solution of Amazon for the internet. As an authorized user, you can use this service to store and retrieve any amount of data whenever you want, from anywhere on the web.
You securely control access to Amazon S3 and other Amazon services by using the AWS Identity and Access Management (IAM) web service.
You can use IAM to manage permissions that control which AWS resources users can access. You use IAM to both authenticate, or verify that a user is who they claim to be, and to authorize, or grant permissions to use resources.
AWS GovCloud (US) is an Amazon storage solution developed to meet the stringent and specific data security requirements of the United States Federal Government. AWS GovCloud (US) works the same as Amazon S3 except for the following:
- You cannot copy the contents of an Amazon S3 bucket in the AWS GovCloud (US) regions directly to or from another AWS region.
If you use Amazon S3 policies, use the AWS GovCloud (US) Amazon Resource Name (ARN) identifier to unambiguously specify a resource across all of AWS, such as in IAM policies, Amazon S3 bucket names, and API calls.
In AWS GovCloud (US) regions, ARNs have an identifier that is different from the one in other standard AWS regions, arn:aws-us-gov. If you need to specify the US-West or US-East region, use one of the following ARNs:
  - For US-West, use us-gov-west-1.
  - For US-East, use us-gov-east-1.
- For all other standard regions, ARNs begin with arn:aws.
- In AWS GovCloud (US) regions, use the endpoints listed in the AWS GovCloud (US-East) and AWS GovCloud (US-West) rows of the "Amazon S3 endpoints" table on Amazon Simple Storage Service endpoints and quotas. If you are processing export-controlled data, use one of the SSL/TLS endpoints. If you have FIPS requirements, use a FIPS 140-2 endpoint such as https://s3-fips.us-gov-west-1.amazonaws.com or https://s3-fips.us-gov-east-1.amazonaws.com.
- To find the other AWS-imposed restrictions, see How Amazon Simple Storage Service Differs for AWS GovCloud (US).
4.7.1.2. Configuring Amazon Web Services
Configure Amazon Web Services (AWS) S3 storage and Identity and Access Management (IAM) credentials for backup storage with OADP. This provides the necessary permissions and storage infrastructure for data protection operations.
Prerequisites
- You must have the AWS CLI installed.
Procedure
Set the BUCKET variable:
$ BUCKET=<your_bucket>
Set the REGION variable:
$ REGION=<your_region>
Create an AWS S3 bucket:
$ aws s3api create-bucket \
    --bucket $BUCKET \
    --region $REGION \
    --create-bucket-configuration LocationConstraint=$REGION
where:
LocationConstraint - Specifies the bucket configuration location constraint. us-east-1 does not support a LocationConstraint. If your region is us-east-1, omit --create-bucket-configuration LocationConstraint=$REGION.
Create an IAM user:
$ aws iam create-user --user-name velero
where:
velero - Specifies the user name. If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster.
Create a velero-policy.json file:
Attach the policies to give the velero user the minimum necessary permissions:
$ aws iam put-user-policy \
    --user-name velero \
    --policy-name velero \
    --policy-document file://velero-policy.json
Create an access key for the velero user:
$ aws iam create-access-key --user-name velero
Create a credentials-velero file:
$ cat << EOF > ./credentials-velero
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
EOF
You use the credentials-velero file to create a Secret object for AWS before you install the Data Protection Application.
4.7.1.3. About backup and snapshot locations and their secrets
Review backup location, snapshot location, and secret configuration requirements for the DataProtectionApplication custom resource (CR). This helps you understand storage options and credential management for data protection operations.
4.7.1.3.1. Backup locations
You can specify one of the following AWS S3-compatible object storage solutions as a backup location:
- Multicloud Object Gateway (MCG)
- Red Hat Container Storage
- Ceph RADOS Gateway; also known as Ceph Object Gateway
- Red Hat OpenShift Data Foundation
- MinIO
Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage.
4.7.1.3.2. Snapshot locations
If you use your cloud provider’s native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location.
If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver.
If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage.
4.7.1.3.3. Secrets
If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret.
If the backup and snapshot locations use different credentials, you create two secret objects:
- Custom Secret for the backup location, which you specify in the DataProtectionApplication CR.
- Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR.
The Data Protection Application requires a default Secret. Otherwise, the installation will fail.
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file.
4.7.1.3.4. Creating a default Secret
You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location.
The default name of the Secret is cloud-credentials.
The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.
If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.
Prerequisites
- Your object storage and cloud storage, if any, must use the same credentials.
- You must configure object storage for Velero.
Procedure
Create a credentials-velero file for the backup storage location in the appropriate format for your cloud provider. See the following example:
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
Create a Secret custom resource (CR) with the default name:
$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
4.7.1.3.5. Creating profiles for different credentials
If your backup and snapshot locations use different credentials, you create separate profiles in the credentials-velero file.
Then, you create a Secret object and specify the profiles in the DataProtectionApplication custom resource (CR).
Procedure
Create a credentials-velero file with separate profiles for the backup and snapshot locations, as in the following example:
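A minimal sketch of such a file follows; the profile names backupStorage and volumeSnapshot are illustrative and must match the profile values that you reference in the DPA:
[backupStorage]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>

[volumeSnapshot]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>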
Create a Secret object with the credentials-velero file:
$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
Add the profiles to the DataProtectionApplication CR, as in the following example:
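A minimal sketch of such a DPA follows; the profile names correspond to the profiles in the credentials-velero file shown earlier, and the bucket and region values are placeholders:
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  backupLocations:
  - velero:
      provider: aws
      default: true
      credential:
        key: cloud
        name: cloud-credentials
      objectStorage:
        bucket: <bucket_name>
        prefix: velero
      config:
        region: <region>
        profile: "backupStorage"
  snapshotLocations:
  - velero:
      provider: aws
      config:
        region: <region>
        profile: "volumeSnapshot"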
4.7.1.3.6. Creating an OADP SSE-C encryption key for additional data security
Configure server-side encryption with customer-provided keys (SSE-C) to add an additional layer of encryption for backup data stored in Amazon Web Services (AWS) S3. This protects backup data if AWS credentials become exposed.
Amazon Web Services (AWS) S3 applies server-side encryption with AWS S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3.
OpenShift API for Data Protection (OADP) encrypts data by using SSL/TLS, HTTPS, and the velero-repo-credentials secret when transferring the data from a cluster to storage. To protect backup data in case of lost or stolen AWS credentials, apply an additional layer of encryption.
The velero-plugin-for-aws plugin provides several additional encryption methods. You should review its configuration options and consider implementing additional encryption.
You can store your own encryption keys by using server-side encryption with customer-provided keys (SSE-C). This feature provides additional security if your AWS credentials become exposed.
Be sure to store cryptographic keys in a secure and safe manner. Encrypted data and backups cannot be recovered if you do not have the encryption key.
Prerequisites
To make OADP mount a secret that contains your SSE-C key to the Velero pod at /credentials, use the default secret name for AWS, cloud-credentials, and leave at least one of the following fields empty:

- dpa.spec.backupLocations[].velero.credential
- dpa.spec.snapshotLocations[].velero.credential

This is a workaround for a known issue: https://issues.redhat.com/browse/OADP-3971.

The following procedure contains an example of a spec.backupLocations block that does not specify credentials. This example triggers an OADP secret mounting.

If you need the backup location to have credentials with a different name than cloud-credentials, you must add a snapshot location, such as the one in the following example, that does not contain a credential name. Because the example does not contain a credential name, the snapshot location uses cloud-credentials as its secret for taking snapshots.
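A minimal sketch of such a configuration, assuming AWS and an illustrative custom secret name for the backup location:

spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        credential:
          key: cloud
          name: <custom_secret_name>
        objectStorage:
          bucket: <bucket_name>
          prefix: velero
        config:
          region: <region>
          profile: "default"
  snapshotLocations:
    - velero:
        provider: aws
        config:
          region: <region>
          profile: "default"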
Procedure
Create an SSE-C encryption key:
Generate a random number and save it as a file named sse.key by running the following command:

$ dd if=/dev/urandom bs=1 count=32 > sse.key
Create an OpenShift Container Platform secret:
If you are initially installing and configuring OADP, create the AWS credential and encryption key secret at the same time by running the following command:
$ oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=<path>/openshift_aws_credentials,customer-key=<path>/sse.key

If you are updating an existing installation, edit the values of the cloud-credential secret block of the DataProtectionApplication CR manifest.

Edit the value of the customerKeyEncryptionFile attribute in the backupLocations block of the DataProtectionApplication CR manifest, as in the sketch that follows the warning.

Warning
You must restart the Velero pod to remount the secret credentials properly on an existing installation.
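A minimal sketch of the customerKeyEncryptionFile setting, assuming the key is mounted from the cloud-credentials secret at /credentials/customer-key:

spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        config:
          customerKeyEncryptionFile: /credentials/customer-key
          profile: "default"
          region: <region>
        credential:
          key: cloud
          name: cloud-credentials
        objectStorage:
          bucket: <bucket_name>
          prefix: velero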
The installation is complete, and you can back up and restore OpenShift Container Platform resources. The data saved in AWS S3 storage is encrypted with the new key, and you cannot download it from the AWS S3 console or API without the additional encryption key.
Verification
To verify that you cannot download the encrypted files without the inclusion of an additional key, create a test file, upload it, and then try to download it.
Create a test file by running the following command:
echo "encrypt me please" > test.txt
$ echo "encrypt me please" > test.txtCopy to Clipboard Copied! Toggle word wrap Toggle overflow Upload the test file by running the following command:
Try to download the file. In either the Amazon web console or the terminal, run the following command:
$ s3cmd get s3://<bucket>/test.txt test.txt

The download fails because the file is encrypted with an additional key.
Download the file with the additional encryption key by running the following command:
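For example, using the AWS CLI with the same sse.key file; the bucket name is a placeholder:

$ aws s3api get-object \
  --bucket <bucket> \
  --key test.txt \
  --sse-customer-key fileb://sse.key \
  --sse-customer-algorithm AES256 \
  downloaded.txt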
Read the file contents by running the following command:
$ cat downloaded.txt

Example output

encrypt me please
4.7.1.3.6.1. Downloading a file with an SSE-C encryption key for files backed up by Velero
When you are verifying an SSE-C encryption key, you can also download the file with the additional encryption key for files that were backed up with Velero.
Procedure
Download the file with the additional encryption key for files backed up by Velero by running the following command:
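For example, using the AWS CLI to fetch an object from the Velero backup prefix; the object key is a placeholder for a file that Velero wrote to the bucket:

$ aws s3api get-object \
  --bucket <bucket> \
  --key velero/backups/<backup_name>/<backup_name>.tar.gz \
  --sse-customer-key fileb://sse.key \
  --sse-customer-algorithm AES256 \
  downloaded.tar.gz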
4.7.1.4. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
- If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.
- If the backup and snapshot locations use different credentials, you must create a Secret with the default name, cloud-credentials, which contains separate profiles for the backup and snapshot location credentials.

Note
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
Click YAML View and update the parameters of the DataProtectionApplication manifest, as in the sketch that follows the parameter descriptions, where:

namespace - Specifies the default namespace for OADP, which is openshift-adp. The namespace is a variable and is configurable.
openshift - Specifies that the openshift plugin is mandatory.
resourceTimeout - Specifies how many minutes to wait for several Velero resources, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability, before a timeout occurs. The default is 10m.
nodeAgent - Specifies the administrative agent that routes the administrative requests to servers.
enable - Set this value to true if you want to enable nodeAgent and perform File System Backup.
uploaderType - Specifies the uploader type. Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the built-in Data Mover, you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR.
nodeSelector - Specifies the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes.
bucket - Specifies a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
prefix - Specifies a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
s3ForcePathStyle - Specifies whether to force path-style URLs for S3 objects (Boolean). Not required for AWS S3. Required only for S3-compatible storage.
s3Url - Specifies the URL of the object store that you are using to store backups. Not required for AWS S3. Required only for S3-compatible storage.
name - Specifies the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials, is used. If you specify a custom name, the custom name is used for the backup location.
snapshotLocations - Specifies a snapshot location, unless you use CSI snapshots or a File System Backup (FSB) to back up PVs.
region - Specifies that the snapshot location must be in the same region as the PVs.
name - Specifies the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials, is used. If you specify a custom name, the custom name is used for the snapshot location. If your backup and snapshot locations use different credentials, create separate profiles in the credentials-velero file.
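The parameter descriptions above correspond to a manifest like the following minimal sketch, which assumes AWS S3 and placeholder values:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws
      resourceTimeout: 10m
    nodeAgent:
      enable: true
      uploaderType: kopia
  backupLocations:
    - name: default
      velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>
        config:
          region: <region>
          profile: "default"
          s3ForcePathStyle: "true"
          s3Url: <s3_url>
        credential:
          key: cloud
          name: cloud-credentials
  snapshotLocations:
    - velero:
        provider: aws
        config:
          region: <region>
          profile: "default"
        credential:
          key: cloud
          name: cloud-credentials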
- Click Create.
Verification
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp

Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'

Example output

{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}

- Verify that the type is set to Reconciled.

Verify the backup storage location and confirm that the PHASE is Available by running the following command:

$ oc get backupstoragelocations.velero.io -n openshift-adp

Example output

NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true

- Verify that the PHASE is Available.
4.7.1.4.1. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the values in the spec.configuration.velero.podConfig.resourceAllocations block of the DataProtectionApplication CR manifest, as in the sketch that follows, where:

nodeSelector - Specifies the node selector to be supplied to the Velero podSpec.
resourceAllocations - Specifies the resource allocations listed for average usage.
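A minimal sketch of the resource allocation settings, with illustrative request and limit values:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
spec:
  configuration:
    velero:
      podConfig:
        nodeSelector: <node_selector>
        resourceAllocations:
          limits:
            cpu: "1"
            memory: 1024Mi
          requests:
            cpu: 200m
            memory: 256Mi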
Note
Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.
Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.
Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node.
4.7.1.4.2. Enabling self-signed CA certificates
You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest, as in the sketch that follows, where:

caCert - Specifies the Base64-encoded CA certificate string.
insecureSkipTLSVerify - Specifies the insecureSkipTLSVerify configuration. The configuration can be set to either "true" or "false". If set to "true", SSL/TLS security is disabled. If set to "false", SSL/TLS security is enabled.
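A minimal sketch showing where these parameters sit in the manifest; all values are placeholders:

spec:
  backupLocations:
    - name: default
      velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket>
          prefix: <prefix>
          caCert: <base64_encoded_cert_string>
        config:
          insecureSkipTLSVerify: "false"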
4.7.1.4.3. Using CA certificates with the velero command aliased for Velero deployment
You might want to use the Velero CLI without installing it locally on your system by creating an alias for it.
Prerequisites
- You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role.
- You must have the OpenShift CLI (oc) installed.

Procedure

To use an aliased Velero command, run the following command:
$ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'

Check that the alias is working by running the following command:

$ velero version

Example output

Client:
    Version: v1.12.1-OADP
    Git commit: -
Server:
    Version: v1.12.1-OADP

To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands:

$ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')

$ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"

$ velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt

To fetch the backup logs, run the following command:

$ velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>

You can use these logs to view failures and warnings for the resources that you cannot back up.

- If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the previous step.
- You can check whether the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command:

$ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt"

Example output

/tmp/your-cacert.txt

In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.
4.7.1.4.4. Configuring node agents and node labels
The Data Protection Application (DPA) uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the recommended form of node selection constraint.
Procedure
Run the node agent on any node that you choose by adding a custom label:
$ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""

Note
Any label specified must match the labels on each node.

Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector field, which you used for labeling nodes. A nodeSelector that requires multiple labels is an anti-pattern and does not work unless both labels, node-role.kubernetes.io/infra: "" and node-role.kubernetes.io/worker: "", are on the node. Both patterns are shown in the sketch that follows.
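A minimal sketch of the recommended pattern and the anti-pattern, using the labels mentioned above:

# Recommended: a single custom label that matches the labeled nodes
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      podConfig:
        nodeSelector:
          node-role.kubernetes.io/nodeAgent: ""

# Anti-pattern: requires both labels to be present on the same node
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      podConfig:
        nodeSelector:
          node-role.kubernetes.io/infra: ""
          node-role.kubernetes.io/worker: ""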
4.7.1.5. Configuring the backup storage location with an MD5 checksum algorithm
You can configure the Backup Storage Location (BSL) in the Data Protection Application (DPA) to use an MD5 checksum algorithm for both Amazon Simple Storage Service (Amazon S3) and S3-compatible storage providers. The checksum algorithm calculates the checksum for uploading and downloading objects to Amazon S3. You can use one of the following options to set the checksumAlgorithm field in the spec.backupLocations.velero.config.checksumAlgorithm section of the DPA.
- CRC32
- CRC32C
- SHA1
- SHA256
You can also set the checksumAlgorithm field to an empty value to skip the MD5 checksum check. If you do not set a value for the checksumAlgorithm field, then the default value is set to CRC32.
Prerequisites
- You have installed the OADP Operator.
- You have configured Amazon S3 or S3-compatible object storage as a backup location.
Procedure
Configure the BSL in the DPA as shown in the following example:
Example Data Protection Application
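The following is a minimal sketch, assuming AWS-style object storage and placeholder values; only the checksumAlgorithm line is specific to this procedure:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: test-dpa
  namespace: openshift-adp
spec:
  backupLocations:
    - name: default
      velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: velero
        config:
          checksumAlgorithm: ""
          insecureSkipTLSVerify: "true"
          profile: "default"
          region: <bucket_region>
          s3ForcePathStyle: "true"
          s3Url: <bucket_url>
        credential:
          key: cloud
          name: cloud-credentials
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws
      - csi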
where:

checksumAlgorithm - Specifies the checksumAlgorithm. In this example, the checksumAlgorithm field is set to an empty value. You can select an option from the following list: CRC32, CRC32C, SHA1, SHA256.

Important
If you are using Noobaa as the object storage provider, and you do not set the spec.backupLocations.velero.config.checksumAlgorithm field in the DPA, an empty value of checksumAlgorithm is added to the BSL configuration. The empty value is only added for BSLs that are created using the DPA. This value is not added if you create the BSL by using any other method.
4.7.1.6. Configuring the DPA with client burst and QPS settings
The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.
You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values.
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the client-burst and the client-qps fields in the DPA as shown in the following example:

Example Data Protection Application
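A minimal sketch, assuming AWS-style object storage; the client-burst and client-qps lines are the settings this procedure configures:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: test-dpa
  namespace: openshift-adp
spec:
  backupLocations:
    - name: default
      velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: velero
        credential:
          key: cloud
          name: cloud-credentials
  configuration:
    velero:
      client-burst: 500
      client-qps: 300
      defaultPlugins:
      - openshift
      - aws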
where:

client-burst - Specifies the client-burst value. In this example, the client-burst field is set to 500.
client-qps - Specifies the client-qps value. In this example, the client-qps field is set to 300.
4.7.1.7. Overriding the imagePullPolicy setting in the DPA
In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images.
In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly:
- If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent.
- If the image does not have the digest, the Operator sets imagePullPolicy to Always.
You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA).
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the spec.imagePullPolicy field in the DPA as shown in the following example:

Example Data Protection Application
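A minimal sketch; only the imagePullPolicy line is specific to this procedure:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: test-dpa
  namespace: openshift-adp
spec:
  imagePullPolicy: Never
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws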
where:

imagePullPolicy - Specifies the value for imagePullPolicy. In this example, the imagePullPolicy field is set to Never.
4.7.1.8. Enabling CSI in the DataProtectionApplication CR
You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.
Prerequisites
- The cloud provider must support CSI snapshots.
Procedure
Edit the DataProtectionApplication CR, as in the following example:
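A minimal sketch showing the csi plugin added to the default plugins:

spec:
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - csi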
where:

csi - Specifies the csi default plugin.
4.7.1.9. Disabling the node agent in DataProtectionApplication
If you are not using Restic, Kopia, or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent, ensure the OADP Operator is idle and not running any backups.
Procedure
To disable the nodeAgent, set the enable flag to false. See the following example:

Example DataProtectionApplication CR
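A minimal sketch with the node agent disabled:

spec:
  configuration:
    nodeAgent:
      enable: false
      uploaderType: kopia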
where:

enable - Enables the node agent.
To enable the nodeAgent, set the enable flag to true. See the following example:

Example DataProtectionApplication CR
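A minimal sketch with the node agent enabled:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia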
where:

enable - Enables the node agent.
You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs".
4.8. Configuring OADP with IBM Cloud
4.8.1. Configuring the OpenShift API for Data Protection with IBM Cloud
You install the OpenShift API for Data Protection (OADP) Operator on an IBM Cloud cluster to back up and restore applications on the cluster. You configure IBM Cloud Object Storage (COS) to store the backups.
4.8.1.1. Configuring the COS instance
You create an IBM Cloud Object Storage (COS) instance to store the OADP backup data. After you create the COS instance, configure the HMAC service credentials.
Prerequisites
- You have an IBM Cloud Platform account.
- You installed the IBM Cloud CLI.
- You are logged in to IBM Cloud.
Procedure
Install the IBM Cloud Object Storage (COS) plugin by running the following command:
$ ibmcloud plugin install cos -f

Set a bucket name by running the following command:

$ BUCKET=<bucket_name>

Set a bucket region by running the following command:

$ REGION=<bucket_region>

where:

<bucket_region> - Specifies the bucket region. For example, eu-gb.

Create a resource group by running the following command:

$ ibmcloud resource group-create <resource_group_name>

Set the target resource group by running the following command:

$ ibmcloud target -g <resource_group_name>

Verify that the target resource group is correctly set by running the following command:

$ ibmcloud target

Example output

API endpoint:     https://cloud.ibm.com
Region:
User:             test-user
Account:          Test Account (fb6......e95) <-> 2...122
Resource group:   Default

In the example output, the resource group is set to Default.

Set a resource group name by running the following command:

$ RESOURCE_GROUP=<resource_group>

where:

<resource_group> - Specifies the resource group name. For example, "default".

Create an IBM Cloud service-instance resource by running the following command:

$ ibmcloud resource service-instance-create \
    <service_instance_name> \
    <service_name> \
    <service_plan> \
    <region_name>

where:

<service_instance_name> - Specifies a name for the service-instance resource.
<service_name> - Specifies the service name. Alternatively, you can specify a service ID.
<service_plan> - Specifies the service plan for your IBM Cloud account.
<region_name> - Specifies the region name.

Refer to the following example command:

$ ibmcloud resource service-instance-create test-service-instance cloud-object-storage \
    standard \
    global \
    -d premium-global-deployment

where:

cloud-object-storage - Specifies the service name.
-d premium-global-deployment - Specifies the deployment name.

Extract the service instance ID by running the following command:

$ SERVICE_INSTANCE_ID=$(ibmcloud resource service-instance test-service-instance --output json | jq -r '.[0].id')

Create a COS bucket by running the following command:

$ ibmcloud cos bucket-create \
    --bucket $BUCKET \
    --ibm-service-instance-id $SERVICE_INSTANCE_ID \
    --region $REGION

Variables such as $BUCKET, $SERVICE_INSTANCE_ID, and $REGION are replaced by the values you set previously.

Create HMAC credentials by running the following command:

$ ibmcloud resource service-key-create test-key Writer --instance-name test-service-instance --parameters {\"HMAC\":true}

Extract the access key ID and the secret access key from the HMAC credentials and save them in the credentials-velero file. You can use the credentials-velero file to create a secret for the backup storage location. Run the following command:

$ cat > credentials-velero << __EOF__
[default]
aws_access_key_id=$(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.access_key_id')
aws_secret_access_key=$(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.secret_access_key')
__EOF__
4.8.1.2. Creating a default Secret
You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location.
The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.
If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.
Prerequisites
- Your object storage and cloud storage, if any, must use the same credentials.
- You must configure object storage for Velero.
Procedure
- Create a credentials-velero file for the backup storage location in the appropriate format for your cloud provider.
- Create a Secret custom resource (CR) with the default name:

$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero

The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
4.8.1.3. Creating secrets for different credentials
Create separate Secret objects when your backup and snapshot locations require different credentials. This allows you to configure distinct authentication for each storage location while maintaining secure credential management.
Procedure
- Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider.
- Create a Secret for the snapshot location with the default name:

$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero

- Create a credentials-velero file for the backup location in the appropriate format for your object storage.
- Create a Secret for the backup location with a custom name:

$ oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero

Add the Secret with the custom name to the DataProtectionApplication CR, as in the sketch that follows, where:

custom_secret - Specifies the backup location Secret with the custom name.
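A minimal sketch of the custom secret reference in the backup location; provider and object storage values are placeholders:

spec:
  backupLocations:
    - velero:
        provider: <provider>
        default: true
        credential:
          key: cloud
          name: <custom_secret>
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>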
4.8.1.4. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.

Note
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
Click YAML View and update the parameters of the DataProtectionApplication manifest, as in the sketch that follows the parameter descriptions, where:

provider - Specifies that the provider is aws when you use IBM Cloud as a backup storage location.
bucket - Specifies the IBM Cloud Object Storage (COS) bucket name.
region - Specifies the COS region name, for example, eu-gb.
s3Url - Specifies the S3 URL of the COS bucket. For example, http://s3.eu-gb.cloud-object-storage.appdomain.cloud. Here, eu-gb is the region name. Replace the region name according to your bucket region.
name - Specifies the name of the secret you created by using the access key and the secret access key from the HMAC credentials.
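A minimal sketch, assuming IBM COS is accessed through the aws provider plugin and placeholder values:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws
      - csi
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: velero
        config:
          insecureSkipTLSVerify: "true"
          profile: "default"
          region: <region_name>
          s3ForcePathStyle: "true"
          s3Url: <cos_bucket_s3_url>
        credential:
          key: cloud
          name: cloud-credentials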
- Click Create.
Verification
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp

Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'

Example output

{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}

- Verify that the type is set to Reconciled.

Verify the backup storage location and confirm that the PHASE is Available by running the following command:

$ oc get backupstoragelocations.velero.io -n openshift-adp

Example output

NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true

- Verify that the PHASE is Available.
4.8.1.5. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the values in the spec.configuration.velero.podConfig.resourceAllocations block of the DataProtectionApplication CR manifest, where:

nodeSelector - Specifies the node selector to be supplied to the Velero podSpec.
resourceAllocations - Specifies the resource allocations listed for average usage.

Note
Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.
Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.
4.8.1.6. Configuring node agents and node labels
The Data Protection Application (DPA) uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the recommended form of node selection constraint.
Procedure
Run the node agent on any node that you choose by adding a custom label:
$ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""

Note
Any label specified must match the labels on each node.

Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector field, which you used for labeling nodes. A nodeSelector that requires multiple labels is an anti-pattern and does not work unless both labels, node-role.kubernetes.io/infra: "" and node-role.kubernetes.io/worker: "", are on the node.
4.8.1.7. Configuring the DPA with client burst and QPS settings
The burst setting determines how many requests can be sent to the velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.
You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values.
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the client-burst and the client-qps fields in the DPA, where:

client-burst - Specifies the client-burst value, for example, 500.
client-qps - Specifies the client-qps value, for example, 300.
4.8.1.8. Overriding the imagePullPolicy setting in the DPA
In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images.
In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly:
- If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent.
- If the image does not have the digest, the Operator sets imagePullPolicy to Always.
You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA).
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the spec.imagePullPolicy field in the DPA, where:

imagePullPolicy - Specifies the value for imagePullPolicy, for example, Never.
4.8.1.9. Configuring the DPA with more than one BSL
Configure the DataProtectionApplication (DPA) custom resource (CR) with multiple BackupStorageLocation (BSL) resources to store backups across different locations using provider-specific credentials. This provides backup distribution and location-specific restore capabilities.
For example, you have configured the following two BSLs:
- Configured one BSL in the DPA and set it as the default BSL.
- Created another BSL independently by using the BackupStorageLocation CR.
As you have already set the BSL created through the DPA as the default, you cannot set the independently created BSL again as the default. This means, at any given time, you can set only one BSL as the default BSL.
Prerequisites
- You must install the OADP Operator.
- You must create the secrets by using the credentials provided by the cloud provider.
Procedure
Configure the DataProtectionApplication CR with more than one BackupStorageLocation CR. See the following example:

Example DPA
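A minimal sketch with two BSLs, one AWS S3 location named aws and one S3-compatible location named odf; all bucket, region, and secret values are placeholders:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_name>
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
      - openshift
      - aws
  backupLocations:
    - name: aws
      velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>
        config:
          region: <region_name>
          profile: "default"
        credential:
          key: cloud
          name: cloud-credentials
    - name: odf
      velero:
        provider: aws
        default: false
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>
        config:
          profile: "default"
          region: <region_name>
          s3Url: <url>
          insecureSkipTLSVerify: "true"
          s3ForcePathStyle: "true"
        credential:
          key: cloud
          name: <custom_secret_name_odf>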
where:

name: aws - Specifies a name for the first BSL.
default: true - Indicates that this BSL is the default BSL. If a BSL is not set in the Backup CR, the default BSL is used. You can set only one BSL as the default.
<bucket_name> - Specifies the bucket name.
<prefix> - Specifies a prefix for Velero backups. For example, velero.
<region_name> - Specifies the AWS region for the bucket.
cloud-credentials - Specifies the name of the default Secret object that you created.
name: odf - Specifies a name for the second BSL.
<url> - Specifies the URL of the S3 endpoint.
<custom_secret_name_odf> - Specifies the correct name for the Secret. For example, custom_secret_name_odf. If you do not specify a Secret name, the default name is used.
Specify the BSL to be used in the backup CR. See the following example.
Example backup CR
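A minimal sketch of a Backup CR that selects a specific BSL:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
  - <namespace>
  storageLocation: <backup_storage_location>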
where:

<namespace> - Specifies the namespace to back up.
<backup_storage_location> - Specifies the storage location.
4.8.1.10. Disabling the node agent in DataProtectionApplication
If you are not using Restic, Kopia, or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent, ensure the OADP Operator is idle and not running any backups.
Procedure
To disable the nodeAgent, set the enable flag to false in the spec.configuration.nodeAgent block of the DataProtectionApplication CR.

To enable the nodeAgent, set the enable flag to true in the same block.

You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs".
4.9. Configuring OADP with Azure
4.9.1. Configuring the OpenShift API for Data Protection with Microsoft Azure
Configure the OpenShift API for Data Protection (OADP) with Microsoft Azure to back up and restore cluster resources by using Azure storage. This provides data protection capabilities for your OpenShift Container Platform clusters.
The OADP Operator installs Velero 1.14.
Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.
You configure Azure for Velero, create a default Secret, and then install the Data Protection Application. For more details, see Installing the OADP Operator.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details.
4.9.1.1. Configuring Microsoft Azure
Configure Microsoft Azure storage and service principal credentials for backup storage with OADP. This provides the necessary authentication and storage infrastructure for data protection operations.
Prerequisites
- You must have the Azure CLI installed.
Tools that use Azure services should always have restricted permissions to make sure that Azure resources are safe. Therefore, instead of having applications sign in as a fully privileged user, Azure offers service principals. An Azure service principal is a name that can be used with applications, hosted services, or automated tools.
This identity is used for access to resources.
- Create a service principal
- Sign in using a service principal and password
- Sign in using a service principal and certificate
- Manage service principal roles
- Create an Azure resource using a service principal
- Reset service principal credentials
For more details, see Create an Azure service principal with Azure CLI.
Procedure
Log in to Azure:
$ az login

Set the AZURE_RESOURCE_GROUP variable:

$ AZURE_RESOURCE_GROUP=Velero_Backups

Create an Azure resource group:

$ az group create -n $AZURE_RESOURCE_GROUP --location CentralUS

where:

CentralUS - Specifies your location.

Set the AZURE_STORAGE_ACCOUNT_ID variable:

$ AZURE_STORAGE_ACCOUNT_ID="velero$(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')"

Create an Azure storage account:
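One possible invocation, using the variables set above; the SKU and access tier are illustrative choices:

$ az storage account create \
    --name $AZURE_STORAGE_ACCOUNT_ID \
    --resource-group $AZURE_RESOURCE_GROUP \
    --sku Standard_GRS \
    --encryption-services blob \
    --https-only true \
    --kind BlobStorage \
    --access-tier Hot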
Set the BLOB_CONTAINER variable:

$ BLOB_CONTAINER=velero

Create an Azure Blob storage container:

$ az storage container create \
    -n $BLOB_CONTAINER \
    --public-access off \
    --account-name $AZURE_STORAGE_ACCOUNT_ID

Create a service principal and credentials for velero:

$ AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv`
$ AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`

Create a service principal with the Contributor role, assigning a specific --role and --scopes:

$ AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" \
    --role "Contributor" \
    --query 'password' -o tsv \
    --scopes /subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$AZURE_RESOURCE_GROUP`

The CLI generates a password for you. Ensure you capture the password.

After creating the service principal, obtain the client ID:

$ AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>`

Note
For this to be successful, you must know your Azure application ID.
Save the service principal credentials in the credentials-velero file, as in the sketch that follows. You use the credentials-velero file to add Azure as a replication repository.
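A minimal sketch of the file contents, assuming the environment variables set in the previous steps:

AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
AZURE_TENANT_ID=${AZURE_TENANT_ID}
AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
AZURE_CLOUD_NAME=AzurePublicCloud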
4.9.1.2. About backup and snapshot locations and their secrets
Review backup location, snapshot location, and secret configuration requirements for the DataProtectionApplication custom resource (CR). This helps you understand storage options and credential management for data protection operations.
4.9.1.2.1. Backup locations
You can specify one of the following AWS S3-compatible object storage solutions as a backup location:
- Multicloud Object Gateway (MCG)
- Red Hat Container Storage
- Ceph RADOS Gateway; also known as Ceph Object Gateway
- Red Hat OpenShift Data Foundation
- MinIO
Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage.
4.9.1.2.2. Snapshot locations
If you use your cloud provider’s native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location.
If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver.
If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage.
4.9.1.2.3. Secrets
If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret.
If the backup and snapshot locations use different credentials, you create two secret objects:
- Custom Secret for the backup location, which you specify in the DataProtectionApplication CR.
- Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR.
The Data Protection Application requires a default Secret. Otherwise, the installation will fail.
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file.
4.9.1.3. About authenticating OADP with Azure
Review authentication methods for OADP with Azure to select the appropriate authentication approach for your security requirements.
You can authenticate OADP with Azure by using the following methods:
- A Velero-specific service principal with secret-based authentication.
- A Velero-specific storage account access key with secret-based authentication.
4.9.1.4. Using a service principal or a storage account access key
You create a default Secret object and reference it in the backup storage location custom resource. The credentials file for the Secret object can contain information about the Azure service principal or a storage account access key.
The default name of the Secret is cloud-credentials-azure.
The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.
If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.
Prerequisites
- You have access to the OpenShift cluster as a user with cluster-admin privileges.
- You have an Azure subscription with appropriate permissions.
- You have installed OADP.
- You have configured an object storage for storing the backups.
Procedure
Create a credentials-velero file for the backup storage location in the appropriate format for your cloud provider. You can use one of the following two methods to authenticate OADP with Azure.

Use the service principal with secret-based authentication. See the following example:
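A minimal sketch, assuming the service principal values created earlier:

AZURE_SUBSCRIPTION_ID=<azure_subscription_id>
AZURE_TENANT_ID=<azure_tenant_id>
AZURE_CLIENT_ID=<azure_client_id>
AZURE_CLIENT_SECRET=<azure_client_secret>
AZURE_RESOURCE_GROUP=<azure_resource_group>
AZURE_CLOUD_NAME=<azure_cloud_name>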
Use a storage account access key. See the following example:

AZURE_STORAGE_ACCOUNT_ACCESS_KEY=<azure_storage_account_access_key>
AZURE_SUBSCRIPTION_ID=<azure_subscription_id>
AZURE_RESOURCE_GROUP=<azure_resource_group>
AZURE_CLOUD_NAME=<azure_cloud_name>
Create a Secret custom resource (CR) with the default name:

$ oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero

Reference the Secret in the spec.backupLocations.velero.credential block of the DataProtectionApplication CR when you install the Data Protection Application, as in the sketch that follows, where:

<custom_secret> - Specifies the backup location Secret with the custom name.
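A minimal sketch of the credential reference in an Azure backup location; the resource group, storage account, and secret name are placeholders:

spec:
  backupLocations:
    - velero:
        provider: azure
        default: true
        config:
          resourceGroup: <azure_resource_group>
          storageAccount: <azure_storage_account_id>
          subscriptionId: <azure_subscription_id>
        credential:
          key: cloud
          name: <custom_secret>
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>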
You can configure the Data Protection Application by setting Velero resource allocations or enabling self-signed CA certificates.
4.9.1.5. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the values in the spec.configuration.velero.podConfig.resourceAllocations block of the DataProtectionApplication CR manifest, where:

nodeSelector - Specifies the node selector to be supplied to the Velero podSpec.
resourceAllocations - Specifies the resource allocations listed for average usage.

Note
Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.
Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.
Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node.
4.9.1.6. Enabling self-signed CA certificates
You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest, where:

caCert - Specifies the Base64-encoded CA certificate string.
insecureSkipTLSVerify - Specifies the insecureSkipTLSVerify configuration. The configuration can be set to either "true" or "false". If set to "true", SSL/TLS security is disabled. If set to "false", SSL/TLS security is enabled.
4.9.1.6.1. Using CA certificates with the velero command aliased for Velero deployment
You might want to use the Velero CLI without installing it locally on your system by creating an alias for it.
Prerequisites
- You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role.
- You must have the OpenShift CLI (oc) installed.

Procedure

To use an aliased Velero command, run the following command:
$ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'

Check that the alias is working by running the following command:

$ velero version

Example output

Client:
    Version: v1.12.1-OADP
    Git commit: -
Server:
    Version: v1.12.1-OADP

To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands:

$ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')

$ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"

$ velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt

To fetch the backup logs, run the following command:

$ velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>

You can use these logs to view failures and warnings for the resources that you cannot back up.

- If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the previous step.
- You can check whether the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command:

$ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt"

Example output

/tmp/your-cacert.txt

In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.
4.9.1.7. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
-
If the backup and snapshot locations use the same credentials, you must create a
Secretwith the default name,cloud-credentials-azure. If the backup and snapshot locations use different credentials, you must create two
Secrets:-
Secretwith a custom name for the backup location. You add thisSecretto theDataProtectionApplicationCR. -
Secretwith another custom name for the snapshot location. You add thisSecretto theDataProtectionApplicationCR.
Note
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
- Click YAML View and update the parameters of the DataProtectionApplication manifest, as shown in the example sketch after the following field descriptions:
where:
namespace - Specifies the default namespace for OADP, which is openshift-adp. The namespace is a variable and is configurable.
openshift - Specifies that the openshift plugin is mandatory.
resourceTimeout - Specifies how many minutes to wait for several Velero resources, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability, before timeout occurs. The default is 10m.
nodeAgent - Specifies the administrative agent that routes the administrative requests to servers.
enable - Set this value to true if you want to enable nodeAgent and perform File System Backup.
uploaderType - Specifies the uploader type. Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR.
nodeSelector - Specifies the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes.
resourceGroup - Specifies the Azure resource group.
storageAccount - Specifies the Azure storage account ID.
subscriptionId - Specifies the Azure subscription ID.
name - Specifies the name of the Secret object. If you do not specify this value, the default name, cloud-credentials-azure, is used. If you specify a custom name, the custom name is used for the backup location.
bucket - Specifies a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
prefix - Specifies a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
snapshotLocations - Specifies the snapshot location. You do not need to specify a snapshot location if you use CSI snapshots or Restic to back up PVs.
name - Specifies the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials-azure, is used. If you specify a custom name, the custom name is used for the backup location.
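The original manifest is not reproduced in this extract. The following is a minimal sketch of an Azure DataProtectionApplication manifest that covers the fields described above; the name dpa-sample and the angle-bracket placeholders are illustrative assumptions, not required values.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp                 # default OADP namespace; configurable
spec:
  configuration:
    velero:
      defaultPlugins:
        - azure
        - openshift                        # the openshift plugin is mandatory
      resourceTimeout: 10m
    nodeAgent:
      enable: true                         # enables File System Backup
      uploaderType: kopia                  # kopia or restic; cannot be changed later
      # podConfig:
      #   nodeSelector:
      #     <node_label>: ""               # optional; by default, all nodes are used
  backupLocations:
    - velero:
        provider: azure
        default: true
        config:
          resourceGroup: <azure_resource_group>
          storageAccount: <azure_storage_account_id>
          subscriptionId: <azure_subscription_id>
        credential:
          key: cloud
          name: cloud-credentials-azure    # default Secret name
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>                 # required if the bucket is shared
  snapshotLocations:
    - velero:
        provider: azure
        config:
          resourceGroup: <azure_resource_group>
          subscriptionId: <azure_subscription_id>
        credential:
          key: cloud
          name: cloud-credentials-azure    # Secret that you created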
- Click Create.
Verification
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources. Run the following command:
$ oc get all -n openshift-adp
Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:
$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
- Verify the type is set to Reconciled.
- Verify the backup storage location and confirm that the PHASE is Available by running the following command:
$ oc get backupstoragelocations.velero.io -n openshift-adp
NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true
- Verify that the PHASE is Available.
4.9.1.8. Configuring the DPA with client burst and QPS settings
The burst setting determines how many requests can be sent to the Velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.
You can set the burst and QPS values of the Velero server by using the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the Data Protection Application (DPA).
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the client-burst and the client-qps fields in the DPA, as shown in the example sketch after the following field descriptions:
where:
client-burst - Specifies the client-burst value. In this example, the client-burst field is set to 500.
client-qps - Specifies the client-qps value. In this example, the client-qps field is set to 300.
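The example manifest is not included in this extract. The following minimal sketch shows where the client-burst and client-qps fields sit in the DPA; the DPA name and the omitted backup location details are assumptions.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: test-dpa
  namespace: openshift-adp
spec:
  configuration:
    velero:
      client-burst: 500        # burst value described above
      client-qps: 300          # QPS value described above
      defaultPlugins:
        - openshift
  # backupLocations omitted for brevity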
4.9.1.9. Overriding the imagePullPolicy setting in the DPA
In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images.
In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly:
- If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent.
- If the image does not have the digest, the Operator sets imagePullPolicy to Always.
You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA).
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the spec.imagePullPolicy field in the DPA, as shown in the example sketch after the following field description:
where:
imagePullPolicy - Specifies the value for imagePullPolicy. In this example, the imagePullPolicy field is set to Never.
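The example manifest is not included in this extract. The following minimal sketch shows the spec.imagePullPolicy override; the DPA name is an assumption.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: test-dpa
  namespace: openshift-adp
spec:
  imagePullPolicy: Never       # overrides the digest-based default behavior
  configuration:
    velero:
      defaultPlugins:
        - openshift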
4.9.1.9.1. Configuring node agents and node labels
The Data Protection Application (DPA) uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the recommended form of node selection constraint.
Procedure
Run the node agent on any node that you choose by adding a custom label:
$ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""
Note
Any label specified must match the labels on each node.
Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector field, which you used for labeling nodes. See the first fragment in the sketch that follows.
The following example is an anti-pattern of nodeSelector and does not work unless both labels, node-role.kubernetes.io/infra: "" and node-role.kubernetes.io/worker: "", are on the node. See the second fragment in the sketch that follows.
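The example configurations are not included in this extract. The following fragments sketch the relevant nodeAgent section of the DPA, assuming the custom label node-role.kubernetes.io/nodeAgent="" from the previous step.
# Recommended: the node agent runs only on nodes that carry the custom label.
configuration:
  nodeAgent:
    enable: true
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/nodeAgent: ""

# Anti-pattern: the node agent is scheduled only on nodes that carry both labels.
configuration:
  nodeAgent:
    enable: true
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
        node-role.kubernetes.io/worker: ""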
4.9.1.9.2. Enabling CSI in the DataProtectionApplication CR
You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.
Prerequisites
- The cloud provider must support CSI snapshots.
Procedure
Edit the DataProtectionApplication CR, as in the example sketch after the following field description:
where:
csi - Specifies the csi default plugin.
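The example CR is not included in this extract. The following fragment sketches the relevant part of the DPA, adding csi to the default plugins.
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - csi          # back up persistent volumes with CSI snapshots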
4.9.1.9.3. Disabling the node agent in DataProtectionApplication
If you are not using Restic, Kopia, or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent, ensure the OADP Operator is idle and not running any backups.
Procedure
To disable the nodeAgent, set the enable flag to false. See the first fragment in the example sketch after this procedure.
where:
enable - Enables the node agent.
To enable the nodeAgent, set the enable flag to true. See the second fragment in the example sketch after this procedure.
where:
enable - Enables the node agent.
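The example CRs are not included in this extract. The following fragments sketch the nodeAgent section for each case; the uploaderType value is an assumption.
# Disable the node agent
spec:
  configuration:
    nodeAgent:
      enable: false
      uploaderType: kopia

# Enable the node agent
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia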
You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs".
4.10. Configuring OADP with Google Cloud
4.10.1. Configuring the OpenShift API for Data Protection with Google Cloud
You install the OpenShift API for Data Protection (OADP) with Google Cloud by installing the OADP Operator. The Operator installs Velero 1.14.
Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.
You configure Google Cloud for Velero, create a default Secret, and then install the Data Protection Application. For more details, see Installing the OADP Operator.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details.
4.10.1.1. Configuring Google Cloud
You configure Google Cloud for the OpenShift API for Data Protection (OADP).
Prerequisites
- You must have the gcloud and gsutil CLI tools installed. See the Google Cloud documentation for details.
Procedure
Log in to Google Cloud:
$ gcloud auth login
Set the BUCKET variable:
$ BUCKET=<bucket>
where:
bucket - Specifies the bucket name.
Create the storage bucket:
$ gsutil mb gs://$BUCKET/
Set the PROJECT_ID variable to your active project:
$ PROJECT_ID=$(gcloud config get-value project)
Create a service account:
$ gcloud iam service-accounts create velero \
    --display-name "Velero service account"
List your service accounts:
$ gcloud iam service-accounts list
Set the SERVICE_ACCOUNT_EMAIL variable to match its email value:
$ SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
    --filter="displayName:Velero service account" \
    --format 'value(email)')
Attach the policies to give the velero user the minimum necessary permissions, as in the sketch that follows:
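The permission list is not reproduced in this extract. The following sketch defines a ROLE_PERMISSIONS shell array of the kind that the velero.server role creation command in the next step expects; the permissions mirror those documented for the Velero Google Cloud plugin and should be verified against the current upstream list.
$ ROLE_PERMISSIONS=(
    compute.disks.get
    compute.disks.create
    compute.disks.createSnapshot
    compute.snapshots.get
    compute.snapshots.create
    compute.snapshots.useReadOnly
    compute.snapshots.delete
    compute.zones.get
    storage.objects.create
    storage.objects.delete
    storage.objects.get
    storage.objects.list
    iam.serviceAccounts.signBlob
)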
Create the velero.server custom role:
$ gcloud iam roles create velero.server \
    --project $PROJECT_ID \
    --title "Velero Server" \
    --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
Add IAM policy binding to the project:
$ gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
    --role projects/$PROJECT_ID/roles/velero.server
Update the IAM service account:
$ gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
Save the IAM service account keys to the credentials-velero file in the current directory:
$ gcloud iam service-accounts keys create credentials-velero \
    --iam-account $SERVICE_ACCOUNT_EMAIL
You use the credentials-velero file to create a Secret object for Google Cloud before you install the Data Protection Application.
4.10.1.2. About backup and snapshot locations and their secrets
Review backup location, snapshot location, and secret configuration requirements for the DataProtectionApplication custom resource (CR). This helps you understand storage options and credential management for data protection operations.
4.10.1.2.1. Backup locations
You can specify one of the following AWS S3-compatible object storage solutions as a backup location:
- Multicloud Object Gateway (MCG)
- Red Hat Container Storage
- Ceph RADOS Gateway; also known as Ceph Object Gateway
- Red Hat OpenShift Data Foundation
- MinIO
Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage.
4.10.1.2.2. Snapshot locations
If you use your cloud provider’s native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location.
If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver.
If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage.
4.10.1.2.3. Secrets
If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret.
If the backup and snapshot locations use different credentials, you create two Secret objects:
- Custom Secret for the backup location, which you specify in the DataProtectionApplication CR.
- Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR.
The Data Protection Application requires a default Secret. Otherwise, the installation will fail.
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file.
4.10.1.2.4. Creating a default Secret
You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location.
The default name of the Secret is cloud-credentials-gcp.
The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.
If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.
Prerequisites
- Your object storage and cloud storage, if any, must use the same credentials.
- You must configure object storage for Velero.
Procedure
- Create a credentials-velero file for the backup storage location in the appropriate format for your cloud provider.
- Create a Secret custom resource (CR) with the default name:
$ oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero
The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
4.10.1.2.5. Creating secrets for different credentials
Create separate Secret objects when your backup and snapshot locations require different credentials. This allows you to configure distinct authentication for each storage location while maintaining secure credential management.
Procedure
- Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider.
- Create a Secret for the snapshot location with the default name:
$ oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero
- Create a credentials-velero file for the backup location in the appropriate format for your object storage.
- Create a Secret for the backup location with a custom name:
$ oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero
- Add the Secret with the custom name to the DataProtectionApplication CR, as in the example sketch after the following field description:
where:
custom_secret - Specifies the backup location Secret with the custom name.
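The example CR is not included in this extract. The following minimal sketch shows the custom backup location Secret referenced from a Google Cloud DPA; the DPA name, bucket, and prefix placeholders are assumptions.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - gcp
        - openshift
  backupLocations:
    - velero:
        provider: gcp
        default: true
        credential:
          key: cloud
          name: <custom_secret>      # backup location Secret with the custom name
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>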
4.10.1.2.6. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the example sketch after the following field descriptions:
where:
nodeSelector - Specifies the node selector to be supplied to Velero podSpec.
resourceAllocations - Specifies the resource allocations listed for average usage.
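The example manifest is not included in this extract. The following fragment sketches the podConfig block; the CPU and memory values are illustrative assumptions, not recommended defaults.
spec:
  configuration:
    velero:
      podConfig:
        nodeSelector: <node_selector>      # node selector supplied to the Velero podSpec
        resourceAllocations:               # resource allocations for average usage
          limits:
            cpu: "1"
            memory: 1024Mi
          requests:
            cpu: 200m
            memory: 256Mi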
Note
Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.
Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.
Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node.
4.11. Enabling self-signed CA certificates
You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest, as in the example sketch after the following field descriptions:
where:
caCert - Specifies the Base64-encoded CA certificate string.
insecureSkipTLSVerify - Specifies the insecureSkipTLSVerify configuration. The configuration can be set to either "true" or "false". If set to "true", SSL/TLS security is disabled. If set to "false", SSL/TLS security is enabled.
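The example manifest is not included in this extract. The following fragment sketches a backup location with the caCert and insecureSkipTLSVerify settings; the provider and placeholder values are assumptions.
spec:
  backupLocations:
    - name: default
      velero:
        provider: gcp
        default: true
        objectStorage:
          bucket: <bucket>
          prefix: <prefix>
          caCert: <base64_encoded_cert_string>    # Base64-encoded CA certificate
        config:
          insecureSkipTLSVerify: "false"          # "true" disables SSL/TLS verification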
4.12. Using CA certificates with the velero command aliased for Velero deployment
You might want to use the Velero CLI without installing it locally on your system by creating an alias for it.
Prerequisites
- You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role.
- You must have the OpenShift CLI (oc) installed.
Procedure
To use an aliased Velero command, run the following command:
$ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'
Check that the alias is working by running the following command:
$ velero version
Client:
    Version: v1.12.1-OADP
    Git commit: -
Server:
    Version: v1.12.1-OADP
To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands:
$ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')
$ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"
$ velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt
To fetch the backup logs, run the following command:
$ velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>
You can use these logs to view failures and warnings for the resources that you cannot back up.
If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the previous step.
You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command:
$ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt"
/tmp/your-cacert.txt
In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.