Chapter 5. OADP Application backup and restore
5.1. Introduction to OpenShift API for Data Protection
The OpenShift API for Data Protection (OADP) product safeguards customer applications on OpenShift Container Platform. It offers comprehensive disaster recovery protection, covering OpenShift Container Platform applications, application-related cluster resources, persistent volumes, and internal images. OADP is also capable of backing up both containerized applications and virtual machines (VMs).
However, OADP does not serve as a disaster recovery solution for etcd or OpenShift Operators.
5.1.1. OpenShift API for Data Protection APIs
OADP provides APIs that enable multiple approaches to customizing backups and preventing the inclusion of unnecessary or inappropriate resources.
OADP provides the following APIs:
- Backup
- Restore
- Schedule
- BackupStorageLocation
- VolumeSnapshotLocation
5.1.1.1. Support for OpenShift API for Data Protection
| Version | OCP version | General availability | Full support ends | Maintenance ends | Extended Update Support (EUS) | Extended Update Support Term 2 (EUS Term 2) |
|---|---|---|---|---|---|---|
| 1.5 | 4.19 | 17 June 2025 | Release of 1.6 | Release of 1.7 | EUS must be on OCP 4.20 | EUS Term 2 must be on OCP 4.20 |
| 1.4 | 4.14 to 4.18 | 10 Jul 2024 | Release of 1.5 | Release of 1.6 | 27 Jun 2026; EUS must be on OCP 4.16 | 27 Jun 2027; EUS Term 2 must be on OCP 4.16 |
| 1.3 | 4.12 to 4.15 | 29 Nov 2023 | 10 Jul 2024 | Release of 1.5 | 31 Oct 2025; EUS must be on OCP 4.14 | 31 Oct 2026; EUS Term 2 must be on OCP 4.14 |
5.1.1.1.1. Unsupported versions of the OADP Operator
| Version | General availability | Full support ended | Maintenance ended |
|---|---|---|---|
| 1.2 | 14 Jun 2023 | 29 Nov 2023 | 10 Jul 2024 |
| 1.1 | 01 Sep 2022 | 14 Jun 2023 | 29 Nov 2023 |
| 1.0 | 09 Feb 2022 | 01 Sep 2022 | 14 Jun 2023 |
For more details about EUS, see Extended Update Support.
For more details about EUS Term 2, see Extended Update Support Term 2.
5.2. OADP release notes
5.2.1. OADP 1.5 release notes
The release notes for OpenShift API for Data Protection (OADP) describe new features and enhancements, deprecated features, product recommendations, known issues, and resolved issues.
For additional information about OADP, see OpenShift API for Data Protection (OADP) FAQs.
5.2.1.1. OADP 1.5.4 release notes
OpenShift API for Data Protection (OADP) 1.5.4 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.5.3. OADP 1.5.4 introduces a known issue and fixes several Common Vulnerabilities and Exposures (CVEs).
5.2.1.1.1. Known issues
- Simultaneous updates to the same NonAdminBackupStorageLocationRequest objects cause resource conflicts: Simultaneous updates by several controllers or processes to the same NonAdminBackupStorageLocationRequest objects cause resource conflicts during backup creation in OADP self-service. As a consequence, reconciliation attempts fail with object has been modified errors. No known workaround exists.
5.2.1.1.2. Resolved issues
- OADP 1.5.4 fixes several Common Vulnerabilities and Exposures (CVEs).
5.2.1.2. OADP 1.5.3 release notes
OpenShift API for Data Protection (OADP) 1.5.3 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product itself compared to that of OADP 1.5.2.
5.2.1.3. OADP 1.5.2 release notes
The OpenShift API for Data Protection (OADP) 1.5.2 release notes list resolved issues.
5.2.1.3.1. Resolved issues
Self-signed certificate for internal image backup should not break other BSLs
Before this update, OADP would only process the first custom CA certificate found among all backup storage locations (BSLs) and apply it globally. This behavior prevented multiple BSLs with different CA certificates from working correctly. Additionally, system-trusted certificates were not included, causing failures when connecting to standard services. With this update, OADP now:
- Concatenates all unique CA certificates from AWS BSLs into a single bundle.
- Includes system-trusted certificates automatically.
- Enables multiple BSLs with different custom CA certificates to operate simultaneously.
- Only processes CA certificates when image backup is enabled (default behavior).
This enhancement improves compatibility for environments using multiple storage providers with different certificate requirements, particularly when backing up internal images to AWS S3-compatible storage with self-signed certificates.
5.2.1.4. OADP 1.5.1 release notes
The OpenShift API for Data Protection (OADP) 1.5.1 release notes list new features, resolved issues, known issues, and deprecated features.
5.2.1.4.1. New features
CloudStorage API is fully supported
The CloudStorage API feature, available as a Technology Preview before this update, is fully supported from OADP 1.5.1. The CloudStorage API automates the creation of a bucket for object storage.
New DataProtectionTest custom resource is available
The DataProtectionTest (DPT) is a custom resource (CR) that provides a framework to validate your OADP configuration. The DPT CR checks and reports information for the following parameters:
- The upload performance of the backups to the object storage.
- The Container Storage Interface (CSI) snapshot readiness for persistent volume claims.
- The storage bucket configuration, such as encryption and versioning.
Using this information in the DPT CR, you can ensure that your data protection environment is properly configured and performing according to the set configuration.
Note that you must configure STORAGE_ACCOUNT_ID when using DPT with OADP on Azure.
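The following is a minimal sketch of a DPT CR; treat the spec field names (backupLocationName, uploadSpeedTestConfig, csiVolumeSnapshotTestConfigs) and all values as assumptions for illustration:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionTest
metadata:
  name: dpt-sample
  namespace: openshift-adp
spec:
  backupLocationName: dpa-sample-1
  uploadSpeedTestConfig:
    fileSize: 100MB
    timeout: 120s
  csiVolumeSnapshotTestConfigs:
    - snapshotClassName: csi-snapclass
      timeout: 120s
      volumeSnapshotSource:
        persistentVolumeClaimName: <pvc_name>
        persistentVolumeClaimNamespace: <application_namespace>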
New node agent load affinity configurations are available
- Node agent load affinity: You can schedule the node agent pods on specific nodes by using the spec.podConfig.nodeSelector object of the DataProtectionApplication (DPA) custom resource (CR). You can add more restrictions on node agent pod scheduling by using the nodeAgent.loadAffinity object in the DPA spec, as shown in the sketch after this list.
- Repository maintenance job affinity configurations: You can use the repository maintenance job affinity configurations in the DataProtectionApplication (DPA) custom resource (CR) only if you use Kopia as the backup repository. You have the option to configure the load affinity at the global level, affecting all repositories, or for each repository. You can also use a combination of global and per-repository configuration.
- Velero load affinity: You can use the podConfig.nodeSelector object to assign the Velero pod to specific nodes. You can also configure the velero.loadAffinity object for pod-level affinity and anti-affinity.
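The following is a minimal sketch of node agent load affinity in the DPA; the node label is a placeholder:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      loadAffinity:
        - nodeSelector:
            matchLabels:
              node-role.kubernetes.io/worker: ""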
Node agent load concurrency is available
With this update, you can control the maximum number of node agent operations that can run simultaneously on each node within your cluster. This enables better resource management, optimizing backup and restore workflows for improved performance and a more streamlined experience.
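A minimal sketch of load concurrency in the DPA, assuming the loadConcurrency layout from the upstream Velero node-agent configuration (globalConfig caps every node; perNodeConfig overrides matching nodes):

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      loadConcurrency:
        globalConfig: 2
        perNodeConfig:
          - nodeSelector:
              matchLabels:
                kubernetes.io/hostname: node1
            number: 3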
5.2.1.4.2. Resolved issues
DataProtectionApplicationSpec overflowed annotation limit, causing potential misconfiguration in deployments
Before this update, the DataProtectionApplicationSpec used deprecated PodAnnotations, which led to an annotation limit overflow. This caused potential misconfigurations in deployments. In this release, PodConfig is added for annotations in pods deployed by the Operator, ensuring consistent annotations and improved manageability for end users. As a result, deployments are more reliable and easier to manage.
Root file system for OADP controller manager is now read-only
Before this update, the manager container of the openshift-adp-controller-manager-* pod was configured to run with a writable root file system. As a consequence, this could allow for tampering with the container’s file system or the writing of foreign executables. With this release, the container’s security context has been updated to set the root file system to read-only while ensuring necessary functions that require write access, such as the Kopia cache, continue to operate correctly. As a result, the container is hardened against potential threats.
nonAdmin.enable: false in multiple DPAs no longer causes reconcile issues
Before this update, when a user attempted to create a second non-admin DataProtectionApplication (DPA) on a cluster where one already existed, the new DPA failed to reconcile. With this release, the restriction on Non-Admin Controller installation to one per cluster has been removed. As a result, users can install multiple Non-Admin Controllers across the cluster without encountering errors.
OADP supports self-signed certificates
Before this update, using a self-signed certificate for backup images with a storage provider such as Minio resulted in an x509: certificate signed by unknown authority error during the backup process. With this release, certificate validation has been updated to support self-signed certificates in OADP, ensuring successful backups.
velero describe includes defaultVolumesToFsBackup
Before this update, the output of the velero describe command omitted the defaultVolumesToFsBackup flag. This affected the visibility of backup configuration details for users. With this release, the velero describe output includes the defaultVolumesToFsBackup flag information, improving the visibility of backup settings.
DPT CR no longer fails when s3Url is secured
Before this update, DataProtectionTest (DPT) failed to run when s3Url was secured with an unverified certificate, because the DPT CR lacked the ability to skip verification or add the caCert in the spec field. As a consequence, data upload failed due to an unverified certificate. With this release, the DPT CR has been updated to accept a CA certificate or skip its verification in the spec field, resolving SSL verification errors. As a result, DPT no longer fails when using a secured s3Url.
Adding a backupLocation to DPA with an existing backupLocation name is now rejected
Before this update, adding a second backupLocation with the same name in DataProtectionApplication (DPA) caused OADP to enter an invalid state, leading to Backup and Restore failures due to Velero’s inability to read Secret credentials. As a consequence, Backup and Restore operations failed. With this release, the duplicate backupLocation names in DPA are no longer allowed, preventing Backup and Restore failures. As a result, duplicate backupLocation names are rejected, ensuring seamless data protection.
5.2.1.4.3. Known issues
The restore fails for backups created on OpenStack using the Cinder CSI driver
When you start a restore operation for a backup that was created on an OpenStack platform using the Cinder Container Storage Interface (CSI) driver, the initial backup only succeeds after the source application is manually scaled down. The restore job fails, preventing you from successfully recovering your application’s data and state from the backup. No known workaround exists.
Datamover pods scheduled on unexpected nodes during backup if the nodeAgent.loadAffinity parameter has many elements
Due to an issue in Velero 1.14 and later, the OADP node-agent only processes the first nodeSelector element within the loadAffinity array. As a consequence, if you define multiple nodeSelector objects, all objects except the first are ignored, potentially causing datamover pods to be scheduled on unexpected nodes during a backup.
To work around this problem, consolidate all required matchExpressions from multiple nodeSelector objects into the first nodeSelector object. As a result, all node affinity rules are correctly applied, ensuring datamover pods are scheduled to the appropriate nodes.
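For example, a consolidated first nodeSelector object might look like the following sketch; the keys and values are illustrative:

spec:
  configuration:
    nodeAgent:
      loadAffinity:
        - nodeSelector:
            matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - node-1
                  - node-2
              - key: node-role.kubernetes.io/worker
                operator: Exists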
OADP Backup fails when using CA certificates with aliased command
The CA certificate is not stored as a file on the running Velero container. As a consequence, the user experience is degraded due to the missing caCert in the Velero container, requiring manual setup and downloads. To work around this problem, manually add the certificate to the Velero deployment. For instructions, see Using cacert with velero command aliased via velero deployment.
The nodeSelector spec is not supported for the Data Mover restore action
When a Data Protection Application (DPA) is created with the nodeSelector field set in the nodeAgent parameter, Data Mover restore partially fails instead of completing the restore operation. No known workaround exists.
Image stream backups partially fail when the DPA is configured with caCert
An unverified certificate in the S3 connection during backups with caCert in DataProtectionApplication (DPA) causes the ocp-django application's backup to partially fail and results in data loss. No known workaround exists.
Kopia does not delete cache on worker node
When the ephemeral-storage parameter is configured and a file system restore is run, the cache is not automatically deleted from the worker node. As a consequence, the /var partition overflows during backup restore, causing increased storage usage and potential resource exhaustion. To work around this problem, restart the node agent pod, which clears the cache. As a result, the cache is deleted.
Google Cloud VSL backups fail with Workload Identity because of invalid project configuration
When performing a volumeSnapshotLocation (VSL) backup on Google Cloud Workload Identity, the Velero Google Cloud plugin creates an invalid API request if the Google Cloud project is also specified in the snapshotLocations configuration of DataProtectionApplication (DPA). As a consequence, the Google Cloud API returns a RESOURCE_PROJECT_INVALID error, and the backup job finishes with a PartiallyFailed status. No known workaround exists.
VSL backups fail for CloudStorage API on Azure with STS
The volumeSnapshotLocation (VSL) backup fails because of the missing AZURE_RESOURCE_GROUP parameter in the credentials file, even if AZURE_RESOURCE_GROUP is already mentioned in the DataProtectionApplication (DPA) config for VSL. No known workaround exists.
Backups of applications with ImageStreams fail on Azure with STS
When backing up applications that include image stream resources on an Azure cluster using STS, the OADP plugin incorrectly attempts to locate a secret-based credential for the container registry. As a consequence, the required secret is not found in the STS environment, causing the ImageStream custom backup action to fail. This results in the overall backup status marked as PartiallyFailed. No known workaround exists.
DPA reconciliation fails for CloudStorageRef configuration
When a user creates a bucket and uses the backupLocations.bucket.cloudStorageRef configuration, bucket credentials are not present in the DataProtectionApplication (DPA) custom resource (CR). As a result, the DPA reconciliation fails even if bucket credentials are present in the CloudStorage CR. To work around this problem, add the same credentials to the backupLocations section of the DPA CR.
5.2.1.4.4. Deprecated features
The configuration.restic specification field has been deprecated
With OADP 1.5.0, the configuration.restic specification field has been deprecated. Use the nodeAgent section with the uploaderType field to select kopia or restic as the uploader type. Note that Restic is deprecated in OADP 1.5.0.
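For example, a minimal DPA snippet that selects Kopia through the nodeAgent section:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia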
5.2.1.5. OADP 1.5.0 release notes
The OpenShift API for Data Protection (OADP) 1.5.0 release notes list new features, resolved issues, known issues, deprecated features, and Technology Preview features.
5.2.1.5.1. New features
OADP 1.5.0 introduces a new Self-Service feature
OADP 1.5.0 introduces a new feature named OADP Self-Service, enabling namespace admin users to back up and restore applications on the OpenShift Container Platform. In the earlier versions of OADP, you needed the cluster-admin role to perform OADP operations such as backing up and restoring an application, creating a backup storage location, and so on.
From OADP 1.5.0 onward, you do not need the cluster-admin role to perform the backup and restore operations. You can use OADP with the namespace admin role. The namespace admin role has administrator access only to the namespace the user is assigned to. You can use the Self-Service feature only after the cluster administrator installs the OADP Operator and provides the necessary permissions.
Collecting logs with the must-gather tool has been improved with a Markdown summary
You can collect logs and information about OpenShift API for Data Protection (OADP) custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. This tool generates a Markdown output file with the collected information, which is located in the clusters directory of the must-gather logs.
dataMoverPrepareTimeout and resourceTimeout parameters are now added to nodeAgent within the DPA
The nodeAgent field in Data Protection Application (DPA) now includes the following parameters:
- dataMoverPrepareTimeout: Defines the duration that the DataUpload or DataDownload process waits. The default value is 30 minutes.
- resourceTimeout: Sets the timeout for resource processes not addressed by other specific timeout parameters. The default value is 10 minutes.
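A minimal sketch of these parameters in the DPA; the duration string format is an assumption:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      dataMoverPrepareTimeout: 30m
      resourceTimeout: 10m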
Use the spec.configuration.nodeAgent parameter in DPA for configuring nodeAgent daemon set
Velero no longer uses the node-agent-config config map for configuring the nodeAgent daemon set. With this update, you must use the new spec.configuration.nodeAgent parameter in a Data Protection Application (DPA) for configuring the nodeAgent daemon set.
Configuring DPA with the backup repository configuration config map is now possible
With Velero 1.15 and later, you can now configure the total size of a cache per repository. This prevents pods from being removed due to running out of ephemeral storage. See the following new parameters added to the NodeAgentConfig field in DPA:
- cacheLimitMB: Sets the local data cache size limit in megabytes.
- fullMaintenanceInterval: Controls the removal rate of deleted Velero backups from the Kopia repository. The default value is 24 hours. The following override options are available:
  - normalGC: 24 hours
  - fastGC: 12 hours
  - eagerGC: 6 hours
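A minimal sketch of these parameters in the nodeAgent section of the DPA; treat the exact placement and the override value as assumptions:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      cacheLimitMB: 1024
      fullMaintenanceInterval: fastGC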
Enhancing the node-agent security
With this update, the following changes are added:
- A new configuration option is now added to the velero field in DPA. The default value for the disableFsBackup parameter is false or non-existing. With this update, the following options are added to the SecurityContext field:
  - Privileged: true
  - AllowPrivilegeEscalation: true
- If you set the disableFsBackup parameter to true, it removes the following mounts from the node-agent:
  - host-pods
  - host-plugins
- Modifies the node-agent so that it always runs as a non-root user.
- Changes the root file system to read-only.
- Updates the following mount points with write access:
  - /home/velero
  - tmp/credentials
- Uses the SeccompProfileTypeRuntimeDefault option for the SeccompProfile parameter.
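For example, a minimal DPA snippet that disables File System Backup and thereby removes the privileged host mounts:

spec:
  configuration:
    velero:
      disableFsBackup: true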
Adds DPA support for parallel item backup
By default, only one thread processes an item block. Velero 1.16 supports a parallel item backup, where multiple items within a backup can be processed in parallel.
You can use the optional Velero server parameter --item-block-worker-count to run additional worker threads to process items in parallel. To enable this in OADP, set the dpa.Spec.Configuration.Velero.ItemBlockWorkerCount parameter to an integer value greater than zero.
Running multiple full backups in parallel is not yet supported.
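In DPA YAML, this corresponds to the following minimal sketch; the worker count value is illustrative:

spec:
  configuration:
    velero:
      itemBlockWorkerCount: 4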
OADP logs are now available in the JSON format
With the release of OADP 1.5.0, the logs are now available in the JSON format. This provides pre-parsed data for log management systems such as Elastic.
The oc get dpa command now displays RECONCILED status
With this release, the oc get dpa command now displays RECONCILED status instead of displaying only NAME and AGE to improve user experience. For example:
$ oc get dpa -n openshift-adp
NAME RECONCILED AGE
velero-sample True 2m51s
5.2.1.5.2. Resolved issues
Containers now use FallbackToLogsOnError for terminationMessagePolicy
With this release, the terminationMessagePolicy field can now set the FallbackToLogsOnError value for the OpenShift API for Data Protection (OADP) Operator containers such as operator-manager, velero, node-agent, and non-admin-controller.
This change ensures that if a container exits with an error and the termination message file is empty, OpenShift uses the last portion of the container logs output as the termination message.
Namespace admin can now access the application after restore
Previously, the namespace admin could not execute an application after the restore operation with the following errors:
- exec operation is not allowed because the pod's security context exceeds your permissions
- unable to validate against any security context constraint
- not usable by user or serviceaccount, provider restricted-v2
With this update, this issue is now resolved and the namespace admin can access the application successfully after the restore.
Specifying status restoration at the individual resource instance level using the annotation is now possible
Previously, status restoration was only configured at the resource type using the restoreStatus field in the Restore custom resource (CR).
With this release, you can now specify the status restoration at the individual resource instance level using the following annotation:
metadata:
annotations:
velero.io/restore-status: "true"
Restore is now successful with excludedClusterScopedResources
Previously, on performing the backup of an application with the excludedClusterScopedResources field set to the storageclasses and Namespace parameters, the backup was successful but the restore partially failed. With this update, the restore is successful.
Backup is completed even if it gets restarted during the waitingForPluginOperations phase
Previously, a backup was marked as failed with the following error message:
failureReason: found a backup with status "InProgress" during the server starting, mark it as "Failed"
With this update, the backup is completed if it gets restarted during the waitingForPluginOperations phase.
Error messages are now more informative when the disableFsBackup parameter is set to true in DPA
Previously, when the spec.configuration.velero.disableFsBackup field from a Data Protection Application (DPA) was set to true, the backup partially failed with an error, which was not informative.
This update makes error messages more useful for troubleshooting. For example, error messages now indicate when disableFsBackup: true is the cause of the issue in a DPA, or when a non-administrator user does not have access to a DPA.
AWS STS credentials are now handled in the parseAWSSecret function
Previously, AWS credentials using STS authentication were not properly validated.
With this update, the parseAWSSecret function detects STS-specific fields, and updates the ensureSecretDataExists function to handle STS profiles correctly.
The repositoryMaintenance job affinity config is available to configure
Previously, the new configurations for repository maintenance job pod affinity were missing from a DPA specification.
With this update, the repositoryMaintenance job affinity config is now available to map a BackupRepository identifier to its configuration.
The ValidationErrors field is cleared once the CR specification is correct
Previously, when a schedule CR was created with a wrong spec.schedule value and the same was later patched with a correct value, the ValidationErrors field still existed. Consequently, the ValidationErrors field displayed incorrect information even though the spec was correct.
With this update, the ValidationErrors field is cleared once the CR specification is correct.
The volumeSnapshotContents custom resources are restored when the includedNamespaces field is used in restoreSpec
Previously, when a restore operation was triggered with the includedNamespaces field in a restore specification, the restore operation completed successfully but no volumeSnapshotContents custom resources (CRs) were created, and the PVCs were in a Pending status.
With this update, volumeSnapshotContents CRs are restored even when the includedNamespaces field is used in restoreSpec. As a result, an application pod is in a Running state after restore.
OADP operator successfully creates bucket on top of AWS
Previously, the container was configured with the readOnlyRootFilesystem: true setting for security, but the code attempted to create temporary files in the /tmp directory using the os.CreateTemp() function. Consequently, while using the AWS STS authentication with the Cloud Credential Operator (CCO) flow, OADP failed to create temporary files that were required for AWS credential handling with the following error:
ERROR unable to determine if bucket exists. {"error": "open /tmp/aws-shared-credentials1211864681: read-only file system"}
With this update, the following changes are added to address this issue:
- A new emptyDir volume named tmp-dir is added to the controller pod specification.
- A volume mount is added to the container, which mounts this volume to the /tmp directory.
- For security best practices, the readOnlyRootFilesystem: true setting is maintained.
- The deprecated ioutil.TempFile() function is replaced with the recommended os.CreateTemp() function.
- The unnecessary io/ioutil import, which is no longer needed, is removed.
For a complete list of all issues resolved in this release, see the list of OADP 1.5.0 resolved issues in Jira.
5.2.1.5.3. Known issues
Kopia does not delete all the artifacts after backup expiration
Even after deleting a backup, Kopia does not delete the volume artifacts from the ${bucket_name}/kopia/$openshift-adp path on the S3 location after the backup expires. Information related to the expired and removed data files remains in the metadata. To ensure that OpenShift API for Data Protection (OADP) functions properly, the data is not deleted, and it exists in the /kopia/ directory, for example:
- kopia.repository: Main repository format information such as encryption, version, and other details.
- kopia.blobcfg: Configuration for how data blobs are named.
- kopia.maintenance: Tracks maintenance owner, schedule, and last successful build.
- log: Log blobs.
For a complete list of all known issues in this release, see the list of OADP 1.5.0 known issues in Jira.
5.2.1.5.4. Deprecated features
The configuration.restic specification field has been deprecated
With OpenShift API for Data Protection (OADP) 1.5.0, the configuration.restic specification field has been deprecated. Use the nodeAgent section with the uploaderType field to select kopia or restic as the uploader type. Note that Restic is deprecated in OpenShift API for Data Protection (OADP) 1.5.0.
5.2.1.5.5. Technology Preview
Support for HyperShift hosted OpenShift clusters is available as a Technology Preview
OADP can support and facilitate application migrations within HyperShift hosted OpenShift clusters as a Technology Preview. It ensures a seamless backup and restore operation for applications in hosted clusters.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
5.2.1.6. Upgrading OADP 1.4 to 1.5
Always upgrade to the next minor version. Do not skip versions. To update to a later version, upgrade only one channel at a time. For example, to upgrade from OADP 1.1 to 1.3, upgrade first to 1.2, and then to 1.3.
5.2.1.6.1. Changes from OADP 1.4 to 1.5
The Velero server has been updated from version 1.14 to 1.16.
This changes the following:
- Version Support changes
- OpenShift API for Data Protection implements a streamlined version support policy. Red Hat supports only one version of OpenShift API for Data Protection (OADP) on one OpenShift version to ensure better stability and maintainability. OADP 1.5.0 is supported only on OpenShift 4.19.
- OADP Self-Service
OADP 1.5.0 introduces a new feature named OADP Self-Service, enabling namespace admin users to back up and restore applications on the OpenShift Container Platform. In the earlier versions of OADP, you needed the cluster-admin role to perform OADP operations such as backing up and restoring an application, creating a backup storage location, and so on.
From OADP 1.5.0 onward, you do not need the cluster-admin role to perform the backup and restore operations. You can use OADP with the namespace admin role. The namespace admin role has administrator access only to the namespace the user is assigned to. You can use the Self-Service feature only after the cluster administrator installs the OADP Operator and provides the necessary permissions.
- backupPVC and restorePVC configurations
  A backupPVC resource is an intermediate persistent volume claim (PVC) to access data during the data movement backup operation. You create a readonly backup PVC by using the nodeAgent.backupPVC section of the DataProtectionApplication (DPA) custom resource. A restorePVC resource is an intermediate PVC that is used to write data during the Data Mover restore operation. You can configure restorePVC in the DPA by using the ignoreDelayBinding field, as shown in the following sketch.
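A minimal sketch of both configurations in the DPA, assuming the per-storage-class backupPVC map and the restorePVC.ignoreDelayBinding layout from the upstream Velero node-agent configuration; the storage class name is a placeholder:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      backupPVC:
        <storage_class_name>:
          storageClass: <storage_class_name>
          readOnly: true
      restorePVC:
        ignoreDelayBinding: true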
5.2.1.6.2. Backing up the DPA configuration
You must back up your current DataProtectionApplication (DPA) configuration.
Procedure
Save your current DPA configuration by running the following command:
Example command
$ oc get dpa -n openshift-adp -o yaml > dpa.orig.backup
5.2.1.6.3. Upgrading the OADP Operator
You can upgrade the OpenShift API for Data Protection (OADP) Operator using the following procedure.
Do not install OADP 1.5.0 on an OpenShift 4.18 cluster.
Prerequisites
- You have installed the latest OADP 1.4.6.
- You have backed up your data.
Procedure
Upgrade OpenShift 4.18 to OpenShift 4.19.
Note: OpenShift API for Data Protection (OADP) 1.4 is not supported on OpenShift 4.19.
- Change your subscription channel for the OADP Operator from stable-1.4 to stable.
- Wait for the Operator and containers to update and restart.
5.2.1.6.4. Converting DPA to the new version for OADP 1.5.0
OpenShift API for Data Protection (OADP) 1.4 is not supported on OpenShift 4.19. You can convert the Data Protection Application (DPA) to the new OADP 1.5 version by using the new spec.configuration.nodeAgent field and its sub-fields.
Procedure
- To configure the nodeAgent daemon set, use the spec.configuration.nodeAgent parameter in the DPA, as shown in the first sketch below.
- To configure the nodeAgent daemon set by using the ConfigMap resource named node-agent-config, see the second sketch below.
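The following is a minimal sketch of a DPA that configures the nodeAgent daemon set; the name and plugin list are placeholders:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
    nodeAgent:
      enable: true
      uploaderType: kopia

The following is a sketch of the node-agent-config ConfigMap; the data key and the JSON fields follow the upstream Velero node-agent configuration and should be treated as assumptions:

apiVersion: v1
kind: ConfigMap
metadata:
  name: node-agent-config
  namespace: openshift-adp
data:
  node-agent-config.json: |
    {
      "loadConcurrency": {
        "globalConfig": 2
      }
    }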
5.2.1.6.5. Verifying the upgrade
You can verify the OpenShift API for Data Protection (OADP) upgrade by using the following procedure.
Procedure
Verify that the DataProtectionApplication (DPA) has been reconciled successfully by running the following command:

$ oc get dpa dpa-sample -n openshift-adp

Example output

NAME         RECONCILED   AGE
dpa-sample   True         2m51s

Note: The RECONCILED column must be True.

Verify that the installation finished by viewing the OADP resources by running the following command:

$ oc get all -n openshift-adp

Note: The node-agent pods are created only while using restic or kopia in DataProtectionApplication (DPA). In OADP 1.4.0 and OADP 1.3.0 versions, the node-agent pods are labeled as restic.

Verify the backup storage location and confirm that the PHASE is Available by running the following command:

$ oc get backupstoragelocations.velero.io -n openshift-adp

Example output

NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true
5.3. OADP performance
5.3.1. OADP recommended network settings
For a supported experience with OpenShift API for Data Protection (OADP), you should have a stable and resilient network across OpenShift nodes, S3 storage, and supported cloud environments that meets OpenShift network requirement recommendations.
To ensure successful backup and restore operations for deployments with remote S3 buckets that are located off-cluster over suboptimal data paths, your network settings should meet the following minimum requirements in such conditions:
- Bandwidth (network upload speed to object storage): Greater than 2 Mbps for small backups and 10-100 Mbps depending on the data volume for larger backups.
- Packet loss: 1%
- Packet corruption: 1%
- Latency: 100ms
Ensure that your OpenShift Container Platform network performs optimally and meets OpenShift Container Platform network requirements.
Although Red Hat provides support for standard backup and restore failures, it does not provide support for failures caused by network settings that do not meet the recommended thresholds.
5.4. OADP features and plugins
OpenShift API for Data Protection (OADP) features provide options for backing up and restoring applications.
The default plugins enable Velero to integrate with certain cloud providers and to back up and restore OpenShift Container Platform resources.
5.4.1. OADP features
OpenShift API for Data Protection (OADP) supports the following features:
- Backup
You can use OADP to back up all applications on the OpenShift Container Platform, or you can filter the resources by type, namespace, or label.
OADP backs up Kubernetes objects and internal images by saving them as an archive file on object storage. OADP backs up persistent volumes (PVs) by creating snapshots with the native cloud snapshot API or with the Container Storage Interface (CSI). For cloud providers that do not support snapshots, OADP backs up resources and PV data with Restic.
Note: You must exclude Operators from the backup of an application for backup and restore to succeed.
- Restore
You can restore resources and PVs from a backup. You can restore all objects in a backup or filter the objects by namespace, PV, or label.
Note: You must exclude Operators from the backup of an application for backup and restore to succeed.
- Schedule
- You can schedule backups at specified intervals.
- Hooks
- You can use hooks to run commands in a container on a pod, for example, fsfreeze to freeze a file system. You can configure a hook to run before or after a backup or restore. Restore hooks can run in an init container or in the application container.
5.4.2. OADP plugins
The OpenShift API for Data Protection (OADP) provides default Velero plugins that are integrated with storage providers to support backup and snapshot operations. You can create custom plugins based on the Velero plugins.
OADP also provides plugins for OpenShift Container Platform resource backups, OpenShift Virtualization resource backups, and Container Storage Interface (CSI) snapshots.
| OADP plugin | Function | Storage location |
|---|---|---|
| aws | Backs up and restores Kubernetes objects. | AWS S3 |
| aws | Backs up and restores volumes with snapshots. | AWS EBS |
| azure | Backs up and restores Kubernetes objects. | Microsoft Azure Blob storage |
| azure | Backs up and restores volumes with snapshots. | Microsoft Azure Managed Disks |
| gcp | Backs up and restores Kubernetes objects. | Google Cloud Storage |
| gcp | Backs up and restores volumes with snapshots. | Google Compute Engine Disks |
| openshift | Backs up and restores OpenShift Container Platform resources. [1] | Object store |
| kubevirt | Backs up and restores OpenShift Virtualization resources. [2] | Object store |
| csi | Backs up and restores volumes with CSI snapshots. [3] | Cloud storage that supports CSI snapshots |
| hypershift | Backs up and restores HyperShift hosted cluster resources. [4] | Object store |
1. Mandatory.
2. Virtual machine disks are backed up with CSI snapshots or Restic.
3. The csi plugin uses the Kubernetes CSI snapshot API.
   - OADP 1.1 or later uses snapshot.storage.k8s.io/v1
   - OADP 1.0 uses snapshot.storage.k8s.io/v1beta1
4. Do not add the hypershift plugin in the DataProtectionApplication custom resource if the cluster is not a HyperShift hosted cluster.
5.4.3. About OADP Velero plugins
You can configure two types of plugins when you install Velero:
- Default cloud provider plugins
- Custom plugins
Both types of plugin are optional, but most users configure at least one cloud provider plugin.
5.4.3.1. Default Velero cloud provider plugins
You can install any of the following default Velero cloud provider plugins when you configure the oadp_v1alpha1_dpa.yaml file during deployment:
- aws (Amazon Web Services)
- gcp (Google Cloud)
- azure (Microsoft Azure)
- openshift (OpenShift Velero plugin)
- csi (Container Storage Interface)
- kubevirt (KubeVirt)
You specify the desired default plugins in the oadp_v1alpha1_dpa.yaml file during deployment.
Example file
The following .yaml file installs the openshift, aws, azure, and gcp plugins:
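A minimal sketch of such a file; the DPA name is a placeholder:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
        - azure
        - gcp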
5.4.3.2. Custom Velero plugins
You can install a custom Velero plugin by specifying the plugin image and name when you configure the oadp_v1alpha1_dpa.yaml file during deployment.
You specify the desired custom plugins in the oadp_v1alpha1_dpa.yaml file during deployment.
Example file
The following .yaml file installs the default openshift, azure, and gcp plugins and a custom plugin that has the name custom-plugin-example and the image quay.io/example-repo/custom-velero-plugin:
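A minimal sketch of such a file; the DPA name is a placeholder, and the custom plugin name and image are the values given above:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - azure
        - gcp
      customPlugins:
        - name: custom-plugin-example
          image: quay.io/example-repo/custom-velero-plugin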
5.4.4. Supported architectures for OADP
OpenShift API for Data Protection (OADP) supports the following architectures:
- AMD64
- ARM64
- PPC64le
- s390x
OADP 1.2.0 and later versions support the ARM64 architecture.
5.4.5. OADP support for IBM Power and IBM Z
OpenShift API for Data Protection (OADP) is platform neutral. The information that follows relates only to IBM Power® and to IBM Z®.
- OADP 1.3.6 was tested successfully against OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15 for both IBM Power® and IBM Z®. The sections that follow give testing and support information for OADP 1.3.6 in terms of backup locations for these systems.
- OADP 1.4.6 was tested successfully against OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17 for both IBM Power® and IBM Z®. The sections that follow give testing and support information for OADP 1.4.6 in terms of backup locations for these systems.
- OADP 1.5.4 was tested successfully against OpenShift Container Platform 4.19 for both IBM Power® and IBM Z®. The sections that follow give testing and support information for OADP 1.5.4 in terms of backup locations for these systems.
5.4.5.1. OADP support for target backup locations using IBM Power
- IBM Power® running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.3.6 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power® with OpenShift Container Platform 4.13, 4.14, and 4.15, and OADP 1.3.6 against all S3 backup location targets, which are not AWS, as well.
- IBM Power® running with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and OADP 1.4.6 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power® with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and OADP 1.4.6 against all S3 backup location targets, which are not AWS, as well.
- IBM Power® running with OpenShift Container Platform 4.19 and OADP 1.5.4 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Power® with OpenShift Container Platform 4.19 and OADP 1.5.4 against all S3 backup location targets, which are not AWS, as well.
5.4.5.2. OADP testing and support for target backup locations using IBM Z
- IBM Z® running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.3.6 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z® with OpenShift Container Platform 4.13, 4.14, and 4.15, and OADP 1.3.6 against all S3 backup location targets, which are not AWS, as well.
- IBM Z® running with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and OADP 1.4.6 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z® with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and OADP 1.4.6 against all S3 backup location targets, which are not AWS, as well.
- IBM Z® running with OpenShift Container Platform 4.19 and OADP 1.5.4 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat supports running IBM Z® with OpenShift Container Platform 4.19 and OADP 1.5.4 against all S3 backup location targets, which are not AWS, as well.
5.4.5.2.1. Known issue of OADP using IBM Power® and IBM Z® platforms
- Currently, there are backup method restrictions for Single-node OpenShift clusters deployed on IBM Power® and IBM Z® platforms. Only NFS storage is currently compatible with Single-node OpenShift clusters on these platforms. In addition, only the File System Backup (FSB) methods such as Kopia and Restic are supported for backup and restore operations. There is currently no workaround for this issue.
5.4.6. OADP and FIPS
Federal Information Processing Standards (FIPS) are a set of computer security standards developed by the United States federal government in line with the Federal Information Security Management Act (FISMA).
OpenShift API for Data Protection (OADP) has been tested and works on FIPS-enabled OpenShift Container Platform clusters.
5.4.7. Avoiding the Velero plugin panic error
A missing secret can cause a panic error for the Velero plugin during image stream backups.
When the backup and the Backup Storage Location (BSL) are managed outside the scope of the Data Protection Application (DPA), the OADP controller does not create the relevant oadp-<bsl_name>-<bsl_provider>-registry-secret parameter.
During the backup operation, the OpenShift Velero plugin panics on the imagestream backup, with the following panic error:
2024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item"
backup=openshift-adp/<backup name> error="error executing custom action (groupResource=imagestreams.image.openshift.io,
namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked:
runtime error: index out of range with length 1, stack trace: goroutine 94…
Use the following workaround to avoid the Velero plugin panic error.
Procedure
Label the custom BSL with the relevant label by using the following command:
$ oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl

After the BSL is labeled, wait until the DPA reconciles.
Note: You can force the reconciliation by making any minor change to the DPA itself.
Verification
After the DPA is reconciled, confirm that the parameter has been created and that the correct registry data has been populated into it by entering the following command:
$ oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'
5.4.8. Workaround for OpenShift ADP Controller segmentation fault
If you configure a Data Protection Application (DPA) with both cloudstorage and restic enabled, the openshift-adp-controller-manager pod crashes and restarts indefinitely until the pod fails with a crash loop segmentation fault.
Define either velero or cloudstorage when you configure a DPA. Otherwise, the openshift-adp-controller-manager pod fails with a crash loop segmentation fault due to the following settings:
- If you define both velero and cloudstorage, the openshift-adp-controller-manager fails.
- If you do not define both velero and cloudstorage, the openshift-adp-controller-manager fails.
For more information about this issue, see OADP-1054.
5.5. OADP use cases
5.5.1. Backup using OpenShift API for Data Protection and Red Hat OpenShift Data Foundation (ODF)
The following is a use case for using OADP and ODF to back up an application.
5.5.1.1. Backing up an application using OADP and ODF
In this use case, you back up an application by using OADP and store the backup in an object storage provided by Red Hat OpenShift Data Foundation (ODF).
- You create an object bucket claim (OBC) to configure the backup storage location. You use ODF to configure an Amazon S3-compatible object storage bucket. ODF provides MultiCloud Object Gateway (NooBaa MCG) and Ceph Object Gateway, also known as RADOS Gateway (RGW), object storage services. In this use case, you use NooBaa MCG as the backup storage location.
- You use the NooBaa MCG service with OADP by using the aws provider plugin.
- You configure the Data Protection Application (DPA) with the backup storage location (BSL).
- You create a backup custom resource (CR) and specify the application namespace to back up.
- You create and verify the backup.
Prerequisites
- You installed the OADP Operator.
- You installed the ODF Operator.
- You have an application with a database running in a separate namespace.
Procedure
Create an OBC manifest file to request a NooBaa MCG bucket as shown in the following example:
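A minimal sketch of such a manifest, assuming the openshift-storage.noobaa.io storage class and the openshift-adp namespace:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: test-obc
  namespace: openshift-adp
spec:
  generateBucketName: test-backup-bucket
  storageClassName: openshift-storage.noobaa.io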
where:
- test-obc: Specifies the name of the object bucket claim.
- test-backup-bucket: Specifies the name of the bucket.
Create the OBC by running the following command:
$ oc create -f <obc_file_name>

where:
- <obc_file_name>: Specifies the file name of the object bucket claim manifest.
When you create an OBC, ODF creates a secret and a config map with the same name as the object bucket claim. The secret has the bucket credentials, and the config map has information to access the bucket. To get the bucket name and bucket host from the generated config map, run the following command. test-obc is the name of the OBC.

$ oc extract --to=- cm/test-obc

To get the bucket credentials from the generated secret, run the following command:

$ oc extract --to=- secret/test-obc

Example output

# AWS_ACCESS_KEY_ID
ebYR....xLNMc
# AWS_SECRET_ACCESS_KEY
YXf...+NaCkdyC3QPym

Get the public URL for the S3 endpoint from the s3 route in the openshift-storage namespace by running the following command:

$ oc get route s3 -n openshift-storage

Create a cloud-credentials file with the object bucket credentials as shown in the following example:

[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>

Create the cloud-credentials secret with the cloud-credentials file content by running the following command:

$ oc create secret generic \
  cloud-credentials \
  -n openshift-adp \
  --from-file cloud=cloud-credentials

Configure the Data Protection Application (DPA) as shown in the following example:
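A minimal sketch of such a DPA, assuming the aws provider plugin against the NooBaa S3 endpoint; the s3Url, region, and prefix values are placeholders for the values gathered in the previous steps:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
        - csi
      defaultSnapshotMoveData: true
    nodeAgent:
      enable: true
      uploaderType: kopia
  backupLocations:
    - velero:
        provider: aws
        default: true
        credential:
          key: cloud
          name: cloud-credentials
        objectStorage:
          bucket: <bucket_name>
          prefix: oadp
        config:
          profile: "default"
          region: noobaa
          s3Url: <s3_url>
          s3ForcePathStyle: "true"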
where:
- defaultSnapshotMoveData: Set to true to use the OADP Data Mover to enable movement of Container Storage Interface (CSI) snapshots to a remote object storage.
- s3Url: Specifies the S3 URL of ODF storage.
- <bucket_name>: Specifies the bucket name.
Create the DPA by running the following command:
$ oc apply -f <dpa_filename>

Verify that the DPA is created successfully by running the following command. In the example output, you can see the status object has the type field set to Reconciled. This means the DPA is successfully created.

$ oc get dpa -o yaml

Verify that the backup storage location (BSL) is available by running the following command:

$ oc get backupstoragelocations.velero.io -n openshift-adp

Example output

NAME           PHASE       LAST VALIDATED   AGE   DEFAULT
dpa-sample-1   Available   3s               15s   true

Configure a backup CR as shown in the following example:
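A minimal sketch of such a backup CR; test-backup matches the name used in the verification step:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
    - <application_namespace>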
where:
- <application_namespace>: Specifies the namespace for the application to back up.
Create the backup CR by running the following command:
$ oc apply -f <backup_cr_filename>
Verification
Verify that the backup object is in the Completed phase by running the following command:

$ oc describe backup test-backup -n openshift-adp
5.5.2. OpenShift API for Data Protection (OADP) restore use case
The following is a use case for using OADP to restore a backup to a different namespace.
5.5.2.1. Restoring an application to a different namespace using OADP
Restore a backup of an application by using OADP to a new target namespace, test-restore-application. To restore a backup, you create a restore custom resource (CR) as shown in the following example. In the restore CR, the source namespace refers to the application namespace that you included in the backup. You then verify the restore by changing your project to the new restored namespace and verifying the resources.
Prerequisites
- You installed the OADP Operator.
- You have the backup of an application to be restored.
Procedure
Create a restore CR as shown in the following example:
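A minimal sketch of such a restore CR:

apiVersion: velero.io/v1
kind: Restore
metadata:
  name: test-restore
  namespace: openshift-adp
spec:
  backupName: <backup_name>
  restorePVs: true
  namespaceMapping:
    <application_namespace>: test-restore-application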
where:
- test-restore: Specifies the name of the restore CR.
- <backup_name>: Specifies the name of the backup.
- <application_namespace>: Specifies the source application namespace that you included in the backup. The namespaceMapping field maps the source application namespace to the target application namespace; test-restore-application is the name of the target namespace where you want to restore the backup.
Apply the restore CR by running the following command:
$ oc apply -f <restore_cr_filename>
Verification
Verify that the restore is in the Completed phase by running the following command:

$ oc describe restores.velero.io <restore_name> -n openshift-adp

Change to the restored namespace test-restore-application by running the following command:

$ oc project test-restore-application

Verify the restored resources such as persistent volume claim (pvc), service (svc), deployment, secret, and config map by running the following command:

$ oc get pvc,svc,deployment,secret,configmap
5.5.3. Including a self-signed CA certificate during backup
You can include a self-signed Certificate Authority (CA) certificate in the Data Protection Application (DPA) and then back up an application. You store the backup in a NooBaa bucket provided by Red Hat OpenShift Data Foundation (ODF).
5.5.3.1. Backing up an application and its self-signed CA certificate
The s3.openshift-storage.svc service, provided by ODF, uses a Transport Layer Security protocol (TLS) certificate that is signed with the self-signed service CA.
To prevent a certificate signed by unknown authority error, you must include a self-signed CA certificate in the backup storage location (BSL) section of the DataProtectionApplication custom resource (CR). For this situation, you must complete the following tasks:
- Request a NooBaa bucket by creating an object bucket claim (OBC).
- Extract the bucket details.
- Include a self-signed CA certificate in the DataProtectionApplication CR.
- Back up an application.
Prerequisites
- You installed the OADP Operator.
- You installed the ODF Operator.
- You have an application with a database running in a separate namespace.
Procedure
Create an OBC manifest to request a NooBaa bucket as shown in the following example:
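A minimal ObjectBucketClaim sketch matching the callouts below, assuming the openshift-storage.noobaa.io storage class that ODF provides:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: test-obc
  namespace: openshift-adp
spec:
  generateBucketName: test-backup-bucket
  storageClassName: openshift-storage.noobaa.io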
where:
test-obc - Specifies the name of the object bucket claim.
test-backup-bucket - Specifies the name of the bucket.
Create the OBC by running the following command:
$ oc create -f <obc_file_name>

When you create an OBC, ODF creates a secret and a ConfigMap with the same name as the object bucket claim. The secret object contains the bucket credentials, and the ConfigMap object contains information to access the bucket. To get the bucket name and bucket host from the generated config map, run the following command:
$ oc extract --to=- cm/test-obc

test-obc is the name of the OBC.

To get the bucket credentials from the secret object, run the following command:

$ oc extract --to=- secret/test-obc

Example output
# AWS_ACCESS_KEY_ID
ebYR....xLNMc
# AWS_SECRET_ACCESS_KEY
YXf...+NaCkdyC3QPym

Create a cloud-credentials file with the object bucket credentials by using the following example configuration:

[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>

Create the cloud-credentials secret with the cloud-credentials file content by running the following command:

$ oc create secret generic \
  cloud-credentials \
  -n openshift-adp \
  --from-file cloud=cloud-credentials

Extract the service CA certificate from the openshift-service-ca.crt config map by running the following command. Ensure that you encode the certificate in Base64 format and note the value to use in the next step:

$ oc get cm/openshift-service-ca.crt \
  -o jsonpath='{.data.service-ca\.crt}' | base64 -w0; echo

Example output
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... ....gpwOHMwaG9CRmk5a3....FLS0tLS0K

Configure the DataProtectionApplication CR manifest file with the bucket name and CA certificate as shown in the following example:
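A minimal DataProtectionApplication sketch for this scenario, assuming the openshift and aws plugins and the s3.openshift-storage.svc endpoint; the prefix and region values are illustrative:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: velero
          caCert: <ca_cert>
        config:
          profile: "default"
          region: noobaa
          s3Url: https://s3.openshift-storage.svc
          s3ForcePathStyle: "true"
          insecureSkipTLSVerify: "false"
        credential:
          key: cloud
          name: cloud-credentials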
where:

insecureSkipTLSVerify - Specifies whether SSL/TLS security is enabled. If set to true, SSL/TLS security is disabled. If set to false, SSL/TLS security is enabled.
<bucket_name> - Specifies the name of the bucket extracted in an earlier step.
<ca_cert> - Specifies the Base64-encoded certificate from the previous step.
Create the DataProtectionApplication CR by running the following command:

$ oc apply -f <dpa_filename>
DataProtectionApplicationCR is created successfully by running the following command:oc get dpa -o yaml
$ oc get dpa -o yamlCopy to Clipboard Copied! Toggle word wrap Toggle overflow Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the backup storage location (BSL) is available by running the following command:
$ oc get backupstoragelocations.velero.io -n openshift-adp

Example output
NAME           PHASE       LAST VALIDATED   AGE   DEFAULT
dpa-sample-1   Available   3s               15s   true

Configure the Backup CR by using the following example:
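A minimal Backup CR sketch that matches the callout below; test-backup is the name used in the verification step:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
    - <application_namespace>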
where:

<application_namespace> - Specifies the namespace for the application to back up.
Create the Backup CR by running the following command:

$ oc apply -f <backup_cr_filename>
Verification
Verify that the Backup object is in the Completed phase by running the following command:

$ oc describe backup test-backup -n openshift-adp
5.5.4. Using the legacy-aws Velero plugin
If you are using an AWS S3-compatible backup storage location, you might get a SignatureDoesNotMatch error while backing up your application. This error occurs because some backup storage locations still use the older versions of the S3 APIs, which are incompatible with the newer AWS SDK for Go V2. To resolve this issue, you can use the legacy-aws Velero plugin in the DataProtectionApplication custom resource (CR). The legacy-aws Velero plugin uses the older AWS SDK for Go V1, which is compatible with the legacy S3 APIs, ensuring successful backups.
5.5.4.1. Using the legacy-aws Velero plugin in the DataProtectionApplication CR
In the following use case, you configure the DataProtectionApplication CR with the legacy-aws Velero plugin and then back up an application.
Depending on the backup storage location you choose, you can use either the legacy-aws or the aws plugin in your DataProtectionApplication CR. If you use both of the plugins in the DataProtectionApplication CR, the following error occurs: aws and legacy-aws can not be both specified in DPA spec.configuration.velero.defaultPlugins.
Prerequisites
- You have installed the OADP Operator.
- You have configured an AWS S3-compatible object storage as a backup location.
- You have an application with a database running in a separate namespace.
Procedure
Configure the DataProtectionApplication CR to use the legacy-aws Velero plugin as shown in the following example:
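A minimal DataProtectionApplication sketch with the legacy-aws plugin; the region, s3Url, and prefix values are placeholders for your S3-compatible storage:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - legacy-aws
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: velero
        config:
          profile: "default"
          region: <region>
          s3Url: <s3_url>
          s3ForcePathStyle: "true"
        credential:
          key: cloud
          name: cloud-credentials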
where:

legacy-aws - Specifies to use the legacy-aws plugin.
<bucket_name> - Specifies the bucket name.
Create the DataProtectionApplication CR by running the following command:

$ oc apply -f <dpa_filename>

Verify that the DataProtectionApplication CR is created successfully by running the following command. In the output, check that the status object has the type field set to Reconciled and the status field set to "True". That status indicates that the DataProtectionApplication CR is successfully created.

$ oc get dpa -o yaml
Verify that the backup storage location (BSL) is available by running the following command:
$ oc get backupstoragelocations.velero.io -n openshift-adp

You should see an output similar to the following example:
NAME           PHASE       LAST VALIDATED   AGE   DEFAULT
dpa-sample-1   Available   3s               15s   true

Configure a Backup CR as shown in the following example:
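As in the previous use case, a minimal Backup CR sketch; test-backup is the name used in the verification step:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
    - <application_namespace>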
where:

<application_namespace> - Specifies the namespace for the application to back up.
Create the Backup CR by running the following command:

$ oc apply -f <backup_cr_filename>
Verification
Verify that the backup object is in the Completed phase by running the following command:

$ oc describe backups.velero.io test-backup -n openshift-adp
5.5.5. Backing up workloads on OADP with OpenShift Container Platform
To back up and restore workloads on Red Hat OpenShift Service on AWS (ROSA), you can use OADP. You can create a backup of a workload, restore it from the backup, and verify the restoration. You can also clean up the OADP Operator, backup storage, and AWS resources when they are no longer needed.
5.5.5.1. Performing a backup with OADP and OpenShift Container Platform
The following example hello-world application has no persistent volumes (PVs) attached. Perform a backup by using OpenShift API for Data Protection (OADP) with OpenShift Container Platform.
Either Data Protection Application (DPA) configuration will work.
Procedure
Create a workload to back up by running the following commands:
$ oc create namespace hello-world

$ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift

Expose the route by running the following command:
$ oc expose service/hello-openshift -n hello-world

Check that the application is working by running the following command:

$ curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`

You should see an output similar to the following example:
Hello OpenShift!

Back up the workload by running the following command:
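The backup command is sketched here, assuming a Backup CR named hello-world created inline and the default backup storage location:

$ cat << EOF | oc create -f -
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: hello-world
  namespace: openshift-adp
spec:
  includedNamespaces:
    - hello-world
EOF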
Wait until the backup is complete, and then run the following command:
watch "oc -n openshift-adp get backup hello-world -o json | jq .status"
$ watch "oc -n openshift-adp get backup hello-world -o json | jq .status"Copy to Clipboard Copied! Toggle word wrap Toggle overflow You should see an output similar to the following example:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Delete the demo workload by running the following command:
$ oc delete ns hello-world

Restore the workload from the backup by running the following command:
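A sketch of the restore command, assuming a Restore CR named hello-world that references the backup of the same name:

$ cat << EOF | oc create -f -
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: hello-world
  namespace: openshift-adp
spec:
  backupName: hello-world
EOF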
Wait for the Restore to finish by running the following command:
watch "oc -n openshift-adp get restore hello-world -o json | jq .status"
$ watch "oc -n openshift-adp get restore hello-world -o json | jq .status"Copy to Clipboard Copied! Toggle word wrap Toggle overflow You should see an output similar to the following example:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Check that the workload is restored by running the following command:
$ oc -n hello-world get pods

You should see an output similar to the following example:
NAME                              READY   STATUS    RESTARTS   AGE
hello-openshift-9f885f7c6-kdjpj   1/1     Running   0          90s

Check that the application is working again by running the following command:
$ curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`

You should see an output similar to the following example:
Hello OpenShift!
For troubleshooting tips, see the troubleshooting documentation.
5.5.5.2. Cleaning up a cluster after a backup with OADP and ROSA STS
If you need to uninstall the OpenShift API for Data Protection (OADP) Operator together with the backups and the S3 bucket from this example, follow these instructions.
Procedure
Delete the workload by running the following command:
$ oc delete ns hello-world

Delete the Data Protection Application (DPA) by running the following command:
$ oc -n openshift-adp delete dpa ${CLUSTER_NAME}-dpa

Delete the cloud storage by running the following command:
$ oc -n openshift-adp delete cloudstorage ${CLUSTER_NAME}-oadp

Warning
If this command hangs, you might need to delete the finalizer by running the following command:
$ oc -n openshift-adp patch cloudstorage ${CLUSTER_NAME}-oadp -p '{"metadata":{"finalizers":null}}' --type=merge

If the Operator is no longer required, remove it by running the following command:
$ oc -n openshift-adp delete subscription oadp-operator

Remove the namespace of the Operator by running the following command:
$ oc delete ns openshift-adp

If the backup and restore resources are no longer required, remove them from the cluster by running the following command:
$ oc delete backups.velero.io hello-world

To delete the backup, restore, and remote objects in AWS S3, run the following command:
$ velero backup delete hello-world

If you no longer need the Custom Resource Definitions (CRDs), remove them from the cluster by running the following command:
$ for CRD in `oc get crds | grep velero | awk '{print $1}'`; do oc delete crd $CRD; done

Delete the AWS S3 bucket by running the following commands:
$ aws s3 rm s3://${CLUSTER_NAME}-oadp --recursive

$ aws s3api delete-bucket --bucket ${CLUSTER_NAME}-oadp

Detach the policy from the role by running the following command:
$ aws iam detach-role-policy --role-name "${ROLE_NAME}" --policy-arn "${POLICY_ARN}"

Delete the role by running the following command:
$ aws iam delete-role --role-name "${ROLE_NAME}"
5.6. Installing OADP
5.6.1. About installing OADP
As a cluster administrator, you install the OpenShift API for Data Protection (OADP) by installing the OADP Operator. The OADP Operator installs Velero 1.16.
Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.
To back up Kubernetes resources and internal images, you must have object storage as a backup location, such as one of the following storage types:
- Amazon Web Services
- Microsoft Azure
- Google Cloud
- Multicloud Object Gateway
- IBM Cloud® Object Storage S3
- AWS S3 compatible object storage, such as Multicloud Object Gateway or MinIO
You can configure multiple backup storage locations within the same namespace for each individual OADP deployment.
Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa.
For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications.
You can back up persistent volumes (PVs) by using snapshots or a File System Backup (FSB).
To back up PVs with snapshots, you must have a cloud provider that supports either a native snapshot API or Container Storage Interface (CSI) snapshots, such as one of the following cloud providers:
- Amazon Web Services
- Microsoft Azure
- Google Cloud
- CSI snapshot-enabled cloud provider, such as OpenShift Data Foundation
If you want to use CSI backup on OCP 4.11 and later, install OADP 1.1.x.
OADP 1.0.x does not support CSI backup on OCP 4.11 and later. OADP 1.0.x includes Velero 1.7.x and expects the API group snapshot.storage.k8s.io/v1beta1, which is not present on OCP 4.11 and later.
If your cloud provider does not support snapshots, or if your storage is NFS, you can back up applications with File System Backup (FSB), which uses Kopia or Restic, on object storage.
You create a default Secret and then you install the Data Protection Application.
5.6.1.1. AWS S3 compatible backup storage providers
OADP works with many S3-compatible object storage providers. Several object storage providers are certified and tested with every release of OADP. Various S3 providers are known to work with OADP but are not specifically tested and certified. These providers will be supported on a best-effort basis. Additionally, there are a few S3 object storage providers with known issues and limitations that are listed in this documentation.
Red Hat will provide support for OADP on any S3-compatible storage, but support will stop if the S3 endpoint is determined to be the root cause of an issue.
5.6.1.1.1. Certified backup storage providers
The following AWS S3 compatible object storage providers are fully supported by OADP through the AWS plugin for use as backup storage locations:
- MinIO
- Multicloud Object Gateway (MCG)
- Amazon Web Services (AWS) S3
- IBM Cloud® Object Storage S3
- Ceph RADOS Gateway (Ceph Object Gateway)
- Red Hat Container Storage
- Red Hat OpenShift Data Foundation
- NetApp ONTAP S3 Object Storage
- Scality ARTESCA S3 object storage
The following compatible object storage providers are supported and have their own Velero object store plugins:
- Google Cloud
- Microsoft Azure
5.6.1.1.2. Unsupported backup storage providers
The following AWS S3 compatible object storage providers are known to work with Velero through the AWS plugin for use as backup storage locations; however, they are unsupported and have not been tested by Red Hat:
- Oracle Cloud
- DigitalOcean
- NooBaa, unless installed using Multicloud Object Gateway (MCG)
- Tencent Cloud
- Ceph RADOS v12.2.7
- Quobyte
- Cloudian HyperStore
Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa.
For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications.
5.6.1.1.3. Backup storage providers with known limitations
The following AWS S3 compatible object storage providers are known to work with Velero through the AWS plugin with a limited feature set:
- Swift - It works for use as a backup storage location, but it is not compatible with Restic for filesystem-based volume backup and restore.
5.6.1.2. Configuring Multicloud Object Gateway (MCG) for disaster recovery on OpenShift Data Foundation
If you use cluster storage for your MCG bucket backupStorageLocation on OpenShift Data Foundation, configure MCG as an external object store.
Failure to configure MCG as an external object store might lead to backups not being available.
Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa.
For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications.
Procedure
- Configure MCG as an external object store as described in Adding storage resources for hybrid or Multicloud.
5.6.1.3. About OADP update channels
When you install an OADP Operator, you choose an update channel. This channel determines which upgrades to the OADP Operator and to Velero you receive. You can switch channels at any time.
The following update channels are available:
- The stable channel is now deprecated. The stable channel contains the patches (z-stream updates) of the OADP ClusterServiceVersion for OADP.v1.1.z and older versions from OADP.v1.0.z.
- The stable-1.0 channel is deprecated and is not supported.
- The stable-1.1 channel is deprecated and is not supported.
- The stable-1.2 channel is deprecated and is not supported.
- The stable-1.3 channel contains OADP.v1.3.z, the most recent OADP 1.3 ClusterServiceVersion.
- The stable-1.4 channel contains OADP.v1.4.z, the most recent OADP 1.4 ClusterServiceVersion.
For more information, see OpenShift Operator Life Cycles.
Which update channel is right for you?
- The stable channel is now deprecated. If you are already using the stable channel, you will continue to get updates from OADP.v1.1.z.
- Choose the stable-1.y update channel to install OADP 1.y and to continue receiving patches for it. If you choose this channel, you will receive all z-stream patches for version 1.y.z.
When must you switch update channels?
- If you have OADP 1.y installed, and you want to receive patches only for that y-stream, you must switch from the stable update channel to the stable-1.y update channel. You will then receive all z-stream patches for version 1.y.z.
- If you have OADP 1.0 installed, want to upgrade to OADP 1.1, and then receive patches only for OADP 1.1, you must switch from the stable-1.0 update channel to the stable-1.1 update channel. You will then receive all z-stream patches for version 1.1.z.
- If you have OADP 1.y installed, with y greater than 0, and want to switch to OADP 1.0, you must uninstall your OADP Operator and then reinstall it using the stable-1.0 update channel. You will then receive all z-stream patches for version 1.0.z.
You cannot switch from OADP 1.y to OADP 1.0 by switching update channels. You must uninstall the Operator and then reinstall it.
5.6.1.4. Installation of OADP on multiple namespaces
You can install OpenShift API for Data Protection into multiple namespaces on the same cluster so that multiple project owners can manage their own OADP instance. This use case has been validated with File System Backup (FSB) and Container Storage Interface (CSI).
You install each instance of OADP as specified by the per-platform procedures contained in this document with the following additional requirements:
- All deployments of OADP on the same cluster must be the same version, for example, 1.4.0. Installing different versions of OADP on the same cluster is not supported.
- Each individual deployment of OADP must have a unique set of credentials and at least one BackupStorageLocation configuration. You can also use multiple BackupStorageLocation configurations within the same namespace.
- By default, each OADP deployment has cluster-level access across namespaces. OpenShift Container Platform administrators need to carefully review potential impacts, such as not backing up and restoring to and from the same namespace concurrently.
5.6.1.5. OADP support for backup data immutability
Starting with OADP 1.4, you can store OADP backups in an AWS S3 bucket with enabled versioning. The versioning support is only for AWS S3 buckets and not for S3-compatible buckets.
See the following list for specific cloud provider limitations:
- The AWS S3 service supports backups because an S3 object lock applies only to versioned buckets. You can still update the object data as a new version. However, when backups are deleted, old versions of the objects are not deleted.
- OADP backups are not supported and might not work as expected when you enable immutability on Azure Storage Blob.
- Google Cloud storage policy only supports bucket-level immutability. Therefore, it is not feasible to implement it in the Google Cloud environment.
Depending on your storage provider, the immutability option goes by different names:
- S3 object lock
- Object retention
- Bucket versioning
- Write Once Read Many (WORM) buckets
The primary reason for the absence of support for other S3-compatible object storage is that OADP initially saves the state of a backup as finalizing and then verifies whether any asynchronous operations are in progress.
5.6.1.6. Velero CPU and memory requirements based on collected data
The following recommendations are based on observations of performance made in the scale and performance lab. The backup and restore resources can be impacted by the type of plugin, the amount of resources required by that backup or restore, and the respective data contained in the persistent volumes (PVs) related to those resources.
5.6.1.6.1. CPU and memory requirement for configurations
| Configuration types | [1] Average usage | [2] Large usage | resourceTimeouts |
|---|---|---|---|
| CSI | Velero: CPU- Request 200m, Limits 1000m Memory - Request 256Mi, Limits 1024Mi | Velero: CPU- Request 200m, Limits 2000m Memory- Request 256Mi, Limits 2048Mi | N/A |
| Restic | [3] Restic: CPU- Request 1000m, Limits 2000m Memory - Request 16Gi, Limits 32Gi | [4] Restic: CPU - Request 2000m, Limits 8000m Memory - Request 16Gi, Limits 40Gi | 900m |
| [5] Data Mover | N/A | N/A | 10m - average usage 60m - large usage |
- Average usage - use these settings for most usage situations.
- Large usage - use these settings for large usage situations, such as a large PV (500GB Usage), multiple namespaces (100+), or many pods within a single namespace (2000 pods+), and for optimal performance for backup and restore involving large datasets.
- Restic resource usage corresponds to the amount and type of data. For example, many small files or large amounts of data can cause Restic to use large amounts of resources. The Velero documentation references 500m as a supplied default; for most of our testing, we found a 200m request suitable with a 1000m limit. As cited in the Velero documentation, exact CPU and memory usage depends on the scale of files and directories, in addition to environmental limitations.
- Increasing the CPU has a significant impact on improving backup and restore times.
- Data Mover - Data Mover default resourceTimeout is 10m. Our tests show that for restoring a large PV (500GB usage), it is required to increase the resourceTimeout to 60m.
The resource requirements listed throughout the guide are for average usage only. For large usage, adjust the settings as described in the table above.
5.6.1.6.2. NodeAgent CPU for large usage
Testing shows that increasing NodeAgent CPU can significantly improve backup and restore times when using OpenShift API for Data Protection (OADP).
You can tune your OpenShift Container Platform environment based on your performance analysis and preference. Avoid CPU limits on the workloads when you use Kopia for file system backups.
If you do not use CPU limits on the pods, the pods can use excess CPU when it is available. If you specify CPU limits, the pods might be throttled if they exceed their limits. Therefore, the use of CPU limits on the pods is considered an anti-pattern.
Ensure that you are accurately specifying CPU requests so that pods can take advantage of excess CPU. Resource allocation is guaranteed based on CPU requests rather than CPU limits.
Testing showed that running Kopia with 20 cores and 32 Gi memory supported backup and restore operations of over 100 GB of data, multiple namespaces, or over 2000 pods in a single namespace. Testing detected no CPU limiting or memory saturation with these resource specifications.
In some environments, you might need to adjust Ceph MDS pod resources to avoid pod restarts, which occur when default settings cause resource saturation.
For more information about how to set the pod resources limit in Ceph MDS pods, see Changing the CPU and memory resources on the rook-ceph pods.
5.6.2. Installing the OADP Operator
Install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.19 by using Operator Lifecycle Manager (OLM).
The OADP Operator installs Velero 1.16.
5.6.2.1. Installing the OADP Operator
Install the OADP Operator by using the OpenShift Container Platform web console.
Prerequisites
You must be logged in as a user with cluster-admin privileges.

Procedure

- In the OpenShift Container Platform web console, click Operators → OperatorHub.
- Use the Filter by keyword field to find the OADP Operator.
- Select the OADP Operator and click Install.
- Click Install to install the Operator in the openshift-adp project.
- Click Operators → Installed Operators to verify the installation.
5.6.2.2. OADP-Velero-OpenShift Container Platform version relationship
Review the version relationship between OADP, Velero, and OpenShift Container Platform to decide compatible version combinations. This helps you select the appropriate OADP version for your cluster environment.
5.7. Configuring OADP with AWS S3 compatible storage
5.7.1. Configuring the OpenShift API for Data Protection with AWS S3 compatible storage
You install the OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) S3 compatible storage by installing the OADP Operator. The Operator installs Velero 1.16.
Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.
You configure AWS for Velero, create a default Secret, and then install the Data Protection Application. For more details, see Installing the OADP Operator.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager in disconnected environments for details.
5.7.1.1. About Amazon Simple Storage Service, Identity and Access Management, and GovCloud
Review Amazon Simple Storage Service (S3), Identity and Access Management (IAM), and AWS GovCloud requirements to configure backup storage with appropriate security controls. This helps you meet federal data security requirements and use correct endpoints.
AWS S3 is a storage solution of Amazon for the internet. As an authorized user, you can use this service to store and retrieve any amount of data whenever you want, from anywhere on the web.
You securely control access to Amazon S3 and other Amazon services by using the AWS Identity and Access Management (IAM) web service.
You can use IAM to manage permissions that control which AWS resources users can access. You use IAM to both authenticate, or verify that a user is who they claim to be, and to authorize, or grant permissions to use resources.
AWS GovCloud (US) is an Amazon storage solution developed to meet the stringent and specific data security requirements of the United States Federal Government. AWS GovCloud (US) works the same as Amazon S3 except for the following:
- You cannot copy the contents of an Amazon S3 bucket in the AWS GovCloud (US) regions directly to or from another AWS region.
If you use Amazon S3 policies, use the AWS GovCloud (US) Amazon Resource Name (ARN) identifier to unambiguously specify a resource across all of AWS, such as in IAM policies, Amazon S3 bucket names, and API calls.
In AWS GovCloud (US) regions, ARNs have an identifier that is different from the one in other standard AWS regions, arn:aws-us-gov. If you need to specify the US-West or US-East region, use one of the following ARNs:

- For US-West, use us-gov-west-1.
- For US-East, use us-gov-east-1.
- For all other standard regions, ARNs begin with arn:aws.
- In AWS GovCloud (US) regions, use the endpoints listed in the AWS GovCloud (US-East) and AWS GovCloud (US-West) rows of the "Amazon S3 endpoints" table on Amazon Simple Storage Service endpoints and quotas. If you are processing export-controlled data, use one of the SSL/TLS endpoints. If you have FIPS requirements, use a FIPS 140-2 endpoint such as https://s3-fips.us-gov-west-1.amazonaws.com or https://s3-fips.us-gov-east-1.amazonaws.com.
- To find the other AWS-imposed restrictions, see How Amazon Simple Storage Service Differs for AWS GovCloud (US).
5.7.1.2. Configuring Amazon Web Services
Configure Amazon Web Services (AWS) S3 storage and Identity and Access Management (IAM) credentials for backup storage with OADP. This provides the necessary permissions and storage infrastructure for data protection operations.
Prerequisites
- You must have the AWS CLI installed.
Procedure
Set the BUCKET variable:

$ BUCKET=<your_bucket>

Set the REGION variable:

$ REGION=<your_region>

Create an AWS S3 bucket:
$ aws s3api create-bucket \
  --bucket $BUCKET \
  --region $REGION \
  --create-bucket-configuration LocationConstraint=$REGION

where:
LocationConstraint - Specifies the bucket configuration location constraint. us-east-1 does not support LocationConstraint. If your region is us-east-1, omit --create-bucket-configuration LocationConstraint=$REGION.
Create an IAM user:
$ aws iam create-user --user-name velero

where:
velero - Specifies the user name. If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster.
Create a velero-policy.json file:
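A sketch of the policy file, based on the standard minimal Velero AWS permissions; adjust the bucket ARNs to your environment:

$ cat > velero-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeVolumes",
                "ec2:DescribeSnapshots",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:CreateSnapshot",
                "ec2:DeleteSnapshot"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:PutObject",
                "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": ["arn:aws:s3:::${BUCKET}/*"]
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::${BUCKET}"]
        }
    ]
}
EOF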
Attach the policies to give the velero user the minimum necessary permissions:

$ aws iam put-user-policy \
  --user-name velero \
  --policy-name velero \
  --policy-document file://velero-policy.json

Create an access key for the velero user:

$ aws iam create-access-key --user-name velero

Create a credentials-velero file:

$ cat << EOF > ./credentials-velero
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
EOF

You use the credentials-velero file to create a Secret object for AWS before you install the Data Protection Application.
5.7.1.3. About backup and snapshot locations and their secrets
Review backup location, snapshot location, and secret configuration requirements for the DataProtectionApplication custom resource (CR). This helps you understand storage options and credential management for data protection operations.
5.7.1.3.1. Backup locations
You can specify one of the following AWS S3-compatible object storage solutions as a backup location:
- Multicloud Object Gateway (MCG)
- Red Hat Container Storage
- Ceph RADOS Gateway; also known as Ceph Object Gateway
- Red Hat OpenShift Data Foundation
- MinIO
Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage.
5.7.1.3.2. Snapshot locations
If you use your cloud provider’s native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location.
If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver.
If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage.
5.7.1.3.3. Secrets
If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret.
If the backup and snapshot locations use different credentials, you create two secret objects:
- Custom Secret for the backup location, which you specify in the DataProtectionApplication CR.
- Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR.
The Data Protection Application requires a default Secret. Otherwise, the installation will fail.
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file.
5.7.1.3.4. Creating a default Secret
You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location.
The default name of the Secret is cloud-credentials.
The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.
If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.
Prerequisites
- Your object storage and cloud storage, if any, must use the same credentials.
- You must configure object storage for Velero.
Procedure
Create a credentials-velero file for the backup storage location in the appropriate format for your cloud provider. See the following example:

[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>

Create a Secret custom resource (CR) with the default name:

$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero

The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
5.7.1.3.5. Creating profiles for different credentials
If your backup and snapshot locations use different credentials, you create separate profiles in the credentials-velero file.
Then, you create a Secret object and specify the profiles in the DataProtectionApplication custom resource (CR).
Procedure
Create a credentials-velero file with separate profiles for the backup and snapshot locations, as in the following example:
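A sketch of the file; the snapshot profile name is illustrative and must match the profile that you reference in the DataProtectionApplication CR:

[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
[snapshot]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>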
Create a Secret object with the credentials-velero file:

$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero

Add the profiles to the DataProtectionApplication CR, as in the following example:
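A sketch of the relevant DPA blocks, assuming the default and snapshot profiles from the previous step; bucket, prefix, and region are placeholders:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp
spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: velero
        config:
          region: <region_name>
          profile: "default"
        credential:
          key: cloud
          name: cloud-credentials
  snapshotLocations:
    - velero:
        provider: aws
        config:
          region: <region_name>
          profile: "snapshot"
        credential:
          key: cloud
          name: cloud-credentials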
5.7.1.3.6. Creating an OADP SSE-C encryption key for additional data security
Configure server-side encryption with customer-provided keys (SSE-C) to add an additional layer of encryption for backup data stored in Amazon Web Services (AWS) S3. This protects backup data if AWS credentials become exposed.
Amazon Web Services (AWS) S3 applies server-side encryption with AWS S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3.
OpenShift API for Data Protection (OADP) encrypts data by using SSL/TLS, HTTPS, and the velero-repo-credentials secret when transferring the data from a cluster to storage. To protect backup data in case of lost or stolen AWS credentials, apply an additional layer of encryption.
The velero-plugin-for-aws plugin provides several additional encryption methods. You should review its configuration options and consider implementing additional encryption.
You can store your own encryption keys by using server-side encryption with customer-provided keys (SSE-C). This feature provides additional security if your AWS credentials become exposed.
Be sure to store cryptographic keys in a secure and safe manner. Encrypted data and backups cannot be recovered if you do not have the encryption key.
Prerequisites
To make OADP mount a secret that contains your SSE-C key to the Velero pod at /credentials, use the default secret name for AWS, cloud-credentials, and leave at least one of the following labels empty:

- dpa.spec.backupLocations[].velero.credential
- dpa.spec.snapshotLocations[].velero.credential

This is a workaround for a known issue: https://issues.redhat.com/browse/OADP-3971.

The following procedure contains an example of a spec:backupLocations block that does not specify credentials. This example would trigger an OADP secret mounting.

If you need the backup location to have credentials with a different name than cloud-credentials, you must add a snapshot location, such as the one in the following example, that does not contain a credential name. Because the following example does not contain a credential name, the snapshot location will use cloud-credentials as its secret for taking snapshots.
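A sketch of backupLocations and snapshotLocations blocks in which neither location names a credential, so the default cloud-credentials secret is mounted; bucket, prefix, and region are placeholders:

spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: velero
        config:
          region: <region_name>
          profile: "default"
  snapshotLocations:
    - velero:
        provider: aws
        config:
          region: <region_name>
          profile: "default"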
Procedure
Create an SSE-C encryption key:
Generate a random number and save it as a file named sse.key by running the following command:

$ dd if=/dev/urandom bs=1 count=32 > sse.key
Create an OpenShift Container Platform secret:
If you are initially installing and configuring OADP, create the AWS credential and encryption key secret at the same time by running the following command:
$ oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=<path>/openshift_aws_credentials,customer-key=<path>/sse.key

If you are updating an existing installation, edit the values of the cloud-credential secret block of the DataProtectionApplication CR manifest, as in the following example:
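A sketch of the updated cloud-credentials Secret manifest (shown standalone here), assuming both the credentials file and the SSE-C key are stored as Base64-encoded data keys:

apiVersion: v1
kind: Secret
metadata:
  name: cloud-credentials
  namespace: openshift-adp
type: Opaque
data:
  cloud: <base64_encoded_cloud_credentials>
  customer-key: <base64_encoded_sse_key>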
Edit the value of the customerKeyEncryptionFile attribute in the backupLocations block of the DataProtectionApplication CR manifest, as in the following example:
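A sketch of the backupLocations block, assuming the key is mounted from the cloud-credentials secret at /credentials/customer-key; bucket, prefix, and region are placeholders:

spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: velero
        config:
          customerKeyEncryptionFile: /credentials/customer-key
          profile: "default"
          region: <region_name>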
Warning
You must restart the Velero pod to remount the secret credentials properly on an existing installation.
The installation is complete, and you can back up and restore OpenShift Container Platform resources. The data saved in AWS S3 storage is encrypted with the new key, and you cannot download it from the AWS S3 console or API without the additional encryption key.
Verification
To verify that you cannot download the encrypted files without the inclusion of an additional key, create a test file, upload it, and then try to download it.
Create a test file by running the following command:
echo "encrypt me please" > test.txt
$ echo "encrypt me please" > test.txtCopy to Clipboard Copied! Toggle word wrap Toggle overflow Upload the test file by running the following command:
Try to download the file. In either the Amazon web console or the terminal, run the following command:
$ s3cmd get s3://<bucket>/test.txt test.txt

The download fails because the file is encrypted with an additional key.
Download the file with the additional encryption key by running the following command:
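A sketch of the download, assuming the same AWS CLI SSE-C options; the file is saved as downloaded.txt for the next step:

$ aws s3api get-object \
  --bucket <bucket> \
  --key test.txt \
  --sse-customer-algorithm AES256 \
  --sse-customer-key fileb://sse.key \
  downloaded.txt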
Read the file contents by running the following command:
$ cat downloaded.txt

encrypt me please
5.7.1.3.6.1. Downloading a file with an SSE-C encryption key for files backed up by Velero
When you are verifying an SSE-C encryption key, you can also download the file with the additional encryption key for files that were backed up with Velero.
Procedure
Download the file with the additional encryption key for files backed up by Velero by running the following command:
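A sketch of the command, assuming the AWS CLI SSE-C options; the object key under the Velero prefix is a placeholder that depends on your backup layout:

$ aws s3api get-object \
  --bucket <bucket> \
  --key velero/backups/<backup_name>/<object_name> \
  --sse-customer-algorithm AES256 \
  --sse-customer-key fileb://sse.key \
  <target_file_name>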
5.7.1.4. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
- If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.
- If the backup and snapshot locations use different credentials, you must create a Secret with the default name, cloud-credentials, which contains separate profiles for the backup and snapshot location credentials.

Note
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
- Click YAML View and update the parameters of the DataProtectionApplication manifest:
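A sketch of the manifest that matches the callouts below; the plugin list, uploader type, and placeholder values are illustrative:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
      resourceTimeout: 10m
    nodeAgent:
      enable: true
      uploaderType: kopia
      podConfig:
        nodeSelector: <node_selector>
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>
        config:
          region: <region>
          profile: "default"
          s3ForcePathStyle: "true"
          s3Url: <s3_url>
        credential:
          key: cloud
          name: cloud-credentials
  snapshotLocations:
    - velero:
        provider: aws
        config:
          region: <region>
          profile: "default"
        credential:
          key: cloud
          name: cloud-credentials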
where:

namespace - Specifies the default namespace for OADP, which is openshift-adp. The namespace is a variable and is configurable.
openshift - Specifies that the openshift plugin is mandatory.
resourceTimeout - Specifies how many minutes to wait for several Velero resources, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability, before timeout occurs. The default is 10m.
nodeAgent - Specifies the administrative agent that routes the administrative requests to servers.
enable - Set this value to true if you want to enable nodeAgent and perform File System Backup.
uploaderType - Specifies the uploader type. Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR.
nodeSelector - Specifies the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes.
bucket - Specifies a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
prefix - Specifies a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
s3ForcePathStyle - Specifies whether to force path style URLs for S3 objects (Boolean). Not required for AWS S3. Required only for S3 compatible storage.
s3Url - Specifies the URL of the object store that you are using to store backups. Not required for AWS S3. Required only for S3 compatible storage.
name - Specifies the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials, is used. If you specify a custom name, the custom name is used for the backup location.
snapshotLocations - Specifies a snapshot location, unless you use CSI snapshots or a File System Backup (FSB) to back up PVs.
region - Specifies that the snapshot location must be in the same region as the PVs.
name - Specifies the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials, is used. If you specify a custom name, the custom name is used for the snapshot location. If your backup and snapshot locations use different credentials, create separate profiles in the credentials-velero file.
- Click Create.
Verification
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp

Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'

Example output

{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
Verify that the type is set to Reconciled.

Verify the backup storage location and confirm that the PHASE is Available by running the following command:

$ oc get backupstoragelocations.velero.io -n openshift-adp

NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true
5.7.1.4.1. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example:
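A sketch of the block, using the average-usage values from the requirements table earlier in this chapter:

spec:
  configuration:
    velero:
      podConfig:
        nodeSelector: <node_selector>
        resourceAllocations:
          limits:
            cpu: "1"
            memory: 1024Mi
          requests:
            cpu: 200m
            memory: 256Mi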
where:

nodeSelector - Specifies the node selector to be supplied to the Velero podSpec.
resourceAllocations - Specifies the resource allocations listed for average usage.
Note
Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.
Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.
Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node.
5.7.1.4.2. Enabling self-signed CA certificates
You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest:
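A sketch of the relevant parameters; bucket and prefix are placeholders:

spec:
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>
          caCert: <base64_encoded_cert_string>
        config:
          insecureSkipTLSVerify: "false"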
where:

caCert - Specifies the Base64-encoded CA certificate string.
insecureSkipTLSVerify - Specifies the insecureSkipTLSVerify configuration. The configuration can be set to either "true" or "false". If set to "true", SSL/TLS security is disabled. If set to "false", SSL/TLS security is enabled.
5.7.1.4.3. Using CA certificates with the velero command aliased for Velero deployment
You might want to use the Velero CLI without installing it locally on your system by creating an alias for it.
Prerequisites
- You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role.
- You must have the OpenShift CLI (oc) installed.

Procedure

To use an aliased Velero command, run the following command:
$ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'

Check that the alias is working by running the following command:
$ velero version

Client:
  Version: v1.12.1-OADP
  Git commit: -
Server:
  Version: v1.12.1-OADP

To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands:
$ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')

$ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"

$ velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt

To fetch the backup logs, run the following command:
$ velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>

You can use these logs to view failures and warnings for the resources that you cannot back up.
- If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the previous step.
- You can check whether the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command:

$ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt"

Example output

/tmp/your-cacert.txt

In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.
5.7.1.4.4. Configuring node agents and node labels
The Data Protection Application (DPA) uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the recommended form of node selection constraint.
Procedure
Run the node agent on any node that you choose by adding a custom label:
$ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""

Note: Any label specified must match the labels on each node.
Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector field, which you used for labeling nodes, as in the first sketch that follows.
The second sketch is an anti-pattern of nodeSelector and does not work unless both labels, node-role.kubernetes.io/infra: "" and node-role.kubernetes.io/worker: "", are on the node.
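Both sketches show only the relevant fragment of the DPA spec; surrounding fields are omitted.

Recommended configuration:

configuration:
  nodeAgent:
    enable: true
    uploaderType: kopia
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/nodeAgent: ""

Anti-pattern:

configuration:
  nodeAgent:
    enable: true
    uploaderType: kopia
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
        node-role.kubernetes.io/worker: ""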
5.7.1.5. Configuring the backup storage location with an MD5 checksum algorithm
You can configure the Backup Storage Location (BSL) in the Data Protection Application (DPA) to use an MD5 checksum algorithm for both Amazon Simple Storage Service (Amazon S3) and S3-compatible storage providers. The checksum algorithm calculates the checksum for uploading and downloading objects to Amazon S3. You can use one of the following options to set the checksumAlgorithm field in the spec.backupLocations.velero.config.checksumAlgorithm section of the DPA.
- CRC32
- CRC32C
- SHA1
- SHA256
You can also set the checksumAlgorithm field to an empty value to skip the MD5 checksum check. If you do not set a value for the checksumAlgorithm field, then the default value is set to CRC32.
Prerequisites
- You have installed the OADP Operator.
- You have configured Amazon S3 or S3-compatible object storage as a backup location.
Procedure
Configure the BSL in the DPA as shown in the following example:
Example Data Protection Application
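A minimal sketch, assuming an AWS backup location and the default cloud-credentials secret; the empty checksumAlgorithm value skips the checksum check:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: test-dpa
  namespace: openshift-adp
spec:
  backupLocations:
    - name: default
      velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: velero
        config:
          region: <bucket_region>
          profile: "default"
          checksumAlgorithm: ""
        credential:
          key: cloud
          name: cloud-credentials

where: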
checksumAlgorithm- Specifies the checksumAlgorithm. In this example, the checksumAlgorithm field is set to an empty value. You can select an option from the following list: CRC32, CRC32C, SHA1, SHA256.
Important: If you are using Noobaa as the object storage provider, and you do not set the spec.backupLocations.velero.config.checksumAlgorithm field in the DPA, an empty value of checksumAlgorithm is added to the BSL configuration. The empty value is only added for BSLs that are created using the DPA. This value is not added if you create the BSL by using any other method.
5.7.1.6. Configuring the DPA with client burst and QPS settings
The burst setting determines how many requests can be sent to the Velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.
You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values.
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the client-burst and the client-qps fields in the DPA as shown in the following example.
Example Data Protection Application
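A minimal sketch; only the two client fields and their illustrative values matter here:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: test-dpa
  namespace: openshift-adp
spec:
  configuration:
    velero:
      client-burst: 500
      client-qps: 300
      defaultPlugins:
        - openshift

where: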
client-burst- Specifies the client-burst value. In this example, the client-burst field is set to 500.
client-qps- Specifies the client-qps value. In this example, the client-qps field is set to 300.
5.7.1.7. Configuring node agent load affinity
You can schedule the node agent pods on specific nodes by using the spec.podConfig.nodeSelector object of the DataProtectionApplication (DPA) custom resource (CR).
See the following example in which you can schedule the node agent pods on nodes with the label label.io/role: cpu-1 and other-label.io/other-role: cpu-2.
You can add more restrictions on the node agent pods scheduling by using the nodeagent.loadAffinity object in the DPA spec.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
- You have installed the OADP Operator.
- You have configured the DPA CR.
Procedure
Configure the DPA spec nodeagent.loadAffinity object as shown in the following example.
In the example, you ensure that the node agent pods are scheduled only on nodes with the label label.io/role: cpu-1 and the label label.io/hostname matching either node1 or node2.
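A sketch of the relevant DPA fragment, with the label keys and node names taken from the description above:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      loadAffinity:
        - nodeSelector:
            matchLabels:
              label.io/role: cpu-1
            matchExpressions:
              - key: label.io/hostname
                operator: In
                values:
                  - node1
                  - node2

where: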
loadAffinity- Specifies the loadAffinity object by adding the matchLabels and matchExpressions objects.
matchExpressions- Specifies the matchExpressions object to add restrictions on the node agent pod scheduling.
5.7.1.8. Node agent load affinity guidelines
Use the following guidelines to configure the node agent loadAffinity object in the DataProtectionApplication (DPA) custom resource (CR).
- Use the spec.nodeagent.podConfig.nodeSelector object for simple node matching.
- Use the loadAffinity.nodeSelector object without the podConfig.nodeSelector object for more complex scenarios.
- You can use both podConfig.nodeSelector and loadAffinity.nodeSelector objects, but the loadAffinity object must be equally or more restrictive than the podConfig object. In this scenario, the podConfig.nodeSelector labels must be a subset of the labels used in the loadAffinity.nodeSelector object.
- You cannot use the matchExpressions and matchLabels fields if you have configured both podConfig.nodeSelector and loadAffinity.nodeSelector objects in the DPA.
See the following example to configure both podConfig.nodeSelector and loadAffinity.nodeSelector objects in the DPA.
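A sketch of such a configuration; the label keys and values are illustrative, and the podConfig.nodeSelector label is a subset of the loadAffinity.nodeSelector labels, as required:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      loadAffinity:
        - nodeSelector:
            matchLabels:
              label.io/location: 'US'
              label.io/gpu: 'no'
      podConfig:
        nodeSelector:
          label.io/gpu: 'no'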
5.7.1.9. Configuring node agent load concurrency
You can control the maximum number of node agent operations that can run simultaneously on each node within your cluster.
You can configure it using one of the following fields of the Data Protection Application (DPA):
- globalConfig: Defines a default concurrency limit for the node agent across all nodes.
- perNodeConfig: Specifies different concurrency limits for specific nodes based on nodeSelector labels. This provides flexibility for environments where certain nodes might have different resource capacities or roles.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
Procedure
If you want to use load concurrency for specific nodes, add labels to those nodes:
$ oc label node/<node_name> label.io/instance-type='large'

Configure the load concurrency fields for your DPA instance as shown in the following example.
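A sketch of the relevant DPA fragment; the global limit of 2 and the per-node limit of 10 are illustrative:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      loadConcurrency:
        globalConfig: 2
        perNodeConfig:
          - nodeSelector:
              matchLabels:
                label.io/instance-type: large
            number: 10

where: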
globalConfig- Specifies the global concurrent number. The default value is 1, which means there is no concurrency and only one load is allowed. The globalConfig value does not have a limit.
label.io/instance-type- Specifies the label for per-node concurrency.
number- Specifies the per-node concurrent number. You can specify many per-node concurrent numbers, for example, based on the instance type and size. The range of per-node concurrent number is the same as the global concurrent number. If the configuration file contains a per-node concurrent number and a global concurrent number, the per-node concurrent number takes precedence.
5.7.1.10. Configuring the node agent as a non-root and non-privileged user
To enhance the node agent security, you can configure the OADP Operator node agent daemonset to run as a non-root and non-privileged user by using the spec.configuration.velero.disableFsBackup setting in the DataProtectionApplication (DPA) custom resource (CR).
By setting the spec.configuration.velero.disableFsBackup setting to true, the node agent security context sets the root file system to read-only and sets the privileged flag to false.
Setting spec.configuration.velero.disableFsBackup to true enhances the node agent security by removing the need for privileged containers and enforcing a read-only root file system.
However, it also disables File System Backup (FSB) with Kopia. If your workloads rely on FSB for backing up volumes that do not support native snapshots, then you should evaluate whether the disableFsBackup configuration fits your use case.
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the disableFsBackup field in the DPA as shown in the following example.
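A minimal sketch; the plugin list is illustrative:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      disableFsBackup: true
      defaultPlugins:
        - openshift
        - aws
    nodeAgent:
      enable: true
      uploaderType: kopia

where: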
nodeAgent- Specifies to enable the node agent in the DPA.
disableFsBackup- Specifies to set the disableFsBackup field to true.
Verification
Verify that the node agent security context is set to run as non-root and the root file system is readOnly by running the following command:

$ oc get daemonset node-agent -o yaml

The following is an example of the relevant output.
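The exact output varies; this sketch shows only the securityContext fields referenced below:

spec:
  template:
    spec:
      containers:
        - name: node-agent
          securityContext:
            allowPrivilegeEscalation: false
            privileged: false
            readOnlyRootFilesystem: true
      securityContext:
        runAsNonRoot: true

where: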
allowPrivilegeEscalation- Specifies that the allowPrivilegeEscalation field is false.
privileged- Specifies that the privileged field is false.
runAsNonRoot- Specifies that the node agent is run as a non-root user.
5.7.1.11. Configuring repository maintenance
OADP repository maintenance is a background job; you can configure it independently of the node agent pods. This means that you can schedule the repository maintenance pod on a node where the node agent is or is not running.
You can use the repository maintenance job affinity configurations in the DataProtectionApplication (DPA) custom resource (CR) only if you use Kopia as the backup repository.
You can configure the load affinity at the global level, affecting all repositories, or for each repository individually. You can also use a combination of global and per-repository configuration.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
- You have installed the OADP Operator.
- You have configured the DPA CR.
Procedure
Configure the loadAffinity object in the DPA spec by using either one or both of the following methods:
Global configuration: Configure load affinity for all repositories as shown in the following example.
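A sketch of the global fragment; the label key and value are illustrative:

spec:
  configuration:
    repositoryMaintenance:
      global:
        loadAffinity:
          - nodeSelector:
              matchLabels:
                label.io/gpu: 'no'

where: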
repositoryMaintenance- Specifies the repositoryMaintenance object as shown in the example.
global- Specifies the global object to configure load affinity for all repositories.
Per-repository configuration: Configure load affinity per repository as shown in the following example:
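A sketch of the per-repository fragment, keyed by a repository name; myrepositoryname and the label are illustrative:

spec:
  configuration:
    repositoryMaintenance:
      myrepositoryname:
        loadAffinity:
          - nodeSelector:
              matchLabels:
                label.io/cpu: 'yes'

where: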
myrepositoryname- Specifies the repositoryMaintenance object for each repository.
5.7.1.12. Configuring Velero load affinity
With each OADP deployment, there is one Velero pod and its main purpose is to schedule Velero workloads. To schedule the Velero pod, you can use the velero.podConfig.nodeSelector and the velero.loadAffinity objects in the DataProtectionApplication (DPA) custom resource (CR) spec.
Use the podConfig.nodeSelector object to assign the Velero pod to specific nodes. You can also configure the velero.loadAffinity object for pod-level affinity and anti-affinity.
The OpenShift scheduler applies the rules and performs the scheduling of the Velero pod deployment.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
- You have installed the OADP Operator.
- You have configured the DPA CR.
Procedure
Configure the velero.podConfig.nodeSelector and the velero.loadAffinity objects in the DPA spec as shown in the two sketches that follow: first the velero.podConfig.nodeSelector object configuration, then the velero.loadAffinity object configuration.
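Both fragments are sketches with illustrative labels.

velero.podConfig.nodeSelector object configuration:

spec:
  configuration:
    velero:
      podConfig:
        nodeSelector:
          some-label.io/custom-node-role: backup-core

velero.loadAffinity object configuration:

spec:
  configuration:
    velero:
      loadAffinity:
        - nodeSelector:
            matchLabels:
              label.io/gpu: 'no'
            matchExpressions:
              - key: label.io/location
                operator: In
                values:
                  - US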
5.7.1.13. Overriding the imagePullPolicy setting in the DPA
In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images.
In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly:
- If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent.
- If the image does not have the digest, the Operator sets imagePullPolicy to Always.
You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA).
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the spec.imagePullPolicy field in the DPA as shown in the following example.
Example Data Protection Application
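A minimal sketch; the DPA name and plugin list are illustrative:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: test-dpa
  namespace: openshift-adp
spec:
  imagePullPolicy: Never
  configuration:
    velero:
      defaultPlugins:
        - openshift

where: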
imagePullPolicy- Specifies the value for imagePullPolicy. In this example, the imagePullPolicy field is set to Never.
5.7.1.14. Enabling CSI in the DataProtectionApplication CR
You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.
Prerequisites
- The cloud provider must support CSI snapshots.
Procedure
Edit the DataProtectionApplication CR, as in the following example.
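A minimal sketch; only the csi entry in defaultPlugins matters here:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - csi

where: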
csi- Specifies the csi default plugin.
5.7.1.15. Disabling the node agent in DataProtectionApplication
If you are not using Restic, Kopia, or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent, ensure the OADP Operator is idle and not running any backups.
Procedure
To disable the nodeAgent, set the enable flag to false, as in the following example.
Example DataProtectionApplication CR
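A minimal sketch; the uploaderType value is illustrative:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    nodeAgent:
      enable: false
      uploaderType: kopia

where: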
enable- Enables the node agent.
To enable the nodeAgent, set the enable flag to true. The CR is otherwise identical to the previous example; the enable flag set to true enables the node agent.
You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs".
5.8. Configuring OADP with IBM Cloud
5.8.1. Configuring the OpenShift API for Data Protection with IBM Cloud
You install the OpenShift API for Data Protection (OADP) Operator on an IBM Cloud cluster to back up and restore applications on the cluster. You configure IBM Cloud Object Storage (COS) to store the backups.
5.8.1.1. Configuring the COS instance
You create an IBM Cloud Object Storage (COS) instance to store the OADP backup data. After you create the COS instance, configure the HMAC service credentials.
Prerequisites
- You have an IBM Cloud Platform account.
- You installed the IBM Cloud CLI.
- You are logged in to IBM Cloud.
Procedure
Install the IBM Cloud Object Storage (COS) plugin by running the following command:
$ ibmcloud plugin install cos -f

Set a bucket name by running the following command:
$ BUCKET=<bucket_name>

Set a bucket region by running the following command:
$ REGION=<bucket_region>

where:
<bucket_region>- Specifies the bucket region. For example, eu-gb.
Create a resource group by running the following command:
$ ibmcloud resource group-create <resource_group_name>

Set the target resource group by running the following command:
$ ibmcloud target -g <resource_group_name>

Verify that the target resource group is correctly set by running the following command:
$ ibmcloud target

Example output

API endpoint:     https://cloud.ibm.com
Region:
User:             test-user
Account:          Test Account (fb6......e95) <-> 2...122
Resource group:   Default

In the example output, the resource group is set to Default.
Set a resource group name by running the following command:
$ RESOURCE_GROUP=<resource_group>

where:
<resource_group>- Specifies the resource group name. For example, "default".
Create an IBM Cloud service-instance resource by running the following command:

$ ibmcloud resource service-instance-create \
    <service_instance_name> \
    <service_name> \
    <service_plan> \
    <region_name>

where:
<service_instance_name>- Specifies a name for the service-instance resource.
<service_name>- Specifies the service name. Alternatively, you can specify a service ID.
<service_plan>- Specifies the service plan for your IBM Cloud account.
<region_name>- Specifies the region name.
Refer to the following example command:
$ ibmcloud resource service-instance-create test-service-instance \
    cloud-object-storage \
    standard \
    global \
    -d premium-global-deployment

where:
cloud-object-storage- Specifies the service name.
-d premium-global-deployment- Specifies the deployment name.
Extract the service instance ID by running the following command:
$ SERVICE_INSTANCE_ID=$(ibmcloud resource service-instance test-service-instance --output json | jq -r '.[0].id')

Create a COS bucket by running the following command:
$ ibmcloud cos bucket-create \
    --bucket $BUCKET \
    --ibm-service-instance-id $SERVICE_INSTANCE_ID \
    --region $REGION

Variables such as $BUCKET, $SERVICE_INSTANCE_ID, and $REGION are replaced by the values you set previously.
Create HMAC credentials by running the following command:

$ ibmcloud resource service-key-create test-key Writer --instance-name test-service-instance --parameters {\"HMAC\":true}
HMACcredentials and save them in thecredentials-velerofile. You can use thecredentials-velerofile to create asecretfor the backup storage location. Run the following command:cat > credentials-velero << __EOF__ [default] aws_access_key_id=$(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.access_key_id') aws_secret_access_key=$(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.secret_access_key') __EOF__
$ cat > credentials-velero << __EOF__ [default] aws_access_key_id=$(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.access_key_id') aws_secret_access_key=$(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.secret_access_key') __EOF__Copy to Clipboard Copied! Toggle word wrap Toggle overflow
5.8.1.2. Creating a default Secret
You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location.
The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.
If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.
Prerequisites
- Your object storage and cloud storage, if any, must use the same credentials.
- You must configure object storage for Velero.
Procedure
- Create a credentials-velero file for the backup storage location in the appropriate format for your cloud provider.
- Create a Secret custom resource (CR) with the default name:

$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero

The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
5.8.1.3. Creating secrets for different credentials
Create separate Secret objects when your backup and snapshot locations require different credentials. This allows you to configure distinct authentication for each storage location while maintaining secure credential management.
Procedure
- Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider.
- Create a Secret for the snapshot location with the default name:

$ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero

- Create a credentials-velero file for the backup location in the appropriate format for your object storage.
- Create a Secret for the backup location with a custom name:

$ oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero

- Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example.
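A minimal sketch; the provider, bucket, and prefix placeholders are illustrative:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp
spec:
  backupLocations:
    - velero:
        provider: <provider>
        default: true
        credential:
          key: cloud
          name: <custom_secret>
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>

where: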
custom_secret- Specifies the backup location Secret with the custom name.
5.8.1.4. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.
Note: If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
- Click YAML View and update the parameters of the DataProtectionApplication manifest, as in the following example.
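A sketch of the manifest for an IBM Cloud COS backup location; the plugin list and placeholder values are illustrative:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
        - csi
    nodeAgent:
      enable: true
      uploaderType: kopia
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: velero
        config:
          insecureSkipTLSVerify: 'true'
          profile: default
          region: <region_name>
          s3ForcePathStyle: 'true'
          s3Url: <s3_url>
        credential:
          key: cloud
          name: cloud-credentials

where: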
provider- Specifies that the provider is aws when you use IBM Cloud as a backup storage location.
bucket- Specifies the IBM Cloud Object Storage (COS) bucket name.
region- Specifies the COS region name, for example, eu-gb.
s3Url- Specifies the S3 URL of the COS bucket, for example, http://s3.eu-gb.cloud-object-storage.appdomain.cloud. Here, eu-gb is the region name. Replace the region name according to your bucket region.
name- Specifies the name of the secret you created by using the access key and the secret access key from the HMAC credentials.
- Click Create.
Verification
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:
$ oc get all -n openshift-adp

Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'

Example output

{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
Verify the
typeis set toReconciled. Verify the backup storage location and confirm that the
PHASEisAvailableby running the following command:oc get backupstoragelocations.velero.io -n openshift-adp
$ oc get backupstoragelocations.velero.io -n openshift-adpCopy to Clipboard Copied! Toggle word wrap Toggle overflow NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true
NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h trueCopy to Clipboard Copied! Toggle word wrap Toggle overflow
5.8.1.5. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the values in the spec.configuration.velero.podConfig.resourceAllocations block of the DataProtectionApplication CR manifest, as in the following example.
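A minimal sketch; the DPA name and resource values are illustrative:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    velero:
      podConfig:
        nodeSelector: <node_selector>
        resourceAllocations:
          limits:
            cpu: "1"
            memory: 1024Mi
          requests:
            cpu: 200m
            memory: 256Mi

where: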
nodeSelector- Specifies the node selector to be supplied to Velero podSpec.
resourceAllocations- Specifies the resource allocations listed for average usage.
Note: Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.
Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.
5.8.1.6. Configuring node agents and node labels
The Data Protection Application (DPA) uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the recommended form of node selection constraint.
Procedure
Run the node agent on any node that you choose by adding a custom label:
$ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""

Note: Any label specified must match the labels on each node.
Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector field, which you used for labeling nodes, as in the first sketch that follows.
The second sketch is an anti-pattern of nodeSelector and does not work unless both labels, node-role.kubernetes.io/infra: "" and node-role.kubernetes.io/worker: "", are on the node.
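Both sketches show only the relevant fragment of the DPA spec.

Recommended configuration:

configuration:
  nodeAgent:
    enable: true
    uploaderType: kopia
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/nodeAgent: ""

Anti-pattern:

configuration:
  nodeAgent:
    enable: true
    uploaderType: kopia
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
        node-role.kubernetes.io/worker: ""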
5.8.1.7. Configuring the DPA with client burst and QPS settings
The burst setting determines how many requests can be sent to the Velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.
You can set the burst and QPS values of the velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values.
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the client-burst and the client-qps fields in the DPA as shown in the following example.
Example Data Protection Application
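A minimal sketch with the illustrative values referenced below:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: test-dpa
  namespace: openshift-adp
spec:
  configuration:
    velero:
      client-burst: 500
      client-qps: 300
      defaultPlugins:
        - openshift

where: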
client-burst- Specifies the client-burst value. In this example, the client-burst field is set to 500.
client-qps- Specifies the client-qps value. In this example, the client-qps field is set to 300.
5.8.1.8. Configuring node agent load affinity
You can schedule the node agent pods on specific nodes by using the spec.podConfig.nodeSelector object of the DataProtectionApplication (DPA) custom resource (CR).
See the following example in which you can schedule the node agent pods on nodes with the label label.io/role: cpu-1 and other-label.io/other-role: cpu-2.
You can add more restrictions on the node agent pods scheduling by using the nodeagent.loadAffinity object in the DPA spec.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
- You have installed the OADP Operator.
- You have configured the DPA CR.
Procedure
Configure the DPA spec nodeagent.loadAffinity object as shown in the following example.
In the example, you ensure that the node agent pods are scheduled only on nodes with the label label.io/role: cpu-1 and the label label.io/hostname matching either node1 or node2.
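A sketch of the relevant DPA fragment, with the label keys and node names taken from the description above:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      loadAffinity:
        - nodeSelector:
            matchLabels:
              label.io/role: cpu-1
            matchExpressions:
              - key: label.io/hostname
                operator: In
                values:
                  - node1
                  - node2

where: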
loadAffinity- Specifies the loadAffinity object by adding the matchLabels and matchExpressions objects.
matchExpressions- Specifies the matchExpressions object to add restrictions on the node agent pod scheduling.
5.8.1.9. Node agent load affinity guidelines
Use the following guidelines to configure the node agent loadAffinity object in the DataProtectionApplication (DPA) custom resource (CR).
- Use the spec.nodeagent.podConfig.nodeSelector object for simple node matching.
- Use the loadAffinity.nodeSelector object without the podConfig.nodeSelector object for more complex scenarios.
- You can use both podConfig.nodeSelector and loadAffinity.nodeSelector objects, but the loadAffinity object must be equally or more restrictive than the podConfig object. In this scenario, the podConfig.nodeSelector labels must be a subset of the labels used in the loadAffinity.nodeSelector object.
- You cannot use the matchExpressions and matchLabels fields if you have configured both podConfig.nodeSelector and loadAffinity.nodeSelector objects in the DPA.
See the following example to configure both podConfig.nodeSelector and loadAffinity.nodeSelector objects in the DPA.
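A sketch of such a configuration; the labels are illustrative, and the podConfig.nodeSelector label is a subset of the loadAffinity.nodeSelector labels, as required:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      loadAffinity:
        - nodeSelector:
            matchLabels:
              label.io/location: 'US'
              label.io/gpu: 'no'
      podConfig:
        nodeSelector:
          label.io/gpu: 'no'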
5.8.1.10. Configuring node agent load concurrency
You can control the maximum number of node agent operations that can run simultaneously on each node within your cluster.
You can configure it using one of the following fields of the Data Protection Application (DPA):
- globalConfig: Defines a default concurrency limit for the node agent across all nodes.
- perNodeConfig: Specifies different concurrency limits for specific nodes based on nodeSelector labels. This provides flexibility for environments where certain nodes might have different resource capacities or roles.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
Procedure
If you want to use load concurrency for specific nodes, add labels to those nodes:
$ oc label node/<node_name> label.io/instance-type='large'

Configure the load concurrency fields for your DPA instance as shown in the following example.
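A sketch of the relevant DPA fragment with illustrative limits:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      loadConcurrency:
        globalConfig: 2
        perNodeConfig:
          - nodeSelector:
              matchLabels:
                label.io/instance-type: large
            number: 10

where: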
globalConfig- Specifies the global concurrent number. The default value is 1, which means there is no concurrency and only one load is allowed. The globalConfig value does not have a limit.
label.io/instance-type- Specifies the label for per-node concurrency.
number- Specifies the per-node concurrent number. You can specify many per-node concurrent numbers, for example, based on the instance type and size. The range of per-node concurrent number is the same as the global concurrent number. If the configuration file contains a per-node concurrent number and a global concurrent number, the per-node concurrent number takes precedence.
5.8.1.11. Configuring repository maintenance
OADP repository maintenance is a background job; you can configure it independently of the node agent pods. This means that you can schedule the repository maintenance pod on a node where the node agent is or is not running.
You can use the repository maintenance job affinity configurations in the DataProtectionApplication (DPA) custom resource (CR) only if you use Kopia as the backup repository.
You can configure the load affinity at the global level, affecting all repositories, or for each repository individually. You can also use a combination of global and per-repository configuration.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
- You have installed the OADP Operator.
- You have configured the DPA CR.
Procedure
Configure the loadAffinity object in the DPA spec by using either one or both of the following methods:
Global configuration: Configure load affinity for all repositories as shown in the following example.
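A sketch of the global fragment; the label is illustrative:

spec:
  configuration:
    repositoryMaintenance:
      global:
        loadAffinity:
          - nodeSelector:
              matchLabels:
                label.io/gpu: 'no'

where: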
repositoryMaintenance- Specifies the repositoryMaintenance object as shown in the example.
global- Specifies the global object to configure load affinity for all repositories.
Per-repository configuration: Configure load affinity per repository as shown in the following example:
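A sketch of the per-repository fragment, keyed by a repository name; myrepositoryname and the label are illustrative:

spec:
  configuration:
    repositoryMaintenance:
      myrepositoryname:
        loadAffinity:
          - nodeSelector:
              matchLabels:
                label.io/cpu: 'yes'

where: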
myrepositoryname- Specifies the repositoryMaintenance object for each repository.
5.8.1.12. Configuring Velero load affinity
With each OADP deployment, there is one Velero pod and its main purpose is to schedule Velero workloads. To schedule the Velero pod, you can use the velero.podConfig.nodeSelector and the velero.loadAffinity objects in the DataProtectionApplication (DPA) custom resource (CR) spec.
Use the podConfig.nodeSelector object to assign the Velero pod to specific nodes. You can also configure the velero.loadAffinity object for pod-level affinity and anti-affinity.
The OpenShift scheduler applies the rules and performs the scheduling of the Velero pod deployment.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
- You have installed the OADP Operator.
- You have configured the DPA CR.
Procedure
Configure the velero.podConfig.nodeSelector and the velero.loadAffinity objects in the DPA spec as shown in the two sketches that follow: first the velero.podConfig.nodeSelector object configuration, then the velero.loadAffinity object configuration.
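Both fragments are sketches with illustrative labels.

velero.podConfig.nodeSelector object configuration:

spec:
  configuration:
    velero:
      podConfig:
        nodeSelector:
          some-label.io/custom-node-role: backup-core

velero.loadAffinity object configuration:

spec:
  configuration:
    velero:
      loadAffinity:
        - nodeSelector:
            matchLabels:
              label.io/gpu: 'no'
            matchExpressions:
              - key: label.io/location
                operator: In
                values:
                  - US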
5.8.1.13. Overriding the imagePullPolicy setting in the DPA
In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images.
In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly:
- If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent.
- If the image does not have the digest, the Operator sets imagePullPolicy to Always.
You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA).
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the spec.imagePullPolicy field in the DPA as shown in the following example.
Example Data Protection Application
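A minimal sketch; the DPA name and plugin list are illustrative:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: test-dpa
  namespace: openshift-adp
spec:
  imagePullPolicy: Never
  configuration:
    velero:
      defaultPlugins:
        - openshift

where: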
imagePullPolicy- Specifies the value for imagePullPolicy. In this example, the imagePullPolicy field is set to Never.
5.8.1.14. Configuring the DPA with more than one BSL
Configure the DataProtectionApplication (DPA) custom resource (CR) with multiple BackupStorageLocation (BSL) resources to store backups across different locations using provider-specific credentials. This provides backup distribution and location-specific restore capabilities.
For example, you have configured the following two BSLs:
- Configured one BSL in the DPA and set it as the default BSL.
- Created another BSL independently by using the BackupStorageLocation CR.
As you have already set the BSL created through the DPA as the default, you cannot set the independently created BSL again as the default. This means, at any given time, you can set only one BSL as the default BSL.
Prerequisites
- You must install the OADP Operator.
- You must create the secrets by using the credentials provided by the cloud provider.
Procedure
Configure the DataProtectionApplication CR with more than one BackupStorageLocation CR. See the following example.
Example DPA
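A sketch with two BSLs, an AWS location and an ODF (S3-compatible) location; the placeholder values are illustrative:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp
spec:
  backupLocations:
    - name: aws
      velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>
        config:
          region: <region_name>
          profile: "default"
        credential:
          key: cloud
          name: cloud-credentials
    - name: odf
      velero:
        provider: aws
        default: false
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>
        config:
          profile: "default"
          region: <region_name>
          s3Url: <url>
          insecureSkipTLSVerify: "true"
          s3ForcePathStyle: "true"
        credential:
          key: cloud
          name: <custom_secret_name_odf>

where: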
name: aws- Specifies a name for the first BSL.
default: true- Indicates that this BSL is the default BSL. If a BSL is not set in the Backup CR, the default BSL is used. You can set only one BSL as the default.
<bucket_name>- Specifies the bucket name.
<prefix>- Specifies a prefix for Velero backups. For example, velero.
<region_name>- Specifies the AWS region for the bucket.
cloud-credentials- Specifies the name of the default Secret object that you created.
name: odf- Specifies a name for the second BSL.
<url>- Specifies the URL of the S3 endpoint.
<custom_secret_name_odf>- Specifies the correct name for the Secret. For example, custom_secret_name_odf. If you do not specify a Secret name, the default name is used.
Specify the BSL to be used in the backup CR. See the following example.
Example backup CR
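A minimal sketch of a Backup CR that selects a specific BSL:

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: test-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
    - <namespace>
  storageLocation: <backup_storage_location>

where: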
<namespace>- Specifies the namespace to back up.
<backup_storage_location>- Specifies the storage location.
5.8.1.15. Disabling the node agent in DataProtectionApplication
If you are not using Restic, Kopia, or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent, ensure the OADP Operator is idle and not running any backups.
Procedure
To disable the nodeAgent, set the enable flag to false, as in the following example.
Example DataProtectionApplication CR
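A minimal sketch; the uploaderType value is illustrative:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  configuration:
    nodeAgent:
      enable: false
      uploaderType: kopia

where: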
enable- Enables the node agent.
To enable the nodeAgent, set the enable flag to true. The CR is otherwise identical to the previous example; the enable flag set to true enables the node agent.
You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs".
5.9. Configuring OADP with Azure
5.9.1. Configuring the OpenShift API for Data Protection with Microsoft Azure
Configure the OpenShift API for Data Protection (OADP) with Microsoft Azure to back up and restore cluster resources by using Azure storage. This provides data protection capabilities for your OpenShift Container Platform clusters.
The OADP Operator installs Velero 1.16.
Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.
You configure Azure for Velero, create a default Secret, and then install the Data Protection Application. For more details, see Installing the OADP Operator.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager in disconnected environments for details.
5.9.1.1. Configuring Microsoft Azure
Configure Microsoft Azure storage and service principal credentials for backup storage with OADP. This provides the necessary authentication and storage infrastructure for data protection operations.
Prerequisites
- You must have the Azure CLI installed.
Tools that use Azure services should always have restricted permissions to make sure that Azure resources are safe. Therefore, instead of having applications sign in as a fully privileged user, Azure offers service principals. An Azure service principal is a name that can be used with applications, hosted services, or automated tools.
This identity is used for access to resources.
- Create a service principal
- Sign in using a service principal and password
- Sign in using a service principal and certificate
- Manage service principal roles
- Create an Azure resource using a service principal
- Reset service principal credentials
For more details, see Create an Azure service principal with Azure CLI.
Procedure
Log in to Azure:
$ az login

Set the AZURE_RESOURCE_GROUP variable:

$ AZURE_RESOURCE_GROUP=Velero_Backups

Create an Azure resource group:
$ az group create -n $AZURE_RESOURCE_GROUP --location CentralUS

where:
CentralUS- Specifies your location.
Set the AZURE_STORAGE_ACCOUNT_ID variable:

$ AZURE_STORAGE_ACCOUNT_ID="velero$(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')"

Create an Azure storage account:
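A sketch of the create command; the SKU, kind, and access tier shown are common choices for Velero backup storage rather than requirements:

$ az storage account create \
    --name $AZURE_STORAGE_ACCOUNT_ID \
    --resource-group $AZURE_RESOURCE_GROUP \
    --sku Standard_GRS \
    --encryption-services blob \
    --https-only true \
    --kind BlobStorage \
    --access-tier Hot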
Set the BLOB_CONTAINER variable:

$ BLOB_CONTAINER=velero

Create an Azure Blob storage container:
$ az storage container create \
    -n $BLOB_CONTAINER \
    --public-access off \
    --account-name $AZURE_STORAGE_ACCOUNT_ID

Create a service principal and credentials for velero:

$ AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv`
$ AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`

Create a service principal with the
Contributor role, assigning a specific --role and --scopes:

$ AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" \
    --role "Contributor" \
    --query 'password' -o tsv \
    --scopes /subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$AZURE_RESOURCE_GROUP`

The CLI generates a password for you. Ensure you capture the password.
After creating the service principal, obtain the client ID:

$ AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>`

Note: For this to be successful, you must know your Azure application ID.
Save the service principal credentials in the credentials-velero file:
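A sketch of the file contents, assuming the variables set in the previous steps; AzurePublicCloud is the typical cloud name:

AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
AZURE_TENANT_ID=${AZURE_TENANT_ID}
AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
AZURE_CLOUD_NAME=AzurePublicCloud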
You use the credentials-velero file to add Azure as a replication repository.
5.9.1.2. About backup and snapshot locations and their secrets
Review backup location, snapshot location, and secret configuration requirements for the DataProtectionApplication custom resource (CR). This helps you understand storage options and credential management for data protection operations.
5.9.1.2.1. Backup locations
You can specify one of the following AWS S3-compatible object storage solutions as a backup location:
- Multicloud Object Gateway (MCG)
- Red Hat Container Storage
- Ceph RADOS Gateway; also known as Ceph Object Gateway
- Red Hat OpenShift Data Foundation
- MinIO
Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage.
5.9.1.2.2. Snapshot locations
If you use your cloud provider’s native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location.
If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver.
If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage.
5.9.1.2.3. Secrets
If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret.
If the backup and snapshot locations use different credentials, you create two secret objects:
- Custom Secret for the backup location, which you specify in the DataProtectionApplication CR.
- Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR.
The Data Protection Application requires a default Secret. Otherwise, the installation will fail.
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file.
5.9.1.3. About authenticating OADP with Azure
Review authentication methods for OADP with Azure to select the appropriate authentication approach for your security requirements.
You can authenticate OADP with Azure by using the following methods:
- A Velero-specific service principal with secret-based authentication.
- A Velero-specific storage account access key with secret-based authentication.
- Azure Security Token Service.
5.9.1.4. Using a service principal or a storage account access key
You create a default Secret object and reference it in the backup storage location custom resource. The credentials file for the Secret object can contain information about the Azure service principal or a storage account access key.
The default name of the Secret is cloud-credentials-azure.
The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.
If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.
Prerequisites
- You have access to the OpenShift cluster as a user with cluster-admin privileges.
- You have an Azure subscription with appropriate permissions.
- You have installed OADP.
- You have configured an object storage for storing the backups.
Procedure
Create a credentials-velero file for the backup storage location in the appropriate format for your cloud provider. You can use one of the following two methods to authenticate OADP with Azure.
Use the service principal with secret-based authentication. See the following example.
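A sketch of the service principal credentials file; all values are placeholders:

AZURE_SUBSCRIPTION_ID=<azure_subscription_id>
AZURE_TENANT_ID=<azure_tenant_id>
AZURE_CLIENT_ID=<azure_client_id>
AZURE_CLIENT_SECRET=<azure_client_secret>
AZURE_RESOURCE_GROUP=<azure_resource_group>
AZURE_CLOUD_NAME=<azure_cloud_name>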
Use a storage account access key. See the following example:

AZURE_STORAGE_ACCOUNT_ACCESS_KEY=<azure_storage_account_access_key>
AZURE_SUBSCRIPTION_ID=<azure_subscription_id>
AZURE_RESOURCE_GROUP=<azure_resource_group>
AZURE_CLOUD_NAME=<azure_cloud_name>
Create a Secret custom resource (CR) with the default name:

$ oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero

Reference the Secret in the spec.backupLocations.velero.credential block of the DataProtectionApplication CR when you install the Data Protection Application, as shown in the following example.
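A sketch of an Azure backup location referencing the secret; the config keys shown are the usual Azure BSL settings and the values are placeholders:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp
spec:
  backupLocations:
    - velero:
        provider: azure
        default: true
        credential:
          key: cloud
          name: <custom_secret>
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>
        config:
          resourceGroup: <azure_resource_group>
          storageAccount: <azure_storage_account_id>
          subscriptionId: <azure_subscription_id>

where: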
<custom_secret>- Specifies the backup location Secret with the custom name.
5.9.1.5. Using OADP with Azure Security Token Service authentication
You can use Microsoft Entra Workload ID to access Azure storage for OADP backup and restore operations. This approach uses the signed Kubernetes service account tokens of the OpenShift cluster. These tokens are automatically rotated every hour and exchanged for Azure Active Directory (AD) access tokens, eliminating the need for long-term client secrets.
To use the Azure Security Token Service (STS) configuration, you need the credentialsMode field set to Manual during cluster installation. This approach uses the Cloud Credential Operator (ccoctl) to set up the workload identity infrastructure, including the OpenID Connect (OIDC) provider, issuer configuration, and user-assigned managed identities.
OADP with the Azure STS configuration does not support Restic File System Backup (FSB) and restore.
Prerequisites
- You have an OpenShift cluster installed on Microsoft Azure with Microsoft Entra Workload ID configured. For more details, see Configuring an Azure cluster to use short-term credentials.
- You have the Azure CLI (az) installed and configured.
- You have access to the OpenShift cluster as a user with cluster-admin privileges.
- You have an Azure subscription with appropriate permissions.
If your OpenShift cluster was not originally installed with Microsoft Entra Workload ID, you can enable short-term credentials after installation. This post-installation configuration is supported specifically for Azure clusters.
Procedure
If your cluster was installed with long-term credentials, you can switch to Microsoft Entra Workload ID authentication after installation. For more details, see Enabling Microsoft Entra Workload ID on an existing cluster.
Important: After enabling Microsoft Entra Workload ID on an existing Azure cluster, you must update all cluster components that use cloud credentials, including OADP, to use the new authentication method.
Set the environment variables for your Azure STS configuration.
Create an Azure Managed Identity for OADP.
Grant the required Azure roles to the managed identity.
Create an Azure storage account and a container.
Get the OIDC issuer URL from your OpenShift cluster as shown in the following example:

$ export SERVICE_ACCOUNT_ISSUER=$(oc get authentication.config.openshift.io cluster -o json | jq -r .spec.serviceAccountIssuer)
$ echo "OIDC Issuer: $SERVICE_ACCOUNT_ISSUER"

Configure Microsoft Entra Workload ID Federation for the managed identity by using the OIDC issuer URL.
Create the OADP namespace if it does not already exist by running the following command:
$ oc create namespace openshift-adp
CloudStorageCR to create an Azure cloud storage resource, run the following command:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create the
Create the DataProtectionApplication (DPA) custom resource (CR) and configure the Azure STS details as shown in the following example:
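A representative manifest sketch, assuming the CloudStorage CR from the previous step; verify the field names against your OADP version:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: dpa-sample
  namespace: openshift-adp
spec:
  backupLocations:
    - bucket:
        cloudStorageRef:
          name: <cloud_storage_cr>
        default: true
        prefix: velero
        config:
          storageAccount: <storage_account_name>
          resourceGroup: <resource_group>
          subscriptionId: <subscription_ID>
  configuration:
    velero:
      defaultPlugins:
        - azure
        - openshift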
where:

<cloud_storage_cr> - Specifies the CloudStorage CR name.
<storage_account_name> - Specifies the Azure storage account name.
<resource_group> - Specifies the Azure resource group.
<subscription_ID> - Specifies the Azure subscription ID.
Verification
Verify that the OADP operator pods are running:
$ oc get pods -n openshift-adp

Verify the Azure role assignments:
$ az role assignment list --assignee ${IDENTITY_PRINCIPAL_ID} --all --query "[].roleDefinitionName" -o tsv

Verify Microsoft Entra Workload ID authentication:
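One hedged way to check is to confirm that the Velero deployment carries the workload identity token configuration; AZURE_CLIENT_ID and AZURE_FEDERATED_TOKEN_FILE are the standard Azure workload identity variable names, not OADP-specific guarantees:

$ oc -n openshift-adp get deployment velero -o yaml | grep -E 'AZURE_CLIENT_ID|AZURE_FEDERATED_TOKEN_FILE'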
- Create a backup of an application and verify the backup is stored successfully in Azure storage.
5.9.1.6. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the values in the spec.configuration.velero.podConfig.resourceAllocations block of the DataProtectionApplication CR manifest, as in the following example:
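A minimal sketch; the resource values are representative of average usage rather than prescriptive:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
spec:
  configuration:
    velero:
      podConfig:
        nodeSelector: <node_selector>
        resourceAllocations:
          limits:
            cpu: "1"
            memory: 1024Mi
          requests:
            cpu: 200m
            memory: 256Mi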
where:

nodeSelector - Specifies the node selector to be supplied to the Velero podSpec.
resourceAllocations - Specifies the resource allocations listed for average usage.
Note: Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.
Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.
Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node.
5.9.1.7. Enabling self-signed CA certificates
You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest, as in the following example:
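A minimal sketch; only the caCert and insecureSkipTLSVerify fields are the point of this example:

spec:
  backupLocations:
    - velero:
        provider: azure
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>
          caCert: <base64_encoded_cert_string>
        config:
          insecureSkipTLSVerify: "false"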
where:

caCert - Specifies the Base64-encoded CA certificate string.
insecureSkipTLSVerify - Specifies the insecureSkipTLSVerify configuration, which can be set to either "true" or "false". If set to "true", SSL/TLS security is disabled. If set to "false", SSL/TLS security is enabled.
5.9.1.8. Using CA certificates with the velero command aliased for Velero deployment
You might want to use the Velero CLI without installing it locally on your system by creating an alias for it.
Prerequisites
- You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role.
- You must have the OpenShift CLI (oc) installed.

Procedure

To use an aliased Velero command, run the following command:
$ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'

Check that the alias is working by running the following command:
$ velero version

Example output:

Client:
    Version: v1.12.1-OADP
    Git commit: -
Server:
    Version: v1.12.1-OADP

To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands:
$ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')

$ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"

$ velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt

To fetch the backup logs, run the following command:
$ velero backup logs <backup_name> --cacert /tmp/<your_cacert>.txt

You can use these logs to view failures and warnings for the resources that you cannot back up.
If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create it by re-running the commands from the previous step.

You can check whether the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command:

$ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt"

Example output:

/tmp/your-cacert.txt

In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.
5.9.1.9. Installing the Data Protection Application
You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.
Prerequisites
- You must install the OADP Operator.
- You must configure object storage as a backup location.
- If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
- If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-azure.
- If the backup and snapshot locations use different credentials, you must create two Secrets:
  - A Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR.
  - A Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR.

Note: If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.
Procedure
- Click Operators → Installed Operators and select the OADP Operator.
- Under Provided APIs, click Create instance in the DataProtectionApplication box.
- Click YAML View and update the parameters of the DataProtectionApplication manifest, as in the following example:
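A representative manifest sketch; the placeholder values and the plugin list are illustrative, and the parameters correspond to the descriptions that follow:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp
spec:
  configuration:
    velero:
      defaultPlugins:
        - azure
        - openshift
      resourceTimeout: 10m
    nodeAgent:
      enable: true
      uploaderType: kopia
      podConfig:
        nodeSelector: <node_selector>
  backupLocations:
    - velero:
        config:
          resourceGroup: <azure_resource_group>
          storageAccount: <azure_storage_account_id>
          subscriptionId: <azure_subscription_id>
        credential:
          key: cloud
          name: cloud-credentials-azure
        provider: azure
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>
  snapshotLocations:
    - velero:
        config:
          resourceGroup: <azure_resource_group>
          subscriptionId: <azure_subscription_id>
        provider: azure
        credential:
          key: cloud
          name: cloud-credentials-azure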
where:

namespace - Specifies the default namespace for OADP, which is openshift-adp. The namespace is a variable and is configurable.
openshift - Specifies that the openshift plugin is mandatory.
resourceTimeout - Specifies how many minutes to wait for several Velero resources, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability, before a timeout occurs. The default is 10m.
nodeAgent - Specifies the administrative agent that routes the administrative requests to servers.
enable - Set this value to true if you want to enable nodeAgent and perform File System Backup.
uploaderType - Specifies the uploader type. Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the built-in Data Mover, you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR.
nodeSelector - Specifies the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes.
resourceGroup - Specifies the Azure resource group.
storageAccount - Specifies the Azure storage account ID.
subscriptionId - Specifies the Azure subscription ID.
name - Specifies the name of the Secret object. If you do not specify this value, the default name, cloud-credentials-azure, is used. If you specify a custom name, the custom name is used for the backup location.
bucket - Specifies a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
prefix - Specifies a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
snapshotLocations - Specifies the snapshot location. You do not need to specify a snapshot location if you use CSI snapshots or Restic to back up PVs.
name - Specifies the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials-azure, is used. If you specify a custom name, the custom name is used for the snapshot location.
- Click Create.
Verification
Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources. Run the following command:
$ oc get all -n openshift-adp

Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

$ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'

Example output:

{"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
Verify that the type is set to Reconciled.

Verify the backup storage location and confirm that the PHASE is Available by running the following command:
$ oc get backupstoragelocations.velero.io -n openshift-adp

Example output:

NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
dpa-sample-1   Available   1s               3d16h   true
5.9.1.10. Configuring the DPA with client burst and QPS settings
The burst setting determines how many requests can be sent to the Velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.

You can set the burst and QPS values of the Velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. You can use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values.
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the client-burst and the client-qps fields in the DPA as shown in the following example:

Example Data Protection Application
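A trimmed sketch; the values 500 and 300 correspond to the parameter descriptions below, and the plugin list is illustrative:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: test-dpa
  namespace: openshift-adp
spec:
  configuration:
    velero:
      client-burst: 500
      client-qps: 300
      defaultPlugins:
        - azure
        - openshift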
where:
client-burst - Specifies the client-burst value. In this example, the client-burst field is set to 500.
client-qps - Specifies the client-qps value. In this example, the client-qps field is set to 300.
5.9.1.11. Configuring node agents and node labels
The Data Protection Application (DPA) uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the recommended form of node selection constraint.
Procedure
Run the node agent on any node that you choose by adding a custom label:
$ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""

Note: Any label specified must match the labels on each node.
Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector field that you used for labeling nodes, as in the following example:
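A minimal sketch of the relevant DPA fragment:

configuration:
  nodeAgent:
    enable: true
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/nodeAgent: ""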
The following example is an anti-pattern of nodeSelector and does not work unless both labels, node-role.kubernetes.io/infra: "" and node-role.kubernetes.io/worker: "", are on the node:
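A sketch of the anti-pattern fragment:

configuration:
  nodeAgent:
    enable: true
    podConfig:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
        node-role.kubernetes.io/worker: ""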
5.9.1.12. Configuring node agent load affinity
You can schedule the node agent pods on specific nodes by using the spec.podConfig.nodeSelector object of the DataProtectionApplication (DPA) custom resource (CR).
See the following example, which schedules the node agent pods on nodes with the labels label.io/role: cpu-1 and other-label.io/other-role: cpu-2.
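A minimal sketch, assuming those two labels:

spec:
  configuration:
    nodeAgent:
      podConfig:
        nodeSelector:
          label.io/role: cpu-1
          other-label.io/other-role: cpu-2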
You can add more restrictions on the scheduling of the node agent pods by using the nodeAgent.loadAffinity object in the DPA spec.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
- You have installed the OADP Operator.
- You have configured the DPA CR.
Procedure
Configure the DPA spec nodeAgent.loadAffinity object as shown in the following example. In the example, you ensure that the node agent pods are scheduled only on nodes with the label label.io/role: cpu-1 and with the label label.io/hostname matching either node1 or node2.
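A representative sketch; verify the loadAffinity structure against your OADP version:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      loadAffinity:
        - nodeSelector:
            matchLabels:
              label.io/role: cpu-1
            matchExpressions:
              - key: label.io/hostname
                operator: In
                values:
                  - node1
                  - node2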
where:

loadAffinity - Specifies the loadAffinity object by adding the matchLabels and matchExpressions objects.
matchExpressions - Specifies the matchExpressions object to add restrictions on the node agent pods scheduling.
5.9.1.13. Node agent load affinity guidelines
Use the following guidelines to configure the node agent loadAffinity object in the DataProtectionApplication (DPA) custom resource (CR).
- Use the spec.nodeAgent.podConfig.nodeSelector object for simple node matching.
- Use the loadAffinity.nodeSelector object without the podConfig.nodeSelector object for more complex scenarios.
- You can use both podConfig.nodeSelector and loadAffinity.nodeSelector objects, but the loadAffinity object must be equal to or more restrictive than the podConfig object. In this scenario, the podConfig.nodeSelector labels must be a subset of the labels used in the loadAffinity.nodeSelector object.
- You cannot use the matchExpressions and matchLabels fields if you have configured both podConfig.nodeSelector and loadAffinity.nodeSelector objects in the DPA.
- See the following example to configure both podConfig.nodeSelector and loadAffinity.nodeSelector objects in the DPA.
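A minimal sketch in which the loadAffinity.nodeSelector labels are a superset of the podConfig.nodeSelector label, as the guidelines require; the label names are illustrative:

spec:
  configuration:
    nodeAgent:
      enable: true
      podConfig:
        nodeSelector:
          label.io/location: 'US'
      loadAffinity:
        - nodeSelector:
            matchLabels:
              label.io/location: 'US'
              label.io/gpu: 'no'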
5.9.1.14. Configuring node agent load concurrency
You can control the maximum number of node agent operations that can run simultaneously on each node within your cluster.
You can configure it by using one of the following fields of the Data Protection Application (DPA):

- globalConfig: Defines a default concurrency limit for the node agent across all nodes.
- perNodeConfig: Specifies different concurrency limits for specific nodes based on nodeSelector labels. This provides flexibility for environments where certain nodes might have different resource capacities or roles.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
Procedure
If you want to use load concurrency for specific nodes, add labels to those nodes:
$ oc label node/<node_name> label.io/instance-type='large'

Configure the load concurrency fields for your DPA instance:
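A representative sketch; the label and the concurrency numbers are illustrative:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      loadConcurrency:
        globalConfig: 1
        perNodeConfig:
          - nodeSelector:
              matchLabels:
                label.io/instance-type: large
            number: 3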
where:
globalConfig - Specifies the global concurrent number. The default value is 1, which means there is no concurrency and only one load is allowed. The globalConfig value does not have a limit.
label.io/instance-type - Specifies the label for per-node concurrency.
number - Specifies the per-node concurrent number. You can specify many per-node concurrent numbers, for example, based on the instance type and size. The range of the per-node concurrent number is the same as the global concurrent number. If the configuration file contains a per-node concurrent number and a global concurrent number, the per-node concurrent number takes precedence.
5.9.1.15. Configuring the node agent as a non-root and non-privileged user
To enhance the node agent security, you can configure the OADP Operator node agent daemonset to run as a non-root and non-privileged user by using the spec.configuration.velero.disableFsBackup setting in the DataProtectionApplication (DPA) custom resource (CR).
By setting the spec.configuration.velero.disableFsBackup setting to true, the node agent security context sets the root file system to read-only and sets the privileged flag to false.
Setting spec.configuration.velero.disableFsBackup to true enhances the node agent security by removing the need for privileged containers and enforcing a read-only root file system.
However, it also disables File System Backup (FSB) with Kopia. If your workloads rely on FSB for backing up volumes that do not support native snapshots, then you should evaluate whether the disableFsBackup configuration fits your use case.
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the disableFsBackup field in the DPA as shown in the following example:
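A minimal sketch of the relevant fragment; the plugin list is illustrative:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
    velero:
      disableFsBackup: true
      defaultPlugins:
        - openshift
        - azure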
where:

nodeAgent - Specifies to enable the node agent in the DPA.
disableFsBackup - Specifies to set the disableFsBackup field to true.
Verification
Verify that the node agent security context is set to run as non-root and that the root file system is readOnly by running the following command:

$ oc get daemonset node-agent -o yaml

Example output:
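A trimmed sketch of the securityContext section that you can expect in the output:

securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - ALL
  privileged: false
  readOnlyRootFilesystem: true
  runAsNonRoot: true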
where:
allowPrivilegeEscalation - Specifies that the allowPrivilegeEscalation field is false.
privileged - Specifies that the privileged field is false.
readOnlyRootFilesystem - Specifies that the root file system is read-only.
runAsNonRoot - Specifies that the node agent runs as a non-root user.
5.9.1.16. Configuring repository maintenance
OADP repository maintenance is a background job that you can configure independently of the node agent pods. This means that you can schedule the repository maintenance pod on a node where the node agent is or is not running.
You can use the repository maintenance job affinity configurations in the DataProtectionApplication (DPA) custom resource (CR) only if you use Kopia as the backup repository.
You can configure the load affinity at the global level, affecting all repositories, or for each repository. You can also use a combination of global and per-repository configuration.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
- You have installed the OADP Operator.
- You have configured the DPA CR.
Procedure
Configure the loadAffinity object in the DPA spec by using either one or both of the following methods:

Global configuration: Configure load affinity for all repositories as shown in the following example:
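A representative sketch; the labels are illustrative:

spec:
  configuration:
    repositoryMaintenance:
      global:
        loadAffinity:
          - nodeSelector:
              matchLabels:
                label.io/gpu: 'no'
              matchExpressions:
                - key: label.io/location
                  operator: In
                  values:
                    - US
                    - EU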
where:
repositoryMaintenance - Specifies the repositoryMaintenance object as shown in the example.
global - Specifies the global object to configure load affinity for all repositories.
Per-repository configuration: Configure load affinity per repository as shown in the following example:
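A sketch keyed by repository name; <myrepositoryname> is a placeholder:

spec:
  configuration:
    repositoryMaintenance:
      <myrepositoryname>:
        loadAffinity:
          - nodeSelector:
              matchLabels:
                label.io/cpu: 'yes'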
where:
<myrepositoryname> - Specifies the repositoryMaintenance object for each repository.
5.9.1.17. Configuring Velero load affinity
With each OADP deployment, there is one Velero pod, whose main purpose is to schedule Velero workloads. To schedule the Velero pod, you can use the velero.podConfig.nodeSelector and the velero.loadAffinity objects in the DataProtectionApplication (DPA) custom resource (CR) spec.
Use the podConfig.nodeSelector object to assign the Velero pod to specific nodes. You can also configure the velero.loadAffinity object for pod-level affinity and anti-affinity.
The OpenShift scheduler applies the rules and performs the scheduling of the Velero pod deployment.
Prerequisites
- You must be logged in as a user with cluster-admin privileges.
- You have installed the OADP Operator.
- You have configured the DPA CR.
Procedure
Configure the velero.podConfig.nodeSelector and the velero.loadAffinity objects in the DPA spec as shown in the following examples.

velero.podConfig.nodeSelector object configuration:
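Minimal sketches follow; the label some-label.io/custom-node-role: backup-core is illustrative.

spec:
  configuration:
    velero:
      podConfig:
        nodeSelector:
          some-label.io/custom-node-role: backup-core

velero.loadAffinity object configuration:

spec:
  configuration:
    velero:
      loadAffinity:
        - nodeSelector:
            matchLabels:
              some-label.io/custom-node-role: backup-core
            matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - node1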
5.9.1.18. Overriding the imagePullPolicy setting in the DPA
In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images.
In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly:
- If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent.
- If the image does not have the digest, the Operator sets imagePullPolicy to Always.
You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA).
Prerequisites
- You have installed the OADP Operator.
Procedure
Configure the spec.imagePullPolicy field in the DPA as shown in the following example:

Example Data Protection Application
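A trimmed sketch; only the imagePullPolicy field is the point of this example:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: test-dpa
  namespace: openshift-adp
spec:
  imagePullPolicy: Never
  configuration:
    velero:
      defaultPlugins:
        - azure
        - openshift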
where:
imagePullPolicy - Specifies the value for imagePullPolicy. In this example, the imagePullPolicy field is set to Never.
5.9.1.18.1. Enabling CSI in the DataProtectionApplication CR
You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.
Prerequisites
- The cloud provider must support CSI snapshots.
Procedure
Edit the DataProtectionApplication CR, as in the following example:
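A minimal sketch of the relevant fragment:

spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - csi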
where:

csi - Specifies the csi default plugin.
5.9.1.18.2. Disabling the node agent in DataProtectionApplication
If you are not using Restic, Kopia, or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent, ensure the OADP Operator is idle and not running any backups.
Procedure
To disable the nodeAgent, set the enable flag to false, as in the following example:

Example DataProtectionApplication CR
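A minimal sketch of the relevant fragment:

spec:
  configuration:
    nodeAgent:
      enable: false
      uploaderType: kopia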
where:

enable - Enables the node agent.
To enable the nodeAgent, set the enable flag to true, as in the following example:

Example DataProtectionApplication CR
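A minimal sketch of the relevant fragment:

spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia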
where:

enable - Enables the node agent.
You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs".
5.10. Configuring OADP with Google Cloud
5.10.1. Configuring the OpenShift API for Data Protection with Google Cloud
You install the OpenShift API for Data Protection (OADP) with Google Cloud by installing the OADP Operator. The Operator installs Velero 1.16.
Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.
You configure Google Cloud for Velero, create a default Secret, and then install the Data Protection Application. For more details, see Installing the OADP Operator.
To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager in disconnected environments for details.
5.10.1.1. Configuring Google Cloud
You configure Google Cloud for the OpenShift API for Data Protection (OADP).
Prerequisites
- You must have the gcloud and gsutil CLI tools installed. See the Google Cloud documentation for details.
Procedure
Log in to Google Cloud:
$ gcloud auth login

Set the BUCKET variable:

$ BUCKET=<bucket>

where:
bucket - Specifies the bucket name.
Create the storage bucket:
$ gsutil mb gs://$BUCKET/

Set the PROJECT_ID variable to your active project:

$ PROJECT_ID=$(gcloud config get-value project)

Create a service account:
$ gcloud iam service-accounts create velero \
    --display-name "Velero service account"

List your service accounts:
$ gcloud iam service-accounts list

Set the SERVICE_ACCOUNT_EMAIL variable to match its email value:

$ SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
    --filter="displayName:Velero service account" \
    --format 'value(email)')

Attach the policies to give the velero user the minimum necessary permissions:
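A representative sketch of the permissions array; this list follows the upstream Velero plugin guidance for Google Cloud and might need adjustment for your OADP version:

$ ROLE_PERMISSIONS=(
    compute.disks.get
    compute.disks.create
    compute.disks.createSnapshot
    compute.snapshots.get
    compute.snapshots.create
    compute.snapshots.useReadOnly
    compute.snapshots.delete
    compute.zones.get
    storage.objects.create
    storage.objects.delete
    storage.objects.get
    storage.objects.list
    iam.serviceAccounts.signBlob
)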
Create the velero.server custom role:

$ gcloud iam roles create velero.server \
    --project $PROJECT_ID \
    --title "Velero Server" \
    --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"

Add IAM policy binding to the project:
$ gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
    --role projects/$PROJECT_ID/roles/velero.server

Update the IAM service account:
$ gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}

Save the IAM service account keys to the credentials-velero file in the current directory:
credentials-velerofile in the current directory:gcloud iam service-accounts keys create credentials-velero \ --iam-account $SERVICE_ACCOUNT_EMAIL$ gcloud iam service-accounts keys create credentials-velero \ --iam-account $SERVICE_ACCOUNT_EMAILCopy to Clipboard Copied! Toggle word wrap Toggle overflow You use the
credentials-velerofile to create aSecretobject for Google Cloud before you install the Data Protection Application.
5.10.1.2. About backup and snapshot locations and their secrets
Review backup location, snapshot location, and secret configuration requirements for the DataProtectionApplication custom resource (CR). This helps you understand storage options and credential management for data protection operations.
5.10.1.2.1. Backup locations
You can specify one of the following AWS S3-compatible object storage solutions as a backup location:
- Multicloud Object Gateway (MCG)
- Red Hat Container Storage
- Ceph RADOS Gateway; also known as Ceph Object Gateway
- Red Hat OpenShift Data Foundation
- MinIO
Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage.
5.10.1.2.2. Snapshot locations
If you use your cloud provider’s native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location.
If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver.
If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage.
5.10.1.2.3. Secrets
If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret.
If the backup and snapshot locations use different credentials, you create two Secret objects:
- Custom Secret for the backup location, which you specify in the DataProtectionApplication CR.
- Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR.
The Data Protection Application requires a default Secret. Otherwise, the installation will fail.
If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file.
5.10.1.2.4. Creating a default Secret
You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location.
The default name of the Secret is cloud-credentials-gcp.
The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.
If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.
Prerequisites
- Your object storage and cloud storage, if any, must use the same credentials.
- You must configure object storage for Velero.
Procedure
- Create a credentials-velero file for the backup storage location in the appropriate format for your cloud provider.
- Create a Secret custom resource (CR) with the default name:

$ oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero

The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
5.10.1.2.5. Creating secrets for different credentials
Create separate Secret objects when your backup and snapshot locations require different credentials. This allows you to configure distinct authentication for each storage location while maintaining secure credential management.
Procedure
- Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider.
- Create a Secret for the snapshot location with the default name:

$ oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero

- Create a credentials-velero file for the backup location in the appropriate format for your object storage.
- Create a Secret for the backup location with a custom name:

$ oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero

Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example:
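A trimmed manifest sketch; only the credential reference is the point of this example:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
  namespace: openshift-adp
spec:
  backupLocations:
    - velero:
        provider: gcp
        default: true
        credential:
          key: cloud
          name: <custom_secret>
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>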
where:

<custom_secret> - Specifies the backup location Secret with the custom name.
5.10.1.2.6. Setting Velero CPU and memory resource allocations
You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the values in the spec.configuration.velero.podConfig.resourceAllocations block of the DataProtectionApplication CR manifest, as in the following example:
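A minimal sketch; the resource values are representative of average usage rather than prescriptive:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: <dpa_sample>
spec:
  configuration:
    velero:
      podConfig:
        nodeSelector: <node_selector>
        resourceAllocations:
          limits:
            cpu: "1"
            memory: 1024Mi
          requests:
            cpu: 200m
            memory: 256Mi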
where:

nodeSelector - Specifies the node selector to be supplied to the Velero podSpec.
resourceAllocations - Specifies the resource allocations listed for average usage.
Note: Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.
Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.
Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node.
5.11. Enabling self-signed CA certificates
You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.
Prerequisites
- You must have the OpenShift API for Data Protection (OADP) Operator installed.
Procedure
Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest, as in the following example:
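A minimal sketch; the provider value is a placeholder, and only the caCert and insecureSkipTLSVerify fields are the point of this example:

spec:
  backupLocations:
    - velero:
        provider: <provider>
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>
          caCert: <base64_encoded_cert_string>
        config:
          insecureSkipTLSVerify: "false"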
where:

caCert - Specifies the Base64-encoded CA certificate string.
insecureSkipTLSVerify - Specifies the insecureSkipTLSVerify configuration, which can be set to either "true" or "false". If set to "true", SSL/TLS security is disabled. If set to "false", SSL/TLS security is enabled.
5.12. Using CA certificates with the velero command aliased for Velero deployment
You might want to use the Velero CLI without installing it locally on your system by creating an alias for it.
Prerequisites
- You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role.
- You must have the OpenShift CLI (oc) installed.

Procedure

To use an aliased Velero command, run the following command:
$ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'

Check that the alias is working by running the following command:
$ velero version

Example output:

Client:
    Version: v1.12.1-OADP
    Git commit: -
Server:
    Version: v1.12.1-OADP

To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands:
$ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')

$ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"

$ velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt

To fetch the backup logs, run the following command:
$ velero backup logs <backup_name> --cacert /tmp/<your_cacert>.txt

You can use these logs to view failures and warnings for the resources that you cannot back up.
If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create it by re-running the commands from the previous step.

You can check whether the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command:

$ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt"

Example output:

/tmp/your-cacert.txt

In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.