
Chapter 5. OADP Application backup and restore


5.1. Introduction to OpenShift API for Data Protection

The OpenShift API for Data Protection (OADP) product safeguards customer applications on OpenShift Container Platform. It offers comprehensive disaster recovery protection, covering OpenShift Container Platform applications, application-related cluster resources, persistent volumes, and internal images. OADP is also capable of backing up both containerized applications and virtual machines (VMs).

However, OADP does not serve as a disaster recovery solution for etcd or OpenShift Operators.

Important

OADP support covers customer workload namespaces and cluster-scoped resources.

Full cluster backup and restore are not supported.

5.1.1. OpenShift API for Data Protection APIs

OADP provides APIs that enable multiple approaches to customizing backups and preventing the inclusion of unnecessary or inappropriate resources.

OADP provides the following APIs:

5.1.1.1. Support for OpenShift API for Data Protection

Table 5.1. Supported versions of OADP

| Version | OCP version | General availability | Full support ends | Maintenance ends | Extended Update Support (EUS) | Extended Update Support Term 2 (EUS Term 2) |
| --- | --- | --- | --- | --- | --- | --- |
| 1.5 | 4.19 | 17 June 2025 | Release of 1.6 | Release of 1.7 | EUS must be on OCP 4.20 | EUS Term 2 must be on OCP 4.20 |
| 1.4 | 4.14, 4.15, 4.16, 4.17, 4.18 | 10 Jul 2024 | Release of 1.5 | Release of 1.6 | 27 Jun 2026. EUS must be on OCP 4.16 | 27 Jun 2027. EUS Term 2 must be on OCP 4.16 |
| 1.3 | 4.12, 4.13, 4.14, 4.15 | 29 Nov 2023 | 10 Jul 2024 | Release of 1.5 | 31 Oct 2025. EUS must be on OCP 4.14 | 31 Oct 2026. EUS Term 2 must be on OCP 4.14 |

5.1.1.1.1. Unsupported versions of the OADP Operator
Table 5.2. Previous versions of the OADP Operator which are no longer supported

| Version | General availability | Full support ended | Maintenance ended |
| --- | --- | --- | --- |
| 1.2 | 14 Jun 2023 | 29 Nov 2023 | 10 Jul 2024 |
| 1.1 | 01 Sep 2022 | 14 Jun 2023 | 29 Nov 2023 |
| 1.0 | 09 Feb 2022 | 01 Sep 2022 | 14 Jun 2023 |

For more details about EUS, see Extended Update Support.

For more details about EUS Term 2, see Extended Update Support Term 2.

5.2. OADP release notes

5.2.1. OADP 1.5 release notes

The release notes for OpenShift API for Data Protection (OADP) describe new features and enhancements, deprecated features, product recommendations, known issues, and resolved issues.

Note

For additional information about OADP, see OpenShift API for Data Protection (OADP) FAQs

5.2.1.1. OADP 1.5.4 release notes

OpenShift API for Data Protection (OADP) 1.5.4 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product compared to OADP 1.5.3. OADP 1.5.4 introduces a known issue and fixes several Common Vulnerabilities and Exposures (CVEs).

5.2.1.1.1. Known issues
Simultaneous updates to the same NonAdminBackupStorageLocationRequest objects cause resource conflicts

Simultaneous updates by several controllers or processes to the same NonAdminBackupStorageLocationRequest objects cause resource conflicts during backup creation in OADP self-service. As a consequence, reconciliation attempts fail with object has been modified errors. No known workaround exists.

OADP-6700

5.2.1.1.2. Resolved issues
OADP 1.5.4 fixes the following CVEs

5.2.1.2. OADP 1.5.3 release notes

OpenShift API for Data Protection (OADP) 1.5.3 is a Container Grade Only (CGO) release, which is released to refresh the health grades of the containers. No code was changed in the product compared to OADP 1.5.2.

5.2.1.3. OADP 1.5.2 release notes

The OpenShift API for Data Protection (OADP) 1.5.2 release notes list resolved issues.

5.2.1.3.1. Resolved issues

Self-signed certificate for internal image backup should not break other BSLs

Before this update, OADP would only process the first custom CA certificate found among all backup storage locations (BSLs) and apply it globally. This behavior prevented multiple BSLs with different CA certificates from working correctly. Additionally, system-trusted certificates were not included, causing failures when connecting to standard services. With this update, OADP now:

  • Concatenates all unique CA certificates from AWS BSLs into a single bundle.
  • Includes system-trusted certificates automatically.
  • Enables multiple BSLs with different custom CA certificates to operate simultaneously.
  • Only processes CA certificates when image backup is enabled (default behavior).

This enhancement improves compatibility for environments using multiple storage providers with different certificate requirements, particularly when backing up internal images to AWS S3-compatible storage with self-signed certificates.

OADP-6765

5.2.1.4. OADP 1.5.1 release notes

The OpenShift API for Data Protection (OADP) 1.5.1 release notes list new features, resolved issues, known issues, and deprecated features.

5.2.1.4.1. New features

CloudStorage API is fully supported

The CloudStorage API feature, available as a Technology Preview before this update, is fully supported from OADP 1.5.1. The CloudStorage API automates the creation of a bucket for object storage.

OADP-3307

New DataProtectionTest custom resource is available

The DataProtectionTest (DPT) is a custom resource (CR) that provides a framework to validate your OADP configuration. The DPT CR checks and reports information for the following parameters:

  • The upload performance of the backups to the object storage.
  • The Container Storage Interface (CSI) snapshot readiness for persistent volume claims.
  • The storage bucket configuration, such as encryption and versioning.

Using this information in the DPT CR, you can ensure that your data protection environment is properly configured and performing according to the set configuration.

Note that you must configure STORAGE_ACCOUNT_ID when using DPT with OADP on Azure.
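As an illustration only, a minimal DataProtectionTest CR might look like the following sketch. The field names below (backupLocationName, uploadSpeedTestConfig, csiVolumeSnapshotTestConfigs) and all values are assumptions for illustration; verify them against the OADP 1.5 API reference before use:

```yaml
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionTest
metadata:
  name: dpt-sample
  namespace: openshift-adp
spec:
  backupLocationName: dpa-sample-1     # assumed backup storage location name
  uploadSpeedTestConfig:               # checks upload performance to object storage
    fileSize: 100MB
    timeout: 120s
  csiVolumeSnapshotTestConfigs:        # checks CSI snapshot readiness for a PVC
    - snapshotClassName: csi-snapclass # placeholder VolumeSnapshotClass
      timeout: 300s
      volumeSnapshotSource:
        persistentVolumeClaimName: my-pvc        # placeholder PVC
        persistentVolumeClaimNamespace: my-app   # placeholder namespace
```

After the DPT CR reconciles, its status reports the measured upload speed, snapshot readiness, and bucket configuration details described above.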

OADP-6300

New node agent load affinity configurations are available

  • Node agent load affinity: You can schedule the node agent pods on specific nodes by using the spec.podConfig.nodeSelector object of the DataProtectionApplication (DPA) custom resource (CR). You can add more restrictions on the node agent pods scheduling by using the nodeagent.loadAffinity object in the DPA spec.
  • Repository maintenance job affinity configurations: You can use the repository maintenance job affinity configurations in the DataProtectionApplication (DPA) custom resource (CR) only if you use Kopia as the backup repository.

    You have the option to configure the load affinity at the global level affecting all repositories, or for each repository. You can also use a combination of global and per-repository configuration.

  • Velero load affinity: You can use the podConfig.nodeSelector object to assign the Velero pod to specific nodes. You can also configure the velero.loadAffinity object for pod-level affinity and anti-affinity.
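The affinity objects described in the list above can be combined in one DPA fragment, sketched below. All label keys, values, and node names are placeholders; check the exact schema in the OADP 1.5 API reference:

```yaml
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      podConfig:
        nodeSelector:
          node-role.kubernetes.io/worker: ""   # schedule node agent pods on worker nodes
      loadAffinity:                            # additional restrictions on node agent scheduling
        - nodeSelector:
            matchExpressions:
              - key: kubernetes.io/hostname    # placeholder expression
                operator: In
                values:
                  - node1
                  - node2
    velero:
      podConfig:
        nodeSelector:
          node-role.kubernetes.io/worker: ""   # assign the Velero pod to specific nodes
```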

OADP-5832

Node agent load concurrency is available

With this update, users can control the maximum number of node agent operations that can run simultaneously on each node within their cluster. It also enables better resource management, optimizing backup and restore workflows for improved performance and a more streamlined experience.
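As a sketch of this feature, concurrency limits can be set globally and overridden per node in the DPA. The globalConfig and perNodeConfig field names follow the Velero node-agent load concurrency configuration and should be verified against the OADP 1.5 API reference; labels and values are placeholders:

```yaml
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      loadConcurrency:
        globalConfig: 2            # default cap on concurrent node agent operations per node
        perNodeConfig:             # higher cap for nodes matching this selector
          - nodeSelector:
              matchLabels:
                kubernetes.io/hostname: node1   # placeholder node label
            number: 4
```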

5.2.1.4.2. Resolved issues

DataProtectionApplicationSpec overflowed annotation limit, causing potential misconfiguration in deployments

Before this update, the DataProtectionApplicationSpec used the deprecated PodAnnotations field, which led to an annotation limit overflow and potential misconfigurations in deployments. In this release, PodConfig annotations are applied to pods deployed by the Operator, ensuring consistent annotations and improved manageability for end users. As a result, deployments are more reliable and easier to manage.

OADP-6454

Root file system for OADP controller manager is now read-only

Before this update, the manager container of the openshift-adp-controller-manager-* pod was configured to run with a writable root file system. As a consequence, this could allow for tampering with the container’s file system or the writing of foreign executables. With this release, the container’s security context has been updated to set the root file system to read-only while ensuring necessary functions that require write access, such as the Kopia cache, continue to operate correctly. As a result, the container is hardened against potential threats.

nonAdmin.enable: false in multiple DPAs no longer causes reconcile issues

Before this update, when a user attempted to create a second non-admin DataProtectionApplication (DPA) on a cluster where one already existed, the new DPA failed to reconcile. With this release, the restriction on Non-Admin Controller installation to one per cluster has been removed. As a result, users can install multiple Non-Admin Controllers across the cluster without encountering errors.

OADP-6500

OADP supports self-signed certificates

Before this update, using a self-signed certificate for backup images with a storage provider such as Minio resulted in an x509: certificate signed by unknown authority error during the backup process. With this release, certificate validation has been updated to support self-signed certificates in OADP, ensuring successful backups.

OADP-641

velero describe includes defaultVolumesToFsBackup

Before this update, the velero describe output command omitted the defaultVolumesToFsBackup flag. This affected the visibility of backup configuration details for users. With this release, the velero describe output includes the defaultVolumesToFsBackup flag information, improving the visibility of backup settings.

OADP-5762

DPT CR no longer fails when s3Url is secured

Before this update, DataProtectionTest (DPT) failed to run when s3Url was secured with an unverified certificate, because the DPT CR lacked the ability to skip verification or add the caCert in the spec field. As a consequence, data upload failed due to the unverified certificate. With this release, the DPT CR has been updated to accept a caCert, or to skip certificate verification, in the spec field, resolving SSL verification errors. As a result, DPT no longer fails when using a secured s3Url.

OADP-6235

Adding a backupLocation to DPA with an existing backupLocation name is not rejected

Before this update, adding a second backupLocation with the same name in DataProtectionApplication (DPA) caused OADP to enter an invalid state, leading to Backup and Restore failures due to Velero’s inability to read Secret credentials. As a consequence, Backup and Restore operations failed. With this release, the duplicate backupLocation names in DPA are no longer allowed, preventing Backup and Restore failures. As a result, duplicate backupLocation names are rejected, ensuring seamless data protection.

OADP-6459

5.2.1.4.3. Known issues

The restore fails for backups created on OpenStack using the Cinder CSI driver

When you start a restore operation for a backup that was created on an OpenStack platform using the Cinder Container Storage Interface (CSI) driver, the initial backup only succeeds after the source application is manually scaled down. The restore job fails, preventing you from successfully recovering your application’s data and state from the backup. No known workaround exists.

OADP-5552

Datamover pods scheduled on unexpected nodes during backup if the nodeAgent.loadAffinity parameter has many elements

Due to an issue in Velero 1.14 and later, the OADP node-agent only processes the first nodeSelector element within the loadAffinity array. As a consequence, if you define multiple nodeSelector objects, all objects except the first are ignored, potentially causing datamover pods to be scheduled on unexpected nodes during a backup.

To work around this problem, consolidate all required matchExpressions from multiple nodeSelector objects into the first nodeSelector object. As a result, all node affinity rules are correctly applied, ensuring datamover pods are scheduled to the appropriate nodes.
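For example, assuming a loadAffinity array with two nodeSelector objects (keys and values below are placeholders), the workaround merges the second object's matchExpressions into the first:

```yaml
# Before: only the first nodeSelector element is processed
loadAffinity:
  - nodeSelector:
      matchExpressions:
        - key: topology.kubernetes.io/zone     # placeholder key
          operator: In
          values: ["zone-a"]
  - nodeSelector:                              # ignored by the node-agent
      matchExpressions:
        - key: node-role.kubernetes.io/worker
          operator: Exists
---
# After: all matchExpressions consolidated into the first nodeSelector
loadAffinity:
  - nodeSelector:
      matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values: ["zone-a"]
        - key: node-role.kubernetes.io/worker
          operator: Exists
```

Note that entries within a single matchExpressions list are combined with AND, so consolidate only rules that are intended to apply together.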

OADP-6469

OADP Backup fails when using CA certificates with aliased command

The CA certificate is not stored as a file on the running Velero container. As a consequence, the missing caCert in the Velero container degrades the user experience, requiring manual setup and downloads. To work around this problem, manually add the certificate to the Velero deployment. For instructions, see Using cacert with velero command aliased via velero deployment.

OADP-4668

The nodeSelector spec is not supported for the Data Mover restore action

When a Data Protection Application (DPA) is created with the nodeSelector field set in the nodeAgent parameter, Data Mover restore partially fails instead of completing the restore operation. No known workaround exists.

OADP-4743

Image streams backups are partially failing when the DPA is configured with caCert

An unverified certificate in the S3 connection during backups with caCert in DataProtectionApplication (DPA) causes the ocp-django application’s backup to partially fail and result in data loss. No known workaround exists.

OADP-4817

Kopia does not delete cache on worker node

When the ephemeral-storage parameter is configured and a file system restore is running, the cache is not automatically deleted from the worker node. As a consequence, the /var partition overflows during backup restore, causing increased storage usage and potential resource exhaustion. To work around this problem, restart the node agent pod, which clears the cache. As a result, the cache is deleted.

OADP-4855

Google Cloud VSL backups fail with Workload Identity because of invalid project configuration

When performing a volumeSnapshotLocation (VSL) backup on Google Cloud Workload Identity, the Velero Google Cloud plugin creates an invalid API request if the Google Cloud project is also specified in the snapshotLocations configuration of DataProtectionApplication (DPA). As a consequence, the Google Cloud API returns a RESOURCE_PROJECT_INVALID error, and the backup job finishes with a PartiallyFailed status. No known workaround exists.

OADP-6697

VSL backups fail for CloudStorage API on AWS with STS

The volumeSnapshotLocation (VSL) backup fails because of missing the AZURE_RESOURCE_GROUP parameter in the credentials file, even if AZURE_RESOURCE_GROUP is already mentioned in the DataProtectionApplication (DPA) config for VSL. No known workaround exists.

OADP-6676

Backups of applications with ImageStreams fail on Azure with STS

When backing up applications that include image stream resources on an Azure cluster using STS, the OADP plugin incorrectly attempts to locate a secret-based credential for the container registry. As a consequence, the required secret is not found in the STS environment, causing the ImageStream custom backup action to fail. This results in the overall backup status marked as PartiallyFailed. No known workaround exists.

OADP-6675

DPA reconciliation fails for CloudStorageRef configuration

When a user creates a bucket and uses the backupLocations.bucket.cloudStorageRef configuration, bucket credentials are not present in the DataProtectionApplication (DPA) custom resource (CR). As a result, the DPA reconciliation fails even if bucket credentials are present in the CloudStorage CR. To work around this problem, add the same credentials to the backupLocations section of the DPA CR.
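The workaround can be sketched as the following DPA fragment, which repeats the credentials already referenced by the CloudStorage CR in the backupLocations section. The CR and secret names are placeholders:

```yaml
spec:
  backupLocations:
    - bucket:
        cloudStorageRef:
          name: my-bucket            # placeholder CloudStorage CR name
        credential:                  # workaround: repeat the bucket credentials here
          name: cloud-credentials    # placeholder secret name
          key: cloud
        default: true
```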

OADP-6669

5.2.1.4.4. Deprecated features

The configuration.restic specification field has been deprecated

With OADP 1.5.0, the configuration.restic specification field has been deprecated. Use the nodeAgent section with the uploaderType field to select kopia or restic as the uploaderType. Note that Restic is deprecated in OADP 1.5.0.

OADP-5158

5.2.1.5. OADP 1.5.0 release notes

The OpenShift API for Data Protection (OADP) 1.5.0 release notes list new features, resolved issues, known issues, deprecated features, and Technology Preview features.

5.2.1.5.1. New features

OADP 1.5.0 introduces a new Self-Service feature

OADP 1.5.0 introduces a new feature named OADP Self-Service, enabling namespace admin users to back up and restore applications on the OpenShift Container Platform. In the earlier versions of OADP, you needed the cluster-admin role to perform OADP operations such as backing up and restoring an application, creating a backup storage location, and so on.

From OADP 1.5.0 onward, you do not need the cluster-admin role to perform the backup and restore operations. You can use OADP with the namespace admin role. The namespace admin role has administrator access only to the namespace the user is assigned to. You can use the Self-Service feature only after the cluster administrator installs the OADP Operator and provides the necessary permissions.

OADP-4001

Collecting logs with the must-gather tool has been improved with a Markdown summary

You can collect logs and information about OpenShift API for Data Protection (OADP) custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. This tool generates a Markdown output file with the collected information, which is located in the clusters directory of the must-gather logs.

OADP-5384

dataMoverPrepareTimeout and resourceTimeout parameters are now added to nodeAgent within the DPA

The nodeAgent field in Data Protection Application (DPA) now includes the following parameters:

  • dataMoverPrepareTimeout: Defines the duration the DataUpload or DataDownload process will wait. The default value is 30 minutes.
  • resourceTimeout: Sets the timeout for resource processes not addressed by other specific timeout parameters. The default value is 10 minutes.
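The two parameters above can be sketched in a DPA fragment as follows; the duration values shown are the stated defaults:

```yaml
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      dataMoverPrepareTimeout: 30m   # how long DataUpload or DataDownload waits (default 30 minutes)
      resourceTimeout: 10m           # timeout for resource processes without a specific timeout (default 10 minutes)
```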

OADP-3736

Use the spec.configuration.nodeAgent parameter in DPA for configuring nodeAgent daemon set

Velero no longer uses the node-agent-config config map for configuring the nodeAgent daemon set. With this update, you must use the new spec.configuration.nodeAgent parameter in a Data Protection Application (DPA) for configuring the nodeAgent daemon set.

OADP-5042

Configuring DPA with the backup repository configuration config map is now possible

With Velero 1.15 and later, you can now configure the total size of a cache per repository. This prevents pods from being removed due to running out of ephemeral storage. See the following new parameters added to the NodeAgentConfig field in DPA:

  • cacheLimitMB: Sets the local data cache size limit in megabytes.
  • fullMaintenanceInterval: The default value is 24 hours. Controls the removal rate of deleted Velero backups from the Kopia repository using the following override options:

    • normalGC: 24 hours
    • fastGC: 12 hours
    • eagerGC: 6 hours
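A hedged sketch of these settings in the DPA follows; the values are examples only and the field placement should be verified against the OADP 1.5 API reference:

```yaml
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      cacheLimitMB: 1024               # cap the local data cache per repository at 1024 MB
      fullMaintenanceInterval: fastGC  # remove deleted backups from the Kopia repository every 12 hours
```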

OADP-5900

Enhancing the node-agent security

With this update, the following changes are added:

  • A new disableFsBackup configuration option is added to the velero field in the DPA.
  • The default value of the disableFsBackup parameter is false or unset. In this case, the following options are added to the SecurityContext field:

    • Privileged: true
    • AllowPrivilegeEscalation: true
  • If you set the disableFsBackup parameter to true, the following mounts are removed from the node-agent:

    • host-pods
    • host-plugins
  • The node-agent always runs as a non-root user.
  • The root file system is changed to read-only.
  • The following mount points are updated with write access:

    • /home/velero
    • tmp/credentials
  • The SeccompProfile parameter uses the SeccompProfileTypeRuntimeDefault option.
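A minimal DPA fragment enabling the hardened mode described above might look like this sketch:

```yaml
spec:
  configuration:
    velero:
      disableFsBackup: true   # removes the host-pods and host-plugins mounts from the node-agent
```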

OADP-5031

Adds DPA support for parallel item backup

By default, only one thread processes an item block. Velero 1.16 supports a parallel item backup, where multiple items within a backup can be processed in parallel.

You can use the optional Velero server parameter --item-block-worker-count to run additional worker threads to process items in parallel. To enable this in OADP, set the dpa.Spec.Configuration.Velero.ItemBlockWorkerCount parameter to an integer value greater than zero.
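For illustration, the DPA parameter maps to a fragment such as the following; the YAML field casing is an assumption based on the dpa.Spec.Configuration.Velero.ItemBlockWorkerCount API name:

```yaml
spec:
  configuration:
    velero:
      itemBlockWorkerCount: 4   # run 4 worker threads to process item blocks in parallel
```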

Note

Running multiple full backups in parallel is not yet supported.

OADP-5635

OADP logs are now available in the JSON format

With the release of OADP 1.5.0, the logs are now available in the JSON format. This provides pre-parsed data for log management systems such as Elasticsearch.

OADP-3391

The oc get dpa command now displays RECONCILED status

With this release, the oc get dpa command now displays RECONCILED status instead of displaying only NAME and AGE to improve user experience. For example:

$ oc get dpa -n openshift-adp
NAME            RECONCILED   AGE
velero-sample   True         2m51s

OADP-1338

5.2.1.5.2. Resolved issues

Containers now use FallbackToLogsOnError for terminationMessagePolicy

With this release, the terminationMessagePolicy field can now set the FallbackToLogsOnError value for the OpenShift API for Data Protection (OADP) Operator containers such as operator-manager, velero, node-agent, and non-admin-controller.

This change ensures that if a container exits with an error and the termination message file is empty, OpenShift uses the last portion of the container logs output as the termination message.

OADP-5183

Namespace admin can now access the application after restore

Previously, the namespace admin could not execute an application after the restore operation with the following errors:

  • exec operation is not allowed because the pod’s security context exceeds your permissions
  • unable to validate against any security context constraint
  • not usable by user or serviceaccount, provider restricted-v2

With this update, this issue is now resolved and the namespace admin can access the application successfully after the restore.

OADP-5611

Specifying status restoration at the individual resource instance level using the annotation is now possible

Previously, status restoration was only configured at the resource type using the restoreStatus field in the Restore custom resource (CR).

With this release, you can now specify the status restoration at the individual resource instance level using the following annotation:

metadata:
  annotations:
    velero.io/restore-status: "true"

OADP-5968

Restore is now successful with excludedClusterScopedResources

Previously, on performing the backup of an application with the excludedClusterScopedResources field set to the storageclasses and Namespace parameters, the backup was successful but the restore partially failed. With this update, the restore is successful.

OADP-5239

Backup is completed even if it gets restarted during the waitingForPluginOperations phase

Previously, a backup was marked as failed with the following error message:

failureReason: found a backup with status "InProgress" during the server starting,
mark it as "Failed"

With this update, the backup is completed if it gets restarted during the waitingForPluginOperations phase.

OADP-2941

Error messages are now more informative when the disableFsBackup parameter is set to true in DPA

Previously, when the spec.configuration.velero.disableFsBackup field from a Data Protection Application (DPA) was set to true, the backup partially failed with an error, which was not informative.

This update makes error messages more useful for troubleshooting. For example, the error messages now indicate that disableFsBackup: true in a DPA is the cause, or that a non-administrator user does not have access to the DPA.

OADP-5952

AWS STS credentials are now handled in the parseAWSSecret function

Previously, AWS credentials using STS authentication were not properly validated.

With this update, the parseAWSSecret function detects STS-specific fields, and updates the ensureSecretDataExists function to handle STS profiles correctly.

OADP-6105

The repositoryMaintenance job affinity config is available to configure

Previously, the new configurations for repository maintenance job pod affinity were missing from a DPA specification.

With this update, the repositoryMaintenance job affinity config is now available to map a BackupRepository identifier to its configuration.

OADP-6134

The ValidationErrors field is cleared once the CR specification is correct

Previously, when a schedule CR was created with a wrong spec.schedule value and was later patched with a correct value, the ValidationErrors field still existed. Consequently, the ValidationErrors field displayed incorrect information even though the spec was correct.

With this update, the ValidationErrors field is cleared once the CR specification is correct.

OADP-5419

The volumeSnapshotContents custom resources are restored when the includedNamespaces field is used in restoreSpec

Previously, when a restore operation was triggered with the includedNamespace field in a restore specification, restore operation was completed successfully but no volumeSnapshotContents custom resources (CR) were created and the PVCs were in a Pending status.

With this update, volumeSnapshotContents CRs are restored even when the includedNamespaces field is used in restoreSpec. As a result, an application pod is in a Running state after restore.

OADP-5939

OADP operator successfully creates bucket on top of AWS

Previously, the container was configured with the readOnlyRootFilesystem: true setting for security, but the code attempted to create temporary files in the /tmp directory using the os.CreateTemp() function. Consequently, while using the AWS STS authentication with the Cloud Credential Operator (CCO) flow, OADP failed to create temporary files that were required for AWS credential handling with the following error:

ERROR unable to determine if bucket exists. {"error": "open /tmp/aws-shared-credentials1211864681: read-only file system"}

With this update, the following changes are added to address this issue:

  • A new emptyDir volume named tmp-dir to the controller pod specification.
  • A volume mount to the container, which mounts this volume to the /tmp directory.
  • For security best practices, the readOnlyRootFilesystem: true is maintained.
  • Replaced the deprecated ioutil.TempFile() function with the recommended os.CreateTemp() function.
  • Removed the unnecessary io/ioutil import, which is no longer needed.

OADP-6019

For a complete list of all issues resolved in this release, see the list of OADP 1.5.0 resolved issues in Jira.

5.2.1.5.3. Known issues

Kopia does not delete all the artifacts after backup expiration

Even after deleting a backup, Kopia does not delete the volume artifacts from the ${bucket_name}/kopia/$openshift-adp location on S3 after the backup expires. Information related to the expired and removed data files remains in the metadata. To ensure that OpenShift API for Data Protection (OADP) functions properly, the data is not deleted and remains in the /kopia/ directory, for example:

  • kopia.repository: Main repository format information such as encryption, version, and other details.
  • kopia.blobcfg: Configuration for how data blobs are named.
  • kopia.maintenance: Tracks maintenance owner, schedule, and last successful build.
  • log: Log blobs.

OADP-5131

For a complete list of all known issues in this release, see the list of OADP 1.5.0 known issues in Jira.

5.2.1.5.4. Deprecated features

The configuration.restic specification field has been deprecated

With OpenShift API for Data Protection (OADP) 1.5.0, the configuration.restic specification field has been deprecated. Use the nodeAgent section with the uploaderType field to select kopia or restic as the uploaderType. Note that Restic is deprecated in OpenShift API for Data Protection (OADP) 1.5.0.

OADP-5158

5.2.1.5.5. Technology Preview

Support for HyperShift hosted OpenShift clusters is available as a Technology Preview

OADP can support and facilitate application migrations within HyperShift hosted OpenShift clusters as a Technology Preview. It ensures a seamless backup and restore operation for applications in hosted clusters.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

OADP-3930

5.2.1.6. Upgrading OADP 1.4 to 1.5

Note

Always upgrade to the next minor version. Do not skip versions. To update to a later version, upgrade only one channel at a time. For example, to upgrade from OADP 1.1 to 1.3, upgrade first to 1.2, and then to 1.3.

5.2.1.6.1. Changes from OADP 1.4 to 1.5

The Velero server has been updated from version 1.14 to 1.16.

This changes the following:

Version Support changes
OpenShift API for Data Protection implements a streamlined version support policy. Red Hat supports only one version of OpenShift API for Data Protection (OADP) on one OpenShift version to ensure better stability and maintainability. OADP 1.5.0 is supported only on OpenShift 4.19.
OADP Self-Service

OADP 1.5.0 introduces a new feature named OADP Self-Service, enabling namespace admin users to back up and restore applications on the OpenShift Container Platform. In the earlier versions of OADP, you needed the cluster-admin role to perform OADP operations such as backing up and restoring an application, creating a backup storage location, and so on.

From OADP 1.5.0 onward, you do not need the cluster-admin role to perform the backup and restore operations. You can use OADP with the namespace admin role. The namespace admin role has administrator access only to the namespace the user is assigned to. You can use the Self-Service feature only after the cluster administrator installs the OADP Operator and provides the necessary permissions.

backupPVC and restorePVC configurations

A backupPVC resource is an intermediate persistent volume claim (PVC) to access data during the data movement backup operation. You create a readonly backup PVC by using the nodeAgent.backupPVC section of the DataProtectionApplication (DPA) custom resource.

A restorePVC resource is an intermediate PVC that is used to write data during the Data Mover restore operation.

You can configure restorePVC in the DPA by using the ignoreDelayBinding field.
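The two intermediate PVC configurations above can be sketched together in one DPA fragment. The storage class names are placeholders and the exact sub-fields should be verified against the OADP 1.5 API reference:

```yaml
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      backupPVC:                         # per storage class settings for the intermediate backup PVC
        source-storage-class:            # placeholder: storage class of the source PVCs
          readOnly: true                 # create the backup PVC as read-only
          storageClass: backup-storage-class   # placeholder intermediate storage class
      restorePVC:
        ignoreDelayBinding: true         # do not wait for WaitForFirstConsumer volume binding
```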

5.2.1.6.2. Backing up the DPA configuration

You must back up your current DataProtectionApplication (DPA) configuration.

Procedure

  • Save your current DPA configuration by running the following command:

    Example command

    $ oc get dpa -n openshift-adp -o yaml > dpa.orig.backup

5.2.1.6.3. Upgrading the OADP Operator

You can upgrade the OpenShift API for Data Protection (OADP) Operator using the following procedure.

Note

Do not install OADP 1.5.0 on an OpenShift 4.18 cluster.

Prerequisites

  • You have installed the latest OADP 1.4.6.
  • You have backed up your data.

Procedure

  1. Upgrade OpenShift 4.18 to OpenShift 4.19.

    Note

    OpenShift API for Data Protection (OADP) 1.4 is not supported on OpenShift 4.19.

  2. Change your subscription channel for the OADP Operator from stable-1.4 to stable.
  3. Wait for the Operator and containers to update and restart.
5.2.1.6.4. Converting DPA to the new version for OADP 1.5.0

The OpenShift API for Data Protection (OADP) 1.4 is not supported on OpenShift 4.19. You can convert the DataProtectionApplication (DPA) to the new OADP 1.5 version by using the new spec.configuration.nodeAgent field and its sub-fields.

Procedure

  1. To configure nodeAgent daemon set, use the spec.configuration.nodeAgent parameter in DPA. See the following example:

    Example DataProtectionApplication configuration

    ...
     spec:
       configuration:
         nodeAgent:
           enable: true
           uploaderType: kopia
    ...

  2. To configure the settings that were previously set through the node-agent-config config map, use the sub-fields of the spec.configuration.nodeAgent parameter, as shown in the following example configuration:

    Example DataProtectionApplication configuration

    ...
     spec:
       configuration:
         nodeAgent:
           backupPVC:
             ...
           loadConcurrency:
             ...
           podResources:
             ...
           restorePVC:
            ...
    ...
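As an illustration, the elided sub-fields above might be populated as follows; all names and values are examples, not recommendations:

```yaml
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      loadConcurrency:
        globalConfig: 2             # example: limit concurrent data movements per node
      podResources:
        cpuRequest: 500m            # example resource settings for node-agent pods
        memoryRequest: 512Mi
        cpuLimit: "1"
        memoryLimit: 1Gi
```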

5.2.1.6.5. Verifying the upgrade

You can verify the OpenShift API for Data Protection (OADP) upgrade by using the following procedure.

Procedure

  1. Verify that the DataProtectionApplication (DPA) has been reconciled successfully:

    $ oc get dpa dpa-sample -n openshift-adp

    Example output

    NAME            RECONCILED   AGE
    dpa-sample      True         2m51s

    Note

    The RECONCILED column must be True.

  2. Verify that the installation finished by viewing the OADP resources by running the following command:

    $ oc get all -n openshift-adp

    Example output

    NAME                                                    READY   STATUS    RESTARTS   AGE
    pod/node-agent-9pjz9                                    1/1     Running   0          3d17h
    pod/node-agent-fmn84                                    1/1     Running   0          3d17h
    pod/node-agent-xw2dg                                    1/1     Running   0          3d17h
    pod/openshift-adp-controller-manager-76b8bc8d7b-kgkcw   1/1     Running   0          3d17h
    pod/velero-64475b8c5b-nh2qc                             1/1     Running   0          3d17h
    
    NAME                                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    service/openshift-adp-controller-manager-metrics-service   ClusterIP   172.30.194.192   <none>        8443/TCP   3d17h
    service/openshift-adp-velero-metrics-svc                   ClusterIP   172.30.190.174   <none>        8085/TCP   3d17h
    
    NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/node-agent   3         3         3       3            3           <none>          3d17h
    
    NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/openshift-adp-controller-manager   1/1     1            1           3d17h
    deployment.apps/velero                             1/1     1            1           3d17h
    
    NAME                                                          DESIRED   CURRENT   READY   AGE
    replicaset.apps/openshift-adp-controller-manager-76b8bc8d7b   1         1         1       3d17h
    replicaset.apps/openshift-adp-controller-manager-85fff975b8   0         0         0       3d17h
    replicaset.apps/velero-64475b8c5b                             1         1         1       3d17h
    replicaset.apps/velero-8b5bc54fd                              0         0         0       3d17h
    replicaset.apps/velero-f5c9ffb66                              0         0         0       3d17h

    Note

The node-agent pods are created only when you use restic or kopia in the DataProtectionApplication (DPA). In OADP 1.4.0 and OADP 1.3.0, the node-agent pods are labeled as restic.

  3. Verify the backup storage location and confirm that the PHASE is Available by running the following command:

    $ oc get backupstoragelocations.velero.io -n openshift-adp

    Example output

    NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
    dpa-sample-1   Available   1s               3d16h   true

5.3. OADP performance

5.4. OADP features and plugins

OpenShift API for Data Protection (OADP) features provide options for backing up and restoring applications.

The default plugins enable Velero to integrate with certain cloud providers and to back up and restore OpenShift Container Platform resources.

5.4.1. OADP features

OpenShift API for Data Protection (OADP) supports the following features:

Backup

You can use OADP to back up all applications on OpenShift Container Platform, or you can filter the resources by type, namespace, or label.

OADP backs up Kubernetes objects and internal images by saving them as an archive file on object storage. OADP backs up persistent volumes (PVs) by creating snapshots with the native cloud snapshot API or with the Container Storage Interface (CSI). For cloud providers that do not support snapshots, OADP backs up resources and PV data with Restic.

Note

You must exclude Operators from the backup of an application for backup and restore to succeed.

Restore

You can restore resources and PVs from a backup. You can restore all objects in a backup or filter the objects by namespace, PV, or label.

Note

You must exclude Operators from the backup of an application for backup and restore to succeed.

Schedule
You can schedule backups at specified intervals.
Hooks
You can use hooks to run commands in a container on a pod, for example, fsfreeze to freeze a file system. You can configure a hook to run before or after a backup or restore. Restore hooks can run in an init container or in the application container.
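As an illustration of pre and post backup hooks, the following Backup CR sketch freezes a file system before the backup and unfreezes it afterward; the namespace, label, container name, and mount path are hypothetical:

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backup-with-hooks
  namespace: openshift-adp
spec:
  includedNamespaces:
  - my-app                                  # example application namespace
  hooks:
    resources:
    - name: fsfreeze-hook
      includedNamespaces:
      - my-app
      labelSelector:
        matchLabels:
          app: my-app                       # example label selecting the target pods
      pre:
      - exec:
          container: my-container           # example container name
          command: ["/sbin/fsfreeze", "--freeze", "/var/lib/data"]
          onError: Fail                     # abort the backup if the freeze fails
      post:
      - exec:
          container: my-container
          command: ["/sbin/fsfreeze", "--unfreeze", "/var/lib/data"]
```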

5.4.2. OADP plugins

The OpenShift API for Data Protection (OADP) provides default Velero plugins that are integrated with storage providers to support backup and snapshot operations. You can create custom plugins based on the Velero plugins.

OADP also provides plugins for OpenShift Container Platform resource backups, OpenShift Virtualization resource backups, and Container Storage Interface (CSI) snapshots.

Table 5.3. OADP plugins

OADP plugin   Function                                                            Storage location

aws           Backs up and restores Kubernetes objects.                           AWS S3
              Backs up and restores volumes with snapshots.                       AWS EBS

azure         Backs up and restores Kubernetes objects.                           Microsoft Azure Blob storage
              Backs up and restores volumes with snapshots.                       Microsoft Azure Managed Disks

gcp           Backs up and restores Kubernetes objects.                           Google Cloud Storage
              Backs up and restores volumes with snapshots.                       Google Compute Engine Disks

openshift     Backs up and restores OpenShift Container Platform resources. [1]   Object store

kubevirt      Backs up and restores OpenShift Virtualization resources. [2]       Object store

csi           Backs up and restores volumes with CSI snapshots. [3]               Cloud storage that supports CSI snapshots

hypershift    Backs up and restores HyperShift hosted cluster resources. [4]      Object store

  1. Mandatory.
  2. Virtual machine disks are backed up with CSI snapshots or Restic.
  3. The csi plugin uses the Kubernetes CSI snapshot API.

    • OADP 1.1 or later uses snapshot.storage.k8s.io/v1
    • OADP 1.0 uses snapshot.storage.k8s.io/v1beta1
  4. Do not add the hypershift plugin in the DataProtectionApplication custom resource if the cluster is not a HyperShift hosted cluster.

5.4.3. About OADP Velero plugins

You can configure two types of plugins when you install Velero:

  • Default cloud provider plugins
  • Custom plugins

Both types of plugin are optional, but most users configure at least one cloud provider plugin.

5.4.3.1. Default Velero cloud provider plugins

You can install any of the following default Velero cloud provider plugins when you configure the oadp_v1alpha1_dpa.yaml file during deployment:

  • aws (Amazon Web Services)
  • gcp (Google Cloud)
  • azure (Microsoft Azure)
  • openshift (OpenShift Velero plugin)
  • csi (Container Storage Interface)
  • kubevirt (KubeVirt)

You specify the desired default plugins in the oadp_v1alpha1_dpa.yaml file during deployment.

Example file

The following .yaml file installs the openshift, aws, azure, and gcp plugins:

 apiVersion: oadp.openshift.io/v1alpha1
 kind: DataProtectionApplication
 metadata:
   name: dpa-sample
 spec:
   configuration:
     velero:
       defaultPlugins:
       - openshift
       - aws
       - azure
       - gcp

5.4.3.2. Custom Velero plugins

You can install a custom Velero plugin by specifying the plugin image and name when you configure the oadp_v1alpha1_dpa.yaml file during deployment.

You specify the desired custom plugins in the oadp_v1alpha1_dpa.yaml file during deployment.

Example file

The following .yaml file installs the default openshift, azure, and gcp plugins and a custom plugin that has the name custom-plugin-example and the image quay.io/example-repo/custom-velero-plugin:

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
 name: dpa-sample
spec:
 configuration:
   velero:
     defaultPlugins:
     - openshift
     - azure
     - gcp
     customPlugins:
     - name: custom-plugin-example
       image: quay.io/example-repo/custom-velero-plugin

5.4.4. Supported architectures for OADP

OpenShift API for Data Protection (OADP) supports the following architectures:

  • AMD64
  • ARM64
  • PPC64le
  • s390x

Note

OADP 1.2.0 and later versions support the ARM64 architecture.

5.4.5. OADP support for IBM Power and IBM Z

OpenShift API for Data Protection (OADP) is platform neutral. The information that follows relates only to IBM Power® and to IBM Z®.

  • OADP 1.3.6 was tested successfully against OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15 for both IBM Power® and IBM Z®. The sections that follow give testing and support information for OADP 1.3.6 in terms of backup locations for these systems.
  • OADP 1.4.6 was tested successfully against OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17 for both IBM Power® and IBM Z®. The sections that follow give testing and support information for OADP 1.4.6 in terms of backup locations for these systems.
  • OADP 1.5.4 was tested successfully against OpenShift Container Platform 4.19 for both IBM Power® and IBM Z®. The sections that follow give testing and support information for OADP 1.5.4 in terms of backup locations for these systems.

5.4.5.1. OADP support for target backup locations using IBM Power

  • IBM Power® running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.3.6 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat also supports running IBM Power® with OpenShift Container Platform 4.13, 4.14, and 4.15, and OADP 1.3.6 against all non-AWS S3 backup location targets.
  • IBM Power® running with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and OADP 1.4.6 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat also supports running IBM Power® with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and OADP 1.4.6 against all non-AWS S3 backup location targets.
  • IBM Power® running with OpenShift Container Platform 4.19 and OADP 1.5.4 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat also supports running IBM Power® with OpenShift Container Platform 4.19 and OADP 1.5.4 against all non-AWS S3 backup location targets.

5.4.5.2. OADP testing and support for target backup locations using IBM Z

  • IBM Z® running with OpenShift Container Platform 4.12, 4.13, 4.14, and 4.15, and OADP 1.3.6 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat also supports running IBM Z® with OpenShift Container Platform 4.13, 4.14, and 4.15, and OADP 1.3.6 against all non-AWS S3 backup location targets.
  • IBM Z® running with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and OADP 1.4.6 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat also supports running IBM Z® with OpenShift Container Platform 4.14, 4.15, 4.16, and 4.17, and OADP 1.4.6 against all non-AWS S3 backup location targets.
  • IBM Z® running with OpenShift Container Platform 4.19 and OADP 1.5.4 was tested successfully against an AWS S3 backup location target. Although the test involved only an AWS S3 target, Red Hat also supports running IBM Z® with OpenShift Container Platform 4.19 and OADP 1.5.4 against all non-AWS S3 backup location targets.

5.4.5.2.1. Known issue of OADP using IBM Power® and IBM Z® platforms

  • There are backup method restrictions for Single-node OpenShift clusters deployed on IBM Power® and IBM Z® platforms. Only NFS storage is currently compatible with Single-node OpenShift clusters on these platforms, and only the File System Backup (FSB) methods, Kopia and Restic, are supported for backup and restore operations. There is currently no workaround for this issue.

5.4.6. OADP and FIPS

Federal Information Processing Standards (FIPS) are a set of computer security standards developed by the United States federal government in line with the Federal Information Security Management Act (FISMA).

OpenShift API for Data Protection (OADP) has been tested and works on FIPS-enabled OpenShift Container Platform clusters.

5.4.7. Avoiding the Velero plugin panic error

A missing secret can cause a panic error for the Velero plugin during image stream backups.

When the backup and the Backup Storage Location (BSL) are managed outside the scope of the Data Protection Application (DPA), the OADP controller does not create the relevant secret, oadp-<bsl_name>-<bsl_provider>-registry-secret.

During the backup operation, the OpenShift Velero plugin panics on the imagestream backup, with the following panic error:

2024-02-27T10:46:50.028951744Z time="2024-02-27T10:46:50Z" level=error msg="Error backing up item"
backup=openshift-adp/<backup name> error="error executing custom action (groupResource=imagestreams.image.openshift.io,
namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked:
runtime error: index out of range with length 1, stack trace: goroutine 94…

Use the following workaround to avoid the Velero plugin panic error.

Procedure

  1. Label the custom BSL with the relevant label by using the following command:

    $ oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl
  2. After the BSL is labeled, wait until the DPA reconciles.

    Note

    You can force the reconciliation by making any minor change to the DPA itself.

Verification

  • After the DPA is reconciled, confirm that the secret has been created and that the correct registry data has been populated into it by entering the following command:

    $ oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'

5.4.8. Workaround for OpenShift ADP Controller segmentation fault

If you configure a Data Protection Application (DPA) with both cloudstorage and restic enabled, the openshift-adp-controller-manager pod crashes and restarts indefinitely, eventually failing with a crash loop segmentation fault.

Define either velero or cloudstorage when you configure a DPA. Otherwise, the openshift-adp-controller-manager pod fails with a crash loop segmentation fault because of the following conditions:

  • If you define both velero and cloudstorage, the openshift-adp-controller-manager pod fails.
  • If you define neither velero nor cloudstorage, the openshift-adp-controller-manager pod fails.

For more information about this issue, see OADP-1054.

5.5. OADP use cases

5.5.1. OpenShift API for Data Protection (OADP) backup use case

The following is a use case for backing up an application by using OADP and ODF.

5.5.1.1. Backing up an application using OADP and ODF

In this use case, you back up an application by using OADP and store the backup in an object storage provided by Red Hat OpenShift Data Foundation (ODF).

  • You create an object bucket claim (OBC) to configure the backup storage location. You use ODF to configure an Amazon S3-compatible object storage bucket. ODF provides the Multicloud Object Gateway (NooBaa MCG) and the Ceph Object Gateway, also known as the RADOS Gateway (RGW), object storage services. In this use case, you use NooBaa MCG as the backup storage location.
  • You use the NooBaa MCG service with OADP by using the aws provider plugin.
  • You configure the Data Protection Application (DPA) with the backup storage location (BSL).
  • You create a backup custom resource (CR) and specify the application namespace to back up.
  • You create and verify the backup.

Prerequisites

  • You installed the OADP Operator.
  • You installed the ODF Operator.
  • You have an application with a database running in a separate namespace.

Procedure

  1. Create an OBC manifest file to request a NooBaa MCG bucket as shown in the following example:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: test-obc
      namespace: openshift-adp
    spec:
      storageClassName: openshift-storage.noobaa.io
      generateBucketName: test-backup-bucket

    where:

    test-obc
    Specifies the name of the object bucket claim.
    test-backup-bucket
    Specifies the name of the bucket.
  2. Create the OBC by running the following command:

    $ oc create -f <obc_file_name>

    where:

    <obc_file_name>
    Specifies the file name of the object bucket claim manifest.
  3. When you create an OBC, ODF creates a secret and a config map with the same name as the object bucket claim. The secret has the bucket credentials, and the config map has information to access the bucket. To get the bucket name and bucket host from the generated config map, run the following command:

    $ oc extract --to=- cm/test-obc

    test-obc is the name of the OBC.

    Example output

    # BUCKET_NAME
    backup-c20...41fd
    # BUCKET_PORT
    443
    # BUCKET_REGION
    
    # BUCKET_SUBREGION
    
    # BUCKET_HOST
    s3.openshift-storage.svc

  4. To get the bucket credentials from the generated secret, run the following command:

    $ oc extract --to=- secret/test-obc

    Example output

    # AWS_ACCESS_KEY_ID
    ebYR....xLNMc
    # AWS_SECRET_ACCESS_KEY
    YXf...+NaCkdyC3QPym

  5. Get the public URL for the S3 endpoint from the s3 route in the openshift-storage namespace by running the following command:

    $ oc get route s3 -n openshift-storage
  6. Create a cloud-credentials file with the object bucket credentials as shown in the following example:

    [default]
    aws_access_key_id=<AWS_ACCESS_KEY_ID>
    aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
  7. Create the cloud-credentials secret with the cloud-credentials file content as shown in the following command:

    $ oc create secret generic \
      cloud-credentials \
      -n openshift-adp \
      --from-file cloud=cloud-credentials
  8. Configure the Data Protection Application (DPA) as shown in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: oadp-backup
      namespace: openshift-adp
    spec:
      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
        velero:
          defaultPlugins:
            - aws
            - openshift
            - csi
          defaultSnapshotMoveData: true
      backupLocations:
        - velero:
            config:
              profile: "default"
              region: noobaa
              s3Url: https://s3.openshift-storage.svc
              s3ForcePathStyle: "true"
              insecureSkipTLSVerify: "true"
            provider: aws
            default: true
            credential:
              key: cloud
              name:  cloud-credentials
            objectStorage:
              bucket: <bucket_name>
              prefix: oadp

    where:

    defaultSnapshotMoveData
    Set this field to true to use the OADP Data Mover, which moves Container Storage Interface (CSI) snapshots to remote object storage.
    s3Url
    Specifies the S3 URL of ODF storage.
    <bucket_name>
    Specifies the bucket name.
  9. Create the DPA by running the following command:

    $ oc apply -f <dpa_filename>
  10. Verify that the DPA is created successfully by running the following command. In the example output, the status object has the type field set to Reconciled, which means that the DPA was created successfully.

    $ oc get dpa -o yaml

    Example output

    apiVersion: v1
    items:
    - apiVersion: oadp.openshift.io/v1alpha1
      kind: DataProtectionApplication
      metadata:
        namespace: openshift-adp
        #...#
      spec:
        backupLocations:
        - velero:
            config:
              #...#
      status:
        conditions:
        - lastTransitionTime: "20....9:54:02Z"
          message: Reconcile complete
          reason: Complete
          status: "True"
          type: Reconciled
    kind: List
    metadata:
      resourceVersion: ""

  11. Verify that the backup storage location (BSL) is available by running the following command:

    $ oc get backupstoragelocations.velero.io -n openshift-adp

    Example output

    NAME           PHASE       LAST VALIDATED   AGE   DEFAULT
    dpa-sample-1   Available   3s               15s   true

  12. Configure a backup CR as shown in the following example:

    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: test-backup
      namespace: openshift-adp
    spec:
      includedNamespaces:
      - <application_namespace>

    where:

    <application_namespace>
    Specifies the namespace for the application to back up.
  13. Create the backup CR by running the following command:

    $ oc apply -f <backup_cr_filename>

Verification

  • Verify that the backup object is in the Completed phase by running the following command. For more details, see the example output.

    $ oc describe backup test-backup -n openshift-adp

    Example output

    Name:         test-backup
    Namespace:    openshift-adp
    # ....#
    Status:
      Backup Item Operations Attempted:  1
      Backup Item Operations Completed:  1
      Completion Timestamp:              2024-09-25T10:17:01Z
      Expiration:                        2024-10-25T10:16:31Z
      Format Version:                    1.1.0
      Hook Status:
      Phase:  Completed
      Progress:
        Items Backed Up:  34
        Total Items:      34
      Start Timestamp:    2024-09-25T10:16:31Z
      Version:            1
    Events:               <none>

5.5.2. OpenShift API for Data Protection (OADP) restore use case

The following is a use case for using OADP to restore a backup to a different namespace.

5.5.2.1. Restoring an application to a different namespace using OADP

You restore a backup of an application by using OADP to a new target namespace, test-restore-application. To restore a backup, you create a restore custom resource (CR) as shown in the following example. In the restore CR, the source namespace refers to the application namespace that you included in the backup. You then verify the restore by changing your project to the new restored namespace and verifying the resources.

Prerequisites

  • You installed the OADP Operator.
  • You have the backup of an application to be restored.

Procedure

  1. Create a restore CR as shown in the following example:

    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: test-restore
      namespace: openshift-adp
    spec:
      backupName: <backup_name>
      restorePVs: true
      namespaceMapping:
        <application_namespace>: test-restore-application

    where:

    test-restore
    Specifies the name of the restore CR.
    <backup_name>
    Specifies the name of the backup.
    <application_namespace>
    Specifies the source application namespace that you included in the backup. namespaceMapping maps the source application namespace to the target application namespace. test-restore-application is the name of the target namespace where you want to restore the backup.
  2. Apply the restore CR by running the following command:

    $ oc apply -f <restore_cr_filename>

Verification

  1. Verify that the restore is in the Completed phase by running the following command:

    $ oc describe restores.velero.io <restore_name> -n openshift-adp
  2. Change to the restored namespace test-restore-application by running the following command:

    $ oc project test-restore-application
  3. Verify the restored resources such as persistent volume claim (pvc), service (svc), deployment, secret, and config map by running the following command:

    $ oc get pvc,svc,deployment,secret,configmap

    Example output

    NAME                          STATUS   VOLUME
    persistentvolumeclaim/mysql   Bound    pvc-9b3583db-...-14b86
    
    NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    service/mysql      ClusterIP   172....157     <none>        3306/TCP   2m56s
    service/todolist   ClusterIP   172.....15     <none>        8000/TCP   2m56s
    
    NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/mysql   0/1     1            0           2m55s
    
    NAME                                         TYPE                      DATA   AGE
    secret/builder-dockercfg-6bfmd               kubernetes.io/dockercfg   1      2m57s
    secret/default-dockercfg-hz9kz               kubernetes.io/dockercfg   1      2m57s
    secret/deployer-dockercfg-86cvd              kubernetes.io/dockercfg   1      2m57s
    secret/mysql-persistent-sa-dockercfg-rgp9b   kubernetes.io/dockercfg   1      2m57s
    
    NAME                                 DATA   AGE
    configmap/kube-root-ca.crt           1      2m57s
    configmap/openshift-service-ca.crt   1      2m57s

5.5.3. Including a self-signed CA certificate during backup

You can include a self-signed Certificate Authority (CA) certificate in the Data Protection Application (DPA) and then back up an application. You store the backup in a NooBaa bucket provided by Red Hat OpenShift Data Foundation (ODF).

5.5.3.1. Backing up an application and its self-signed CA certificate

The s3.openshift-storage.svc service, provided by ODF, uses a Transport Layer Security protocol (TLS) certificate that is signed with the self-signed service CA.

To prevent a certificate signed by unknown authority error, you must include the self-signed CA certificate in the backup storage location (BSL) section of the DataProtectionApplication custom resource (CR). In this situation, you must complete the following tasks:

  • Request a NooBaa bucket by creating an object bucket claim (OBC).
  • Extract the bucket details.
  • Include a self-signed CA certificate in the DataProtectionApplication CR.
  • Back up an application.

Prerequisites

  • You installed the OADP Operator.
  • You installed the ODF Operator.
  • You have an application with a database running in a separate namespace.

Procedure

  1. Create an OBC manifest to request a NooBaa bucket as shown in the following example:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: test-obc
      namespace: openshift-adp
    spec:
      storageClassName: openshift-storage.noobaa.io
      generateBucketName: test-backup-bucket

    where:

    test-obc
    Specifies the name of the object bucket claim.
    test-backup-bucket
    Specifies the name of the bucket.
  2. Create the OBC by running the following command:

    $ oc create -f <obc_file_name>
  3. When you create an OBC, ODF creates a secret and a ConfigMap with the same name as the object bucket claim. The secret object contains the bucket credentials, and the ConfigMap object contains information to access the bucket. To get the bucket name and bucket host from the generated config map, run the following command:

    $ oc extract --to=- cm/test-obc

    test-obc is the name of the OBC.

    Example output

    # BUCKET_NAME
    backup-c20...41fd
    # BUCKET_PORT
    443
    # BUCKET_REGION
    
    # BUCKET_SUBREGION
    
    # BUCKET_HOST
    s3.openshift-storage.svc

  4. To get the bucket credentials from the secret object, run the following command:

    $ oc extract --to=- secret/test-obc

    Example output

    # AWS_ACCESS_KEY_ID
    ebYR....xLNMc
    # AWS_SECRET_ACCESS_KEY
    YXf...+NaCkdyC3QPym

  5. Create a cloud-credentials file with the object bucket credentials by using the following example configuration:

    [default]
    aws_access_key_id=<AWS_ACCESS_KEY_ID>
    aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
  6. Create the cloud-credentials secret with the cloud-credentials file content by running the following command:

    $ oc create secret generic \
      cloud-credentials \
      -n openshift-adp \
      --from-file cloud=cloud-credentials
  7. Extract the service CA certificate from the openshift-service-ca.crt config map by running the following command. The command encodes the certificate in Base64 format. Note the value to use in the next step.

    $ oc get cm/openshift-service-ca.crt \
      -o jsonpath='{.data.service-ca\.crt}' | base64 -w0; echo

    Example output

    LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0...
    ....gpwOHMwaG9CRmk5a3....FLS0tLS0K

  8. Configure the DataProtectionApplication CR manifest file with the bucket name and CA certificate as shown in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: oadp-backup
      namespace: openshift-adp
    spec:
      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
        velero:
          defaultPlugins:
            - aws
            - openshift
            - csi
          defaultSnapshotMoveData: true
      backupLocations:
        - velero:
            config:
              profile: "default"
              region: noobaa
              s3Url: https://s3.openshift-storage.svc
              s3ForcePathStyle: "true"
              insecureSkipTLSVerify: "false"
            provider: aws
            default: true
            credential:
              key: cloud
              name:  cloud-credentials
            objectStorage:
              bucket: <bucket_name>
              prefix: oadp
              caCert: <ca_cert>

    where:

    insecureSkipTLSVerify
    Specifies whether to skip TLS certificate verification. If set to true, certificate verification is disabled. Set this field to false so that the connection is verified by using the CA certificate that you provide in caCert.
    <bucket_name>
    Specifies the name of the bucket extracted in an earlier step.
    <ca_cert>
    Specifies the Base64 encoded certificate from the previous step.
  9. Create the DataProtectionApplication CR by running the following command:

    $ oc apply -f <dpa_filename>
  10. Verify that the DataProtectionApplication CR is created successfully by running the following command:

    $ oc get dpa -o yaml

    Example output

    apiVersion: v1
    items:
    - apiVersion: oadp.openshift.io/v1alpha1
      kind: DataProtectionApplication
      metadata:
        namespace: openshift-adp
        #...#
      spec:
        backupLocations:
        - velero:
            config:
              #...#
      status:
        conditions:
        - lastTransitionTime: "20....9:54:02Z"
          message: Reconcile complete
          reason: Complete
          status: "True"
          type: Reconciled
    kind: List
    metadata:
      resourceVersion: ""

  11. Verify that the backup storage location (BSL) is available by running the following command:

    $ oc get backupstoragelocations.velero.io -n openshift-adp

    Example output

    NAME           PHASE       LAST VALIDATED   AGE   DEFAULT
    dpa-sample-1   Available   3s               15s   true

  12. Configure the Backup CR by using the following example:

    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: test-backup
      namespace: openshift-adp
    spec:
      includedNamespaces:
      - <application_namespace>

    where:

    <application_namespace>
    Specifies the namespace for the application to back up.
  13. Create the Backup CR by running the following command:

    $ oc apply -f <backup_cr_filename>

Verification

  • Verify that the Backup object is in the Completed phase by running the following command:

    $ oc describe backup test-backup -n openshift-adp

    Example output

    Name:         test-backup
    Namespace:    openshift-adp
    # ....#
    Status:
      Backup Item Operations Attempted:  1
      Backup Item Operations Completed:  1
      Completion Timestamp:              2024-09-25T10:17:01Z
      Expiration:                        2024-10-25T10:16:31Z
      Format Version:                    1.1.0
      Hook Status:
      Phase:  Completed
      Progress:
        Items Backed Up:  34
        Total Items:      34
      Start Timestamp:    2024-09-25T10:16:31Z
      Version:            1
    Events:               <none>

5.5.4. Using the legacy-aws Velero plugin

If you are using an AWS S3-compatible backup storage location, you might get a SignatureDoesNotMatch error while backing up your application. This error occurs because some backup storage locations still use the older versions of the S3 APIs, which are incompatible with the newer AWS SDK for Go V2. To resolve this issue, you can use the legacy-aws Velero plugin in the DataProtectionApplication custom resource (CR). The legacy-aws Velero plugin uses the older AWS SDK for Go V1, which is compatible with the legacy S3 APIs, ensuring successful backups.

In the following use case, you configure the DataProtectionApplication CR with the legacy-aws Velero plugin and then back up an application.

Note

Depending on the backup storage location you choose, you can use either the legacy-aws or the aws plugin in your DataProtectionApplication CR. If you use both of the plugins in the DataProtectionApplication CR, the following error occurs: aws and legacy-aws can not be both specified in DPA spec.configuration.velero.defaultPlugins.

Prerequisites

  • You have installed the OADP Operator.
  • You have configured an AWS S3-compatible object storage as a backup location.
  • You have an application with a database running in a separate namespace.

Procedure

  1. Configure the DataProtectionApplication CR to use the legacy-aws Velero plugin as shown in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: oadp-backup
      namespace: openshift-adp
    spec:
      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
        velero:
          defaultPlugins:
            - legacy-aws
            - openshift
            - csi
          defaultSnapshotMoveData: true
      backupLocations:
        - velero:
            config:
              profile: "default"
              region: noobaa
              s3Url: https://s3.openshift-storage.svc
              s3ForcePathStyle: "true"
              insecureSkipTLSVerify: "true"
            provider: aws
            default: true
            credential:
              key: cloud
              name:  cloud-credentials
            objectStorage:
              bucket: <bucket_name>
              prefix: oadp

    where:

    legacy-aws
    Specifies to use the legacy-aws plugin.
    <bucket_name>
    Specifies the bucket name.
  2. Create the DataProtectionApplication CR by running the following command:

    $ oc apply -f <dpa_filename>
  3. Verify that the DataProtectionApplication CR is created successfully by running the following command. In the example output, you can see the status object has the type field set to Reconciled and the status field set to "True". That status indicates that the DataProtectionApplication CR is successfully created.

    $ oc get dpa -o yaml

    Example output

    apiVersion: v1
    items:
    - apiVersion: oadp.openshift.io/v1alpha1
      kind: DataProtectionApplication
      metadata:
        namespace: openshift-adp
        #...#
      spec:
        backupLocations:
        - velero:
            config:
              #...#
      status:
        conditions:
        - lastTransitionTime: "20....9:54:02Z"
          message: Reconcile complete
          reason: Complete
          status: "True"
          type: Reconciled
    kind: List
    metadata:
      resourceVersion: ""

  4. Verify that the backup storage location (BSL) is available by running the following command:

    $ oc get backupstoragelocations.velero.io -n openshift-adp

    You should see an output similar to the following example:

    NAME           PHASE       LAST VALIDATED   AGE   DEFAULT
    dpa-sample-1   Available   3s               15s   true
  5. Configure a Backup CR as shown in the following example:

    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: test-backup
      namespace: openshift-adp
    spec:
      includedNamespaces:
      - <application_namespace>

    where:

    <application_namespace>
    Specifies the namespace for the application to back up.
  6. Create the Backup CR by running the following command:

    $ oc apply -f <backup_cr_filename>

Verification

  • Verify that the backup object is in the Completed phase by running the following command. For more details, see the example output.

    $ oc describe backups.velero.io test-backup -n openshift-adp

    Example output

    Name:         test-backup
    Namespace:    openshift-adp
    # ....#
    Status:
      Backup Item Operations Attempted:  1
      Backup Item Operations Completed:  1
      Completion Timestamp:              2024-09-25T10:17:01Z
      Expiration:                        2024-10-25T10:16:31Z
      Format Version:                    1.1.0
      Hook Status:
      Phase:  Completed
      Progress:
        Items Backed Up:  34
        Total Items:      34
      Start Timestamp:    2024-09-25T10:16:31Z
      Version:            1
    Events:               <none>

5.5.5. Backing up workloads on OADP with OpenShift Container Platform

To back up and restore workloads on ROSA, you can use OADP. You can create a backup of a workload, restore it from the backup, and verify the restoration. You can also clean up the OADP Operator, backup storage, and AWS resources when they are no longer needed.

5.5.5.1. Performing a backup with OADP and OpenShift Container Platform

The following example hello-world application has no persistent volumes (PVs) attached. Perform a backup by using OpenShift API for Data Protection (OADP) with OpenShift Container Platform.

Either of the Data Protection Application (DPA) configurations described earlier will work.

Procedure

  1. Create a workload to back up by running the following commands:

    $ oc create namespace hello-world
    $ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
  2. Expose the route by running the following command:

    $ oc expose service/hello-openshift -n hello-world
  3. Check that the application is working by running the following command:

    $ curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`

    You should see an output similar to the following example:

    Hello OpenShift!
  4. Back up the workload by running the following command:

    $ cat << EOF | oc create -f -
      apiVersion: velero.io/v1
      kind: Backup
      metadata:
        name: hello-world
        namespace: openshift-adp
      spec:
        includedNamespaces:
        - hello-world
        storageLocation: ${CLUSTER_NAME}-dpa-1
        ttl: 720h0m0s
    EOF
  5. Wait until the backup is complete, and then run the following command:

    $ watch "oc -n openshift-adp get backup hello-world -o json | jq .status"

    You should see an output similar to the following example:

    {
      "completionTimestamp": "2022-09-07T22:20:44Z",
      "expiration": "2022-10-07T22:20:22Z",
      "formatVersion": "1.1.0",
      "phase": "Completed",
      "progress": {
        "itemsBackedUp": 58,
        "totalItems": 58
      },
      "startTimestamp": "2022-09-07T22:20:22Z",
      "version": 1
    }
  6. Delete the demo workload by running the following command:

    $ oc delete ns hello-world
  7. Restore the workload from the backup by running the following command:

    $ cat << EOF | oc create -f -
      apiVersion: velero.io/v1
      kind: Restore
      metadata:
        name: hello-world
        namespace: openshift-adp
      spec:
        backupName: hello-world
    EOF
  8. Wait for the Restore to finish by running the following command:

    $ watch "oc -n openshift-adp get restore hello-world -o json | jq .status"

    You should see an output similar to the following example:

    {
      "completionTimestamp": "2022-09-07T22:25:47Z",
      "phase": "Completed",
      "progress": {
        "itemsRestored": 38,
        "totalItems": 38
      },
      "startTimestamp": "2022-09-07T22:25:28Z",
      "warnings": 9
    }
  9. Check that the workload is restored by running the following command:

    $ oc -n hello-world get pods

    You should see an output similar to the following example:

    NAME                              READY   STATUS    RESTARTS   AGE
    hello-openshift-9f885f7c6-kdjpj   1/1     Running   0          90s
  10. Check that the restored application responds on its route by running the following command:

    $ curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`

    You should see an output similar to the following example:

    Hello OpenShift!
Note

For troubleshooting tips, see the troubleshooting documentation.

5.5.5.2. Cleaning up a cluster after a backup with OADP and ROSA STS

If you need to uninstall the OpenShift API for Data Protection (OADP) Operator together with the backups and the S3 bucket from this example, follow these instructions.

Procedure

  1. Delete the workload by running the following command:

    $ oc delete ns hello-world
  2. Delete the Data Protection Application (DPA) by running the following command:

    $ oc -n openshift-adp delete dpa ${CLUSTER_NAME}-dpa
  3. Delete the cloud storage by running the following command:

    $ oc -n openshift-adp delete cloudstorage ${CLUSTER_NAME}-oadp
    Warning

    If this command hangs, you might need to delete the finalizer by running the following command:

    $ oc -n openshift-adp patch cloudstorage ${CLUSTER_NAME}-oadp -p '{"metadata":{"finalizers":null}}' --type=merge
  4. If the Operator is no longer required, remove it by running the following command:

    $ oc -n openshift-adp delete subscription oadp-operator
  5. Remove the openshift-adp namespace by running the following command:

    $ oc delete ns openshift-adp
  6. If the backup and restore resources are no longer required, remove them from the cluster by running the following command:

    $ oc delete backups.velero.io hello-world
  7. To delete the backup, restore, and remote objects in AWS S3, run the following command:

    $ velero backup delete hello-world
  8. If you no longer need the Custom Resource Definitions (CRD), remove them from the cluster by running the following command:

    $ for CRD in `oc get crds | grep velero | awk '{print $1}'`; do oc delete crd $CRD; done
  9. Delete the AWS S3 bucket by running the following commands:

    $ aws s3 rm s3://${CLUSTER_NAME}-oadp --recursive
    $ aws s3api delete-bucket --bucket ${CLUSTER_NAME}-oadp
  10. Detach the policy from the role by running the following command:

    $ aws iam detach-role-policy --role-name "${ROLE_NAME}"  --policy-arn "${POLICY_ARN}"
  11. Delete the role by running the following command:

    $ aws iam delete-role --role-name "${ROLE_NAME}"

5.6. Installing OADP

5.6.1. About installing OADP

As a cluster administrator, you install the OpenShift API for Data Protection (OADP) by installing the OADP Operator. The OADP Operator installs Velero 1.16.

Note

Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.

To back up Kubernetes resources and internal images, you must have object storage as a backup location, such as one of the following storage types:

You can configure multiple backup storage locations within the same namespace for each individual OADP deployment.
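As a sketch of the multiple backup storage location support described above, a single DataProtectionApplication CR can define more than one entry under backupLocations. The location names, bucket names, and the second credential Secret in this example are hypothetical:

```yaml
spec:
  backupLocations:
    - name: primary
      velero:
        provider: aws
        default: true                  # only one location can be the default
        credential:
          key: cloud
          name: cloud-credentials
        objectStorage:
          bucket: <bucket_name_1>
          prefix: oadp
    - name: secondary
      velero:
        provider: aws
        credential:
          key: cloud
          name: cloud-credentials-2    # hypothetical second credential Secret
        objectStorage:
          bucket: <bucket_name_2>
          prefix: oadp
```

A Backup CR can then select a non-default location through its spec.storageLocation field.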

Note

Unless specified otherwise, "NooBaa" refers to the open source project that provides lightweight object storage, while "Multicloud Object Gateway (MCG)" refers to the Red Hat distribution of NooBaa.

For more information on the MCG, see Accessing the Multicloud Object Gateway with your applications.

You can back up persistent volumes (PVs) by using snapshots or a File System Backup (FSB).

To back up PVs with snapshots, you must have a cloud provider that supports either a native snapshot API or Container Storage Interface (CSI) snapshots, such as one of the following cloud providers:

Note

If you want to use CSI backup on OCP 4.11 and later, install OADP 1.1.x.

OADP 1.0.x does not support CSI backup on OCP 4.11 and later. OADP 1.0.x includes Velero 1.7.x and expects the API group snapshot.storage.k8s.io/v1beta1, which is not present on OCP 4.11 and later.

If your cloud provider does not support snapshots, or if your storage is NFS, you can back up applications on object storage by using File System Backup (FSB) with Kopia or Restic.

You create a default Secret and then you install the Data Protection Application.

5.6.1.1. AWS S3 compatible backup storage providers

OADP works with many S3-compatible object storage providers. Several object storage providers are certified and tested with every release of OADP. Various S3 providers are known to work with OADP but are not specifically tested and certified. These providers will be supported on a best-effort basis. Additionally, there are a few S3 object storage providers with known issues and limitations that are listed in this documentation.

Note

Red Hat will provide support for OADP on any S3-compatible storage, but support will stop if the S3 endpoint is determined to be the root cause of an issue.

5.6.1.1.1. Certified backup storage providers

The following AWS S3 compatible object storage providers are fully supported by OADP through the AWS plugin for use as backup storage locations:

  • MinIO
  • Multicloud Object Gateway (MCG)
  • Amazon Web Services (AWS) S3
  • IBM Cloud® Object Storage S3
  • Ceph RADOS Gateway (Ceph Object Gateway)
  • Red Hat Container Storage
  • Red Hat OpenShift Data Foundation
  • NetApp ONTAP S3 Object Storage
  • Scality ARTESCA S3 object storage
Note

The following compatible object storage providers are supported and have their own Velero object store plugins:

  • Google Cloud
  • Microsoft Azure
5.6.1.1.2. Unsupported backup storage providers

The following AWS S3 compatible object storage providers are known to work with Velero through the AWS plugin for use as backup storage locations. However, they are unsupported and have not been tested by Red Hat:

  • Oracle Cloud
  • DigitalOcean
  • NooBaa, unless installed using Multicloud Object Gateway (MCG)
  • Tencent Cloud
  • Ceph RADOS v12.2.7
  • Quobyte
  • Cloudian HyperStore

5.6.1.1.3. Backup storage providers with known limitations

The following AWS S3 compatible object storage providers are known to work with Velero through the AWS plugin with a limited feature set:

  • Swift - Works as a backup storage location, but is not compatible with Restic for file system-based volume backup and restore.

If you use cluster storage for your MCG bucket backupStorageLocation on OpenShift Data Foundation, configure MCG as an external object store.

Warning

Failure to configure MCG as an external object store might lead to backups not being available.



5.6.1.3. About OADP update channels

When you install an OADP Operator, you choose an update channel. This channel determines which upgrades to the OADP Operator and to Velero you receive. You can switch channels at any time.

The following update channels are available:

  • The stable channel is now deprecated. The stable channel contains the patches (z-stream updates) of OADP ClusterServiceVersion for OADP.v1.1.z and older versions from OADP.v1.0.z.
  • The stable-1.0 channel is deprecated and is not supported.
  • The stable-1.1 channel is deprecated and is not supported.
  • The stable-1.2 channel is deprecated and is not supported.
  • The stable-1.3 channel contains OADP.v1.3.z, the most recent OADP 1.3 ClusterServiceVersion.
  • The stable-1.4 channel contains OADP.v1.4.z, the most recent OADP 1.4 ClusterServiceVersion.

For more information, see OpenShift Operator Life Cycles.

Which update channel is right for you?

  • The stable channel is now deprecated. If you are already using the stable channel, you will continue to get updates from OADP.v1.1.z.
  • Choose the stable-1.y update channel to install OADP 1.y and to continue receiving patches for it. If you choose this channel, you will receive all z-stream patches for version 1.y.z.

When must you switch update channels?

  • If you have OADP 1.y installed, and you want to receive patches only for that y-stream, you must switch from the stable update channel to the stable-1.y update channel. You will then receive all z-stream patches for version 1.y.z.
  • If you have OADP 1.0 installed, want to upgrade to OADP 1.1, and then receive patches only for OADP 1.1, you must switch from the stable-1.0 update channel to the stable-1.1 update channel. You will then receive all z-stream patches for version 1.1.z.
  • If you have OADP 1.y installed, with y greater than 0, and want to switch to OADP 1.0, you must uninstall your OADP Operator and then reinstall it using the stable-1.0 update channel. You will then receive all z-stream patches for version 1.0.z.
Note

You cannot switch from OADP 1.y to OADP 1.0 by switching update channels. You must uninstall the Operator and then reinstall it.
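Switching an installed Operator to a different update channel is a patch to its Subscription. The following sketch only builds and prints the merge patch; the actual oc command is shown as a comment because it requires a live cluster, and the Subscription name oadp-operator is an assumption (verify yours with `oc -n openshift-adp get subscription`):

```shell
# Build the merge patch that moves the Subscription to the stable-1.4 channel.
CHANNEL="stable-1.4"
PATCH="{\"spec\":{\"channel\":\"${CHANNEL}\"}}"
echo "${PATCH}"
# On a live cluster, apply it with (Subscription name is an assumption):
# oc -n openshift-adp patch subscription oadp-operator --type merge -p "${PATCH}"
```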

5.6.1.4. Installation of OADP on multiple namespaces

You can install OpenShift API for Data Protection into multiple namespaces on the same cluster so that multiple project owners can manage their own OADP instance. This use case has been validated with File System Backup (FSB) and Container Storage Interface (CSI).

You install each instance of OADP as specified by the per-platform procedures contained in this document with the following additional requirements:

  • All deployments of OADP on the same cluster must be the same version, for example, 1.4.0. Installing different versions of OADP on the same cluster is not supported.
  • Each individual deployment of OADP must have a unique set of credentials and at least one BackupStorageLocation configuration. You can also use multiple BackupStorageLocation configurations within the same namespace.
  • By default, each OADP deployment has cluster-level access across namespaces. OpenShift Container Platform administrators need to carefully review potential impacts, such as not backing up and restoring to and from the same namespace concurrently.
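One quick way to check the same-version requirement above is to compare the installed ClusterServiceVersion names across namespaces. This sketch runs the comparison against sample output captured in a heredoc (the namespaces and version are hypothetical); on a live cluster you would feed it the output of `oc get csv --all-namespaces | grep oadp` instead:

```shell
# Sample `oc get csv --all-namespaces | grep oadp` output (hypothetical namespaces):
cat > /tmp/oadp-csv.txt <<'EOF'
team-a   oadp-operator.v1.4.0   OADP Operator   1.4.0   Succeeded
team-b   oadp-operator.v1.4.0   OADP Operator   1.4.0   Succeeded
EOF
# Count distinct CSV names; 1 means every namespace runs the same OADP version.
awk '{print $2}' /tmp/oadp-csv.txt | sort -u | wc -l
```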

5.6.1.5. OADP support for backup data immutability

Starting with OADP 1.4, you can store OADP backups in an AWS S3 bucket with enabled versioning. The versioning support is only for AWS S3 buckets and not for S3-compatible buckets.

See the following list for specific cloud provider limitations:

  • AWS S3 service supports backups because an S3 object lock applies only to versioned buckets. You can still update the object data for the new version. However, when backups are deleted, old versions of the objects are not deleted.
  • OADP backups are not supported and might not work as expected when you enable immutability on Azure Storage Blob.
  • Google Cloud storage policy only supports bucket-level immutability. Therefore, it is not feasible to implement it in the Google Cloud environment.

Depending on your storage provider, the immutability options are called differently:

  • S3 object lock
  • Object retention
  • Bucket versioning
  • Write Once Read Many (WORM) buckets
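On AWS, bucket versioning is enabled with a single s3api call. The sketch below only echoes the command, because running it requires AWS credentials and an existing bucket; the bucket name is a placeholder. Drop the echo to run it for real:

```shell
# Enable versioning on the backup bucket (bucket name is a placeholder).
BUCKET="my-oadp-bucket"
echo aws s3api put-bucket-versioning \
  --bucket "${BUCKET}" \
  --versioning-configuration Status=Enabled
```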

The primary reason for the absence of support for other S3-compatible object storage is that OADP initially saves the state of a backup as finalizing and then verifies whether any asynchronous operations are in progress.

5.6.1.6. Velero CPU and memory requirements based on collected data

The following recommendations are based on observations of performance made in the scale and performance lab. The backup and restore resources can be impacted by the type of plugin, the amount of resources required by that backup or restore, and the respective data contained in the persistent volumes (PVs) related to those resources.

5.6.1.6.1. CPU and memory requirement for configurations
Configuration types    Average usage [1]                         Large usage [2]                           resourceTimeouts

CSI                    Velero:                                   Velero:                                   N/A
                       CPU - Request 200m, Limits 1000m          CPU - Request 200m, Limits 2000m
                       Memory - Request 256Mi, Limits 1024Mi     Memory - Request 256Mi, Limits 2048Mi

Restic                 Restic: [3]                               Restic: [4]                               900m
                       CPU - Request 1000m, Limits 2000m         CPU - Request 2000m, Limits 8000m
                       Memory - Request 16Gi, Limits 32Gi        Memory - Request 16Gi, Limits 40Gi

Data Mover [5]         N/A                                       N/A                                       10m - average usage
                                                                                                          60m - large usage
  1. Average usage - use these settings for most usage situations.
  2. Large usage - use these settings for large usage situations, such as a large PV (500GB Usage), multiple namespaces (100+), or many pods within a single namespace (2000 pods+), and for optimal performance for backup and restore involving large datasets.
  3. Restic resource usage corresponds to the amount of data, and type of data. For example, many small files or large amounts of data can cause Restic to use large amounts of resources. The Velero documentation references 500m as a supplied default, for most of our testing we found a 200m request suitable with 1000m limit. As cited in the Velero documentation, exact CPU and memory usage is dependent on the scale of files and directories, in addition to environmental limitations.
  4. Increasing the CPU has a significant impact on improving backup and restore times.
  5. Data Mover - Data Mover default resourceTimeout is 10m. Our tests show that for restoring a large PV (500GB usage), it is required to increase the resourceTimeout to 60m.
Note

The resource requirements listed throughout the guide are for average usage only. For large usage, adjust the settings as described in the table above.

5.6.1.6.2. NodeAgent CPU for large usage

Testing shows that increasing NodeAgent CPU can significantly improve backup and restore times when using OpenShift API for Data Protection (OADP).

Important

You can tune your OpenShift Container Platform environment based on your performance analysis and preference. Avoid setting CPU limits on the workloads when you use Kopia for file system backups.

If you do not use CPU limits on the pods, the pods can use excess CPU when it is available. If you specify CPU limits, the pods might be throttled if they exceed their limits. Therefore, the use of CPU limits on the pods is considered an anti-pattern.

Ensure that you are accurately specifying CPU requests so that pods can take advantage of excess CPU. Resource allocation is guaranteed based on CPU requests rather than CPU limits.

Testing showed that running Kopia with 20 cores and 32 Gi memory supported backup and restore operations of over 100 GB of data, multiple namespaces, or over 2000 pods in a single namespace. Testing detected no CPU limiting or memory saturation with these resource specifications.

In some environments, you might need to adjust Ceph MDS pod resources to avoid pod restarts, which occur when default settings cause resource saturation.

For more information about how to set the pod resources limit in Ceph MDS pods, see Changing the CPU and memory resources on the rook-ceph pods.
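Translating the guidance above into a DataProtectionApplication fragment, node agent resources are set through the podConfig field. This is a sketch, not a mandated configuration: it uses the values from the testing described above (20 cores, 32 Gi memory) and deliberately sets requests only, with no CPU limits:

```yaml
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      podConfig:
        resourceAllocations:
          requests:
            cpu: "20"      # requests only; no CPU limits, as described above
            memory: 32Gi
```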

5.6.2. Installing the OADP Operator

Install the OpenShift API for Data Protection (OADP) Operator on OpenShift Container Platform 4.19 by using Operator Lifecycle Manager (OLM).

The OADP Operator installs Velero 1.16.

5.6.2.1. Installing the OADP Operator

Install the OADP Operator by using the OpenShift Container Platform web console.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.

Procedure

    1. In the OpenShift Container Platform web console, click Operators → OperatorHub.
    2. Use the Filter by keyword field to find the OADP Operator.
    3. Select the OADP Operator and click Install.
    4. Click Install to install the Operator in the openshift-adp project.
    5. Click Operators → Installed Operators to verify the installation.

5.6.2.2. OADP-Velero-OpenShift Container Platform version relationship

Review the version relationship between OADP, Velero, and OpenShift Container Platform to decide compatible version combinations. This helps you select the appropriate OADP version for your cluster environment.

OADP version    Velero version    OpenShift Container Platform version

1.3.0           1.12              4.12-4.15
1.3.1           1.12              4.12-4.15
1.3.2           1.12              4.12-4.15
1.3.3           1.12              4.12-4.15
1.3.4           1.12              4.12-4.15
1.3.5           1.12              4.12-4.15
1.4.0           1.14              4.14-4.18
1.4.1           1.14              4.14-4.18
1.4.2           1.14              4.14-4.18
1.4.3           1.14              4.14-4.18
1.5.0           1.16              4.19

5.7. Configuring OADP with AWS S3 compatible storage

You install the OpenShift API for Data Protection (OADP) with Amazon Web Services (AWS) S3 compatible storage by installing the OADP Operator. The Operator installs Velero 1.16.

Note

Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.

You configure AWS for Velero, create a default Secret, and then install the Data Protection Application. For more details, see Installing the OADP Operator.

To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager in disconnected environments for details.

Review Amazon Simple Storage Service (S3), Identity and Access Management (IAM), and AWS GovCloud requirements to configure backup storage with appropriate security controls. This helps you meet federal data security requirements and use correct endpoints.

AWS S3 is a storage solution of Amazon for the internet. As an authorized user, you can use this service to store and retrieve any amount of data whenever you want, from anywhere on the web.

You securely control access to Amazon S3 and other Amazon services by using the AWS Identity and Access Management (IAM) web service.

You can use IAM to manage permissions that control which AWS resources users can access. You use IAM to both authenticate, or verify that a user is who they claim to be, and to authorize, or grant permissions to use resources.

AWS GovCloud (US) is an Amazon storage solution developed to meet the stringent and specific data security requirements of the United States Federal Government. AWS GovCloud (US) works the same as Amazon S3 except for the following:

  • You cannot copy the contents of an Amazon S3 bucket in the AWS GovCloud (US) regions directly to or from another AWS region.
  • If you use Amazon S3 policies, use the AWS GovCloud (US) Amazon Resource Name (ARN) identifier to unambiguously specify a resource across all of AWS, such as in IAM policies, Amazon S3 bucket names, and API calls.

    • In AWS GovCloud (US) regions, ARNs have an identifier that is different from the one in other standard AWS regions: arn:aws-us-gov. If you need to specify the US-West or US-East region, use one of the following ARNs:

      • For US-West, use us-gov-west-1.
      • For US-East, use us-gov-east-1.
    • For all other standard regions, ARNs begin with: arn:aws.
  • In AWS GovCloud (US) regions, use the endpoints listed in the AWS GovCloud (US-East) and AWS GovCloud (US-West) rows of the "Amazon S3 endpoints" table on Amazon Simple Storage Service endpoints and quotas. If you are processing export-controlled data, use one of the SSL/TLS endpoints. If you have FIPS requirements, use a FIPS 140-2 endpoint such as https://s3-fips.us-gov-west-1.amazonaws.com or https://s3-fips.us-gov-east-1.amazonaws.com.
  • To find the other AWS-imposed restrictions, see How Amazon Simple Storage Service Differs for AWS GovCloud (US).
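In a DataProtectionApplication backup location (configured later in this chapter), a FIPS endpoint can be supplied through the s3Url field. The following is a sketch only; the region and endpoint are the US-West values listed above:

```yaml
backupLocations:
  - velero:
      provider: aws
      config:
        region: us-gov-west-1
        s3Url: https://s3-fips.us-gov-west-1.amazonaws.com
```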

5.7.1.2. Configuring Amazon Web Services

Configure Amazon Web Services (AWS) S3 storage and Identity and Access Management (IAM) credentials for backup storage with OADP. This provides the necessary permissions and storage infrastructure for data protection operations.

Prerequisites

  • You must have the AWS CLI installed.

Procedure

  1. Set the BUCKET variable:

    $ BUCKET=<your_bucket>
  2. Set the REGION variable:

    $ REGION=<your_region>
  3. Create an AWS S3 bucket:

    $ aws s3api create-bucket \
        --bucket $BUCKET \
        --region $REGION \
        --create-bucket-configuration LocationConstraint=$REGION

    where:

    LocationConstraint
    Specifies the bucket configuration location constraint. us-east-1 does not support LocationConstraint. If your region is us-east-1, omit --create-bucket-configuration LocationConstraint=$REGION.
  4. Create an IAM user:

    $ aws iam create-user --user-name velero

    where:

    velero
    Specifies the user name. If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster.
  5. Create a velero-policy.json file:

    $ cat > velero-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeVolumes",
                    "ec2:DescribeSnapshots",
                    "ec2:CreateTags",
                    "ec2:CreateVolume",
                    "ec2:CreateSnapshot",
                    "ec2:DeleteSnapshot"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:PutObject",
                    "s3:AbortMultipartUpload",
                    "s3:ListMultipartUploadParts"
                ],
                "Resource": [
                    "arn:aws:s3:::${BUCKET}/*"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket",
                    "s3:GetBucketLocation",
                    "s3:ListBucketMultipartUploads"
                ],
                "Resource": [
                    "arn:aws:s3:::${BUCKET}"
                ]
            }
        ]
    }
    EOF
  6. Attach the policies to give the velero user the minimum necessary permissions:

    $ aws iam put-user-policy \
      --user-name velero \
      --policy-name velero \
      --policy-document file://velero-policy.json
  7. Create an access key for the velero user:

    $ aws iam create-access-key --user-name velero
    {
      "AccessKey": {
            "UserName": "velero",
            "Status": "Active",
            "CreateDate": "2017-07-31T22:24:41.576Z",
            "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>,
            "AccessKeyId": <AWS_ACCESS_KEY_ID>
      }
    }
  8. Create a credentials-velero file:

    $ cat << EOF > ./credentials-velero
    [default]
    aws_access_key_id=<AWS_ACCESS_KEY_ID>
    aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
    EOF

    You use the credentials-velero file to create a Secret object for AWS before you install the Data Protection Application.
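Because the unquoted heredoc in step 5 expands shell variables such as ${BUCKET}, it can help to validate the generated policy file before attaching it in step 6. The following is a minimal sketch with a placeholder bucket name and a single abbreviated statement:

```shell
# Generate a minimal policy file; ${BUCKET} expands inside the heredoc.
BUCKET=my-velero-bucket
cat > velero-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::${BUCKET}"]
        }
    ]
}
EOF
# Confirm the result is valid JSON before running `aws iam put-user-policy`.
python3 -m json.tool velero-policy.json > /dev/null && echo "valid JSON"
```

If the heredoc delimiter were quoted ('EOF'), the variable would not expand and the policy would contain the literal string ${BUCKET}.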

5.7.1.3. About backup and snapshot locations and their secrets

Review backup location, snapshot location, and secret configuration requirements for the DataProtectionApplication custom resource (CR). This helps you understand storage options and credential management for data protection operations.

5.7.1.3.1. Backup locations

You can specify one of the following AWS S3-compatible object storage solutions as a backup location:

  • Multicloud Object Gateway (MCG)
  • Red Hat Container Storage
  • Ceph RADOS Gateway; also known as Ceph Object Gateway
  • Red Hat OpenShift Data Foundation
  • MinIO

Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage.

5.7.1.3.2. Snapshot locations

If you use your cloud provider’s native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location.

If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver.
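For reference, a VolumeSnapshotClass CR labeled for Velero might look like the following sketch; the driver name and deletion policy depend on your CSI driver, and the velero.io/csi-volumesnapshot-class label is what Velero uses to select the class:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: <snapshot_class_name>
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: <csi_driver_name>
deletionPolicy: Retain
```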

If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage.

5.7.1.3.3. Secrets

If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret.

If the backup and snapshot locations use different credentials, you create two Secret objects:

  • Custom Secret for the backup location, which you specify in the DataProtectionApplication CR.
  • Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR.
Important

The Data Protection Application requires a default Secret. Otherwise, the installation will fail.

If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file.
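Expressed as a manifest, a default Secret created from an empty credentials-velero file reduces to the following sketch; the cloud key holds an empty value:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloud-credentials
  namespace: openshift-adp
type: Opaque
data:
  cloud: ""
```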

5.7.1.3.4. Creating a default Secret

You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location.

The default name of the Secret is cloud-credentials.

Note

The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.

If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.

Prerequisites

  • Your object storage and cloud storage, if any, must use the same credentials.
  • You must configure object storage for Velero.

Procedure

  1. Create a credentials-velero file for the backup storage location in the appropriate format for your cloud provider.

    See the following example:

    [default]
    aws_access_key_id=<AWS_ACCESS_KEY_ID>
    aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
  2. Create a Secret custom resource (CR) with the default name:

    $ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero

    The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.
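The reference takes the following shape inside the DPA spec (a fragment of the full manifest shown later in this chapter):

```yaml
spec:
  backupLocations:
    - velero:
        credential:
          key: cloud
          name: cloud-credentials
```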

5.7.1.3.5. Creating profiles for different credentials

If your backup and snapshot locations use different credentials, you create separate profiles in the credentials-velero file.

Then, you create a Secret object and specify the profiles in the DataProtectionApplication custom resource (CR).

Procedure

  1. Create a credentials-velero file with separate profiles for the backup and snapshot locations, as in the following example:

    [backupStorage]
    aws_access_key_id=<AWS_ACCESS_KEY_ID>
    aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
    
    [volumeSnapshot]
    aws_access_key_id=<AWS_ACCESS_KEY_ID>
    aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
  2. Create a Secret object with the credentials-velero file:

    $ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
  3. Add the profiles to the DataProtectionApplication CR, as in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
      namespace: openshift-adp
    spec:
    ...
      backupLocations:
        - name: default
          velero:
            provider: aws
            default: true
            objectStorage:
              bucket: <bucket_name>
              prefix: <prefix>
            config:
              region: us-east-1
              profile: "backupStorage"
            credential:
              key: cloud
              name: cloud-credentials
      snapshotLocations:
        - velero:
            provider: aws
            config:
              region: us-west-2
              profile: "volumeSnapshot"
5.7.1.3.6. Creating an OADP SSE-C encryption key for additional data security

Configure server-side encryption with customer-provided keys (SSE-C) to add an additional layer of encryption for backup data stored in Amazon Web Services (AWS) S3. This protects backup data if AWS credentials become exposed.

Amazon Web Services (AWS) S3 applies server-side encryption with AWS S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3.

OpenShift API for Data Protection (OADP) encrypts data by using SSL/TLS, HTTPS, and the velero-repo-credentials secret when transferring the data from a cluster to storage. To protect backup data in case of lost or stolen AWS credentials, apply an additional layer of encryption.

The velero-plugin-for-aws plugin provides several additional encryption methods. You should review its configuration options and consider implementing additional encryption.

You can store your own encryption keys by using server-side encryption with customer-provided keys (SSE-C). This feature provides additional security if your AWS credentials become exposed.

Warning

Be sure to store cryptographic keys in a secure and safe manner. Encrypted data and backups cannot be recovered if you do not have the encryption key.

Prerequisites

  • To make OADP mount a secret that contains your SSE-C key to the Velero pod at /credentials, use the default secret name for AWS, cloud-credentials, and leave the credential field empty in at least one of the backupLocations or snapshotLocations blocks of the DataProtectionApplication custom resource (CR).

Note

The following procedure contains an example of a spec:backupLocations block that does not specify credentials. This example causes OADP to mount the secret.

  • If you need the backup location to have credentials with a different name than cloud-credentials, you must add a snapshot location, such as the one in the following example, that does not contain a credential name. Because the following example does not contain a credential name, the snapshot location will use cloud-credentials as its secret for taking snapshots.

     snapshotLocations:
      - velero:
          config:
            profile: default
            region: <region>
          provider: aws
    # ...

Procedure

  1. Create an SSE-C encryption key:

    1. Generate a random number and save it as a file named sse.key by running the following command:

      $ dd if=/dev/urandom bs=1 count=32 > sse.key
  2. Create an OpenShift Container Platform secret:

    • If you are initially installing and configuring OADP, create the AWS credential and encryption key secret at the same time by running the following command:

      $ oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=<path>/openshift_aws_credentials,customer-key=<path>/sse.key
    • If you are updating an existing installation, edit the values of the cloud-credential secret block of the DataProtectionApplication CR manifest, as in the following example:

      apiVersion: v1
      data:
        cloud: W2Rfa2V5X2lkPSJBS0lBVkJRWUIyRkQ0TlFHRFFPQiIKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5P<snip>rUE1mNWVSbTN5K2FpeWhUTUQyQk1WZHBOIgo=
        customer-key: v+<snip>TFIiq6aaXPbj8dhos=
      kind: Secret
      # ...
  3. Edit the value of the customerKeyEncryptionFile attribute in the backupLocations block of the DataProtectionApplication CR manifest, as in the following example:

    spec:
      backupLocations:
        - velero:
            config:
              customerKeyEncryptionFile: /credentials/customer-key
              profile: default
    # ...
    Warning

    You must restart the Velero pod to remount the secret credentials properly on an existing installation.

    The installation is complete, and you can back up and restore OpenShift Container Platform resources. The data saved in AWS S3 storage is encrypted with the new key, and you cannot download it from the AWS S3 console or API without the additional encryption key.
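Because SSE-C with the AES256 algorithm requires a 256-bit key, a quick local sanity check on the generated key file can prevent confusing upload errors later. A minimal sketch:

```shell
# Generate the key as in step 1, then confirm it is exactly 32 bytes.
dd if=/dev/urandom bs=1 count=32 of=sse.key 2>/dev/null
wc -c < sse.key
```

The `wc -c` command prints 32 for a correctly sized key.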

Verification

To verify that you cannot download the encrypted files without the inclusion of an additional key, create a test file, upload it, and then try to download it.

  1. Create a test file by running the following command:

    $ echo "encrypt me please" > test.txt
  2. Upload the test file by running the following command:

    $ aws s3api put-object \
      --bucket <bucket> \
      --key test.txt \
      --body test.txt \
      --sse-customer-key fileb://sse.key \
      --sse-customer-algorithm AES256
  3. Try to download the file. In either the Amazon web console or the terminal, run the following command:

    $ s3cmd get s3://<bucket>/test.txt test.txt

    The download fails because the file is encrypted with an additional key.

  4. Download the file with the additional encryption key by running the following command:

    $ aws s3api get-object \
        --bucket <bucket> \
        --key test.txt \
        --sse-customer-key fileb://sse.key \
        --sse-customer-algorithm AES256 \
        downloaded.txt
  5. Read the file contents by running the following command:

    $ cat downloaded.txt
    encrypt me please

When you are verifying an SSE-C encryption key, you can also download the file with the additional encryption key for files that were backed up with Velero.

Procedure

  • Download the file with the additional encryption key for files backed up by Velero by running the following command:

    $ aws s3api get-object \
      --bucket <bucket> \
      --key velero/backups/mysql-persistent-customerkeyencryptionfile4/mysql-persistent-customerkeyencryptionfile4.tar.gz \
      --sse-customer-key fileb://sse.key \
      --sse-customer-algorithm AES256 \
      --debug \
      velero_download.tar.gz

5.7.1.4. Installing the Data Protection Application

You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.

Prerequisites

  • You must install the OADP Operator.
  • You must configure object storage as a backup location.
  • If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
  • If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.
  • If the backup and snapshot locations use different credentials, you must create a Secret with the default name, cloud-credentials, which contains separate profiles for the backup and snapshot location credentials.

    Note

    If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.

Procedure

  1. Click Operators → Installed Operators and select the OADP Operator.
  2. Under Provided APIs, click Create instance in the DataProtectionApplication box.
  3. Click YAML View and update the parameters of the DataProtectionApplication manifest:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
      namespace: openshift-adp
    spec:
      configuration:
        velero:
          defaultPlugins:
            - openshift
            - aws
          resourceTimeout: 10m
        nodeAgent:
          enable: true
          uploaderType: kopia
          podConfig:
            nodeSelector: <node_selector>
      backupLocations:
        - name: default
          velero:
            provider: aws
            default: true
            objectStorage:
              bucket: <bucket_name>
              prefix: <prefix>
            config:
              region: <region>
              profile: "default"
              s3ForcePathStyle: "true"
              s3Url: <s3_url>
            credential:
              key: cloud
              name: cloud-credentials
      snapshotLocations:
        - name: default
          velero:
            provider: aws
            config:
              region: <region>
              profile: "default"
            credential:
              key: cloud
              name: cloud-credentials

    where:

    namespace
    Specifies the default namespace for OADP, which is openshift-adp. The namespace is a variable and is configurable.
    openshift
    Specifies that the openshift plugin is mandatory.
    resourceTimeout
    Specifies how many minutes to wait for several Velero resources such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability, before timeout occurs. The default is 10m.
    nodeAgent
    Specifies the administrative agent that routes the administrative requests to servers.
    enable
    Set this value to true if you want to enable nodeAgent and perform File System Backup.
    uploaderType
    Specifies the uploader type. Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR.
    nodeSelector
    Specifies the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes.
    bucket
    Specifies a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
    prefix
    Specifies a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
    s3ForcePathStyle
    Specifies whether to force path style URLs for S3 objects (Boolean). Not required for AWS S3. Required only for S3-compatible storage.
    s3Url
    Specifies the URL of the object store that you are using to store backups. Not required for AWS S3. Required only for S3-compatible storage.
    name
    Specifies the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials, is used. If you specify a custom name, the custom name is used for the backup location.
    snapshotLocations
    Specifies a snapshot location, unless you use CSI snapshots or a File System Backup (FSB) to back up PVs.
    region
    Specifies that the snapshot location must be in the same region as the PVs.
    name
    Specifies the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials, is used. If you specify a custom name, the custom name is used for the snapshot location. If your backup and snapshot locations use different credentials, create separate profiles in the credentials-velero file.
  4. Click Create.

Verification

  1. Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:

    $ oc get all -n openshift-adp
    NAME                                                     READY   STATUS    RESTARTS   AGE
    pod/oadp-operator-controller-manager-67d9494d47-6l8z8    2/2     Running   0          2m8s
    pod/node-agent-9cq4q                                     1/1     Running   0          94s
    pod/node-agent-m4lts                                     1/1     Running   0          94s
    pod/node-agent-pv4kr                                     1/1     Running   0          95s
    pod/velero-588db7f655-n842v                              1/1     Running   0          95s
    
    NAME                                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    service/oadp-operator-controller-manager-metrics-service   ClusterIP   172.30.70.140    <none>        8443/TCP   2m8s
    service/openshift-adp-velero-metrics-svc                   ClusterIP   172.30.10.0      <none>        8085/TCP   8h
    
    NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/node-agent    3         3         3       3            3           <none>          96s
    
    NAME                                                READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/oadp-operator-controller-manager    1/1     1            1           2m9s
    deployment.apps/velero                              1/1     1            1           96s
    
    NAME                                                           DESIRED   CURRENT   READY   AGE
    replicaset.apps/oadp-operator-controller-manager-67d9494d47    1         1         1       2m9s
    replicaset.apps/velero-588db7f655                              1         1         1       96s
  2. Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

    $ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
    {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
  3. Verify the type is set to Reconciled.
  4. Verify the backup storage location and confirm that the PHASE is Available by running the following command:

    $ oc get backupstoragelocations.velero.io -n openshift-adp
    NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
    dpa-sample-1   Available   1s               3d16h   true
5.7.1.4.1. Setting Velero CPU and memory resource allocations

You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.

Prerequisites

  • You must have the OpenShift API for Data Protection (OADP) Operator installed.

Procedure

  • Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
    spec:
    # ...
      configuration:
        velero:
          podConfig:
            nodeSelector: <node_selector>
            resourceAllocations:
              limits:
                cpu: "1"
                memory: 1024Mi
              requests:
                cpu: 200m
                memory: 256Mi

    where:

    nodeSelector
    Specifies the node selector to be supplied to Velero podSpec.
    resourceAllocations

    Specifies the resource allocations listed for average usage.

    Note

    Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.

    Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.

Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node.

5.7.1.4.2. Enabling self-signed CA certificates

You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.

Prerequisites

  • You must have the OpenShift API for Data Protection (OADP) Operator installed.

Procedure

  • Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
    spec:
    # ...
      backupLocations:
        - name: default
          velero:
            provider: aws
            default: true
            objectStorage:
              bucket: <bucket>
              prefix: <prefix>
              caCert: <base64_encoded_cert_string>
            config:
              insecureSkipTLSVerify: "false"
    # ...

    where:

    caCert
    Specifies the Base64-encoded CA certificate string.
    insecureSkipTLSVerify
    Specifies the insecureSkipTLSVerify configuration. The configuration can be set to either "true" or "false". If set to "true", SSL/TLS security is disabled. If set to "false", SSL/TLS security is enabled.
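The caCert value can be produced from a PEM file with base64. This sketch uses a placeholder file name and dummy content; -w0 (GNU coreutils) disables line wrapping so the string can be pasted into the manifest as a single line:

```shell
# Encode a (placeholder) certificate file and confirm it round-trips.
printf 'dummy certificate content' > ca.crt
caCert=$(base64 -w0 < ca.crt)
printf '%s' "$caCert" | base64 -d
```

For a real cluster, replace ca.crt with the CA certificate that signed your object storage endpoint.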

5.7.1.4.3. Creating an alias for the Velero CLI

You might want to use the Velero CLI without installing it locally on your system by creating an alias for it.

Prerequisites

  • You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role.
  • You must have the OpenShift CLI (oc) installed.

Procedure

    1. To use an aliased Velero command, run the following command:

      $ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'
    2. Check that the alias is working by running the following command:

      $ velero version
      Client:
      	Version: v1.12.1-OADP
      	Git commit: -
      Server:
      	Version: v1.12.1-OADP
    3. To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands:

      $ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')
      $ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"
      $ velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt
    4. To fetch the backup logs, run the following command:

      $ velero backup logs <backup_name> --cacert /tmp/<your_cacert>.txt

      You can use these logs to view failures and warnings for the resources that you cannot back up.

    5. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create the /tmp/your-cacert.txt file by re-running the commands from the previous step.
    6. You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command:

      $ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt"
      /tmp/your-cacert.txt

      In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.

5.7.1.4.4. Configuring node agents and node labels

The Data Protection Application (DPA) uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the recommended form of node selection constraint.

Procedure

  1. Run the node agent on any node that you choose by adding a custom label:

    $ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""
    Note

    Any label specified must match the labels on each node.

  2. Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector field, which you used for labeling nodes:

    configuration:
      nodeAgent:
        enable: true
        podConfig:
          nodeSelector:
            node-role.kubernetes.io/nodeAgent: ""

    The following example is an anti-pattern of nodeSelector and does not work unless both labels, node-role.kubernetes.io/infra: "" and node-role.kubernetes.io/worker: "", are on the node:

        configuration:
          nodeAgent:
            enable: true
            podConfig:
              nodeSelector:
                node-role.kubernetes.io/infra: ""
                node-role.kubernetes.io/worker: ""

5.7.1.5. Configuring the backup storage location with an MD5 checksum algorithm

You can configure the Backup Storage Location (BSL) in the Data Protection Application (DPA) to use an MD5 checksum algorithm for both Amazon Simple Storage Service (Amazon S3) and S3-compatible storage providers. The checksum algorithm calculates the checksum for uploading and downloading objects to Amazon S3. You can use one of the following options to set the checksumAlgorithm field in the spec.backupLocations.velero.config.checksumAlgorithm section of the DPA.

  • CRC32
  • CRC32C
  • SHA1
  • SHA256

You can also set the checksumAlgorithm field to an empty value to skip the MD5 checksum check. If you do not set a value for the checksumAlgorithm field, then the default value is set to CRC32.

Prerequisites

  • You have installed the OADP Operator.
  • You have configured Amazon S3, or S3-compatible object storage as a backup location.

Procedure

  • Configure the BSL in the DPA as shown in the following example:

    Example Data Protection Application

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: test-dpa
      namespace: openshift-adp
    spec:
      backupLocations:
      - name: default
        velero:
          config:
            checksumAlgorithm: ""
            insecureSkipTLSVerify: "true"
            profile: "default"
            region: <bucket_region>
            s3ForcePathStyle: "true"
            s3Url: <bucket_url>
          credential:
            key: cloud
            name: cloud-credentials
          default: true
          objectStorage:
            bucket: <bucket_name>
            prefix: velero
          provider: aws
      configuration:
        velero:
          defaultPlugins:
          - openshift
          - aws
          - csi

    where:

    checksumAlgorithm
    Specifies the checksumAlgorithm. In this example, the checksumAlgorithm field is set to an empty value. You can select an option from the following list: CRC32, CRC32C, SHA1, SHA256.
    Important

    If you are using Noobaa as the object storage provider, and you do not set the spec.backupLocations.velero.config.checksumAlgorithm field in the DPA, an empty value of checksumAlgorithm is added to the BSL configuration.

    The empty value is only added for BSLs that are created using the DPA. This value is not added if you create the BSL by using any other method.

5.7.1.6. Configuring the DPA with client burst and QPS settings

The burst setting determines how many requests can be sent to the Velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.

You can set the burst and QPS values of the Velero server by using the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the Data Protection Application (DPA).
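Burst-plus-QPS limiting is commonly implemented as a token bucket. The following minimal sketch shows the behavior the two settings describe; the class name and structure are illustrative, not the actual Velero client implementation:

```python
import time

class RateLimiter:
    """Token bucket: `burst` requests pass immediately; refills at `qps` per second."""

    def __init__(self, burst: int, qps: float):
        self.capacity = burst
        self.tokens = float(burst)
        self.qps = qps
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Add tokens earned since the last call, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = RateLimiter(burst=500, qps=300)
# The first 500 requests pass immediately; subsequent requests
# are paced at roughly 300 per second.
allowed = sum(limiter.allow() for _ in range(600))
print(allowed)
```

On a fast machine, close to 500 of the 600 immediate requests are allowed, and the remainder would have to wait for tokens to refill at the QPS rate.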

Prerequisites

  • You have installed the OADP Operator.

Procedure

  • Configure the client-burst and the client-qps fields in the DPA as shown in the following example:

    Example Data Protection Application

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: test-dpa
      namespace: openshift-adp
    spec:
      backupLocations:
        - name: default
          velero:
            config:
              insecureSkipTLSVerify: "true"
              profile: "default"
              region: <bucket_region>
              s3ForcePathStyle: "true"
              s3Url: <bucket_url>
            credential:
              key: cloud
              name: cloud-credentials
            default: true
            objectStorage:
              bucket: <bucket_name>
              prefix: velero
            provider: aws
      configuration:
        nodeAgent:
          enable: true
          uploaderType: restic
        velero:
          client-burst: 500
          client-qps: 300
          defaultPlugins:
            - openshift
            - aws
            - kubevirt

    where:

    client-burst
    Specifies the client-burst value. In this example, the client-burst field is set to 500.
    client-qps
    Specifies the client-qps value. In this example, the client-qps field is set to 300.

5.7.1.7. Configuring node agent load affinity

You can schedule the node agent pods on specific nodes by using the spec.podConfig.nodeSelector object of the DataProtectionApplication (DPA) custom resource (CR).

The following example schedules the node agent pods on nodes with the labels label.io/role: cpu-1 and other-label.io/other-role: cpu-2.

...
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      podConfig:
        nodeSelector:
          label.io/role: cpu-1
          other-label.io/other-role: cpu-2
        ...

You can add more restrictions on node agent pod scheduling by using the nodeAgent.loadAffinity object in the DPA spec.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.
  • You have installed the OADP Operator.
  • You have configured the DPA CR.

Procedure

  • Configure the nodeAgent.loadAffinity object in the DPA spec as shown in the following example.

    In the example, you ensure that the node agent pods are scheduled only on nodes that have the label label.io/role: cpu-1 and whose label.io/hostname label value is either node1 or node2.

    ...
    spec:
      configuration:
        nodeAgent:
          enable: true
          loadAffinity:
            - nodeSelector:
                matchLabels:
                  label.io/role: cpu-1
                matchExpressions:
                  - key: label.io/hostname
                    operator: In
                    values:
                      - node1
                      - node2
                      ...

    where:

    loadAffinity
    Specifies the loadAffinity object by adding the matchLabels and matchExpressions objects.
    matchExpressions
    Specifies the matchExpressions object to add restrictions on the node agent pods scheduling.

5.7.1.8. Node agent load affinity guidelines

Use the following guidelines to configure the node agent loadAffinity object in the DataProtectionApplication (DPA) custom resource (CR).

  • Use the spec.configuration.nodeAgent.podConfig.nodeSelector object for simple node matching.
  • Use the loadAffinity.nodeSelector object without the podConfig.nodeSelector object for more complex scenarios.
  • You can use both podConfig.nodeSelector and loadAffinity.nodeSelector objects, but the loadAffinity object must be equal to or more restrictive than the podConfig object. In this scenario, the podConfig.nodeSelector labels must be a subset of the labels used in the loadAffinity.nodeSelector object.
  • You cannot use the matchExpressions and matchLabels fields if you have configured both podConfig.nodeSelector and loadAffinity.nodeSelector objects in the DPA.
  • See the following example to configure both podConfig.nodeSelector and loadAffinity.nodeSelector objects in the DPA.

    ...
    spec:
      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
          loadAffinity:
            - nodeSelector:
                matchLabels:
                  label.io/location: 'US'
                  label.io/gpu: 'no'
          podConfig:
            nodeSelector:
              label.io/gpu: 'no'
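The subset requirement in the guidelines above can be checked mechanically: every podConfig.nodeSelector label must also appear, with the same value, in loadAffinity.nodeSelector.matchLabels. A minimal sketch, with illustrative function and variable names that are not part of OADP:

```python
def pod_config_is_subset(pod_config: dict, load_affinity: dict) -> bool:
    """Return True if every podConfig.nodeSelector label also appears,
    with the same value, in loadAffinity.nodeSelector.matchLabels."""
    return all(load_affinity.get(k) == v for k, v in pod_config.items())

# Labels from the DPA example above.
load_affinity = {"label.io/location": "US", "label.io/gpu": "no"}

print(pod_config_is_subset({"label.io/gpu": "no"}, load_affinity))   # True: valid combination
print(pod_config_is_subset({"label.io/gpu": "yes"}, load_affinity))  # False: conflicting value
```

If the check returns False, the two selectors can never agree on a node, and the node agent pods cannot be scheduled.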

5.7.1.9. Configuring node agent load concurrency

You can control the maximum number of node agent operations that can run simultaneously on each node within your cluster.

You can configure it using one of the following fields of the Data Protection Application (DPA):

  • globalConfig: Defines a default concurrency limit for the node agent across all nodes.
  • perNodeConfig: Specifies different concurrency limits for specific nodes based on nodeSelector labels. This provides flexibility for environments where certain nodes might have different resource capacities or roles.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.

Procedure

  1. If you want to use load concurrency for specific nodes, add labels to those nodes:

    $ oc label node/<node_name> label.io/instance-type='large'
  2. Configure the load concurrency fields for your DPA instance:

      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
          loadConcurrency:
            globalConfig: 1
            perNodeConfig:
            - nodeSelector:
                matchLabels:
                  label.io/instance-type: large
              number: 3

    where:

    globalConfig
    Specifies the global concurrent number. The default value is 1, which means there is no concurrency and only one load is allowed. The globalConfig value does not have a limit.
    label.io/instance-type
    Specifies the label for per-node concurrency.
    number
    Specifies the per-node concurrent number. You can specify multiple per-node concurrent numbers, for example, based on the instance type and size. The valid range for a per-node concurrent number is the same as for the global concurrent number. If the configuration contains both a per-node concurrent number and a global concurrent number, the per-node concurrent number takes precedence.
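The precedence rule can be sketched as follows: the first per-node rule whose labels all match the node wins, and the global value is the fallback. The function and dictionary shapes are illustrative, not the actual OADP implementation:

```python
def effective_concurrency(node_labels, global_config, per_node_config):
    """Resolve the node agent load concurrency for one node.

    per_node_config is a list of {"match": {...labels...}, "number": int}.
    A per-node rule applies only if every label it requests is present on
    the node with the same value; otherwise the global value is used.
    """
    for rule in per_node_config:
        if all(node_labels.get(k) == v for k, v in rule["match"].items()):
            return rule["number"]
    return global_config

per_node = [{"match": {"label.io/instance-type": "large"}, "number": 3}]

print(effective_concurrency({"label.io/instance-type": "large"}, 1, per_node))  # 3: per-node wins
print(effective_concurrency({"label.io/instance-type": "small"}, 1, per_node))  # 1: global fallback
```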

5.7.1.10. Configuring the node agent as a non-root and non-privileged user

To enhance the node agent security, you can configure the OADP Operator node agent daemonset to run as a non-root and non-privileged user by using the spec.configuration.velero.disableFsBackup setting in the DataProtectionApplication (DPA) custom resource (CR).

By setting the spec.configuration.velero.disableFsBackup setting to true, the node agent security context sets the root file system to read-only and sets the privileged flag to false.

Note

Setting spec.configuration.velero.disableFsBackup to true enhances the node agent security by removing the need for privileged containers and enforcing a read-only root file system.

However, it also disables File System Backup (FSB) with Kopia. If your workloads rely on FSB for backing up volumes that do not support native snapshots, then you should evaluate whether the disableFsBackup configuration fits your use case.

Prerequisites

  • You have installed the OADP Operator.

Procedure

  • Configure the disableFsBackup field in the DPA as shown in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: ts-dpa
      namespace: openshift-adp
    spec:
      backupLocations:
      - velero:
          credential:
            key: cloud
            name: cloud-credentials
          default: true
          objectStorage:
            bucket: <bucket_name>
            prefix: velero
          provider: gcp
      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
        velero:
          defaultPlugins:
          - csi
          - gcp
          - openshift
          disableFsBackup: true

    where:

    nodeAgent
    Enables the node agent in the DPA.
    disableFsBackup
    Sets the disableFsBackup field to true, which runs the node agent as a non-root user with a read-only root file system.

Verification

  1. Verify that the node agent security context is set to run as non-root and the root file system is readOnly by running the following command:

    $ oc get daemonset node-agent -o yaml

    Example output

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      ...
      name: node-agent
      namespace: openshift-adp
      ...
    spec:
      ...
      template:
        metadata:
          ...
        spec:
          containers:
          ...
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
              privileged: false
              readOnlyRootFilesystem: true
            ...
          nodeSelector:
            kubernetes.io/os: linux
          os:
            name: linux
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext:
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
          serviceAccount: velero
          serviceAccountName: velero
          ...

    where:

    allowPrivilegeEscalation
    Specifies that the allowPrivilegeEscalation field is false.
    privileged
    Specifies that the privileged field is false.
    readOnlyRootFilesystem
    Specifies that the root file system is read-only.
    runAsNonRoot
    Specifies that the node agent is run as a non-root user.

5.7.1.11. Configuring repository maintenance

OADP repository maintenance is a background job that you can configure independently of the node agent pods. This means that you can schedule the repository maintenance pod on a node where the node agent is or is not running.

You can use the repository maintenance job affinity configurations in the DataProtectionApplication (DPA) custom resource (CR) only if you use Kopia as the backup repository.

You can configure load affinity at the global level, affecting all repositories, or for each repository individually. You can also use a combination of global and per-repository configuration.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.
  • You have installed the OADP Operator.
  • You have configured the DPA CR.

Procedure

  • Configure the loadAffinity object in the DPA spec by using either one or both of the following methods:

    • Global configuration: Configure load affinity for all repositories as shown in the following example:

      ...
      spec:
        configuration:
          repositoryMaintenance:
            global:
              podResources:
                cpuRequest: "100m"
                cpuLimit: "200m"
                memoryRequest: "100Mi"
                memoryLimit: "200Mi"
              loadAffinity:
                - nodeSelector:
                    matchLabels:
                      label.io/gpu: 'no'
                    matchExpressions:
                      - key: label.io/location
                        operator: In
                        values:
                          - US
                          - EU

      where:

      repositoryMaintenance
      Specifies the repositoryMaintenance object as shown in the example.
      global
      Specifies the global object to configure load affinity for all repositories.
    • Per-repository configuration: Configure load affinity per repository as shown in the following example:

      ...
      spec:
        configuration:
          repositoryMaintenance:
            myrepositoryname:
              loadAffinity:
                - nodeSelector:
                    matchLabels:
                      label.io/cpu: 'yes'

      where:

      myrepositoryname
      Specifies the repositoryMaintenance object for each repository.

5.7.1.12. Configuring Velero load affinity

With each OADP deployment, there is one Velero pod and its main purpose is to schedule Velero workloads. To schedule the Velero pod, you can use the velero.podConfig.nodeSelector and the velero.loadAffinity objects in the DataProtectionApplication (DPA) custom resource (CR) spec.

Use the podConfig.nodeSelector object to assign the Velero pod to specific nodes. You can also configure the velero.loadAffinity object for pod-level affinity and anti-affinity.

The OpenShift scheduler applies the rules and performs the scheduling of the Velero pod deployment.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.
  • You have installed the OADP Operator.
  • You have configured the DPA CR.

Procedure

  • Configure the velero.podConfig.nodeSelector and the velero.loadAffinity objects in the DPA spec as shown in the following examples:

    • velero.podConfig.nodeSelector object configuration:

      ...
      spec:
        configuration:
          velero:
            podConfig:
              nodeSelector:
                some-label.io/custom-node-role: backup-core
    • velero.loadAffinity object configuration:

      ...
      spec:
        configuration:
          velero:
            loadAffinity:
              - nodeSelector:
                  matchLabels:
                    label.io/gpu: 'no'
                  matchExpressions:
                    - key: label.io/location
                      operator: In
                      values:
                        - US
                        - EU

5.7.1.13. Overriding the imagePullPolicy setting in the DPA

In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images.

In OADP 1.4.1 or later, the Operator first checks whether each image has a sha256 or sha512 digest and sets the imagePullPolicy field accordingly:

  • If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent.
  • If the image does not have the digest, the Operator sets imagePullPolicy to Always.
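The rationale is that a digest-pinned image reference is immutable, so pulling it again can never retrieve different content. The decision described above can be sketched as follows; the function name and image references are illustrative, not the Operator's actual code:

```python
import re

def image_pull_policy(image: str) -> str:
    """Digest-pinned images are immutable, so IfNotPresent is safe;
    tag-only references can change upstream, so Always is used."""
    if re.search(r"@sha(256|512):[0-9a-f]+$", image):
        return "IfNotPresent"
    return "Always"

print(image_pull_policy("registry.example/velero@sha256:" + "ab" * 32))  # IfNotPresent
print(image_pull_policy("registry.example/velero:v1.14"))                # Always
```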

You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA).

Prerequisites

  • You have installed the OADP Operator.

Procedure

  • Configure the spec.imagePullPolicy field in the DPA as shown in the following example:

    Example Data Protection Application

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: test-dpa
      namespace: openshift-adp
    spec:
      backupLocations:
        - name: default
          velero:
            config:
              insecureSkipTLSVerify: "true"
              profile: "default"
              region: <bucket_region>
              s3ForcePathStyle: "true"
              s3Url: <bucket_url>
            credential:
              key: cloud
              name: cloud-credentials
            default: true
            objectStorage:
              bucket: <bucket_name>
              prefix: velero
            provider: aws
      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
        velero:
          defaultPlugins:
            - openshift
            - aws
            - kubevirt
            - csi
      imagePullPolicy: Never

    where:

    imagePullPolicy
    Specifies the value for imagePullPolicy. In this example, the imagePullPolicy field is set to Never.

5.7.1.14. Enabling CSI in the DataProtectionApplication CR

You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.

Prerequisites

  • The cloud provider must support CSI snapshots.

Procedure

  • Edit the DataProtectionApplication CR, as in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    ...
    spec:
      configuration:
        velero:
          defaultPlugins:
          - openshift
          - csi

    where:

    csi
    Specifies the csi default plugin.

5.7.1.15. Disabling the node agent in DataProtectionApplication

If you are not using Restic, Kopia, or DataMover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent, ensure the OADP Operator is idle and not running any backups.

Procedure

  1. To disable the nodeAgent, set the enable flag to false. See the following example:

    Example DataProtectionApplication CR

    # ...
    configuration:
      nodeAgent:
        enable: false
        uploaderType: kopia
    # ...

    where:

    enable
    Specifies whether the node agent runs. In this example, it is set to false to disable the node agent.
  2. To enable the nodeAgent, set the enable flag to true. See the following example:

    Example DataProtectionApplication CR

    # ...
    configuration:
      nodeAgent:
        enable: true
        uploaderType: kopia
    # ...

    where:

    enable

    Enables the node agent.

    You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs".

5.8. Configuring OADP with IBM Cloud

5.8.1. Configuring the OpenShift API for Data Protection with IBM Cloud

You install the OpenShift API for Data Protection (OADP) Operator on an IBM Cloud cluster to back up and restore applications on the cluster. You configure IBM Cloud Object Storage (COS) to store the backups.

5.8.1.1. Configuring the COS instance

You create an IBM Cloud Object Storage (COS) instance to store the OADP backup data. After you create the COS instance, configure the HMAC service credentials.

Prerequisites

  • You have an IBM Cloud Platform account.
  • You installed the IBM Cloud CLI.
  • You are logged in to IBM Cloud.

Procedure

  1. Install the IBM Cloud Object Storage (COS) plugin by running the following command:

    $ ibmcloud plugin install cos -f
  2. Set a bucket name by running the following command:

    $ BUCKET=<bucket_name>
  3. Set a bucket region by running the following command:

    $ REGION=<bucket_region>

    where:

    <bucket_region>
    Specifies the bucket region. For example, eu-gb.
  4. Create a resource group by running the following command:

    $ ibmcloud resource group-create <resource_group_name>
  5. Set the target resource group by running the following command:

    $ ibmcloud target -g <resource_group_name>
  6. Verify that the target resource group is correctly set by running the following command:

    $ ibmcloud target

    Example output

    API endpoint:     https://cloud.ibm.com
    Region:
    User:             test-user
    Account:          Test Account (fb6......e95) <-> 2...122
    Resource group:   Default

    In the example output, the resource group is set to Default.

  7. Set a resource group name by running the following command:

    $ RESOURCE_GROUP=<resource_group>

    where:

    <resource_group>
    Specifies the resource group name. For example, "default".
  8. Create an IBM Cloud service-instance resource by running the following command:

    $ ibmcloud resource service-instance-create \
    <service_instance_name> \
    <service_name> \
    <service_plan> \
    <region_name>

    where:

    <service_instance_name>
    Specifies a name for the service-instance resource.
    <service_name>
    Specifies the service name. Alternatively, you can specify a service ID.
    <service_plan>
    Specifies the service plan for your IBM Cloud account.
    <region_name>
    Specifies the region name.

    Refer to the following example command:

    $ ibmcloud resource service-instance-create test-service-instance cloud-object-storage \
    standard \
    global \
    -d premium-global-deployment

    where:

    cloud-object-storage
    Specifies the service name.
    -d premium-global-deployment
    Specifies the deployment name.
  9. Extract the service instance ID by running the following command:

    $ SERVICE_INSTANCE_ID=$(ibmcloud resource service-instance test-service-instance --output json | jq -r '.[0].id')
  10. Create a COS bucket by running the following command:

    $ ibmcloud cos bucket-create \
    --bucket $BUCKET \
    --ibm-service-instance-id $SERVICE_INSTANCE_ID \
    --region $REGION

    Variables such as $BUCKET, $SERVICE_INSTANCE_ID, and $REGION are replaced by the values you set previously.

  11. Create HMAC credentials by running the following command:

    $ ibmcloud resource service-key-create test-key Writer --instance-name test-service-instance --parameters {\"HMAC\":true}
  12. Extract the access key ID and the secret access key from the HMAC credentials and save them in the credentials-velero file. You can use the credentials-velero file to create a secret for the backup storage location. Run the following command:

    $ cat > credentials-velero << __EOF__
    [default]
    aws_access_key_id=$(ibmcloud resource service-key test-key -o json  | jq -r '.[0].credentials.cos_hmac_keys.access_key_id')
    aws_secret_access_key=$(ibmcloud resource service-key test-key -o json  | jq -r '.[0].credentials.cos_hmac_keys.secret_access_key')
    __EOF__
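The resulting credentials-velero file uses the AWS-style INI profile format that Velero expects. The following sketch parses a file of that shape to show the expected structure; the key values are placeholders, not real credentials:

```python
import configparser

# Placeholder credentials illustrating the expected file layout.
sample = """\
[default]
aws_access_key_id=EXAMPLEACCESSKEY
aws_secret_access_key=EXAMPLESECRET
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# The profile name here ("default") must match the profile referenced in
# the BSL config of the DPA.
print(cfg["default"]["aws_access_key_id"])
```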

5.8.1.2. Creating a default Secret

You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location.

Note

The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.

If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.

Prerequisites

  • Your object storage and cloud storage, if any, must use the same credentials.
  • You must configure object storage for Velero.

Procedure

  1. Create a credentials-velero file for the backup storage location in the appropriate format for your cloud provider.
  2. Create a Secret custom resource (CR) with the default name:

    $ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero

    The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.

5.8.1.3. Creating secrets for different credentials

Create separate Secret objects when your backup and snapshot locations require different credentials. This allows you to configure distinct authentication for each storage location while maintaining secure credential management.

Procedure

  1. Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider.
  2. Create a Secret for the snapshot location with the default name:

    $ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero
  3. Create a credentials-velero file for the backup location in the appropriate format for your object storage.
  4. Create a Secret for the backup location with a custom name:

    $ oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero
  5. Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
      namespace: openshift-adp
    spec:
    ...
      backupLocations:
        - velero:
            provider: <provider>
            default: true
            credential:
              key: cloud
              name: <custom_secret>
            objectStorage:
              bucket: <bucket_name>
              prefix: <prefix>

    where:

    custom_secret
    Specifies the backup location Secret with a custom name.

5.8.1.4. Installing the Data Protection Application

You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.

Prerequisites

  • You must install the OADP Operator.
  • You must configure object storage as a backup location.
  • If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
  • If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials.

    Note

    If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.

Procedure

  1. Click Operators → Installed Operators and select the OADP Operator.
  2. Under Provided APIs, click Create instance in the DataProtectionApplication box.
  3. Click YAML View and update the parameters of the DataProtectionApplication manifest:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      namespace: openshift-adp
      name: <dpa_name>
    spec:
      configuration:
        velero:
          defaultPlugins:
          - openshift
          - aws
          - csi
      backupLocations:
        - velero:
            provider: aws
            default: true
            objectStorage:
              bucket: <bucket_name>
              prefix: velero
            config:
              insecureSkipTLSVerify: 'true'
              profile: default
              region: <region_name>
              s3ForcePathStyle: 'true'
              s3Url: <s3_url>
            credential:
              key: cloud
              name: cloud-credentials

    where:

    provider
    Specifies that the provider is aws when you use IBM Cloud as a backup storage location.
    bucket
    Specifies the IBM Cloud Object Storage (COS) bucket name.
    region
    Specifies the COS region name, for example, eu-gb.
    s3Url
    Specifies the S3 URL of the COS bucket. For example, http://s3.eu-gb.cloud-object-storage.appdomain.cloud. Here, eu-gb is the region name. Replace the region name according to your bucket region.
    name
    Specifies the name of the secret you created by using the access key and the secret access key from the HMAC credentials.
  4. Click Create.

Verification

  1. Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:

    $ oc get all -n openshift-adp
    NAME                                                     READY   STATUS    RESTARTS   AGE
    pod/oadp-operator-controller-manager-67d9494d47-6l8z8    2/2     Running   0          2m8s
    pod/node-agent-9cq4q                                     1/1     Running   0          94s
    pod/node-agent-m4lts                                     1/1     Running   0          94s
    pod/node-agent-pv4kr                                     1/1     Running   0          95s
    pod/velero-588db7f655-n842v                              1/1     Running   0          95s
    
    NAME                                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    service/oadp-operator-controller-manager-metrics-service   ClusterIP   172.30.70.140    <none>        8443/TCP   2m8s
    service/openshift-adp-velero-metrics-svc                   ClusterIP   172.30.10.0      <none>        8085/TCP   8h
    
    NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/node-agent    3         3         3       3            3           <none>          96s
    
    NAME                                                READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/oadp-operator-controller-manager    1/1     1            1           2m9s
    deployment.apps/velero                              1/1     1            1           96s
    
    NAME                                                           DESIRED   CURRENT   READY   AGE
    replicaset.apps/oadp-operator-controller-manager-67d9494d47    1         1         1       2m9s
    replicaset.apps/velero-588db7f655                              1         1         1       96s
  2. Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

    $ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
    {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
  3. Verify the type is set to Reconciled.
  4. Verify the backup storage location and confirm that the PHASE is Available by running the following command:

    $ oc get backupstoragelocations.velero.io -n openshift-adp
    NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
    dpa-sample-1   Available   1s               3d16h   true

5.8.1.5. Setting Velero CPU and memory resource allocations

You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.

Prerequisites

  • You must have the OpenShift API for Data Protection (OADP) Operator installed.

Procedure

  • Edit the values in the spec.configuration.velero.podConfig.resourceAllocations block of the DataProtectionApplication CR manifest, as in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
    spec:
    # ...
      configuration:
        velero:
          podConfig:
            nodeSelector: <node_selector>
            resourceAllocations:
              limits:
                cpu: "1"
                memory: 1024Mi
              requests:
                cpu: 200m
                memory: 256Mi

    where:

    nodeSelector
    Specifies the node selector to be supplied to Velero podSpec.
    resourceAllocations

    Specifies the resource allocations listed for average usage.

    Note

    Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.

    Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.

5.8.1.6. Configuring node agents and node labels

The Data Protection Application (DPA) uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the recommended form of node selection constraint.

Procedure

  1. Run the node agent on any node that you choose by adding a custom label:

    $ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""
    Note

    Any label specified must match the labels on each node.

  2. Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector field, which you used for labeling nodes:

    configuration:
      nodeAgent:
        enable: true
        podConfig:
          nodeSelector:
            node-role.kubernetes.io/nodeAgent: ""

    The following example is an anti-pattern of nodeSelector and does not work unless both labels, node-role.kubernetes.io/infra: "" and node-role.kubernetes.io/worker: "", are on the node:

        configuration:
          nodeAgent:
            enable: true
            podConfig:
              nodeSelector:
                node-role.kubernetes.io/infra: ""
                node-role.kubernetes.io/worker: ""

5.8.1.7. Configuring the DPA with client burst and QPS settings

The burst setting determines how many requests can be sent to the Velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.

You can set the burst and QPS values of the Velero server by using the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the Data Protection Application (DPA).
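As a rough illustration with hypothetical numbers, the burst allowance is consumed first, and any remaining requests are governed by the QPS limit:

```shell
burst=500           # client-burst: requests allowed before rate limiting starts
qps=300             # client-qps: requests allowed per second after the burst
total_requests=1100 # hypothetical request count

beyond_burst=$(( total_requests - burst ))            # 600 requests are rate limited
extra_seconds=$(( (beyond_burst + qps - 1) / qps ))   # ceiling division: 2 extra seconds
echo "$extra_seconds"
```

Higher values let Velero drive the API server harder at the cost of more load on it; the defaults are usually sufficient for small clusters.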

Prerequisites

  • You have installed the OADP Operator.

Procedure

  • Configure the client-burst and the client-qps fields in the DPA as shown in the following example:

    Example Data Protection Application

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: test-dpa
      namespace: openshift-adp
    spec:
      backupLocations:
        - name: default
          velero:
            config:
              insecureSkipTLSVerify: "true"
              profile: "default"
              region: <bucket_region>
              s3ForcePathStyle: "true"
              s3Url: <bucket_url>
            credential:
              key: cloud
              name: cloud-credentials
            default: true
            objectStorage:
              bucket: <bucket_name>
              prefix: velero
            provider: aws
      configuration:
        nodeAgent:
          enable: true
          uploaderType: restic
        velero:
          client-burst: 500
          client-qps: 300
          defaultPlugins:
            - openshift
            - aws
            - kubevirt

    where:

    client-burst
    Specifies the client-burst value. In this example, the client-burst field is set to 500.
    client-qps
    Specifies the client-qps value. In this example, the client-qps field is set to 300.

5.8.1.8. Configuring node agent load affinity

You can schedule the node agent pods on specific nodes by using the spec.configuration.nodeAgent.podConfig.nodeSelector object of the DataProtectionApplication (DPA) custom resource (CR).

See the following example in which you can schedule the node agent pods on nodes with the labels label.io/role: cpu-1 and other-label.io/other-role: cpu-2.

...
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      podConfig:
        nodeSelector:
          label.io/role: cpu-1
          other-label.io/other-role: cpu-2
        ...

You can add more restrictions on the scheduling of node agent pods by using the nodeAgent.loadAffinity object in the DPA spec.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.
  • You have installed the OADP Operator.
  • You have configured the DPA CR.

Procedure

  • Configure the nodeAgent.loadAffinity object in the DPA spec as shown in the following example.

    In the example, you ensure that the node agent pods are scheduled only on nodes with the label label.io/role: cpu-1 and with a label.io/hostname label matching either node1 or node2.

    ...
    spec:
      configuration:
        nodeAgent:
          enable: true
          loadAffinity:
            - nodeSelector:
                matchLabels:
                  label.io/role: cpu-1
                matchExpressions:
                  - key: label.io/hostname
                    operator: In
                    values:
                      - node1
                      - node2
                      ...

    where:

    loadAffinity
    Specifies the loadAffinity object by adding the matchLabels and matchExpressions objects.
    matchExpressions
    Specifies the matchExpressions object to add restrictions on the node agent pods scheduling.

5.8.1.9. Node agent load affinity guidelines

Use the following guidelines to configure the node agent loadAffinity object in the DataProtectionApplication (DPA) custom resource (CR).

  • Use the spec.configuration.nodeAgent.podConfig.nodeSelector object for simple node matching.
  • Use the loadAffinity.nodeSelector object without the podConfig.nodeSelector object for more complex scenarios.
  • You can use both the podConfig.nodeSelector and loadAffinity.nodeSelector objects, but the loadAffinity object must be equally or more restrictive than the podConfig object. In this scenario, the podConfig.nodeSelector labels must be a subset of the labels used in the loadAffinity.nodeSelector object.
  • You cannot use the matchExpressions and matchLabels fields if you have configured both the podConfig.nodeSelector and loadAffinity.nodeSelector objects in the DPA.
  • See the following example to configure both podConfig.nodeSelector and loadAffinity.nodeSelector objects in the DPA.

    ...
    spec:
      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
          loadAffinity:
            - nodeSelector:
                matchLabels:
                  label.io/location: 'US'
                  label.io/gpu: 'no'
          podConfig:
            nodeSelector:
              label.io/gpu: 'no'

5.8.1.10. Configuring node agent load concurrency

You can control the maximum number of node agent operations that can run simultaneously on each node within your cluster.

You can configure it using one of the following fields of the Data Protection Application (DPA):

  • globalConfig: Defines a default concurrency limit for the node agent across all nodes.
  • perNodeConfig: Specifies different concurrency limits for specific nodes based on nodeSelector labels. This provides flexibility for environments where certain nodes might have different resource capacities or roles.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.

Procedure

  1. If you want to use load concurrency for specific nodes, add labels to those nodes:

    $ oc label node/<node_name> label.io/instance-type='large'
  2. Configure the load concurrency fields for your DPA instance:

      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
          loadConcurrency:
            globalConfig: 1
            perNodeConfig:
            - nodeSelector:
                matchLabels:
                  label.io/instance-type: large
              number: 3

    where:

    globalConfig
    Specifies the global concurrent number. The default value is 1, which means there is no concurrency and only one load is allowed. The globalConfig value does not have a limit.
    label.io/instance-type
    Specifies the label for per-node concurrency.
    number
    Specifies the per-node concurrent number. You can specify many per-node concurrent numbers, for example, based on the instance type and size. The valid range of the per-node concurrent number is the same as that of the global concurrent number. If the configuration file contains both a per-node concurrent number and a global concurrent number, the per-node concurrent number takes precedence.
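The precedence rule can be sketched as follows; the label value and the concurrency numbers are hypothetical:

```shell
global_config=1     # default concurrency for all nodes
per_node_large=3    # perNodeConfig number for nodes labeled label.io/instance-type: large

node_label="large"  # hypothetical value of the label.io/instance-type label on this node

if [ "$node_label" = "large" ]; then
  effective=$per_node_large   # the per-node number takes precedence
else
  effective=$global_config    # fall back to the global default
fi
echo "$effective"
```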

5.8.1.11. Configuring repository maintenance

OADP repository maintenance is a background job; you can configure it independently of the node agent pods. This means that you can schedule the repository maintenance pod on a node regardless of whether the node agent is running on that node.

You can use the repository maintenance job affinity configurations in the DataProtectionApplication (DPA) custom resource (CR) only if you use Kopia as the backup repository.

You can configure the load affinity at the global level, affecting all repositories, or you can configure it for each repository. You can also combine global and per-repository configuration.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.
  • You have installed the OADP Operator.
  • You have configured the DPA CR.

Procedure

  • Configure the loadAffinity object in the DPA spec by using either one or both of the following methods:

    • Global configuration: Configure load affinity for all repositories as shown in the following example:

      ...
      spec:
        configuration:
          repositoryMaintenance:
            global:
              podResources:
                cpuRequest: "100m"
                cpuLimit: "200m"
                memoryRequest: "100Mi"
                memoryLimit: "200Mi"
              loadAffinity:
                - nodeSelector:
                    matchLabels:
                      label.io/gpu: 'no'
                    matchExpressions:
                      - key: label.io/location
                        operator: In
                        values:
                          - US
                          - EU

      where:

      repositoryMaintenance
      Specifies the repositoryMaintenance object as shown in the example.
      global
      Specifies the global object to configure load affinity for all repositories.
    • Per-repository configuration: Configure load affinity per repository as shown in the following example:

      ...
      spec:
        configuration:
          repositoryMaintenance:
            myrepositoryname:
              loadAffinity:
                - nodeSelector:
                    matchLabels:
                      label.io/cpu: 'yes'

      where:

      myrepositoryname
      Specifies the repositoryMaintenance object for each repository.

5.8.1.12. Configuring Velero load affinity

With each OADP deployment, there is one Velero pod and its main purpose is to schedule Velero workloads. To schedule the Velero pod, you can use the velero.podConfig.nodeSelector and the velero.loadAffinity objects in the DataProtectionApplication (DPA) custom resource (CR) spec.

Use the podConfig.nodeSelector object to assign the Velero pod to specific nodes. You can also configure the velero.loadAffinity object for pod-level affinity and anti-affinity.

The OpenShift scheduler applies the rules and performs the scheduling of the Velero pod deployment.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.
  • You have installed the OADP Operator.
  • You have configured the DPA CR.

Procedure

  • Configure the velero.podConfig.nodeSelector and the velero.loadAffinity objects in the DPA spec as shown in the following examples:

    • velero.podConfig.nodeSelector object configuration:

      ...
      spec:
        configuration:
          velero:
            podConfig:
              nodeSelector:
                some-label.io/custom-node-role: backup-core
    • velero.loadAffinity object configuration:

      ...
      spec:
        configuration:
          velero:
            loadAffinity:
              - nodeSelector:
                  matchLabels:
                    label.io/gpu: 'no'
                  matchExpressions:
                    - key: label.io/location
                      operator: In
                      values:
                        - US
                        - EU

5.8.1.13. Overriding the imagePullPolicy setting in the DPA

In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images.

In OADP 1.4.1 or later, the Operator first checks if each image has the sha256 or sha512 digest and sets the imagePullPolicy field accordingly:

  • If the image has the digest, the Operator sets imagePullPolicy to IfNotPresent.
  • If the image does not have the digest, the Operator sets imagePullPolicy to Always.
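This digest check can be sketched in shell; the image reference is a hypothetical example, not an actual OADP image:

```shell
image="registry.redhat.io/oadp/velero@sha256:0123abcd"  # hypothetical image reference

case "$image" in
  *@sha256:*|*@sha512:*) policy="IfNotPresent" ;;  # pinned by digest: content cannot change
  *)                     policy="Always" ;;        # tag-based reference: re-pull each time
esac
echo "$policy"
```

Because a digest-pinned image can never change its content, pulling it again is unnecessary, which is why IfNotPresent is safe in that case.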

You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA).

Prerequisites

  • You have installed the OADP Operator.

Procedure

  • Configure the spec.imagePullPolicy field in the DPA as shown in the following example:

    Example Data Protection Application

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: test-dpa
      namespace: openshift-adp
    spec:
      backupLocations:
        - name: default
          velero:
            config:
              insecureSkipTLSVerify: "true"
              profile: "default"
              region: <bucket_region>
              s3ForcePathStyle: "true"
              s3Url: <bucket_url>
            credential:
              key: cloud
              name: cloud-credentials
            default: true
            objectStorage:
              bucket: <bucket_name>
              prefix: velero
            provider: aws
      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
        velero:
          defaultPlugins:
            - openshift
            - aws
            - kubevirt
            - csi
      imagePullPolicy: Never

    where:

    imagePullPolicy
    Specifies the value for imagePullPolicy. In this example, the imagePullPolicy field is set to Never.

5.8.1.14. Configuring the DPA with more than one BSL

Configure the DataProtectionApplication (DPA) custom resource (CR) with multiple BackupStorageLocation (BSL) resources to store backups across different locations using provider-specific credentials. This provides backup distribution and location-specific restore capabilities.

For example, you have configured the following two BSLs:

  • Configured one BSL in the DPA and set it as the default BSL.
  • Created another BSL independently by using the BackupStorageLocation CR.

Because you have already set the BSL created through the DPA as the default, you cannot also set the independently created BSL as the default. At any given time, you can set only one BSL as the default BSL.

Prerequisites

  • You must install the OADP Operator.
  • You must create the secrets by using the credentials provided by the cloud provider.

Procedure

  1. Configure the DataProtectionApplication CR with more than one BackupStorageLocation CR. See the following example:

    Example DPA

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    #...
    backupLocations:
      - name: aws
        velero:
          provider: aws
          default: true
          objectStorage:
            bucket: <bucket_name>
            prefix: <prefix>
          config:
            region: <region_name>
            profile: "default"
          credential:
            key: cloud
            name: cloud-credentials
      - name: odf
        velero:
          provider: aws
          default: false
          objectStorage:
            bucket: <bucket_name>
            prefix: <prefix>
          config:
            profile: "default"
            region: <region_name>
            s3Url: <url>
            insecureSkipTLSVerify: "true"
            s3ForcePathStyle: "true"
          credential:
            key: cloud
            name: <custom_secret_name_odf>
    #...

    where:

    name: aws
    Specifies a name for the first BSL.
    default: true
    Indicates that this BSL is the default BSL. If a BSL is not set in the Backup CR, the default BSL is used. You can set only one BSL as the default.
    <bucket_name>
    Specifies the bucket name.
    <prefix>
    Specifies a prefix for Velero backups. For example, velero.
    <region_name>
    Specifies the AWS region for the bucket.
    cloud-credentials
    Specifies the name of the default Secret object that you created.
    name: odf
    Specifies a name for the second BSL.
    <url>
    Specifies the URL of the S3 endpoint.
    <custom_secret_name_odf>
    Specifies the correct name for the Secret. For example, custom_secret_name_odf. If you do not specify a Secret name, the default name is used.
  2. Specify the BSL to be used in the backup CR. See the following example.

    Example backup CR

    apiVersion: velero.io/v1
    kind: Backup
    # ...
    spec:
      includedNamespaces:
      - <namespace>
      storageLocation: <backup_storage_location>
      defaultVolumesToFsBackup: true

    where:

    <namespace>
    Specifies the namespace to back up.
    <backup_storage_location>
    Specifies the storage location.

5.8.1.15. Disabling the node agent in DataProtectionApplication

If you are not using Restic, Kopia, or the Data Mover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent, ensure that the OADP Operator is idle and not running any backups.

Procedure

  1. To disable the nodeAgent, set the enable flag to false. See the following example:

    Example DataProtectionApplication CR

    # ...
    configuration:
      nodeAgent:
        enable: false
        uploaderType: kopia
    # ...

    where:

    enable
    Specifies whether the node agent is enabled. In this example, the node agent is disabled.
  2. To enable the nodeAgent, set the enable flag to true. See the following example:

    Example DataProtectionApplication CR

    # ...
    configuration:
      nodeAgent:
        enable: true
        uploaderType: kopia
    # ...

    where:

    enable

    Enables the node agent.

    You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs".

5.9. Configuring OADP with Azure

5.9.1. Configuring the OpenShift API for Data Protection with Microsoft Azure

Configure the OpenShift API for Data Protection (OADP) with Microsoft Azure to back up and restore cluster resources by using Azure storage. This provides data protection capabilities for your OpenShift Container Platform clusters.

The OADP Operator installs Velero 1.16.

Note

Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.

You configure Azure for Velero, create a default Secret, and then install the Data Protection Application. For more details, see Installing the OADP Operator.

To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager in disconnected environments for details.

5.9.1.1. Configuring Microsoft Azure

Configure Microsoft Azure storage and service principal credentials for backup storage with OADP. This provides the necessary authentication and storage infrastructure for data protection operations.

Prerequisites

Tools that use Azure services should always have restricted permissions to keep Azure resources secure. Therefore, instead of having applications sign in as a fully privileged user, Azure offers service principals. An Azure service principal is an identity created for use with applications, hosted services, and automated tools.

This identity is used to access resources. With a service principal, you can perform tasks such as the following:

  • Create a service principal
  • Sign in using a service principal and password
  • Sign in using a service principal and certificate
  • Manage service principal roles
  • Create an Azure resource using a service principal
  • Reset service principal credentials

For more details, see Create an Azure service principal with Azure CLI.

Procedure

  1. Log in to Azure:

    $ az login
  2. Set the AZURE_RESOURCE_GROUP variable:

    $ AZURE_RESOURCE_GROUP=Velero_Backups
  3. Create an Azure resource group:

    $ az group create -n $AZURE_RESOURCE_GROUP --location CentralUS

    where:

    CentralUS
    Specifies your location.
  4. Set the AZURE_STORAGE_ACCOUNT_ID variable:

    $ AZURE_STORAGE_ACCOUNT_ID="velero$(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')"
  5. Create an Azure storage account:

    $ az storage account create \
        --name $AZURE_STORAGE_ACCOUNT_ID \
        --resource-group $AZURE_RESOURCE_GROUP \
        --sku Standard_GRS \
        --encryption-services blob \
        --https-only true \
        --kind BlobStorage \
        --access-tier Hot
  6. Set the BLOB_CONTAINER variable:

    $ BLOB_CONTAINER=velero
  7. Create an Azure Blob storage container:

    $ az storage container create \
      -n $BLOB_CONTAINER \
      --public-access off \
      --account-name $AZURE_STORAGE_ACCOUNT_ID
  8. Create a service principal and credentials for velero:

    $ AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv`
      AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`
  9. Create a service principal with the Contributor role, assigning a specific --role and --scopes:

    $ AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" \
                                                    --role "Contributor" \
                                                    --query 'password' -o tsv \
                                                    --scopes /subscriptions/$AZURE_SUBSCRIPTION_ID/resourceGroups/$AZURE_RESOURCE_GROUP`

    The CLI generates a password for you. Ensure you capture the password.

  10. After creating the service principal, obtain the client ID:

    $ AZURE_CLIENT_ID=`az ad sp list --display-name "velero" --query '[0].appId' -o tsv`
    Note

    The client ID of a service principal is the same as the application ID of the application it was created for.

  11. Save the service principal credentials in the credentials-velero file:

    $ cat << EOF > ./credentials-velero
    AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
    AZURE_TENANT_ID=${AZURE_TENANT_ID}
    AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
    AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
    AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
    AZURE_CLOUD_NAME=AzurePublicCloud
    EOF

    You use the credentials-velero file to add Azure as a replication repository.

5.9.1.2. About backup and snapshot locations and their secrets

Review backup location, snapshot location, and secret configuration requirements for the DataProtectionApplication custom resource (CR). This helps you understand storage options and credential management for data protection operations.

5.9.1.2.1. Backup locations

You can specify one of the following AWS S3-compatible object storage solutions as a backup location:

  • Multicloud Object Gateway (MCG)
  • Red Hat Container Storage
  • Ceph RADOS Gateway; also known as Ceph Object Gateway
  • Red Hat OpenShift Data Foundation
  • MinIO

Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage.

5.9.1.2.2. Snapshot locations

If you use your cloud provider’s native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location.

If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver.

If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage.
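For example, a VolumeSnapshotClass CR that registers a CSI driver for Velero might look like the following sketch. The name and driver are assumptions for an Azure disk CSI setup; the velero.io/csi-volumesnapshot-class label marks the class for Velero to use:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass                        # hypothetical name
  labels:
    velero.io/csi-volumesnapshot-class: "true"   # tells Velero to use this class
driver: disk.csi.azure.com                       # assumption: Azure disk CSI driver
deletionPolicy: Retain
```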

5.9.1.2.3. Secrets

If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret.

If the backup and snapshot locations use different credentials, you create two Secret objects:

  • Custom Secret for the backup location, which you specify in the DataProtectionApplication CR.
  • Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR.
Important

The Data Protection Application requires a default Secret. Otherwise, the installation will fail.

If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file.
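As a sketch, a default Secret created from an empty credentials-velero file is equivalent to the following manifest; the name cloud-credentials-azure assumes the Azure provider described later in this chapter:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloud-credentials-azure   # default Secret name for the Azure provider
  namespace: openshift-adp
type: Opaque
stringData:
  cloud: ""                       # contents of the empty credentials-velero file
```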

5.9.1.3. About authenticating OADP with Azure

Review authentication methods for OADP with Azure to select the appropriate authentication approach for your security requirements.

You can authenticate OADP with Azure by using the following methods:

  • A Velero-specific service principal with secret-based authentication.
  • A Velero-specific storage account access key with secret-based authentication.
  • Azure Security Token Service.

5.9.1.4. Using a service principal or a storage account access key

You create a default Secret object and reference it in the backup storage location custom resource. The credentials file for the Secret object can contain information about the Azure service principal or a storage account access key.

The default name of the Secret is cloud-credentials-azure.

Note

The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.

If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.

Prerequisites

  • You have access to the OpenShift cluster as a user with cluster-admin privileges.
  • You have an Azure subscription with appropriate permissions.
  • You have installed OADP.
  • You have configured an object storage for storing the backups.

Procedure

  1. Create a credentials-velero file for the backup storage location in the appropriate format for your cloud provider.

    You can use one of the following two methods to authenticate OADP with Azure.

    • Use the service principal with secret-based authentication. See the following example:

      AZURE_SUBSCRIPTION_ID=<azure_subscription_id>
      AZURE_TENANT_ID=<azure_tenant_id>
      AZURE_CLIENT_ID=<azure_client_id>
      AZURE_CLIENT_SECRET=<azure_client_secret>
      AZURE_RESOURCE_GROUP=<azure_resource_group>
      AZURE_CLOUD_NAME=<azure_cloud_name>
    • Use a storage account access key. See the following example:

      AZURE_STORAGE_ACCOUNT_ACCESS_KEY=<azure_storage_account_access_key>
      AZURE_SUBSCRIPTION_ID=<azure_subscription_id>
      AZURE_RESOURCE_GROUP=<azure_resource_group>
      AZURE_CLOUD_NAME=<azure_cloud_name>
  2. Create a Secret custom resource (CR) with the default name:

    $ oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero
  3. Reference the Secret in the spec.backupLocations.velero.credential block of the DataProtectionApplication CR when you install the Data Protection Application as shown in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
      namespace: openshift-adp
    spec:
    ...
      backupLocations:
        - velero:
            config:
              resourceGroup: <azure_resource_group>
              storageAccount: <azure_storage_account_id>
              subscriptionId: <azure_subscription_id>
            credential:
              key: cloud
              name: <custom_secret>
            provider: azure
            default: true
            objectStorage:
              bucket: <bucket_name>
              prefix: <prefix>
      snapshotLocations:
        - velero:
            config:
              resourceGroup: <azure_resource_group>
              subscriptionId: <azure_subscription_id>
              incremental: "true"
            provider: azure

    where:

    <custom_secret>
    Specifies the backup location Secret with a custom name.

5.9.1.5. Using OADP with Azure Security Token Service authentication

You can use Microsoft Entra Workload ID to access Azure storage for OADP backup and restore operations. This approach uses the signed Kubernetes service account tokens of the OpenShift cluster. These tokens are automatically rotated every hour and exchanged for Azure Active Directory (AD) access tokens, eliminating the need for long-term client secrets.

To use the Azure Security Token Service (STS) configuration, you need the credentialsMode field set to Manual during cluster installation. This approach uses the Cloud Credential Operator (ccoctl) to set up the workload identity infrastructure, including the OpenID Connect (OIDC) provider, issuer configuration, and user-assigned managed identities.

Note

OADP with the Azure STS configuration does not support Restic File System Backup (FSB) and restore operations.

Prerequisites

  • You have an OpenShift cluster installed on Microsoft Azure with Microsoft Entra Workload ID configured. For more details, see Configuring an Azure cluster to use short-term credentials.
  • You have the Azure CLI (az) installed and configured.
  • You have access to the OpenShift cluster as a user with cluster-admin privileges.
  • You have an Azure subscription with appropriate permissions.
Note

If your OpenShift cluster was not originally installed with Microsoft Entra Workload ID, you can enable short-term credentials after installation. This post-installation configuration is supported specifically for Azure clusters.

Procedure

  1. If your cluster was installed with long-term credentials, you can switch to Microsoft Entra Workload ID authentication after installation. For more details, see Enabling Microsoft Entra Workload ID on an existing cluster.

    Important

    After enabling Microsoft Entra Workload ID on an existing Azure cluster, you must update all cluster components that use cloud credentials, including OADP, to use the new authentication method.

  2. Set the environment variables for your Azure STS configuration as shown in the following example:

    export API_URL=$(oc whoami --show-server) # Get cluster information
    export CLUSTER_NAME=$(echo "$API_URL" | sed 's|https://api\.||' | sed 's|\..*||')
    export CLUSTER_RESOURCE_GROUP="${CLUSTER_NAME}-rg"
    
    # Get Azure information
    export AZURE_SUBSCRIPTION_ID=$(az account show --query id -o tsv)
    export AZURE_TENANT_ID=$(az account show --query tenantId -o tsv)
    
    # Set names for resources
    export IDENTITY_NAME="velero"
    export APP_NAME="velero-${CLUSTER_NAME}"
    export STORAGE_ACCOUNT_NAME=$(echo "velero${CLUSTER_NAME}" | tr -d '-' | tr '[:upper:]' '[:lower:]' | cut -c1-24)
    export CONTAINER_NAME="velero"
  3. Create an Azure Managed Identity for OADP as shown in the following example:

    # Create managed identity
    az identity create \
        --subscription "$AZURE_SUBSCRIPTION_ID" \
        --resource-group "$CLUSTER_RESOURCE_GROUP" \
        --name "$IDENTITY_NAME"
    
    # Get identity details
    export IDENTITY_CLIENT_ID=$(az identity show -g "$CLUSTER_RESOURCE_GROUP" -n "$IDENTITY_NAME" --query clientId -o tsv)
    export IDENTITY_PRINCIPAL_ID=$(az identity show -g "$CLUSTER_RESOURCE_GROUP" -n "$IDENTITY_NAME" --query principalId -o tsv)
  4. Grant the required Azure roles to the managed identity as shown in the following example:

    export SUBSCRIPTION_ID=$(az account show --query id -o tsv) # Get subscription ID for role assignments
    
    # Required roles for OADP operations
    REQUIRED_ROLES=(
        "Contributor"
        "Storage Blob Data Contributor"
        "Disk Snapshot Contributor"
    )
    
    for role in "${REQUIRED_ROLES[@]}"; do
        echo "Assigning role: $role"
        az role assignment create \
            --assignee "$IDENTITY_PRINCIPAL_ID" \
            --role "$role" \
            --scope "/subscriptions/$SUBSCRIPTION_ID"
    done
  5. Create an Azure storage account and a container as shown in the following example:

    # Create storage account
    az storage account create \
        --name "$STORAGE_ACCOUNT_NAME" \
        --resource-group "$CLUSTER_RESOURCE_GROUP" \
        --location "$(az group show -n $CLUSTER_RESOURCE_GROUP --query location -o tsv)" \
        --sku Standard_LRS \
        --kind StorageV2
  6. Get the OIDC issuer URL from your OpenShift cluster as shown in the following example:

    export SERVICE_ACCOUNT_ISSUER=$(oc get authentication.config.openshift.io cluster -o json | jq -r .spec.serviceAccountIssuer)
    echo "OIDC Issuer: $SERVICE_ACCOUNT_ISSUER"
  7. Configure Microsoft Entra Workload ID Federation as shown in the following example:

    # Create federated identity credential for Velero service account
    az identity federated-credential create \
        --name "velero-federated-credential" \
        --identity-name "$IDENTITY_NAME" \
        --resource-group "$CLUSTER_RESOURCE_GROUP" \
        --issuer "$SERVICE_ACCOUNT_ISSUER" \
        --subject "system:serviceaccount:openshift-adp:velero" \
        --audiences "openshift"
    
    # Create federated identity credential for OADP controller manager
    az identity federated-credential create \
        --name "oadp-controller-federated-credential" \
        --identity-name "$IDENTITY_NAME" \
        --resource-group "$CLUSTER_RESOURCE_GROUP" \
        --issuer "$SERVICE_ACCOUNT_ISSUER" \
        --subject "system:serviceaccount:openshift-adp:openshift-adp-controller-manager" \
        --audiences "openshift"
  8. Create the OADP namespace if it does not already exist by running the following command:

    oc create namespace openshift-adp
  9. To use the CloudStorage CR to create an Azure cloud storage resource, run the following command:

    cat <<EOF | oc apply -f -
    apiVersion: oadp.openshift.io/v1alpha1
    kind: CloudStorage
    metadata:
      name: azure-backup-storage
      namespace: openshift-adp
    spec:
      name: ${CONTAINER_NAME}
      provider: azure
      creationSecret:
        name: cloud-credentials-azure
        key: azurekey
      config:
        storageAccount: ${STORAGE_ACCOUNT_NAME}
    EOF
  10. Create the DataProtectionApplication (DPA) custom resource (CR) and configure the Azure STS details as shown in the following example:

    cat <<EOF | oc apply -f -
    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: dpa-azure-workload-id-cloudstorage
      namespace: openshift-adp
    spec:
      backupLocations:
      - bucket:
          cloudStorageRef:
            name: <cloud_storage_cr>
          config:
            storageAccount: <storage_account_name>
            useAAD: "true"
          credential:
            key: azurekey
            name: cloud-credentials-azure
          default: true
          prefix: velero
        name: default
      configuration:
        velero:
          defaultPlugins:
          - azure
          - openshift
          - csi
          disableFsBackup: false
      logFormat: text
      snapshotLocations:
      - name: default
        velero:
          config:
            resourceGroup: <resource_group>
            subscriptionId: <subscription_ID>
          credential:
            key: azurekey
            name: cloud-credentials-azure
          provider: azure
    EOF

    where:

    <cloud_storage_cr>
    Specifies the CloudStorage CR name.
    <storage_account_name>
    Specifies the Azure storage account name.
    <resource_group>
    Specifies the resource group.
    <subscription_ID>
    Specifies the subscription ID.
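
Two string conventions from the procedure above can be checked offline: the storage account name derivation (Azure storage account names allow only 3-24 lowercase alphanumeric characters) and the federated-credential subject format (system:serviceaccount:<namespace>:<service_account>). The values below are hypothetical samples, not output from a real cluster.

```shell
# Hypothetical sample API URL instead of the output of `oc whoami --show-server`
API_URL="https://api.my-cluster.eastus.aroapp.io:6443"
CLUSTER_NAME=$(echo "$API_URL" | sed 's|https://api\.||' | sed 's|\..*||')

# Azure storage account names allow only lowercase alphanumerics,
# 3-24 characters, so hyphens are stripped and the name is truncated.
STORAGE_ACCOUNT_NAME=$(echo "velero${CLUSTER_NAME}" | tr -d '-' | tr '[:upper:]' '[:lower:]' | cut -c1-24)

# The federated-credential --subject must exactly match the in-cluster
# service account: system:serviceaccount:<namespace>:<service_account>
make_subject() {
  printf 'system:serviceaccount:%s:%s' "$1" "$2"
}

echo "$CLUSTER_NAME"           # my-cluster
echo "$STORAGE_ACCOUNT_NAME"   # veleromycluster
make_subject openshift-adp velero
# system:serviceaccount:openshift-adp:velero
```

If the subject string does not match the service account name and namespace exactly, the token exchange with Azure AD fails.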

Verification

  1. Verify that the OADP operator pods are running:

    $ oc get pods -n openshift-adp
  2. Verify the Azure role assignments:

    az role assignment list --assignee ${IDENTITY_PRINCIPAL_ID} --all --query "[].roleDefinitionName" -o tsv
  3. Verify Microsoft Entra Workload ID authentication:

    $ VELERO_POD=$(oc get pods -n openshift-adp -l app.kubernetes.io/name=velero -o jsonpath='{.items[0].metadata.name}') # Check Velero pod environment variables
    
    # Check AZURE_CLIENT_ID environment variable
    $ oc get pod ${VELERO_POD} -n openshift-adp -o jsonpath='{.spec.containers[0].env[?(@.name=="AZURE_CLIENT_ID")]}'
    
    # Check AZURE_FEDERATED_TOKEN_FILE environment variable
    $ oc get pod ${VELERO_POD} -n openshift-adp -o jsonpath='{.spec.containers[0].env[?(@.name=="AZURE_FEDERATED_TOKEN_FILE")]}'
  4. Create a backup of an application and verify the backup is stored successfully in Azure storage.

5.9.1.6. Setting Velero CPU and memory resource allocations

You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.

Prerequisites

  • You must have the OpenShift API for Data Protection (OADP) Operator installed.

Procedure

  • Edit the values in the spec.configuration.velero.podConfig.ResourceAllocations block of the DataProtectionApplication CR manifest, as in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
    spec:
    # ...
      configuration:
        velero:
          podConfig:
            nodeSelector: <node_selector>
            resourceAllocations:
              limits:
                cpu: "1"
                memory: 1024Mi
              requests:
                cpu: 200m
                memory: 256Mi

    where:

    nodeSelector
    Specifies the node selector to be supplied to Velero podSpec.
    resourceAllocations

    Specifies the resource allocations listed for average usage.

    Note

    Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and Kopia is your only option for Data Mover cases with the built-in Data Mover.

    Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.

Use the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node.
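
As a sanity check when tuning these values, requests must not exceed limits. The helper below is a hypothetical sketch, not part of OADP, that normalizes Kubernetes CPU quantities to millicores so the request and limit from the example above can be compared:

```shell
# Hypothetical helper: normalize a Kubernetes CPU quantity to millicores.
to_millicores() {
  case "$1" in
    *m) echo "${1%m}" ;;            # already millicores, e.g. "200m" -> 200
    *)  echo "$(( $1 * 1000 ))" ;;  # whole cores, e.g. "1" -> 1000
  esac
}

req=$(to_millicores "200m")   # requests.cpu from the example above
lim=$(to_millicores "1")      # limits.cpu from the example above
[ "$req" -le "$lim" ] && echo "cpu requests fit within limits"
# cpu requests fit within limits
```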

5.9.1.7. Enabling self-signed CA certificates

You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.

Prerequisites

  • You must have the OpenShift API for Data Protection (OADP) Operator installed.

Procedure

  • Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
    spec:
    # ...
      backupLocations:
        - name: default
          velero:
            provider: aws
            default: true
            objectStorage:
              bucket: <bucket>
              prefix: <prefix>
              caCert: <base64_encoded_cert_string>
            config:
              insecureSkipTLSVerify: "false"
    # ...

    where:

    caCert
    Specifies the Base64-encoded CA certificate string.
    insecureSkipTLSVerify
    Specifies the insecureSkipTLSVerify configuration. The configuration can be set to either "true" or "false". If set to "true", SSL/TLS security is disabled. If set to "false", SSL/TLS security is enabled.
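
The caCert value is the PEM bundle encoded with Base64, so the string you paste into the manifest must decode back to the original certificate. A minimal offline sketch with a placeholder PEM:

```shell
# Placeholder PEM body -- substitute your real CA bundle.
PEM='-----BEGIN CERTIFICATE-----
MIIB(placeholder)
-----END CERTIFICATE-----'

# Encode for the spec.backupLocations.velero.objectStorage.caCert field.
CA_CERT=$(printf '%s' "$PEM" | base64 -w0)

# Decoding must reproduce the original bundle exactly.
DECODED=$(printf '%s' "$CA_CERT" | base64 -d)
[ "$DECODED" = "$PEM" ] && echo "round-trip ok"
```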

5.9.1.8. Creating an alias for the Velero CLI

You might want to use the Velero CLI without installing it locally on your system by creating an alias for it.

Prerequisites

  • You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role.
  • You must have the OpenShift CLI (oc) installed.

Procedure

    1. To use an aliased Velero command, run the following command:

      $ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'
    2. Check that the alias is working by running the following command:

      $ velero version
      Client:
      	Version: v1.12.1-OADP
      	Git commit: -
      Server:
      	Version: v1.12.1-OADP
    3. To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands:

      $ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')
      $ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"
      $ velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt
    4. To fetch the backup logs, run the following command:

      $ velero backup logs <backup_name> --cacert /tmp/<your_cacert>.txt

      You can use these logs to view failures and warnings for the resources that you cannot back up.

    5. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears, and you must re-create it by re-running the commands from the previous step.
    6. You can check if the /tmp/your-cacert.txt file still exists, in the file location where you stored it, by running the following command:

      $ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt"
      /tmp/your-cacert.txt

      In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.

5.9.1.9. Installing the Data Protection Application

You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API.

Prerequisites

  • You must install the OADP Operator.
  • You must configure object storage as a backup location.
  • If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots.
  • If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials-azure.
  • If the backup and snapshot locations use different credentials, you must create two Secrets:

    • Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR.
    • Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR.
    Note

    If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret, the installation will fail.

Procedure

  1. Click Operators → Installed Operators and select the OADP Operator.
  2. Under Provided APIs, click Create instance in the DataProtectionApplication box.
  3. Click YAML View and update the parameters of the DataProtectionApplication manifest:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
      namespace: openshift-adp
    spec:
      configuration:
        velero:
          defaultPlugins:
            - azure
            - openshift
          resourceTimeout: 10m
        nodeAgent:
          enable: true
          uploaderType: kopia
          podConfig:
            nodeSelector: <node_selector>
      backupLocations:
        - velero:
            config:
              resourceGroup: <azure_resource_group>
              storageAccount: <azure_storage_account_id>
              subscriptionId: <azure_subscription_id>
            credential:
              key: cloud
              name: cloud-credentials-azure
            provider: azure
            default: true
            objectStorage:
              bucket: <bucket_name>
              prefix: <prefix>
      snapshotLocations:
        - velero:
            config:
              resourceGroup: <azure_resource_group>
              subscriptionId: <azure_subscription_id>
              incremental: "true"
            name: default
            provider: azure
            credential:
              key: cloud
              name: cloud-credentials-azure

    where:

    namespace
    Specifies the default namespace for OADP, which is openshift-adp. The namespace is a variable and is configurable.
    openshift
    Specifies that the openshift plugin is mandatory.
    resourceTimeout
    Specifies how many minutes to wait for several Velero resources such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability, before timeout occurs. The default is 10m.
    nodeAgent
    Specifies the administrative agent that routes the administrative requests to servers.
    enable
    Set this value to true if you want to enable nodeAgent and perform File System Backup.
    uploaderType
    Specifies the uploader type. Enter kopia or restic as your uploader. You cannot change the selection after the installation. For the Built-in DataMover you must use Kopia. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each working node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR.
    nodeSelector
    Specifies the nodes on which Kopia or Restic are available. By default, Kopia or Restic run on all nodes.
    resourceGroup
    Specifies the Azure resource group.
    storageAccount
    Specifies the Azure storage account ID.
    subscriptionId
    Specifies the Azure subscription ID.
    name
    Specifies the name of the Secret object. If you do not specify this value, the default name, cloud-credentials-azure, is used. If you specify a custom name, the custom name is used for the backup location.
    bucket
    Specifies a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix.
    prefix
    Specifies a prefix for Velero backups, for example, velero, if the bucket is used for multiple purposes.
    snapshotLocations
    Specifies the snapshot location. You do not need to specify a snapshot location if you use CSI snapshots or Restic to back up PVs.
    name
    Specifies the name of the Secret object that you created. If you do not specify this value, the default name, cloud-credentials-azure, is used. If you specify a custom name, the custom name is used for the backup location.
  4. Click Create.

Verification

  1. Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command:

    $ oc get all -n openshift-adp
    NAME                                                     READY   STATUS    RESTARTS   AGE
    pod/oadp-operator-controller-manager-67d9494d47-6l8z8    2/2     Running   0          2m8s
    pod/node-agent-9cq4q                                     1/1     Running   0          94s
    pod/node-agent-m4lts                                     1/1     Running   0          94s
    pod/node-agent-pv4kr                                     1/1     Running   0          95s
    pod/velero-588db7f655-n842v                              1/1     Running   0          95s
    
    NAME                                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    service/oadp-operator-controller-manager-metrics-service   ClusterIP   172.30.70.140    <none>        8443/TCP   2m8s
    service/openshift-adp-velero-metrics-svc                   ClusterIP   172.30.10.0      <none>        8085/TCP   8h
    
    NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/node-agent    3         3         3       3            3           <none>          96s
    
    NAME                                                READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/oadp-operator-controller-manager    1/1     1            1           2m9s
    deployment.apps/velero                              1/1     1            1           96s
    
    NAME                                                           DESIRED   CURRENT   READY   AGE
    replicaset.apps/oadp-operator-controller-manager-67d9494d47    1         1         1       2m9s
    replicaset.apps/velero-588db7f655                              1         1         1       96s
  2. Verify that the DataProtectionApplication (DPA) is reconciled by running the following command:

    $ oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'
    {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]}
  3. Verify the type is set to Reconciled.
  4. Verify the backup storage location and confirm that the PHASE is Available by running the following command:

    $ oc get backupstoragelocations.velero.io -n openshift-adp
    NAME           PHASE       LAST VALIDATED   AGE     DEFAULT
    dpa-sample-1   Available   1s               3d16h   true

5.9.1.10. Configuring the DPA with client burst and QPS settings

The burst setting determines how many requests can be sent to the Velero server before the limit is applied. After the burst limit is reached, the queries per second (QPS) setting determines how many additional requests can be sent per second.

You can set the burst and QPS values of the Velero server by configuring the Data Protection Application (DPA) with the burst and QPS values. Use the dpa.configuration.velero.client-burst and dpa.configuration.velero.client-qps fields of the DPA to set the burst and QPS values.

Prerequisites

  • You have installed the OADP Operator.

Procedure

  • Configure the client-burst and the client-qps fields in the DPA as shown in the following example:

    Example Data Protection Application

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: test-dpa
      namespace: openshift-adp
    spec:
      backupLocations:
        - name: default
          velero:
            config:
              insecureSkipTLSVerify: "true"
              profile: "default"
              region: <bucket_region>
              s3ForcePathStyle: "true"
              s3Url: <bucket_url>
            credential:
              key: cloud
              name: cloud-credentials
            default: true
            objectStorage:
              bucket: <bucket_name>
              prefix: velero
            provider: aws
      configuration:
        nodeAgent:
          enable: true
          uploaderType: restic
        velero:
          client-burst: 500
          client-qps: 300
          defaultPlugins:
            - openshift
            - aws
            - kubevirt

    where:

    client-burst
    Specifies the client-burst value. In this example, the client-burst field is set to 500.
    client-qps
    Specifies the client-qps value. In this example, the client-qps field is set to 300.
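
Assuming the example values above, the interaction of burst and QPS can be sketched with simple arithmetic: a batch of requests is admitted immediately up to client-burst, and the remainder is throttled at client-qps per second. The batch size here is a hypothetical illustration.

```shell
burst=500   # client-burst from the example DPA
qps=300     # client-qps from the example DPA
total=800   # hypothetical batch of API requests

immediate=$(( total < burst ? total : burst ))
remaining=$(( total - immediate ))
seconds=$(( (remaining + qps - 1) / qps ))   # ceiling division

echo "${immediate} admitted immediately, ${remaining} throttled over ~${seconds}s"
# 500 admitted immediately, 300 throttled over ~1s
```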

5.9.1.11. Configuring node agents and node labels

The Data Protection Application (DPA) uses the nodeSelector field to select which nodes can run the node agent. The nodeSelector field is the recommended form of node selection constraint.

Procedure

  1. Run the node agent on any node that you choose by adding a custom label:

    $ oc label node/<node_name> node-role.kubernetes.io/nodeAgent=""
    Note

    Any label specified must match the labels on each node.

  2. Use the same custom label in the DPA.spec.configuration.nodeAgent.podConfig.nodeSelector field, which you used for labeling nodes:

    configuration:
      nodeAgent:
        enable: true
        podConfig:
          nodeSelector:
            node-role.kubernetes.io/nodeAgent: ""

    The following example is an anti-pattern of nodeSelector and does not work unless both labels, node-role.kubernetes.io/infra: "" and node-role.kubernetes.io/worker: "", are on the node:

        configuration:
          nodeAgent:
            enable: true
            podConfig:
              nodeSelector:
                node-role.kubernetes.io/infra: ""
                node-role.kubernetes.io/worker: ""
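
The anti-pattern fails because nodeSelector entries are ANDed: a node matches only if it carries every listed label. A hypothetical sketch of that matching rule:

```shell
# Labels on a hypothetical worker node (space-separated key=value pairs).
node_labels="node-role.kubernetes.io/worker="

# The anti-pattern selector requires BOTH labels.
selector="node-role.kubernetes.io/infra= node-role.kubernetes.io/worker="

matches=true
for want in $selector; do
  case " $node_labels " in
    *" $want "*) ;;        # this label is present on the node
    *) matches=false ;;    # any missing label fails the whole selector
  esac
done

echo "$matches"   # false: the node lacks the infra label
```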

5.9.1.12. Configuring node agent load affinity

You can schedule the node agent pods on specific nodes by using the spec.configuration.nodeAgent.podConfig.nodeSelector object of the DataProtectionApplication (DPA) custom resource (CR).

See the following example in which you can schedule the node agent pods on nodes with the label label.io/role: cpu-1 and other-label.io/other-role: cpu-2.

...
spec:
  configuration:
    nodeAgent:
      enable: true
      uploaderType: kopia
      podConfig:
        nodeSelector:
          label.io/role: cpu-1
          other-label.io/other-role: cpu-2
        ...

You can add more restrictions on the scheduling of the node agent pods by using the nodeAgent.loadAffinity object in the DPA spec.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.
  • You have installed the OADP Operator.
  • You have configured the DPA CR.

Procedure

  • Configure the DPA spec nodeAgent.loadAffinity object as shown in the following example.

    In the example, the node agent pods are scheduled only on nodes that have the label label.io/role: cpu-1 and whose label.io/hostname label matches either node1 or node2.

    ...
    spec:
      configuration:
        nodeAgent:
          enable: true
          loadAffinity:
            - nodeSelector:
                matchLabels:
                  label.io/role: cpu-1
                matchExpressions:
                  - key: label.io/hostname
                    operator: In
                    values:
                      - node1
                      - node2
                      ...

    where:

    loadAffinity
    Specifies the loadAffinity object by adding the matchLabels and matchExpressions objects.
    matchExpressions
    Specifies the matchExpressions object to add restrictions on the node agent pods scheduling.

5.9.1.13. Node agent load affinity guidelines

Use the following guidelines to configure the node agent loadAffinity object in the DataProtectionApplication (DPA) custom resource (CR).

  • Use the spec.nodeAgent.podConfig.nodeSelector object for simple node matching.
  • Use the loadAffinity.nodeSelector object without the podConfig.nodeSelector object for more complex scenarios.
  • You can use both podConfig.nodeSelector and loadAffinity.nodeSelector objects, but the loadAffinity object must be equally or more restrictive than the podConfig object. In this scenario, the podConfig.nodeSelector labels must be a subset of the labels used in the loadAffinity.nodeSelector object.
  • You cannot use the matchExpressions and matchLabels fields if you have configured both podConfig.nodeSelector and loadAffinity.nodeSelector objects in the DPA.
  • See the following example to configure both podConfig.nodeSelector and loadAffinity.nodeSelector objects in the DPA.

    ...
    spec:
      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
          loadAffinity:
            - nodeSelector:
                matchLabels:
                  label.io/location: 'US'
                  label.io/gpu: 'no'
          podConfig:
            nodeSelector:
              label.io/gpu: 'no'
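
The subset rule above can be sketched as a label-set check; this is a hypothetical illustration of the constraint, not OADP code:

```shell
# Labels from the loadAffinity.nodeSelector.matchLabels example above.
affinity="label.io/location=US label.io/gpu=no"

# Labels from the podConfig.nodeSelector example above.
podconfig="label.io/gpu=no"

# podConfig labels must all appear among the loadAffinity labels.
subset=true
for kv in $podconfig; do
  case " $affinity " in
    *" $kv "*) ;;
    *) subset=false ;;
  esac
done

echo "$subset"   # true: the podConfig selector is a subset
```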

5.9.1.14. Configuring node agent load concurrency

You can control the maximum number of node agent operations that can run simultaneously on each node within your cluster.

You can configure it using one of the following fields of the Data Protection Application (DPA):

  • globalConfig: Defines a default concurrency limit for the node agent across all nodes.
  • perNodeConfig: Specifies different concurrency limits for specific nodes based on nodeSelector labels. This provides flexibility for environments where certain nodes might have different resource capacities or roles.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.

Procedure

  1. If you want to use load concurrency for specific nodes, add labels to those nodes:

    $ oc label node/<node_name> label.io/instance-type='large'
  2. Configure the load concurrency fields for your DPA instance:

      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
          loadConcurrency:
            globalConfig: 1
            perNodeConfig:
            - nodeSelector:
                matchLabels:
                  label.io/instance-type: large
              number: 3

    where:

    globalConfig
    Specifies the global concurrency limit. The default value is 1, which means no concurrency; only one load runs at a time. There is no upper limit on the globalConfig value.
    label.io/instance-type
    Specifies the label for per-node concurrency.
    number
    Specifies the per-node concurrent number. You can specify many per-node concurrent numbers, for example, based on the instance type and size. The range of per-node concurrent number is the same as the global concurrent number. If the configuration file contains a per-node concurrent number and a global concurrent number, the per-node concurrent number takes precedence.
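
The precedence rule can be sketched as follows; effective_concurrency is a hypothetical helper that shows how a matching perNodeConfig entry overrides globalConfig:

```shell
# Hypothetical helper: a node whose label matches a perNodeConfig
# selector gets the per-node number; otherwise the global default applies.
effective_concurrency() {
  node_label="$1"; global="$2"; per_node_label="$3"; per_node_number="$4"
  if [ "$node_label" = "$per_node_label" ]; then
    echo "$per_node_number"
  else
    echo "$global"
  fi
}

effective_concurrency "large" 1 "large" 3   # labeled node -> 3
effective_concurrency "small" 1 "large" 3   # unlabeled node -> 1
```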

5.9.1.15. Configuring the node agent as a non-root and non-privileged user

To enhance the node agent security, you can configure the OADP Operator node agent daemonset to run as a non-root and non-privileged user by using the spec.configuration.velero.disableFsBackup setting in the DataProtectionApplication (DPA) custom resource (CR).

When you set spec.configuration.velero.disableFsBackup to true, the node agent security context sets the root file system to read-only and sets the privileged flag to false.

Note

Setting spec.configuration.velero.disableFsBackup to true enhances the node agent security by removing the need for privileged containers and enforcing a read-only root file system.

However, it also disables File System Backup (FSB) with Kopia. If your workloads rely on FSB for backing up volumes that do not support native snapshots, then you should evaluate whether the disableFsBackup configuration fits your use case.

Prerequisites

  • You have installed the OADP Operator.

Procedure

  • Configure the disableFsBackup field in the DPA as shown in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: ts-dpa
      namespace: openshift-adp
    spec:
      backupLocations:
      - velero:
          credential:
            key: cloud
            name: cloud-credentials
          default: true
          objectStorage:
            bucket: <bucket_name>
            prefix: velero
          provider: gcp
      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
        velero:
          defaultPlugins:
          - csi
          - gcp
          - openshift
          disableFsBackup: true

    where:

    nodeAgent
    Specifies to enable the node agent in the DPA.
    disableFsBackup
    Specifies to set the disableFsBackup field to true.

Verification

  1. Verify that the node agent security context is set to run as non-root and the root file system is readOnly by running the following command:

    $ oc get daemonset node-agent -o yaml

    The example output is as follows:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      ...
      name: node-agent
      namespace: openshift-adp
      ...
    spec:
      ...
      template:
        metadata:
          ...
        spec:
          containers:
          ...
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
              privileged: false
              readOnlyRootFilesystem: true
            ...
          nodeSelector:
            kubernetes.io/os: linux
          os:
            name: linux
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext:
            runAsNonRoot: true
            seccompProfile:
              type: RuntimeDefault
          serviceAccount: velero
          serviceAccountName: velero
          ....

    where:

    allowPrivilegeEscalation
    Indicates that privilege escalation is not allowed.
    privileged
    Indicates that the container is not privileged.
    readOnlyRootFilesystem
    Indicates that the root file system is read-only.
    runAsNonRoot
    Indicates that the node agent runs as a non-root user.

5.9.1.16. Configuring repository maintenance

OADP repository maintenance is a background job, so you can configure it independently of the node agent pods. This means that you can schedule the repository maintenance pod on a node where the node agent is or is not running.

You can use the repository maintenance job affinity configurations in the DataProtectionApplication (DPA) custom resource (CR) only if you use Kopia as the backup repository.

You can configure load affinity at the global level, affecting all repositories, or for each repository individually. You can also use a combination of global and per-repository configuration.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.
  • You have installed the OADP Operator.
  • You have configured the DPA CR.

Procedure

  • Configure the loadAffinity object in the DPA spec by using one or both of the following methods:

    • Global configuration: Configure load affinity for all repositories as shown in the following example:

      ...
      spec:
        configuration:
          repositoryMaintenance:
            global:
              podResources:
                cpuRequest: "100m"
                cpuLimit: "200m"
                memoryRequest: "100Mi"
                memoryLimit: "200Mi"
              loadAffinity:
                - nodeSelector:
                    matchLabels:
                      label.io/gpu: 'no'
                    matchExpressions:
                      - key: label.io/location
                        operator: In
                        values:
                          - US
                          - EU

      where:

      repositoryMaintenance
      Configures the repository maintenance job.
      global
      Applies the load affinity configuration to all repositories.
    • Per-repository configuration: Configure load affinity per repository as shown in the following example:

      ...
      spec:
        configuration:
          repositoryMaintenance:
            myrepositoryname:
              loadAffinity:
                - nodeSelector:
                    matchLabels:
                      label.io/cpu: 'yes'

      where:

      myrepositoryname
      Specifies the load affinity configuration for the named repository.

5.9.1.17. Configuring Velero load affinity

Each OADP deployment has one Velero pod, whose main purpose is to run Velero workloads. To schedule the Velero pod, you can use the velero.podConfig.nodeSelector and the velero.loadAffinity objects in the DataProtectionApplication (DPA) custom resource (CR) spec.

Use the podConfig.nodeSelector object to assign the Velero pod to specific nodes. You can also configure the velero.loadAffinity object for pod-level affinity and anti-affinity.

The OpenShift scheduler applies the rules and performs the scheduling of the Velero pod deployment.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.
  • You have installed the OADP Operator.
  • You have configured the DPA CR.

Procedure

  • Configure the velero.podConfig.nodeSelector and the velero.loadAffinity objects in the DPA spec as shown in the following examples:

    • velero.podConfig.nodeSelector object configuration:

      ...
      spec:
        configuration:
          velero:
            podConfig:
              nodeSelector:
                some-label.io/custom-node-role: backup-core
    • velero.loadAffinity object configuration:

      ...
      spec:
        configuration:
          velero:
            loadAffinity:
              - nodeSelector:
                  matchLabels:
                    label.io/gpu: 'no'
                  matchExpressions:
                    - key: label.io/location
                      operator: In
                      values:
                        - US
                        - EU

5.9.1.18. Overriding the imagePullPolicy setting in the DPA

In OADP 1.4.0 or earlier, the Operator sets the imagePullPolicy field of the Velero and node agent pods to Always for all images.

In OADP 1.4.1 or later, the Operator first checks whether each image has a sha256 or sha512 digest and sets the imagePullPolicy field accordingly:

  • If the image has a digest, the Operator sets imagePullPolicy to IfNotPresent.
  • If the image does not have a digest, the Operator sets imagePullPolicy to Always.
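The digest check described above can be sketched as a small shell function. This is an illustration of the decision rule, not OADP code; decide_pull_policy is a hypothetical helper name:

```shell
# Sketch of the OADP 1.4.1+ decision rule (hypothetical helper, for illustration):
# an image reference pinned by digest contains "@sha256:" or "@sha512:".
decide_pull_policy() {
  case "$1" in
    *@sha256:*|*@sha512:*) echo "IfNotPresent" ;;
    *)                     echo "Always" ;;
  esac
}

decide_pull_policy "quay.io/example/velero@sha256:0123abcd"   # → IfNotPresent
decide_pull_policy "quay.io/example/velero:latest"            # → Always
```

Digest-pinned images are immutable, which is why pulling them again is unnecessary and IfNotPresent is safe.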

You can also override the imagePullPolicy field by using the spec.imagePullPolicy field in the Data Protection Application (DPA).

Prerequisites

  • You have installed the OADP Operator.

Procedure

  • Configure the spec.imagePullPolicy field in the DPA as shown in the following example:

    Example Data Protection Application

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: test-dpa
      namespace: openshift-adp
    spec:
      backupLocations:
        - name: default
          velero:
            config:
              insecureSkipTLSVerify: "true"
              profile: "default"
              region: <bucket_region>
              s3ForcePathStyle: "true"
              s3Url: <bucket_url>
            credential:
              key: cloud
              name: cloud-credentials
            default: true
            objectStorage:
              bucket: <bucket_name>
              prefix: velero
            provider: aws
      configuration:
        nodeAgent:
          enable: true
          uploaderType: kopia
        velero:
          defaultPlugins:
            - openshift
            - aws
            - kubevirt
            - csi
      imagePullPolicy: Never

    where:

    imagePullPolicy
    Specifies the value for imagePullPolicy. In this example, the imagePullPolicy field is set to Never.
5.9.1.18.1. Enabling CSI in the DataProtectionApplication CR

You enable the Container Storage Interface (CSI) in the DataProtectionApplication custom resource (CR) in order to back up persistent volumes with CSI snapshots.

Prerequisites

  • The cloud provider must support CSI snapshots.

Procedure

  • Edit the DataProtectionApplication CR, as in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    ...
    spec:
      configuration:
        velero:
          defaultPlugins:
          - openshift
          - csi

    where:

    csi
    Specifies the csi default plugin.
5.9.1.18.2. Disabling the node agent in DataProtectionApplication

If you are not using Restic, Kopia, or Data Mover for your backups, you can disable the nodeAgent field in the DataProtectionApplication custom resource (CR). Before you disable nodeAgent, ensure that the OADP Operator is idle and not running any backups.

Procedure

  1. To disable the nodeAgent, set the enable flag to false. See the following example:

    Example DataProtectionApplication CR

    # ...
    configuration:
      nodeAgent:
        enable: false
        uploaderType: kopia
    # ...

    where:

    enable
    Set to false to disable the node agent.
  2. To enable the nodeAgent, set the enable flag to true. See the following example:

    Example DataProtectionApplication CR

    # ...
    configuration:
      nodeAgent:
        enable: true
        uploaderType: kopia
    # ...

    where:

    enable
    Set to true to enable the node agent.

    You can set up a job to enable and disable the nodeAgent field in the DataProtectionApplication CR. For more information, see "Running tasks in pods using jobs".

5.10. Configuring OADP with Google Cloud

5.10.1. Configuring the OpenShift API for Data Protection with Google Cloud

You install the OpenShift API for Data Protection (OADP) with Google Cloud by installing the OADP Operator. The Operator installs Velero 1.16.

Note

Starting from OADP 1.0.4, all OADP 1.0.z versions can only be used as a dependency of the Migration Toolkit for Containers Operator and are not available as a standalone Operator.

You configure Google Cloud for Velero, create a default Secret, and then install the Data Protection Application. For more details, see Installing the OADP Operator.

To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager in disconnected environments for details.

5.10.1.1. Configuring Google Cloud

You configure Google Cloud for the OpenShift API for Data Protection (OADP).

Prerequisites

Procedure

  1. Log in to Google Cloud:

    $ gcloud auth login
  2. Set the BUCKET variable:

    $ BUCKET=<bucket>

    where:

    bucket
    Specifies the bucket name.
  3. Create the storage bucket:

    $ gsutil mb gs://$BUCKET/
  4. Set the PROJECT_ID variable to your active project:

    $ PROJECT_ID=$(gcloud config get-value project)
  5. Create a service account:

    $ gcloud iam service-accounts create velero \
        --display-name "Velero service account"
  6. List your service accounts:

    $ gcloud iam service-accounts list
  7. Set the SERVICE_ACCOUNT_EMAIL variable to the email address of the Velero service account:

    $ SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
        --filter="displayName:Velero service account" \
        --format 'value(email)')
  8. Define the permissions that give the velero service account the minimum necessary access:

    $ ROLE_PERMISSIONS=(
        compute.disks.get
        compute.disks.create
        compute.disks.createSnapshot
        compute.snapshots.get
        compute.snapshots.create
        compute.snapshots.useReadOnly
        compute.snapshots.delete
        compute.zones.get
        storage.objects.create
        storage.objects.delete
        storage.objects.get
        storage.objects.list
        iam.serviceAccounts.signBlob
    )
  9. Create the velero.server custom role:

    $ gcloud iam roles create velero.server \
        --project $PROJECT_ID \
        --title "Velero Server" \
        --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
  10. Add IAM policy binding to the project:

    $ gcloud projects add-iam-policy-binding $PROJECT_ID \
        --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
        --role projects/$PROJECT_ID/roles/velero.server
  11. Grant the service account the objectAdmin role on the storage bucket:

    $ gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
  12. Save the IAM service account keys to the credentials-velero file in the current directory:

    $ gcloud iam service-accounts keys create credentials-velero \
        --iam-account $SERVICE_ACCOUNT_EMAIL

    You use the credentials-velero file to create a Secret object for Google Cloud before you install the Data Protection Application.
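The --permissions flag in step 9 expects a single comma-separated string, which the subshell in that command builds by joining the ROLE_PERMISSIONS array. A standalone sketch of that join, with a shortened array for illustration:

```shell
# Setting IFS to "," inside a subshell makes "${ROLE_PERMISSIONS[*]}" expand
# to one comma-separated string without changing IFS in the calling shell.
ROLE_PERMISSIONS=(
    compute.disks.get
    compute.disks.create
    storage.objects.list
)

PERMS="$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
echo "$PERMS"   # → compute.disks.get,compute.disks.create,storage.objects.list
```

The same pattern works for any gcloud flag that takes a comma-separated list.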

5.10.1.2. About backup and snapshot locations and their secrets

Review backup location, snapshot location, and secret configuration requirements for the DataProtectionApplication custom resource (CR). This helps you understand storage options and credential management for data protection operations.

5.10.1.2.1. Backup locations

You can specify one of the following AWS S3-compatible object storage solutions as a backup location:

  • Multicloud Object Gateway (MCG)
  • Red Hat Container Storage
  • Ceph RADOS Gateway; also known as Ceph Object Gateway
  • Red Hat OpenShift Data Foundation
  • MinIO

Velero backs up OpenShift Container Platform resources, Kubernetes objects, and internal images as an archive file on object storage.

5.10.1.2.2. Snapshot locations

If you use your cloud provider’s native snapshot API to back up persistent volumes, you must specify the cloud provider as the snapshot location.

If you use Container Storage Interface (CSI) snapshots, you do not need to specify a snapshot location because you will create a VolumeSnapshotClass CR to register the CSI driver.

If you use File System Backup (FSB), you do not need to specify a snapshot location because FSB backs up the file system on object storage.

5.10.1.2.3. Secrets

If the backup and snapshot locations use the same credentials or if you do not require a snapshot location, you create a default Secret.

If the backup and snapshot locations use different credentials, you create two Secret objects:

  • Custom Secret for the backup location, which you specify in the DataProtectionApplication CR.
  • Default Secret for the snapshot location, which is not referenced in the DataProtectionApplication CR.
Important

The Data Protection Application requires a default Secret. Otherwise, the installation will fail.

If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file.

5.10.1.2.4. Creating a default Secret

You create a default Secret if your backup and snapshot locations use the same credentials or if you do not require a snapshot location.

The default name of the Secret is cloud-credentials-gcp.

Note

The DataProtectionApplication custom resource (CR) requires a default Secret. Otherwise, the installation will fail. If the name of the backup location Secret is not specified, the default name is used.

If you do not want to use the backup location credentials during the installation, you can create a Secret with the default name by using an empty credentials-velero file.
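A minimal sketch of the empty-file approach follows; the oc command requires a cluster and is shown as a comment for context only:

```shell
# Create an empty credentials-velero file (truncates the file if it exists).
# Assumption: you are skipping backup and snapshot locations at install time.
: > credentials-velero

# With a cluster available, you would then create the default Secret:
# oc create secret generic cloud-credentials-gcp -n openshift-adp \
#     --from-file cloud=credentials-velero

wc -c < credentials-velero   # → 0
```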

Prerequisites

  • Your object storage and cloud storage, if any, must use the same credentials.
  • You must configure object storage for Velero.

Procedure

  1. Create a credentials-velero file for the backup storage location in the appropriate format for your cloud provider.
  2. Create a Secret custom resource (CR) with the default name:

    $ oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero

    The Secret is referenced in the spec.backupLocations.credential block of the DataProtectionApplication CR when you install the Data Protection Application.

5.10.1.2.5. Creating secrets for different credentials

Create separate Secret objects when your backup and snapshot locations require different credentials. This allows you to configure distinct authentication for each storage location while maintaining secure credential management.

Procedure

  1. Create a credentials-velero file for the snapshot location in the appropriate format for your cloud provider.
  2. Create a Secret for the snapshot location with the default name:

    $ oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero
  3. Create a credentials-velero file for the backup location in the appropriate format for your object storage.
  4. Create a Secret for the backup location with a custom name:

    $ oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero
  5. Add the Secret with the custom name to the DataProtectionApplication CR, as in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
      namespace: openshift-adp
    spec:
    ...
      backupLocations:
        - velero:
            provider: gcp
            default: true
            credential:
              key: cloud
              name: <custom_secret>
            objectStorage:
              bucket: <bucket_name>
              prefix: <prefix>
      snapshotLocations:
        - velero:
            provider: gcp
            default: true
            config:
              project: <project>
              snapshotLocation: us-west1

    where:

    custom_secret
    Specifies the backup location Secret with the custom name.
5.10.1.2.6. Setting Velero CPU and memory resource allocations

You set the CPU and memory resource allocations for the Velero pod by editing the DataProtectionApplication custom resource (CR) manifest.

Prerequisites

  • You must have the OpenShift API for Data Protection (OADP) Operator installed.

Procedure

  • Edit the values in the spec.configuration.velero.podConfig.resourceAllocations block of the DataProtectionApplication CR manifest, as in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
    spec:
    # ...
      configuration:
        velero:
          podConfig:
            nodeSelector: <node_selector>
            resourceAllocations:
              limits:
                cpu: "1"
                memory: 1024Mi
              requests:
                cpu: 200m
                memory: 256Mi

    where:

    nodeSelector
    Specifies the node selector to be supplied to Velero podSpec.
    resourceAllocations
    Specifies the resource allocations listed for average usage.

    Note

    Kopia is an option in OADP 1.3 and later releases. You can use Kopia for file system backups, and it is the only option for Data Mover cases with the built-in Data Mover.

    Kopia is more resource intensive than Restic, and you might need to adjust the CPU and memory requirements accordingly.

Use the nodeSelector field to select which nodes can run the Velero pod. The nodeSelector field is the simplest recommended form of node selection constraint. Any label specified must match the labels on each node.

5.11. Enabling self-signed CA certificates

You must enable a self-signed CA certificate for object storage by editing the DataProtectionApplication custom resource (CR) manifest to prevent a certificate signed by unknown authority error.

Prerequisites

  • You must have the OpenShift API for Data Protection (OADP) Operator installed.

Procedure

  • Edit the spec.backupLocations.velero.objectStorage.caCert parameter and spec.backupLocations.velero.config parameters of the DataProtectionApplication CR manifest:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_sample>
    spec:
    # ...
      backupLocations:
        - name: default
          velero:
            provider: aws
            default: true
            objectStorage:
              bucket: <bucket>
              prefix: <prefix>
              caCert: <base64_encoded_cert_string>
            config:
              insecureSkipTLSVerify: "false"
    # ...

    where:

    caCert
    Specifies the Base64-encoded CA certificate string.
    insecureSkipTLSVerify
    Specifies the insecureSkipTLSVerify configuration. The configuration can be set to either "true" or "false". If set to "true", SSL/TLS security is disabled. If set to "false", SSL/TLS security is enabled.
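The caCert value is the Base64 encoding of the CA certificate file. A standalone sketch of producing that string follows; ca.pem here is a stand-in file created for illustration, so substitute the CA certificate of your object storage:

```shell
# Write a placeholder PEM file (illustration only; use your real CA certificate).
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'MIIB...' '-----END CERTIFICATE-----' > ca.pem

# Encode without line wrapping so the value fits on one YAML line.
# -w0 is GNU base64; on macOS, omit it (BSD base64 does not wrap by default).
CA_CERT_B64=$(base64 -w0 < ca.pem)

# Round-trip check: decoding recovers the original PEM content.
echo "$CA_CERT_B64" | base64 -d | head -n1   # → -----BEGIN CERTIFICATE-----
```

Paste the resulting string into spec.backupLocations.velero.objectStorage.caCert.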

5.11.1. Using the Velero CLI with an alias

You might want to use the Velero CLI without installing it locally on your system by creating an alias for it.

Prerequisites

  • You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role.
  • You must have the OpenShift CLI (oc) installed.

Procedure

    1. To use an aliased Velero command, run the following command:

      $ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'
    2. Check that the alias is working by running the following command:

      $ velero version
      Client:
      	Version: v1.12.1-OADP
      	Git commit: -
      Server:
      	Version: v1.12.1-OADP
    3. To use a CA certificate with this command, you can add a certificate to the Velero deployment by running the following commands:

      $ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')
      $ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"
      $ velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt
    4. To fetch the backup logs, run the following command:

      $ velero backup logs <backup_name> --cacert /tmp/<your_cacert>.txt

      You can use these logs to view failures and warnings for the resources that you cannot back up.

    5. If the Velero pod restarts, the /tmp/your-cacert.txt file disappears. Re-create the file by re-running the commands from the previous step.
    6. Check that the /tmp/your-cacert.txt file still exists by running the following command:

      $ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-cacert.txt"
      /tmp/your-cacert.txt

      In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.
