
3.2. Backups and Migration


3.2.1. Backing Up and Restoring the Red Hat Virtualization Manager

3.2.1.1. Backing up Red Hat Virtualization Manager - Overview

Use the engine-backup tool to take regular backups of the Red Hat Virtualization Manager. The tool backs up the engine database and configuration files into a single file and can be run without interrupting the ovirt-engine service.
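
For example, to take regular backups automatically, you can schedule the tool from root's crontab. The following entry is a minimal sketch; the date-stamped file names are illustrative, and % is escaped because this is a crontab:

0 2 * * * engine-backup --mode=backup --file=/var/lib/ovirt-engine-backup/engine-$(date +\%F).backup --log=/var/log/ovirt-engine-backup/engine-backup-$(date +\%F).log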

3.2.1.2. Syntax for the engine-backup Command

The engine-backup command works in one of two basic modes:

# engine-backup --mode=backup
# engine-backup --mode=restore

These two modes are further extended by a set of options that allow you to specify the scope of the backup and different credentials for the engine database. Run engine-backup --help for a full list of options and their function.

Basic Options

--mode
Specifies whether the command performs a backup operation or a restore operation. The available options are backup (the default), restore, and verify. You must define the --mode option explicitly for verify or restore operations.
--file
Specifies the path and name of a file (for example, file_name.backup) into which backups are saved in backup mode, and to be read as backup data in restore mode. The path is defined by default as /var/lib/ovirt-engine-backup/.
--log
Specifies the path and name of a file (for example, log_file_name) into which logs of the backup or restore operation are written. The path is defined by default as /var/log/ovirt-engine-backup/.
--scope

Specifies the scope of the backup or restore operation. There are six options: all, to back up or restore all databases and configuration data (set by default); files, to back up or restore only files on the system; db, to back up or restore only the Manager database; dwhdb, to back up or restore only the Data Warehouse database; cinderlibdb, to back up or restore only the Cinderlib database; and grafanadb, to back up or restore only the Grafana database.

The --scope option can be specified multiple times in the same engine-backup command.
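
For example, the following command (an illustrative sketch) backs up the configuration files together with both the Manager and Data Warehouse databases:

# engine-backup --scope=files --scope=db --scope=dwhdb --file=file_name --log=log_file_name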

Manager Database Options

The following options are only available when using the engine-backup command in restore mode. The option syntax below applies to restoring the Manager database. The same options exist for restoring the Data Warehouse database. See engine-backup --help for the Data Warehouse option syntax.

--provision-db
Creates a PostgreSQL database for the Manager database backup to be restored to. This is a required option when restoring a backup on a remote host or fresh installation that does not have a PostgreSQL database already configured. When this option is used in restore mode, the --restore-permissions option is added by default.
--provision-all-databases
Creates databases for all database dumps included in the archive. When enabled, this is the default behavior.
--change-db-credentials
Allows you to specify alternate credentials for restoring the Manager database using credentials other than those stored in the backup itself. See engine-backup --help for the additional parameters required by this option.
--restore-permissions or --no-restore-permissions

Restores or does not restore the permissions of database users. One of these options is required when restoring a backup. When the --provision-* option is used in restore mode, --restore-permissions is applied by default.

Note

If a backup contains grants for extra database users, restoring the backup with the --restore-permissions and --provision-db (or --provision-dwh-db) options creates the extra users with random passwords. You must change these passwords manually if the extra users require access to the restored system. See How to grant access to an extra database user after restoring Red Hat Virtualization from a backup.
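
For example, to set a known password for one of these extra users, you can connect to PostgreSQL on the database machine and change the password manually. This is a sketch; extra_user and new_password are placeholders:

# su - postgres -c 'psql'
postgres=# alter role extra_user encrypted password 'new_password';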

3.2.1.3. Creating a backup with the engine-backup command

You can back up the Red Hat Virtualization Manager with the engine-backup command while the Manager is active. Append one of the following values to the --scope option to specify what you want to back up:

all
A full backup of all databases and configuration files on the Manager. This is the default setting for the --scope option.
files
A backup of only the files on the system
db
A backup of only the Manager database
dwhdb
A backup of only the Data Warehouse database
cinderlibdb
A backup of only the Cinderlib database
grafanadb
A backup of only the Grafana database

You can specify the --scope option more than once.

You can also configure the engine-backup command to back up additional files. It restores everything that it backs up.

Important

To restore a database to a fresh installation of Red Hat Virtualization Manager, a database backup alone is not sufficient. The Manager also requires access to the configuration files. If you specify a scope other than all, you must also include --scope=files, or back up the file system.

For a complete explanation of the engine-backup command, enter engine-backup --help on the Manager machine.

Procedure

  1. Log in to the Manager machine.
  2. Create a backup:

    # engine-backup

The following settings are applied by default:

--scope=all

--mode=backup

The command generates the backup in /var/lib/ovirt-engine-backup/file_name.backup, and a log file in /var/log/ovirt-engine-backup/log_file_name.

Use the file_name.backup file to restore the environment.

The following examples demonstrate several different backup scenarios.

Example 3.1. Full backup

# engine-backup

Example 3.2. Manager database backup

# engine-backup --scope=files --scope=db

Example 3.3. Data Warehouse database backup

# engine-backup --scope=files --scope=dwhdb

Example 3.4. Adding specific files to the backup

  1. Make a directory to store configuration customizations for the engine-backup command:

    # mkdir -p /etc/ovirt-engine-backup/engine-backup-config.d
  2. Create a text file in the new directory named ntp-chrony.sh with the following contents:

    BACKUP_PATHS="${BACKUP_PATHS}
    /etc/chrony.conf
    /etc/ntp.conf
    /etc/ovirt-engine-backup"
  3. When you run the engine-backup command, use --scope=files. The backup and restore includes /etc/chrony.conf, /etc/ntp.conf, and /etc/ovirt-engine-backup.
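
Example 3.5. Verifying a backup

You can use the verify mode described earlier to check that a backup file is intact without restoring it. This is a sketch; file_name is a backup created with one of the commands above:

# engine-backup --mode=verify --file=file_name --log=log_file_name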

3.2.1.4. Restoring a Backup with the engine-backup Command

Restoring a backup with the engine-backup command involves more steps than creating one, depending on the restore destination. For example, the engine-backup command can restore backups to a fresh installation of Red Hat Virtualization, on top of an existing installation, and with local or remote databases.

Important

The version of the Red Hat Virtualization Manager (such as 4.4.8) used to restore a backup must be later than or equal to the Red Hat Virtualization Manager version (such as 4.4.7) used to create the backup. Starting with Red Hat Virtualization 4.4.7, this policy is strictly enforced by the engine-backup command. To view the version of Red Hat Virtualization contained in a backup file, unpack the backup file and read the value in the version file located in the root directory of the unpacked files.
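
For example, assuming the backup file is a standard tar archive (a sketch; the unpacking directory is illustrative):

# mkdir /tmp/backup-check
# tar -xf file_name -C /tmp/backup-check
# cat /tmp/backup-check/version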

3.2.1.5. Restoring a Backup to a Fresh Installation

The engine-backup command can be used to restore a backup to a fresh installation of the Red Hat Virtualization Manager. The following procedure must be performed on a machine on which the base operating system has been installed and the required packages for the Red Hat Virtualization Manager have been installed, but the engine-setup command has not yet been run. This procedure assumes that the backup file or files can be accessed from the machine on which the backup is to be restored.

Procedure

  1. Log in to the Manager machine. If you are restoring the engine database to a remote host, log in to that host and perform the relevant actions there. Likewise, if you are also restoring the Data Warehouse database to a remote host, log in to that host and perform the relevant actions there.
  2. Restore a complete backup or a database-only backup.

    • Restore a complete backup:

      # engine-backup --mode=restore --file=file_name --log=log_file_name --provision-db

      When the --provision-* option is used in restore mode, --restore-permissions is applied by default.

      If Data Warehouse is also being restored as part of the complete backup, provision the additional database:

      # engine-backup --mode=restore --file=file_name --log=log_file_name --provision-db --provision-dwh-db
    • Restore a database-only backup by restoring the configuration files and database backup:

      # engine-backup --mode=restore --scope=files --scope=db --file=file_name --log=log_file_name --provision-db

      The example above restores a backup of the Manager database.

      # engine-backup --mode=restore --scope=files --scope=dwhdb --file=file_name --log=log_file_name --provision-dwh-db

      The example above restores a backup of the Data Warehouse database.

      If successful, the following output displays:

      You should now run engine-setup.
      Done.
  3. Run the following command and follow the prompts to configure the restored Manager:

    # engine-setup

The Red Hat Virtualization Manager has been restored to the version preserved in the backup. To change the fully qualified domain name of the new Red Hat Virtualization system, see The oVirt Engine Rename Tool.

3.2.1.6. Restoring a Backup to Overwrite an Existing Installation

The engine-backup command can restore a backup to a machine on which the Red Hat Virtualization Manager has already been installed and set up. This is useful when you have taken a backup of an environment, performed changes on that environment, and then want to undo the changes by restoring the environment from the backup.

Changes made to the environment since the backup was taken, such as adding or removing a host, will not appear in the restored environment. You must redo these changes.

Procedure

  1. Log in to the Manager machine.
  2. Remove the configuration files and clean the database associated with the Manager:

    # engine-cleanup

    The engine-cleanup command only cleans the Manager database; it does not drop the database or delete the user that owns that database.

  3. Restore a full backup or a database-only backup. You do not need to create a new database or specify the database credentials because the user and database already exist.

    • Restore a full backup:

      # engine-backup --mode=restore --file=file_name --log=log_file_name --restore-permissions
    • Restore a database-only backup by restoring the configuration files and the database backup:

      # engine-backup --mode=restore --scope=files --scope=db --scope=dwhdb --file=file_name --log=log_file_name --restore-permissions
      Note

      To restore only the Manager database (for example, if the Data Warehouse database is located on another machine), you can omit the --scope=dwhdb parameter.

      If successful, the following output displays:

      You should now run engine-setup.
      Done.
  4. Reconfigure the Manager:

    # engine-setup

3.2.1.7. Restoring a Backup with Different Credentials

The engine-backup command can restore a backup to a machine on which the Red Hat Virtualization Manager has already been installed and set up, but where the credentials of the database in the backup are different from those of the database on the machine on which the backup is to be restored. This is useful when you have taken a backup of an installation and want to restore the installation from the backup to a different system.

Important

When restoring a backup to overwrite an existing installation, you must run the engine-cleanup command to clean up the existing installation before using the engine-backup command. The engine-cleanup command only cleans the engine database, and does not drop the database or delete the user that owns that database. So you do not need to create a new database or specify the database credentials. However, if the credentials for the owner of the engine database are not known, you must change them before you can restore the backup.

Procedure

  1. Log in to the Red Hat Virtualization Manager machine.
  2. Run the following command and follow the prompts to remove the Manager’s configuration files and to clean the Manager’s database:

    # engine-cleanup
  3. Change the password for the owner of the engine database if the credentials of that user are not known:

    1. Enter the postgresql command line:

      # su - postgres -c 'psql'
    2. Change the password of the user that owns the engine database:

      postgres=# alter role user_name encrypted password 'new_password';

      Repeat this for the user that owns the ovirt_engine_history database if necessary.

  4. Restore a complete backup or a database-only backup with the --change-db-credentials parameter to pass the credentials of the new database. The database_location for a database local to the Manager is localhost.

    Note

    The following examples use a --*password option for each database without specifying a password, which prompts for a password for each database. Alternatively, you can use --*passfile=password_file options for each database to securely pass the passwords to the engine-backup tool without the need for interactive prompts.
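
    For example (a sketch; the file path is illustrative, and the expected file format should be confirmed with engine-backup --help), you can store a password in a protected file and pass it with --db-passfile in place of --db-password:

    # echo 'new_password' > /root/engine-db-pass.txt
    # chmod 600 /root/engine-db-pass.txt
    # engine-backup --mode=restore --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-passfile=/root/engine-db-pass.txt --no-restore-permissions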

    • Restore a complete backup:

      # engine-backup --mode=restore --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-password --no-restore-permissions

      If Data Warehouse is also being restored as part of the complete backup, include the revised credentials for the additional database:

      # engine-backup --mode=restore --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-password --change-dwh-db-credentials --dwh-db-host=database_location --dwh-db-name=database_name --dwh-db-user=ovirt_engine_history --dwh-db-password --no-restore-permissions
    • Restore a database-only backup by restoring the configuration files and the database backup:

      # engine-backup --mode=restore --scope=files --scope=db --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-password --no-restore-permissions

      The example above restores a backup of the Manager database.

      # engine-backup --mode=restore --scope=files --scope=dwhdb --file=file_name --log=log_file_name --change-dwh-db-credentials --dwh-db-host=database_location --dwh-db-name=database_name --dwh-db-user=ovirt_engine_history --dwh-db-password --no-restore-permissions

      The example above restores a backup of the Data Warehouse database.

      If successful, the following output displays:

      You should now run engine-setup.
      Done.
  5. Run the following command and follow the prompts to reconfigure the firewall and ensure the ovirt-engine service is correctly configured:

    # engine-setup

3.2.1.8. Backing up and Restoring a Self-Hosted Engine

You can back up a self-hosted engine and restore it in a new self-hosted environment. Use this procedure for tasks such as migrating the environment to a new self-hosted engine storage domain with a different storage type.

When you specify a backup file during deployment, the backup is restored on a new Manager virtual machine, with a new self-hosted engine storage domain. The old Manager is removed, and the old self-hosted engine storage domain is renamed and can be manually removed after you confirm that the new environment is working correctly. Deploying on a fresh host is highly recommended; if the host used for deployment existed in the backed up environment, it will be removed from the restored database to avoid conflicts in the new environment. If you deploy on a new host, you must assign a unique name to the host. Reusing the name of an existing host included in the backup can cause conflicts in the new environment.

The backup and restore operation involves the following key actions:

  1. Back up the original Manager.
  2. Restore the backup on a new self-hosted engine.
  3. Enable the Red Hat Virtualization Manager repositories on the new Manager virtual machine.
  4. Reinstall the self-hosted engine nodes.
  5. Remove the old self-hosted engine storage domain.

This procedure assumes that you have access and can make changes to the original Manager.

Prerequisites

  • A fully qualified domain name prepared for your Manager and the host. Forward and reverse lookup records must both be set in the DNS. The new Manager must have the same fully qualified domain name as the original Manager.
  • The original Manager must be updated to the latest minor version. The version of the Red Hat Virtualization Manager (such as 4.4.8) used to restore a backup must be later than or equal to the Red Hat Virtualization Manager version (such as 4.4.7) used to create the backup. Starting with Red Hat Virtualization 4.4.7, this policy is strictly enforced by the engine-backup command. See Updating the Red Hat Virtualization Manager in the Upgrade Guide.

    Note

    If you need to restore a backup but do not have a new appliance, the restore process pauses so that you can log in to the temporary Manager machine over SSH, register, subscribe, or configure channels as needed, and upgrade the Manager packages before resuming the restore process.

  • The data center compatibility level must be set to the latest version to ensure compatibility with the updated storage version.
  • There must be at least one regular host in the environment. This host (and any other regular hosts) will remain active to host the SPM role and any running virtual machines. If a regular host is not already the SPM, move the SPM role before creating the backup by selecting a regular host and clicking Management → Select as SPM.

    If no regular hosts are available, there are two ways to add one: either remove the self-hosted engine configuration from one of the self-hosted engine nodes (without removing the node from the environment), or add a new regular host.

3.2.1.8.1. Backing up the Original Manager

Back up the original Manager using the engine-backup command, and copy the backup file to a separate location so that it can be accessed at any point during the process.

For more information about engine-backup --mode=backup options, see Backing Up and Restoring the Red Hat Virtualization Manager in the Administration Guide.

Procedure

  1. Log in to one of the self-hosted engine nodes and move the environment to global maintenance mode:

    # hosted-engine --set-maintenance --mode=global
  2. Log in to the original Manager and stop the ovirt-engine service:

    # systemctl stop ovirt-engine
    # systemctl disable ovirt-engine
    Note

    Though stopping the original Manager from running is not obligatory, it is recommended as it ensures no changes are made to the environment after the backup is created. Additionally, it prevents the original Manager and the new Manager from simultaneously managing existing resources.

  3. Run the engine-backup command, specifying the name of the backup file to create, and the name of the log file to create to store the backup log:

    # engine-backup --mode=backup --file=file_name --log=log_file_name
  4. Copy the files to an external server. In the following example, storage.example.com is the fully qualified domain name of a network storage server that will store the backup until it is needed, and /backup/ is any designated folder or path.

    # scp -p file_name log_file_name storage.example.com:/backup/
  5. If you do not require the Manager machine for other purposes, unregister it from Red Hat Subscription Manager:

    # subscription-manager unregister
  6. Log in to one of the self-hosted engine nodes and shut down the original Manager virtual machine:

    # hosted-engine --vm-shutdown

After backing up the Manager, deploy a new self-hosted engine and restore the backup on the new virtual machine.

3.2.1.8.2. Restoring the Backup on a New Self-Hosted Engine

Run the hosted-engine script on a new host, and use the --restore-from-file=path/to/file_name option to restore the Manager backup during the deployment.

Important

If you are using iSCSI storage, and your iSCSI target filters connections according to the initiator’s ACL, the deployment may fail with a STORAGE_DOMAIN_UNREACHABLE error. To prevent this, you must update your iSCSI configuration before beginning the self-hosted engine deployment:

  • If you are redeploying on an existing host, you must update the host’s iSCSI initiator settings in /etc/iscsi/initiatorname.iscsi. The initiator IQN must be the same as was previously mapped on the iSCSI target, or updated to a new IQN, if applicable.
  • If you are deploying on a fresh host, you must update the iSCSI target configuration to accept connections from that host.

Note that the IQN can be updated on the host side (iSCSI initiator), or on the storage side (iSCSI target).
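
For example, to check and update the initiator IQN on the host side (a sketch; the IQN value shown is illustrative):

# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:example-host

Edit the InitiatorName value to match the IQN mapped on the iSCSI target, then restart the iscsid service so the change takes effect:

# systemctl restart iscsid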

Procedure

  1. Copy the backup file to the new host. In the following example, host.example.com is the FQDN for the host, and /backup/ is any designated folder or path.

    # scp -p file_name host.example.com:/backup/
  2. Log in to the new host.
  3. If you are deploying on Red Hat Virtualization Host, ovirt-hosted-engine-setup is already installed, so skip this step. If you are deploying on Red Hat Enterprise Linux, install the ovirt-hosted-engine-setup package:

    # dnf install ovirt-hosted-engine-setup
  4. Use the tmux window manager to run the script to avoid losing the session in case of network or terminal disruption.

    Install and run tmux:

    # dnf -y install tmux
    # tmux
  5. Run the hosted-engine script, specifying the path to the backup file:

    # hosted-engine --deploy --restore-from-file=backup/file_name

    To escape the script at any time, use CTRL+D to abort deployment.

  6. Select Yes to begin the deployment.
  7. Configure the network. The script detects possible NICs to use as a management bridge for the environment.
  8. If you want to use a custom appliance for the virtual machine installation, enter the path to the OVA archive. Otherwise, leave this field empty to use the RHV-M Appliance.
  9. Enter the root password for the Manager.
  10. Enter an SSH public key that will allow you to log in to the Manager as the root user, and specify whether to enable SSH access for the root user.
  11. Enter the virtual machine’s CPU and memory configuration.
  12. Enter a MAC address for the Manager virtual machine, or accept a randomly generated one. If you want to provide the Manager virtual machine with an IP address via DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script will not configure the DHCP server for you.
  13. Enter the virtual machine’s networking details. If you specify Static, enter the IP address of the Manager.

    Important

    The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Manager virtual machine’s IP must be in the same subnet range (10.1.1.1-254/24).

  14. Specify whether to add entries for the Manager virtual machine and the base host to the virtual machine’s /etc/hosts file. You must ensure that the host names are resolvable.
  15. Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications.
  16. Enter a password for the admin@internal user to access the Administration Portal.

    The script creates the virtual machine. This can take some time if the RHV-M Appliance needs to be installed.

    Note

    If the host becomes non-operational, due to a missing required network or a similar problem, the deployment pauses and a message such as the following is displayed:

    [ INFO  ] You can now connect to https://<host name>:6900/ovirt-engine/ and check the status of this host and eventually remediate it, please continue only when the host is listed as 'up'
    [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
    [ INFO  ] ok: [localhost]
    [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock file]
    [ INFO  ] changed: [localhost]
    [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until /tmp/ansible.<random>_he_setup_lock is removed, delete it once ready to proceed]

    Pausing the process allows you to:

    • Connect to the Administration Portal using the provided URL.
    • Assess the situation, find out why the host is non-operational, and fix whatever is needed. For example, if this deployment was restored from a backup, and the backup included required networks for the host cluster, configure the networks, attaching the relevant host NICs to these networks.
    • Once everything looks OK, and the host status is Up, remove the lock file presented in the message above. The deployment continues.
  17. Select the type of storage to use:

    • For NFS, enter the version, full address and path to the storage, and any mount options.

      Warning

      Do not use the old self-hosted engine storage domain’s mount point for the new storage domain, as you risk losing virtual machine data.

    • For iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.

      Note

      To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. See Red Hat Enterprise Linux DM Multipath for details. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.

    • For Gluster storage, enter the full address and path to the storage, and any mount options.

      Warning

      Do not use the old self-hosted engine storage domain’s mount point for the new storage domain, as you risk losing virtual machine data.

      Important

      Only replica 1 and replica 3 Gluster storage are supported. Ensure you configure the volume as follows:

      gluster volume set VOLUME_NAME group virt
      gluster volume set VOLUME_NAME performance.strict-o-direct on
      gluster volume set VOLUME_NAME network.remote-dio off
      gluster volume set VOLUME_NAME storage.owner-uid 36
      gluster volume set VOLUME_NAME storage.owner-gid 36
      gluster volume set VOLUME_NAME network.ping-timeout 30
    • For Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide.
  18. Enter the Manager disk size.

    The script continues until the deployment is complete.

  19. The deployment process changes the Manager’s SSH keys. To allow client machines to access the new Manager without SSH errors, remove the original Manager’s entry from the .ssh/known_hosts file on any client machines that accessed the original Manager.
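
    For example, assuming manager.example.com is the Manager's fully qualified domain name, you can remove the stale entry on a client machine with:

    # ssh-keygen -R manager.example.com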

When the deployment is complete, log in to the new Manager virtual machine and enable the required repositories.

3.2.1.8.3. Enabling the Red Hat Virtualization Manager Repositories

You need to log in and register the Manager machine with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories.

Procedure

  1. Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:

    # subscription-manager register
    Note

    If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager.

  2. Find the Red Hat Virtualization Manager subscription pool and record the pool ID:

    # subscription-manager list --available
  3. Use the pool ID to attach the subscription to the system:

    # subscription-manager attach --pool=pool_id
    Note

    To view currently attached subscriptions:

    # subscription-manager list --consumed

    To list all enabled repositories:

    # dnf repolist
  4. Configure the repositories:

    # subscription-manager repos \
        --disable='*' \
        --enable=rhel-8-for-x86_64-baseos-eus-rpms \
        --enable=rhel-8-for-x86_64-appstream-eus-rpms \
        --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \
        --enable=fast-datapath-for-rhel-8-x86_64-rpms \
        --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \
        --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \
        --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
  5. Set the RHEL version to 8.6:

    # subscription-manager release --set=8.6
  6. Enable the pki-deps module:

    # dnf module -y enable pki-deps
  7. Enable version 12 of the postgresql module:

    # dnf module -y enable postgresql:12
  8. Enable version 14 of the nodejs module:

    # dnf module -y enable nodejs:14
  9. Synchronize installed packages to update them to the latest available versions:

    # dnf distro-sync --nobest

Additional resources

For information on modules and module streams, see Installing, managing, and removing user-space components.

The Manager and its resources are now running in the new self-hosted environment. The self-hosted engine nodes must be reinstalled in the Manager to update their self-hosted engine configuration. Standard hosts are not affected. Perform the following procedure for each self-hosted engine node.

3.2.1.8.4. Reinstalling Hosts

Reinstall Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts from the Administration Portal. The procedure includes stopping and restarting the host.

Warning

When installing or reinstalling the host’s operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss.

Prerequisites

  • If the cluster has migration enabled, virtual machines can automatically migrate to another host in the cluster. Therefore, reinstall a host while its usage is relatively low.
  • Ensure that the cluster has sufficient memory for its hosts to perform maintenance. If a cluster lacks memory, migration of virtual machines will hang and then fail. To reduce memory usage, shut down some or all of the virtual machines before moving the host to maintenance.
  • Ensure that the cluster contains more than one host before performing a reinstall. Do not attempt to reinstall all the hosts at the same time. One host must remain available to perform Storage Pool Manager (SPM) tasks.

Procedure

  1. Click Compute → Hosts and select the host.
  2. Click Management → Maintenance, then click OK.
  3. Click Installation → Reinstall. This opens the Install Host window.
  4. Click the Hosted Engine tab and select DEPLOY from the drop-down list.
  5. Click OK to reinstall the host.

After a host has been reinstalled and its status returns to Up, you can migrate virtual machines back to the host.

Important

After you register a Red Hat Virtualization Host to the Red Hat Virtualization Manager and reinstall it, the Administration Portal may erroneously display its status as Install Failed. Click Management → Activate, and the host will change to an Up status and be ready for use.

After reinstalling the self-hosted engine nodes, you can check the status of the new environment by running the following command on one of the nodes:

# hosted-engine --vm-status

During the restoration, the old self-hosted engine storage domain was renamed, but was not removed from the new environment in case the restoration was faulty. After confirming that the environment is running normally, you can remove the old self-hosted engine storage domain.

3.2.1.8.5. Removing a Storage Domain

You have a storage domain in your data center that you want to remove from the virtualized environment.

Procedure

  1. Click Storage → Domains.
  2. Move the storage domain to maintenance mode and detach it:

    1. Click the storage domain’s name. This opens the details view.
    2. Click the Data Center tab.
    3. Click Maintenance, then click OK.
    4. Click Detach, then click OK.
  3. Click Remove.
  4. Optionally select the Format Domain, i.e. Storage Content will be lost! check box to erase the content of the domain.
  5. Click OK.

The storage domain is permanently removed from the environment.

3.2.1.9. Recovering a Self-Hosted Engine from an Existing Backup

If a self-hosted engine is unavailable due to problems that cannot be repaired, you can restore it in a new self-hosted environment using a backup taken before the problem began, if one is available.

When you specify a backup file during deployment, the backup is restored on a new Manager virtual machine, with a new self-hosted engine storage domain. The old Manager is removed, and the old self-hosted engine storage domain is renamed and can be manually removed after you confirm that the new environment is working correctly. Deploying on a fresh host is highly recommended; if the host used for deployment existed in the backed up environment, it will be removed from the restored database to avoid conflicts in the new environment. If you deploy on a new host, you must assign a unique name to the host. Reusing the name of an existing host included in the backup can cause conflicts in the new environment.

Restoring a self-hosted engine involves the following key actions:

  1. Restore the backup on a new self-hosted engine.
  2. Enable the Red Hat Virtualization Manager repositories on the new Manager virtual machine.
  3. Reinstall the self-hosted engine nodes.
  4. Remove the old self-hosted engine storage domain.

This procedure assumes that you do not have access to the original Manager, and that the new host can access the backup file.

Prerequisites

  • A fully qualified domain name prepared for your Manager and the host. Forward and reverse lookup records must both be set in the DNS. The new Manager must have the same fully qualified domain name as the original Manager.
3.2.1.9.1. Restoring the Backup on a New Self-Hosted Engine

Run the hosted-engine script on a new host, and use the --restore-from-file=path/to/file_name option to restore the Manager backup during the deployment.

Important

If you are using iSCSI storage, and your iSCSI target filters connections according to the initiator’s ACL, the deployment may fail with a STORAGE_DOMAIN_UNREACHABLE error. To prevent this, you must update your iSCSI configuration before beginning the self-hosted engine deployment:

  • If you are redeploying on an existing host, you must update the host’s iSCSI initiator settings in /etc/iscsi/initiatorname.iscsi. The initiator IQN must be the same as was previously mapped on the iSCSI target, or updated to a new IQN, if applicable.
  • If you are deploying on a fresh host, you must update the iSCSI target configuration to accept connections from that host.

Note that the IQN can be updated on the host side (iSCSI initiator), or on the storage side (iSCSI target).

Procedure

  1. Copy the backup file to the new host. In the following example, host.example.com is the FQDN for the host, and /backup/ is any designated folder or path.

    # scp -p file_name host.example.com:/backup/
  2. Log in to the new host.
  3. If you are deploying on Red Hat Virtualization Host, ovirt-hosted-engine-setup is already installed, so skip this step. If you are deploying on Red Hat Enterprise Linux, install the ovirt-hosted-engine-setup package:

    # dnf install ovirt-hosted-engine-setup
  4. Use the tmux window manager to run the script to avoid losing the session in case of network or terminal disruption.

    Install and run tmux:

    # dnf -y install tmux
    # tmux
  5. Run the hosted-engine script, specifying the path to the backup file:

    # hosted-engine --deploy --restore-from-file=backup/file_name

    To escape the script at any time, use CTRL+D to abort deployment.

  6. Select Yes to begin the deployment.
  7. Configure the network. The script detects possible NICs to use as a management bridge for the environment.
  8. If you want to use a custom appliance for the virtual machine installation, enter the path to the OVA archive. Otherwise, leave this field empty to use the RHV-M Appliance.
  9. Enter the root password for the Manager.
  10. Enter an SSH public key that will allow you to log in to the Manager as the root user, and specify whether to enable SSH access for the root user.
  11. Enter the virtual machine’s CPU and memory configuration.
  12. Enter a MAC address for the Manager virtual machine, or accept a randomly generated one. If you want to provide the Manager virtual machine with an IP address via DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script will not configure the DHCP server for you.
  13. Enter the virtual machine’s networking details. If you specify Static, enter the IP address of the Manager.

    Important

    The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Manager virtual machine’s IP must be in the same subnet range (10.1.1.1-254/24).

  14. Specify whether to add entries for the Manager virtual machine and the base host to the virtual machine’s /etc/hosts file. You must ensure that the host names are resolvable.
  15. Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications.
  16. Enter a password for the admin@internal user to access the Administration Portal.

    The script creates the virtual machine. This can take some time if the RHV-M Appliance needs to be installed.

    Note

    If the host becomes non-operational, due to a missing required network or a similar problem, the deployment pauses and a message such as the following is displayed:

    [ INFO  ] You can now connect to https://<host name>:6900/ovirt-engine/ and check the status of this host and eventually remediate it, please continue only when the host is listed as 'up'
    [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
    [ INFO  ] ok: [localhost]
    [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock file]
    [ INFO  ] changed: [localhost]
    [ INFO  ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until /tmp/ansible.<random>_he_setup_lock is removed, delete it once ready to proceed]

    Pausing the process allows you to:

    • Connect to the Administration Portal using the provided URL.
    • Assess the situation, find out why the host is non-operational, and fix whatever is needed. For example, if this deployment was restored from a backup, and the backup included required networks for the host cluster, configure the networks, attaching the relevant host NICs to these networks.
    • Once everything looks OK, and the host status is Up, remove the lock file presented in the message above. The deployment continues.
  17. Select the type of storage to use:

    • For NFS, enter the version, full address and path to the storage, and any mount options.

      Warning

      Do not use the old self-hosted engine storage domain’s mount point for the new storage domain, as you risk losing virtual machine data.

    • For iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.

      Note

      To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. See Red Hat Enterprise Linux DM Multipath for details. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.

    • For Gluster storage, enter the full address and path to the storage, and any mount options.

      Warning

      Do not use the old self-hosted engine storage domain’s mount point for the new storage domain, as you risk losing virtual machine data.

      Important

      Only replica 1 and replica 3 Gluster storage are supported. Ensure you configure the volume as follows:

      gluster volume set VOLUME_NAME group virt
      gluster volume set VOLUME_NAME performance.strict-o-direct on
      gluster volume set VOLUME_NAME network.remote-dio off
      gluster volume set VOLUME_NAME storage.owner-uid 36
      gluster volume set VOLUME_NAME storage.owner-gid 36
      gluster volume set VOLUME_NAME network.ping-timeout 30
    • For Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide.
  18. Enter the Manager disk size.

    The script continues until the deployment is complete.

  19. The deployment process changes the Manager’s SSH keys. To allow client machines to access the new Manager without SSH errors, remove the original Manager’s entry from the .ssh/known_hosts file on any client machines that accessed the original Manager.

When the deployment is complete, log in to the new Manager virtual machine and enable the required repositories.

3.2.1.9.2. Enabling the Red Hat Virtualization Manager Repositories

You need to log in and register the Manager machine with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories.

Procedure

  1. Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:

    # subscription-manager register
    Note

    If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager.

  2. Find the Red Hat Virtualization Manager subscription pool and record the pool ID:

    # subscription-manager list --available
  3. Use the pool ID to attach the subscription to the system:

    # subscription-manager attach --pool=pool_id
    Note

    To view currently attached subscriptions:

    # subscription-manager list --consumed

    To list all enabled repositories:

    # dnf repolist
  4. Configure the repositories:

    # subscription-manager repos \
        --disable='*' \
        --enable=rhel-8-for-x86_64-baseos-eus-rpms \
        --enable=rhel-8-for-x86_64-appstream-eus-rpms \
        --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \
        --enable=fast-datapath-for-rhel-8-x86_64-rpms \
        --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \
        --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \
        --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
  5. Set the RHEL version to 8.6:

    # subscription-manager release --set=8.6
  6. Enable the pki-deps module:

    # dnf module -y enable pki-deps
  7. Enable version 12 of the postgresql module:

    # dnf module -y enable postgresql:12
  8. Enable version 14 of the nodejs module:

    # dnf module -y enable nodejs:14
  9. Synchronize installed packages to update them to the latest available versions:

    # dnf distro-sync --nobest

Additional resources

For information on modules and module streams, see Installing, managing, and removing user-space components.

The Manager and its resources are now running in the new self-hosted environment. The self-hosted engine nodes must be reinstalled in the Manager to update their self-hosted engine configuration. Standard hosts are not affected. Perform the following procedure for each self-hosted engine node.

3.2.1.9.3. Reinstalling Hosts

Reinstall Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts from the Administration Portal. The procedure includes stopping and restarting the host.

Warning

When installing or reinstalling the host’s operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss.

Prerequisites

  • If the cluster has migration enabled, virtual machines can automatically migrate to another host in the cluster. Therefore, reinstall a host while its usage is relatively low.
  • Ensure that the cluster has sufficient memory for its hosts to perform maintenance. If a cluster lacks memory, migration of virtual machines will hang and then fail. To reduce memory usage, shut down some or all of the virtual machines before moving the host to maintenance.
  • Ensure that the cluster contains more than one host before performing a reinstall. Do not attempt to reinstall all the hosts at the same time. One host must remain available to perform Storage Pool Manager (SPM) tasks.

Procedure

  1. Click Compute → Hosts and select the host.
  2. Click Management → Maintenance, then click OK.
  3. Click Installation → Reinstall. This opens the Install Host window.
  4. Click the Hosted Engine tab and select DEPLOY from the drop-down list.
  5. Click OK to reinstall the host.

After a host has been reinstalled and its status returns to Up, you can migrate virtual machines back to the host.

Important

After you register a Red Hat Virtualization Host to the Red Hat Virtualization Manager and reinstall it, the Administration Portal may erroneously display its status as Install Failed. Click Management → Activate, and the host will change to an Up status and be ready for use.

After reinstalling the self-hosted engine nodes, you can check the status of the new environment by running the following command on one of the nodes:

# hosted-engine --vm-status

During the restoration, the old self-hosted engine storage domain was renamed, but was not removed from the new environment in case the restoration was faulty. After confirming that the environment is running normally, you can remove the old self-hosted engine storage domain.

3.2.1.9.4. Removing a Storage Domain

You have a storage domain in your data center that you want to remove from the virtualized environment.

Procedure

  1. Click Storage → Domains.
  2. Move the storage domain to maintenance mode and detach it:

    1. Click the storage domain’s name. This opens the details view.
    2. Click the Data Center tab.
    3. Click Maintenance, then click OK.
    4. Click Detach, then click OK.
  3. Click Remove.
  4. Optionally select the Format Domain, i.e. Storage Content will be lost! check box to erase the content of the domain.
  5. Click OK.

The storage domain is permanently removed from the environment.

3.2.1.10. Overwriting a Self-Hosted Engine from an Existing Backup

If a self-hosted engine is accessible, but is experiencing an issue such as database corruption, or a configuration error that is difficult to roll back, you can restore the environment to a previous state using a backup taken before the problem began, if one is available.

Restoring a self-hosted engine’s previous state involves the following steps:

  1. Enable global maintenance mode.
  2. Restore the backup to overwrite the existing installation, then reconfigure the Manager with engine-setup.
  3. Disable global maintenance mode.

For more information about engine-backup --mode=restore options, see Backing Up and Restoring the Manager.

3.2.1.10.1. Enabling global maintenance mode

You must place the self-hosted engine environment in global maintenance mode before performing any setup or upgrade tasks on the Manager virtual machine.

Procedure

  1. Log in to one of the self-hosted engine nodes and enable global maintenance mode:

    # hosted-engine --set-maintenance --mode=global
  2. Confirm that the environment is in global maintenance mode before proceeding:

    # hosted-engine --vm-status

    You should see a message indicating that the cluster is in global maintenance mode.

3.2.1.10.2. Restoring a Backup to Overwrite an Existing Installation

The engine-backup command can restore a backup to a machine on which the Red Hat Virtualization Manager has already been installed and set up. This is useful when you have taken a backup of an environment, performed changes on that environment, and then want to undo the changes by restoring the environment from the backup.

Changes made to the environment since the backup was taken, such as adding or removing a host, will not appear in the restored environment. You must redo these changes.

Procedure

  1. Log in to the Manager machine.
  2. Remove the configuration files and clean the database associated with the Manager:

    # engine-cleanup

    The engine-cleanup command only cleans the Manager database; it does not drop the database or delete the user that owns that database.

  3. Restore a full backup or a database-only backup. You do not need to create a new database or specify the database credentials because the user and database already exist.

    • Restore a full backup:

      # engine-backup --mode=restore --file=file_name --log=log_file_name --restore-permissions
    • Restore a database-only backup by restoring the configuration files and the database backup:

      # engine-backup --mode=restore --scope=files --scope=db --scope=dwhdb --file=file_name --log=log_file_name --restore-permissions
      Note

      To restore only the Manager database (for example, if the Data Warehouse database is located on another machine), you can omit the --scope=dwhdb parameter.

      If successful, the following output displays:

      You should now run engine-setup.
      Done.
  4. Reconfigure the Manager:

    # engine-setup
3.2.1.10.3. Disabling global maintenance mode

Procedure

  1. Log in to the Manager virtual machine and shut it down.
  2. Log in to one of the self-hosted engine nodes and disable global maintenance mode:

    # hosted-engine --set-maintenance --mode=none

    When you exit global maintenance mode, ovirt-ha-agent starts the Manager virtual machine, and then the Manager automatically starts. It can take up to ten minutes for the Manager to start.

  3. Confirm that the environment is running:

    # hosted-engine --vm-status

    The listed information includes Engine status. The value for Engine status should be:

    {"health": "good", "vm": "up", "detail": "Up"}
    Note

    When the virtual machine is still booting and the Manager has not started yet, the Engine status is:

    {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}

    If this happens, wait a few minutes and try again.

When the environment is running again, you can start any virtual machines that were stopped, and check that the resources in the environment are behaving as expected.

3.2.2. Migrating the Data Warehouse to a Separate Machine

This section describes how to migrate the Data Warehouse database and service from the Red Hat Virtualization Manager machine to a separate machine. Hosting the Data Warehouse service on a separate machine reduces the load on each individual machine, and avoids potential conflicts caused by sharing CPU and memory resources with other processes.

Note

Red Hat only supports installing the Data Warehouse database, the Data Warehouse service, and Grafana all on the same machine, even though you can install each of these components on a separate machine.

You have the following migration options:

  • You can migrate the Data Warehouse service away from the Manager machine and connect it to the existing Data Warehouse database (ovirt_engine_history).
  • You can migrate the Data Warehouse database away from the Manager machine and then migrate the Data Warehouse service.

3.2.2.1. Migrating the Data Warehouse Database to a Separate Machine

Migrate the Data Warehouse database (ovirt_engine_history) before you migrate the Data Warehouse service. Use engine-backup to create a database backup and restore it on the new database machine. For more information on engine-backup, run engine-backup --help.

Note

Red Hat only supports installing the Data Warehouse database, the Data Warehouse service, and Grafana all on the same machine, even though you can install each of these components on a separate machine.

The new database server must have Red Hat Enterprise Linux 8 installed.

Enable the required repositories on the new database server.

3.2.2.1.1. Enabling the Red Hat Virtualization Manager Repositories

You need to log in and register the Data Warehouse machine with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories.

Procedure

  1. Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:

    # subscription-manager register
    Note

    If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager.

  2. Find the Red Hat Virtualization Manager subscription pool and record the pool ID:

    # subscription-manager list --available
  3. Use the pool ID to attach the subscription to the system:

    # subscription-manager attach --pool=pool_id
    Note

    To view currently attached subscriptions:

    # subscription-manager list --consumed

    To list all enabled repositories:

    # dnf repolist
  4. Configure the repositories:

    # subscription-manager repos \
        --disable='*' \
        --enable=rhel-8-for-x86_64-baseos-eus-rpms \
        --enable=rhel-8-for-x86_64-appstream-eus-rpms \
        --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \
        --enable=fast-datapath-for-rhel-8-x86_64-rpms \
        --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \
        --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \
        --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
  5. Set the RHEL version to 8.6:

    # subscription-manager release --set=8.6
  6. Enable version 12 of the postgresql module:

    # dnf module -y enable postgresql:12
  7. Enable version 14 of the nodejs module:

    # dnf module -y enable nodejs:14
  8. Synchronize installed packages to update them to the latest available versions:

    # dnf distro-sync --nobest

Additional resources

For information on modules and module streams, see Installing, managing, and removing user-space components.

3.2.2.1.2. Migrating the Data Warehouse Database to a Separate Machine

Procedure

  1. Create a backup of the Data Warehouse database and configuration files on the Manager:

    # engine-backup --mode=backup --scope=grafanadb --scope=dwhdb --scope=files --file=file_name --log=log_file_name
  2. Copy the backup file from the Manager to the new machine:

    # scp /tmp/file_name root@new.dwh.server.com:/tmp
  3. Install engine-backup on the new machine:

    # dnf install ovirt-engine-tools-backup
  4. Install the PostgreSQL server package:

    # dnf install postgresql-server postgresql-contrib
  5. Initialize the PostgreSQL database, start the postgresql service, and ensure that this service starts on boot:

    # su - postgres -c 'initdb'
    # systemctl enable postgresql
    # systemctl start postgresql
  6. Restore the Data Warehouse database on the new machine. file_name is the backup file copied from the Manager.

    # engine-backup --mode=restore --scope=files --scope=grafanadb --scope=dwhdb --file=file_name --log=log_file_name --provision-dwh-db

    When the --provision-* option is used in restore mode, --restore-permissions is applied by default.

The Data Warehouse database is now hosted on a separate machine from that on which the Manager is hosted. After successfully restoring the Data Warehouse database, a prompt instructs you to run the engine-setup command. Before running this command, migrate the Data Warehouse service.

3.2.2.2. Migrating the Data Warehouse Service to a Separate Machine

You can migrate the Data Warehouse service installed and configured on the Red Hat Virtualization Manager to a separate machine. Hosting the Data Warehouse service on a separate machine helps to reduce the load on the Manager machine.

Note that this procedure migrates only the Data Warehouse service.

To migrate the Data Warehouse database (ovirt_engine_history) prior to migrating the Data Warehouse service, see Migrating the Data Warehouse Database to a Separate Machine.

Note

Red Hat only supports installing the Data Warehouse database, the Data Warehouse service, and Grafana all on the same machine, even though you can install each of these components on a separate machine.

Prerequisites

  • You must have installed and configured the Manager and Data Warehouse on the same machine.
  • To set up the new Data Warehouse machine, you must have the following:

    • The password from the Manager’s /etc/ovirt-engine/engine.conf.d/10-setup-database.conf file.
    • Allowed access from the Data Warehouse machine to the Manager database machine’s TCP port 5432 (see the firewall sketch after this list).
    • The username and password for the Data Warehouse database from the Manager’s /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf file.

      If you migrated the ovirt_engine_history database using the procedures described in Migrating the Data Warehouse Database to a Separate Machine, the backup includes these credentials, which you defined during the database setup on that machine.
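
For example, to open TCP port 5432 on the Manager database machine with firewalld (a sketch; adjust the zone and any source restrictions to your environment):

# firewall-cmd --permanent --add-port=5432/tcp
# firewall-cmd --reload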

This migration involves four steps:

  1. Setting up the New Data Warehouse Machine
  2. Stopping the Data Warehouse service on the Manager machine
  3. Configuring the new Data Warehouse machine
  4. Disabling the Data Warehouse package on the Manager machine
3.2.2.2.1. Setting up the New Data Warehouse Machine

Enable the Red Hat Virtualization repositories and install the Data Warehouse setup package on a Red Hat Enterprise Linux 8 machine:

  1. Enable the required repositories:

    1. Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:

      # subscription-manager register
    2. Find the Red Hat Virtualization Manager subscription pool and record the pool ID:

      # subscription-manager list --available
    3. Use the pool ID to attach the subscription to the system:

      # subscription-manager attach --pool=pool_id
    4. Configure the repositories:

      # subscription-manager repos \
          --disable='*' \
          --enable=rhel-8-for-x86_64-baseos-eus-rpms \
          --enable=rhel-8-for-x86_64-appstream-eus-rpms \
          --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \
          --enable=fast-datapath-for-rhel-8-x86_64-rpms \
          --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms
      
      # subscription-manager release --set=8.6
  2. Enable the pki-deps module:

    # dnf module -y enable pki-deps
  3. Ensure that all packages currently installed are up to date:

    # dnf upgrade --nobest
  4. Install the ovirt-engine-dwh-setup package:

    # dnf install ovirt-engine-dwh-setup
3.2.2.2.2. Stopping the Data Warehouse Service on the Manager Machine

Procedure

  1. Stop the Data Warehouse service:

    # systemctl stop ovirt-engine-dwhd.service
  2. If the database is hosted on a remote machine, you must manually grant access by editing the postgresql.conf file. Edit the /var/lib/pgsql/data/postgresql.conf file and modify the listen_addresses line so that it matches the following:

    listen_addresses = '*'

    If the line does not exist or has been commented out, add it manually.

    If the database is hosted on the Manager machine and was configured during a clean setup of the Red Hat Virtualization Manager, access is granted by default.

  3. Restart the postgresql service:

    # systemctl restart postgresql
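
Optionally, confirm that PostgreSQL picked up the new listen_addresses value. This check is a sketch that assumes a default installation where you can run psql as the postgres system user:

    # su - postgres -c "psql -c 'SHOW listen_addresses;'"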
3.2.2.2.3. Configuring the New Data Warehouse Machine

The order of the options or settings shown in this section may differ depending on your environment.

  1. If you are migrating both the ovirt_engine_history database and the Data Warehouse service to the same machine, run the following commands. Otherwise, proceed to the next step.

    # sed -i '/^ENGINE_DB_/d' \
            /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf
    
    # sed -i \
         -e 's;^\(OVESETUP_ENGINE_CORE/enable=bool\):True;\1:False;' \
         -e '/^OVESETUP_CONFIG\/fqdn/d' \
         /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf
  2. Remove the apache/grafana PKI files, so that they are regenerated by engine-setup with correct values:

    # rm -f \
      /etc/pki/ovirt-engine/certs/apache.cer \
      /etc/pki/ovirt-engine/certs/apache-grafana.cer \
      /etc/pki/ovirt-engine/keys/apache.key.nopass \
      /etc/pki/ovirt-engine/keys/apache-grafana.key.nopass \
      /etc/pki/ovirt-engine/apache-ca.pem \
      /etc/pki/ovirt-engine/apache-grafana-ca.pem
  3. Run the engine-setup command to begin configuration of Data Warehouse on the machine:

    # engine-setup
  4. Press Enter to accept the automatically detected host name, or enter an alternative host name and press Enter:

    Host fully qualified DNS name of this server [autodetected host name]:
  5. Press Enter to automatically configure the firewall, or type No and press Enter to maintain existing settings:

    Setup can automatically configure the firewall on this system.
    Note: automatic configuration of the firewall may overwrite current settings.
    Do you want Setup to configure the firewall? (Yes, No) [Yes]:

    If you choose to automatically configure the firewall, and no firewall managers are active, you are prompted to select your chosen firewall manager from a list of supported options. Type the name of the firewall manager and press Enter. This applies even in cases where only one option is listed.

  6. Enter the fully qualified domain name and password for the Manager. Press Enter to accept the default values in all other fields:

    Host fully qualified DNS name of the engine server []: engine-fqdn
    Setup needs to do some actions on the remote engine server. Either automatically, using ssh as root to access it, or you will be prompted to manually perform each such action.
    Please choose one of the following:
    1 - Access remote engine server using ssh as root
    2 - Perform each action manually, use files to copy content around
    (1, 2) [1]:
    ssh port on remote engine server [22]:
    root password on remote engine server engine-fqdn: password
  7. Enter the FQDN and password for the Manager database machine. Press Enter to accept the default values in all other fields:

    Engine database host []: manager-db-fqdn
    Engine database port [5432]:
    Engine database secured connection (Yes, No) [No]:
    Engine database name [engine]:
    Engine database user [engine]:
    Engine database password: password
  8. Confirm your installation settings:

    Please confirm installation settings (OK, Cancel) [OK]:

The Data Warehouse service is now configured on the remote machine. Proceed to disable the Data Warehouse service on the Manager machine.

3.2.2.2.4. Disabling the Data Warehouse Service on the Manager Machine

Prerequisites

  • The Grafana service on the Manager machine is disabled:

    # systemctl disable --now grafana-server.service

Procedure

  1. On the Manager machine, restart the Manager:

    # service ovirt-engine restart
  2. Run the following commands to modify the file /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf and set the options to False:

    # sed -i \
         -e 's;^\(OVESETUP_DWH_CORE/enable=bool\):True;\1:False;' \
         -e 's;^\(OVESETUP_DWH_CONFIG/remoteEngineConfigured=bool\):True;\1:False;' \
         /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf
    
    # sed -i \
         -e 's;^\(OVESETUP_GRAFANA_CORE/enable=bool\):True;\1:False;' \
         /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf
  3. Disable the Data Warehouse service:

    # systemctl disable ovirt-engine-dwhd.service
  4. Remove the Data Warehouse files:

    # rm -f /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/*.conf /var/lib/ovirt-engine-dwh/backups/*

The Data Warehouse service is now hosted on a separate machine from the Manager.

3.2.3. Backing Up and Restoring Virtual Machines Using a Backup Storage Domain

3.2.3.1. Backup storage domains explained

A backup storage domain is one that you use specifically for storing and migrating virtual machines and virtual machine templates, whether for disaster recovery, migration, or any other backup/restore usage model. A backup domain differs from a non-backup domain in that all virtual machines on a backup domain are in a powered-down state. A virtual machine cannot run on a backup domain.

You can set any data storage domain to be a backup domain. You can enable or disable this setting by selecting or deselecting a checkbox in the Manage Domain dialog box. You can enable this setting only after all virtual machines on that storage domain are stopped.

You cannot start a virtual machine stored on a backup domain. The Manager blocks this and any other operation that might invalidate the backup. However, you can run a virtual machine based on a template stored on a backup domain if the virtual machine’s disks are not part of a backup domain.

As with other types of storage domains, you can attach or detach backup domains to or from a data center. So, in addition to storing backups, you can use backup domains to migrate virtual machines between data centers.

Advantages

Some reasons to use a backup domain, rather than an export domain, are listed here:

  • You can have multiple backup storage domains in a data center, as opposed to only one export domain.
  • You can dedicate a backup storage domain to use for backup and disaster recovery.
  • You can transfer a backup of a virtual machine, a template, or a snapshot to a backup storage domain.
  • Migrating a large number of virtual machines, templates, or OVF files is significantly faster with backup domains than export domains.
  • A backup domain uses disk space more efficiently than an export domain.
  • Backup domains support both file storage (NFS, Gluster) and block storage (Fibre Channel and iSCSI). This contrasts with export domains, which only support file storage.
  • You can dynamically enable and disable the backup setting for a storage domain, taking into account the restrictions.

Restrictions

  • Any virtual machine or template on a backup domain must have all its disks on that same domain.
  • All virtual machines on a storage domain must be powered down before you can set it to be a backup domain.
  • You cannot run a virtual machine that is stored on a backup domain, because doing so might manipulate the disk’s data.
  • A backup domain cannot be the target of memory volumes because memory volumes are only supported for active virtual machines.
  • You cannot preview a virtual machine on a backup domain.
  • Live migration of a virtual machine to a backup domain is not possible.
  • You cannot set a backup domain to be the master domain.
  • You cannot set a Self-hosted engine’s domain to be a backup domain.
  • Do not use the default storage domain as a backup domain.

3.2.3.2. Setting a data storage domain to be a backup domain

Prerequisites

  • All disks belonging to a virtual machine or template on the storage domain must be on the same domain.
  • All virtual machines on the domain must be powered down.

Procedure

  1. In the Administration Portal, click Storage → Domains.
  2. Create a new storage domain or select an existing storage domain and click Manage Domain. The Manage Domain dialog box opens.
  3. Under Advanced Parameters, select the Backup checkbox.

The domain is now a backup domain.

3.2.3.3. Backing up or Restoring a Virtual Machine or Snapshot Using a Backup Domain

You can back up a powered down virtual machine or snapshot. You can then store the backup on the same data center and restore it as needed, or migrate it to another data center.

Procedure: Backing Up a Virtual Machine

  1. Create a backup domain. See Setting a data storage domain to be a backup domain.
  2. Create a new virtual machine based on the virtual machine you want to back up:

    • To back up a snapshot, first create a virtual machine from a snapshot. See Creating a Virtual Machine from a Snapshot in the Virtual Machine Management Guide.
    • To back up a virtual machine, first clone the virtual machine. See Cloning a Virtual Machine in the Virtual Machine Management Guide. Make sure the clone is powered down before proceeding.
  3. Export the new virtual machine to a backup domain. See Exporting a Virtual Machine to a Data Domain in the Virtual Machine Management Guide.

Procedure: Restoring a Virtual Machine

  1. Make sure that the backup storage domain that stores the virtual machine backup is attached to a data center.
  2. Import the virtual machine from the backup domain. See Importing Virtual Machines from a Data Domain.

3.2.4. Backing Up and Restoring Virtual Machines Using the Backup and Restore API

3.2.4.1. The Backup and Restore API

The backup and restore API is a collection of functions that allows you to perform full or file-level backup and restoration of virtual machines. The API combines several components of Red Hat Virtualization, such as live snapshots and the REST API, to create and work with temporary volumes that can be attached to a virtual machine containing backup software provided by an independent software provider.

For supported third-party backup vendors, consult the Red Hat Virtualization Ecosystem.

3.2.4.2. Backing Up a Virtual Machine

Use the backup and restore API to back up a virtual machine. This procedure assumes you have two virtual machines: the virtual machine to back up, and a virtual machine on which the software for managing the backup is installed.

Procedure

  1. Using the REST API, create a snapshot of the virtual machine to back up:

    POST /api/vms/{vm:id}/snapshots/ HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
    
    <snapshot>
        <description>BACKUP</description>
    </snapshot>
    Note
    • Here, replace {vm:id} with the VM ID of the virtual machine whose snapshot you are making. This ID is available from the General tab of the New Virtual Machine and Edit Virtual Machine windows in the Administration Portal and VM Portal.
    • Taking a snapshot of a virtual machine stores its current configuration data in the data element of the configuration element, under initialization, in the snapshot.
    Important

    You cannot take snapshots of disks marked as shareable or based on direct LUN disks.

  2. Retrieve the configuration data of the virtual machine from the data element under the snapshot:

    GET /api/vms/{vm:id}/snapshots/{snapshot:id} HTTP/1.1
    All-Content: true
    Accept: application/xml
    Content-type: application/xml
    Note
    • Here, replace {vm:id} with the ID of the virtual machine whose snapshot you made earlier. Replace {snapshot:id} with the snapshot ID.
    • Add the All-Content: true header to retrieve additional OVF data in the response. The OVF data in the XML response is located within the VM configuration element, <initialization><configuration>. Later, you will use this data to restore the virtual machine.
  3. Get the snapshot ID:

    GET /api/vms/{vm:id}/snapshots/ HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
  4. Identify the disk ID of the snapshot:

    GET /api/vms/{vm:id}/snapshots/{snapshot:id}/disks HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
  5. Attach the snapshot to a backup virtual machine as an active disk attachment, with the correct interface type (for example, virtio_scsi):

    POST /api/vms/{vm:id}/diskattachments/ HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
    
    <disk_attachment>
        <active>true</active>
        <interface>virtio_scsi</interface>
        <disk id="{disk:id}">
            <snapshot id="{snapshot:id}"/>
        </disk>
    </disk_attachment>
    Note

    Here, replace {vm:id} with the ID of the backup virtual machine, not the virtual machine whose snapshot you made earlier. Replace {disk:id} with the disk ID. Replace {snapshot:id} with the snapshot ID.

  6. Use the backup software on the backup virtual machine to back up the data on the snapshot disk.
  7. Remove the snapshot disk attachment from the backup virtual machine:

    DELETE /api/vms/{vm:id}/diskattachments/{snapshot:id} HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
    Note

    Here, replace {vm:id} with the ID of the backup virtual machine, not the virtual machine whose snapshot you made earlier. Replace {snapshot:id} with the snapshot ID.

  8. Optionally, delete the snapshot:

    DELETE /api/vms/{vm:id}/snapshots/{snapshot:id} HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
    Note

    Here, replace {vm:id} with the ID of the virtual machine whose snapshot you made earlier. Replace {snapshot:id} with the snapshot ID.

You have backed up the state of a virtual machine at a fixed point in time using backup software installed on a separate virtual machine.
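
To illustrate how these raw HTTP requests can be sent, the following sketch issues the snapshot request from step 1 with curl. The Manager FQDN, the admin@internal user name, the password, and the CA certificate path are placeholders that you must adapt to your environment; the other requests in this procedure can be sent the same way by changing the method, path, and body:

    # curl -X POST \
        --cacert /etc/pki/ovirt-engine/ca.pem \
        --user admin@internal:password \
        -H "Accept: application/xml" \
        -H "Content-Type: application/xml" \
        -d '<snapshot><description>BACKUP</description></snapshot>' \
        'https://manager.example.com/api/vms/{vm:id}/snapshots/'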

3.2.4.3. Restoring a Virtual Machine

Restore a virtual machine that has been backed up using the backup and restore API. This procedure assumes you have a backup virtual machine on which the software used to manage the previous backup is installed.

Procedure

  1. In the Administration Portal, create a floating disk on which to restore the backup. See Creating a Virtual Disk for details on how to create a floating disk.
  2. Attach the disk to the backup virtual machine:

    POST /api/vms/{vm:id}/disks/ HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
    
    <disk id="{disk:id}">
    </disk>
    Note

    Here, replace {vm:id} with the ID of this backup virtual machine, not the virtual machine whose snapshot you made earlier. Replace {disk:id} with the disk ID you got while backing up the virtual machine.

  3. Use the backup software to restore the backup to the disk.
  4. Detach the disk from the backup virtual machine:

    DELETE /api/vms/{vm:id}/disks/{disk:id} HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
    
    <action>
        <detach>true</detach>
    </action>
    Note

    Here, replace {vm:id} with the ID of this backup virtual machine, not the virtual machine whose snapshot you made earlier. Replace {disk:id} with the disk ID.

  5. Create a new virtual machine using the configuration data of the virtual machine being restored:

    POST /api/vms/ HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
    
    <vm>
        <cluster>
            <name>cluster_name</name>
        </cluster>
        <name>vm_name</name>
        <initialization>
            <configuration>
                <data>
                <!-- omitting long ovf data -->
                </data>
                <type>ovf</type>
            </configuration>
        </initialization>
        ...
    </vm>
    Note

    To override any of the values in the ovf while creating the virtual machine, redefine the element before or after the initialization element, not within it.

  6. Attach the disk to the new virtual machine:

    POST /api/vms/{vm:id}/disks/ HTTP/1.1
    Accept: application/xml
    Content-type: application/xml
    
    <disk id="{disk:id}">
    </disk>
    Note

    Here, replace {vm:id} with the ID of the new virtual machine, not the virtual machine whose snapshot you made earlier. Replace {disk:id} with the disk ID.

You have restored a virtual machine using a backup that was created using the backup and restore API.

3.2.5. Backing Up and Restoring Virtual Machines Using the Incremental Backup and Restore API

3.2.5.1. Incremental Backup and Restore API

Red Hat Virtualization provides the Incremental Backup API, which you can use for full backups of QCOW2 or RAW virtual disks, or incremental backups of QCOW2 virtual disks, without any temporary snapshots. Data is backed up in RAW format, whether the virtual disk being backed up is QCOW2 or RAW, and you can restore that RAW backup data to either RAW or QCOW2 disks. The Incremental Backup API is part of the RHV REST API. You can back up virtual machines whether they are running or not.

As a developer, you can use the API to develop a backup application.

Features

Backups are simpler, faster and more robust than when using the Backup and Restore API. The Incremental Backup API provides improved integration with backup applications, with new support for backing up and restoring RAW guest data, regardless of the underlying disk format.

If an invalid bitmap causes a backup to fail, you can remove a specific checkpoint in the backup chain. You do not need to run a full backup.

Limitations

  • Only disks in QCOW2 format can be backed up incrementally; RAW format disks cannot. The backup process saves the backed-up data in RAW format.
  • Only backed-up data in RAW format can be restored.
  • As is commonly the case with backup solutions for other systems, incremental restore restores only the data, not the structure of volumes or images in snapshots as they existed at the time of the backup.
  • An unclean shutdown of a virtual machine, whatever the cause, might invalidate bitmaps on the disk, which invalidates the entire backup chain. Restoring an incremental backup using an invalid bitmap leads to corrupt virtual machine data.

    There is no way to detect an invalid bitmap, other than starting a backup. If the disk includes any invalid bitmaps, the operation fails.

The following table describes the disk configurations that support incremental backup.

Note

When you create a disk using the Administration Portal, you set the storage type, provisioning type, and whether incremental backup is enabled or disabled. Based on these settings, the Manager determines the virtual disk format.

Table 3.1. Supported disk configurations for incremental backup
Storage type    Provisioning type    When incremental backup is…    Virtual disk format is…

block           thin                 enabled                        qcow2
block           preallocated         enabled                        qcow2 (preallocated)
file            thin                 enabled                        qcow2
file            preallocated         enabled                        qcow2 (preallocated)
block           thin                 disabled                       qcow2
block           preallocated         disabled                       raw (preallocated)
file            thin                 disabled                       raw (sparse)
file            preallocated         disabled                       raw (preallocated)
network         Not applicable       disabled                       raw
lun             Not applicable       disabled                       raw

3.2.5.1.1. Incremental Backup Flow

A backup application that uses the Incremental Backup API must follow this sequence to back up virtual machine disks that have already been enabled for incremental backup:

  1. The backup application uses the REST API to find virtual machine disks that should be included in the backup. Only disks in QCOW2 format are included.
  2. The backup application starts a full backup or an incremental backup. The API call specifies a virtual machine ID, an optional previous checkpoint ID, and a list of disks to back up. If the API call does not specify a previous checkpoint ID, a full backup begins, which includes all data in the specified disks, based on the current state of each disk.
  3. The engine prepares the virtual machine for backup. The virtual machine can continue running during the backup.
  4. The backup application polls the engine for the backup status, until the engine reports that the backup is ready to begin.
  5. When the backup is ready to begin, the backup application creates an image transfer object for every disk included in the backup.
  6. The backup application gets a list of changed blocks from ovirt-imageio for every image transfer. If a change list is not available, the backup application gets an error.
  7. The backup application downloads changed blocks in RAW format from ovirt-imageio and stores them in the backup media. If a list of changed blocks is not available, the backup application can fall back to copying the entire disk.
  8. The backup application finalizes all image transfers.
  9. The backup application finalizes the backup using the REST API.
3.2.5.1.2. Incremental Restore Flow

A backup application that uses the Incremental Backup API must follow this sequence to restore virtual machine disks that have been backed up:

  1. The user selects a restore point based on available backups using the backup application.
  2. The backup application creates a new disk or a snapshot with an existing disk to hold the restored data.
  3. The backup application starts an upload image transfer for every disk, specifying the format as raw. This enables on-the-fly format conversion when uploading RAW data to a QCOW2 disk.
  4. The backup application transfers the data included in this restore point to imageio using the API.
  5. The backup application finalizes the image transfers.
3.2.5.1.3. Incremental Backup and Restore API Tasks

The Incremental Backup and Restore API is documented in the Red Hat Virtualization REST API Guide. The backup and restore flow requires the following actions.

3.2.5.1.4. Enabling Incremental Backup on a new virtual disk

Enable incremental backup for a virtual disk to mark it as included in an incremental backup. When adding a disk, you can enable incremental backup for every disk, either with the REST API or using the Administration Portal. You can back up existing disks that are not enabled for incremental backup by using a full backup, or in the same way you backed them up previously.

Note

The Manager does not require the disk to be enabled for it to be included in an incremental backup, but you can enable it to keep track of which disks are enabled.

Because incremental backup requires disks to be formatted in QCOW2, use QCOW2 format instead of RAW format.

Procedure

  1. Add a new virtual disk. For more information see Creating a Virtual Disk.
  2. When configuring the disk, select the Enable Incremental Backup checkbox.
3.2.5.1.5. Enabling Incremental Backup on an existing RAW virtual disk

Because incremental backup is not supported for disks in RAW format, a QCOW2 format layer must exist on top of any RAW format disks in order to use incremental backup. Creating a snapshot generates a QCOW2 layer, enabling incremental backup on all disks that are included in the snapshot, from the point at which the snapshot is created.

Warning

If the base layer of a disk uses RAW format, deleting the last snapshot and merging the top QCOW2 layer into the base layer converts the disk to RAW format, thereby disabling incremental backup if it was set. To re-enable incremental backup, you can create a new snapshot, including this disk.

Procedure

  1. In the Administration Portal, click Compute → Virtual Machines.
  2. Select a virtual machine and click the Disks tab.
  3. Click the Edit button. The Edit Disk dialog box opens.
  4. Select the Enable Incremental Backup checkbox.
3.2.5.1.6. Enabling incremental backup

You can use a REST API request to enable incremental backup for a virtual machine’s disk.

Procedure

  • Enable incremental backup for a new disk. For example, for a new disk on a virtual machine with ID 123, send this request:

    POST /ovirt-engine/api/vms/123/diskattachments

    The request body should include backup set to incremental as part of a disk object, like this:

    <disk_attachment>
        …​
        <disk>
           …​
           <backup>incremental</backup>
           …​
        </disk>
    </disk_attachment>

The response is:

<disk_attachment>
    …​
    <disk href="/ovirt-engine/api/disks/456" id="456"/>
    …​
</disk_attachment>
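
As a command-line sketch of sending this request, you can save the disk_attachment body shown above to a file and post it with curl. The FQDN, credentials, and CA certificate path are placeholders, and a real body must also include the elements elided by the ellipses above:

    # curl -X POST \
        --cacert /etc/pki/ovirt-engine/ca.pem \
        --user admin@internal:password \
        -H "Accept: application/xml" \
        -H "Content-Type: application/xml" \
        -d @disk_attachment.xml \
        https://manager.example.com/ovirt-engine/api/vms/123/diskattachments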

3.2.5.1.7. Finding disks that are enabled for incremental backup

For the specified virtual machine, you can list the disks that are enabled for incremental backup, filtered according to the backup property.

Procedure

  1. List the disks that are attached to the virtual machine. For example, for a virtual machine with the ID 123, send this request:

    GET /ovirt-engine/api/vms/123/diskattachments

    The response includes all disk_attachment objects, each of which includes one or more disk objects. For example:

    <disk_attachments>
        <disk_attachment>
           …​
           <disk href="/ovirt-engine/api/disks/456" id="456"/>
           …​
        </disk_attachment>
        …​
    </disk_attachments>
  2. Use the disk service to see the properties of a disk from the previous step. For example, for the disk with the ID 456, send this request:

    GET /ovirt-engine/api/disks/456

    The response includes all properties for the disk. backup is set to none or incremental. For example:

    <disk href="/ovirt-engine/api/disks/456" id="456">
        …​
        <backup>incremental</backup>
        …​
    </disk>
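
If you script this check, you can extract the backup property directly. The following one-liner is a sketch that assumes placeholder credentials and uses xmllint from the libxml2 package; it prints none or incremental:

    # curl -s --cacert /etc/pki/ovirt-engine/ca.pem \
        --user admin@internal:password \
        -H "Accept: application/xml" \
        https://manager.example.com/ovirt-engine/api/disks/456 \
        | xmllint --xpath 'string(/disk/backup)' -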
3.2.5.1.8. Starting a full backup

After a full backup, you can use the resulting checkpoint ID as the start point in the next incremental backup.

When taking a backup of a running virtual machine, the process creates a scratch disk on the same storage domain as the disk being backed up. The backup process creates this disk to enable new data to be written to the running virtual machine during the backup. You can see this scratch disk in the Administration Portal during the backup. It is automatically deleted when the backup finishes.

Starting a full backup requires a request call with a request body and returns a response.

Procedure

  1. Send a request specifying a virtual machine to back up. For example, specify a virtual machine with ID 123 like this:

    POST /ovirt-engine/api/vms/123/backups
  2. In the request body, specify a disk to back up. For example, to start a full backup of a disk with ID 456, send the following request body:

    <backup>
        <disks>
           <disk id="456" />
           …​
        </disks>
    </backup>

    The response body should look similar to this:

    <backup id="789">
        <disks>
           <disk id="456" />
           …​
           …​
        </disks>
        <status>initializing</status>
        <creation_date>
    </backup>

    The response includes the following:

    • The backup id
    • The status of the backup, indicating that the backup is initializing.
  3. Poll the backup until the status is ready. The response includes to_checkpoint_id. Note this ID and use it for from_checkpoint_id in the next incremental backup.
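
A minimal polling loop might look like the following sketch. It assumes placeholder credentials and FQDN, backup ID 789 from the example above, and xmllint from the libxml2 package to read the status element shown in the example response:

    # until [ "$(curl -s --cacert /etc/pki/ovirt-engine/ca.pem \
          --user admin@internal:password \
          -H 'Accept: application/xml' \
          https://manager.example.com/ovirt-engine/api/vms/123/backups/789 \
          | xmllint --xpath 'string(/backup/status)' -)" = "ready" ]; do
          sleep 10
      done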

3.2.5.1.9. Starting an incremental backup

Once a full backup is completed for a given virtual disk, subsequent incremental backups of that disk contain only the changes since the last backup. Use the value of to_checkpoint_id from the most recent backup as the value for from_checkpoint_id in the request body.

When taking a backup of a running virtual machine, the process creates a scratch disk on the same storage domain as the disk being backed up. The backup process creates this disk to enable new data to be written to the running virtual machine during the backup. You can see this scratch disk in the Administration Portal during the backup. It is automatically deleted when the backup finishes.

Starting an incremental backup or mixed backup requires a request call with a request body and returns a response.

Procedure

  1. Send a request specifying a virtual machine to back up. For example, specify a virtual machine with ID 123 like this:

    POST /ovirt-engine/api/vms/123/backups
  2. In the request body, specify a disk to back up. For example, to start an incremental backup of a disk with ID 456, send the following request body:

    <backup>
        <from_checkpoint_id>previous-checkpoint-uuid</from_checkpoint_id>
        <disks>
           <disk id="456" />
           …​
        </disks>
    </backup>
    Note

    In the request body, if you include a disk that is not included in the previous checkpoint, the request also runs a full backup of this disk. For example, a disk with ID 789 has not been backed up yet. To add a full backup of 789 to the above request body, send a request body like this:

    <backup>
        <from_checkpoint_id>previous-checkpoint-uuid</from_checkpoint_id>
        <disks>
           <disk id="456" />
           <disk id="789" />
           …​
        </disks>
    </backup>

    The response body should look similar to this:

    <backup id="101112">
    <from_checkpoint_id>previous-checkpoint-uuid</from_checkpoint_id>
    <to_checkpoint_id>new-checkpoint-uuid</to_checkpoint_id>
        <disks>
           <disk id="456" />
           <disk id="789" />
           …​
           …​
        </disks>
        <status>initializing</status>
        <creation_date>
    </backup>

    The response includes the following:

    • The backup ID.
    • The ID of any disk that was included in the backup.
    • The status, indicating that the backup is initializing.
  3. Poll the backup until the status is ready. The response includes to_checkpoint_id. Note this ID and use it for from_checkpoint_id in the next incremental backup.

3.2.5.1.10. Getting information about a backup

You can get information about a backup that you can use to start a new incremental backup.

The list method of the VmBackups service returns the following information about a backup:

  • The ID of each disk that was backed up.
  • The IDs of the start and end checkpoints of the backup.
  • The ID of the disk image of the backup, for each disk included in the backup.
  • The status of the backup.
  • The date the backup was created.

When the value of <status> is ready, the response includes <to_checkpoint_id>. Use this value as the <from_checkpoint_id> in the next incremental backup; at that point you can also start downloading the disks to back up the virtual machine storage.

Procedure

  • To get information about a backup with ID 456 of a virtual machine with ID 123, send a request like this:

    GET /ovirt-engine/api/vms/123/backups/456

    The response includes the backup with ID 456, with <from_checkpoint_id> 999 and <to_checkpoint_id> 666. The disks included in the backup are referenced in the <link> element.

    <backup id="456">
        <from_checkpoint_id>999</from_checkpoint_id>
        <to_checkpoint_id>666</to_checkpoint_id>
        <link href="/ovirt-engine/api/vms/123/backups/456/disks" rel="disks"/>
        <status>ready</status>
        <creation_date>
    </backup>
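
When the status is ready, you can extract the checkpoint ID to use in the next incremental backup. This sketch assumes placeholder credentials and uses xmllint from the libxml2 package:

    # curl -s --cacert /etc/pki/ovirt-engine/ca.pem \
        --user admin@internal:password \
        -H "Accept: application/xml" \
        https://manager.example.com/ovirt-engine/api/vms/123/backups/456 \
        | xmllint --xpath 'string(/backup/to_checkpoint_id)' -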
3.2.5.1.11. Getting information about the disks in a backup

You can get information about the disks that are part of the backup, including the backup mode of each disk, which helps determine the mode that you use to download the backup.

The list method of the VmBackupDisks service returns the following information about a backup:

  • The ID and name of each disk that was backed up.
  • The ID of the disk image of the backup, for each disk included in the backup.
  • The disk format.
  • The backup behavior supported by the disk.
  • The backup type that was taken for the disk (full/incremental).

Procedure

  • To get information about a backup with ID 456 of a virtual machine with ID 123, send a request like this:

    GET /ovirt-engine/api/vms/123/backups/456/disks

    The response includes the disk with ID 789, and the ID of the disk image is 555.

    <disks>
        <disk id="789">
            <name>vm1_Disk1</name>
            <actual_size>671744</actual_size>
            <backup>incremental</backup>
            <backup_mode>full</backup_mode>
            <format>cow</format>
            <image_id>555</image_id>
            <qcow_version>qcow2_v3</qcow_version>
            <status>locked</status>
            <storage_type>image</storage_type>
            <total_size>0</total_size>
        </disk>
    </disks>
3.2.5.1.12. Finalizing a backup

Finalizing a backup ends the backup, unlocks resources, and performs cleanups. Use the finalize backup service method.

Procedure

  • To finalize a backup of a disk with ID 456 on a virtual machine with ID 123, send a request like this:

    POST /vms/123/backups/456/finalize
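
For example, with curl, the finalize action can be invoked as in the following sketch. The FQDN and credentials are placeholders, and the sketch assumes an empty action element as the request body, as is conventional for oVirt action endpoints:

    # curl -X POST \
        --cacert /etc/pki/ovirt-engine/ca.pem \
        --user admin@internal:password \
        -H "Accept: application/xml" \
        -H "Content-Type: application/xml" \
        -d '<action/>' \
        https://manager.example.com/ovirt-engine/api/vms/123/backups/456/finalize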

3.2.5.1.13. Creating an image transfer object for incremental backup

When the backup is ready to download, the backup application should create an imagetransfer object, which initiates a transfer for an incremental backup.

Creating an image transfer object requires a request call with a body.

Procedure

  1. Send a request like this:

    POST /ovirt-engine/api/imagetransfers
  2. In the request body, specify the following parameters:

    • The disk ID.
    • The backup ID.
    • The transfer direction, set to download.
    • The transfer format, set to raw.

    For example, to transfer a backup of a disk where the ID of the disk is 123 and the ID of the backup is 456, send the following request body:

    <image_transfer>
        <disk id="123"/>
        <backup id="456"/>
        <direction>download</direction>
        <format>raw</format>
    </image_transfer>
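
As with the other requests, you can post this body from the command line. The following sketch assumes the body above is saved as image_transfer.xml and uses placeholder credentials; the response describes the new transfer, including the URL from which to download the backup data:

    # curl -X POST \
        --cacert /etc/pki/ovirt-engine/ca.pem \
        --user admin@internal:password \
        -H "Accept: application/xml" \
        -H "Content-Type: application/xml" \
        -d @image_transfer.xml \
        https://manager.example.com/ovirt-engine/api/imagetransfers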

3.2.5.1.14. Creating an image transfer object for incremental restore

To restore raw data that was backed up using the incremental backup API to a QCOW2-formatted disk, the backup application must create an imagetransfer object.

When the transfer format is raw and the underlying disk format is QCOW2, uploaded data is converted on the fly to QCOW2 format when writing to storage. Uploading data from a QCOW2 disk to a RAW disk is not supported.

Creating an image transfer object requires a request call with a body.

Procedure

  1. Send a request like this:

    POST /ovirt-engine/api/imagetransfers
  2. In the request body, specify the following parameters:

    • The disk ID or snapshot ID.
    • The transfer direction, set to upload.
    • The transfer format, set to raw.

    For example, to transfer a backup of a disk where the ID of the disk is 123, send the following request body:

    <image_transfer>
        <disk id="123"/>
        <direction>upload</direction>
        <format>raw</format>
    </image_transfer>

3.2.5.1.15. Listing checkpoints for a virtual machine

You can list all checkpoints for a virtual machine, including information for each checkpoint, by sending a request call.

Procedure

  • Send a request specifying a virtual machine. For example, specify a virtual machine with ID 123 like this:

    GET /vms/123/checkpoints/

The response includes all the virtual machine checkpoints. Each checkpoint contains the following information:

  • The checkpoint’s disks.
  • The ID of the parent checkpoint.
  • Creation date of the checkpoint.
  • The virtual machine to which it belongs.

For example:

<checkpoints>
    <checkpoint id="456">
         <link href="/ovirt-engine/api/vms/vm-uuid/checkpoints/456/disks" rel="disks"/>
         <parent_id>parent-checkpoint-uuid</parent_id>
         <creation_date>xxx</creation_date>
         <vm href="/ovirt-engine/api/vms/123" id="123"/>
    </checkpoint>
</checkpoints>

3.2.5.1.16. Listing a specific checkpoint for a virtual machine

You can list information for a specific checkpoint for a virtual machine by sending a request call.

Procedure

  • Send a request specifying a virtual machine. For example, specify a virtual machine with ID 123 and checkpoint ID 456 like this:

    GET /vms/123/checkpoints/456

The response includes the following information for the checkpoint:

  • The checkpoint’s disks.
  • The ID of the parent checkpoint.
  • Creation date of the checkpoint.
  • The virtual machine to which it belongs.

For example:

<checkpoint id="456">
     <link href="/ovirt-engine/api/vms/vm-uuid/checkpoints/456/disks" rel="disks"/>
     <parent_id>parent-checkpoint-uuid</parent_id>
     <creation_date>xxx</creation_date>
     <vm href="/ovirt-engine/api/vms/123" id="123"/>
</checkpoint>

3.2.5.1.17. Removing a checkpoint

You can remove a checkpoint of a virtual machine by sending a DELETE request. You can remove a checkpoint on a virtual machine whether it is running or not.

Procedure

  • Send a request specifying a virtual machine and a checkpoint. For example, specify a virtual machine with ID 123 and a checkpoint with ID 456 like this:

    DELETE /vms/123/checkpoints/456/
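
For example, with curl (the FQDN, credentials, and CA certificate path are placeholders):

    # curl -X DELETE \
        --cacert /etc/pki/ovirt-engine/ca.pem \
        --user admin@internal:password \
        https://manager.example.com/ovirt-engine/api/vms/123/checkpoints/456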

3.2.5.1.18. Using the imageio API to transfer backup data

Image transfer APIs start and stop an image transfer. The result is a transfer URL.

You use the imageio API to actually transfer the data from the transfer URL.

For complete information on using the imageio API, see the ovirt-imageio Images API reference.

Table 3.2. imageio Image API methods used in incremental backup and restore
Each entry lists the API request, its description, and the corresponding section of the imageio Image API reference.

OPTIONS /images/{ticket-id} HTTP/1.1
    Gets the server options, to find out what features the server supports. See OPTIONS.

GET /images/{ticket-id}/extents
    Gets information about disk image content and allocation, or about blocks that changed during an incremental backup. This information is known as extent information. See EXTENTS.

GET /images/{ticket-id}/extents?context=dirty
    Downloads the changes from the backup that the program doing the image transfer needs. These changes are known as dirty extents. See EXTENTS, Examples: Request dirty extents.

PUT /images/{ticket-id}
    Writes the restored data to the disk or snapshot that the backup application created to hold it. See PUT.
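
For example, once an image transfer is active, you might request the dirty extents from the transfer URL returned in the imagetransfer object. In this sketch, the host name, ticket ID, and CA certificate path are placeholders; by default, the ovirt-imageio daemon listens on port 54322:

    # curl --cacert /etc/pki/ovirt-engine/ca.pem \
        'https://host.example.com:54322/images/ticket-id/extents?context=dirty'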

Additional resources

The Red Hat Virtualization Python SDK includes several implementation examples that you can use to get started with transferring backups.
