Chapter 7. Recovering a Self-Hosted Engine from an Existing Backup
If a self-hosted engine is unavailable due to problems that cannot be repaired, you can restore it in a new self-hosted environment using a backup taken before the problem began, if one is available.
When you specify a backup file during deployment, the backup is restored on a new Manager virtual machine, with a new self-hosted engine storage domain. The old Manager is removed, and the old self-hosted engine storage domain is renamed and can be manually removed after you confirm that the new environment is working correctly. Deploying on a fresh host is highly recommended; if the host used for deployment existed in the backed up environment, it will be removed from the restored database to avoid conflicts in the new environment.
Restoring a self-hosted engine involves deploying a new self-hosted engine on a fresh host and restoring the Manager backup during the deployment, as described in the following procedure.
This procedure assumes that you do not have access to the original Manager, and that the new host can access the backup file.
Prerequisites
- A fresh installation of Red Hat Virtualization Host or Red Hat Enterprise Linux 7, with the required repositories enabled. See Installing Red Hat Virtualization Host or Enabling the Red Hat Enterprise Linux Host Repositories in the Installation Guide.
- A fully qualified domain name prepared for your Manager and the host. Forward and reverse lookup records must both be set in the DNS; an example of verifying both records is shown after these prerequisites. The new Manager must have the same fully qualified domain name as the original Manager.
- A directory of at least 5 GB on the host, for the RHV-M Appliance. The deployment process will check if /var/tmp has enough space to extract the appliance files. If not, you can specify a different directory or mount external storage; an example of preparing an alternative directory is shown after these prerequisites. The VDSM user and KVM group must have read, write, and execute permissions on the directory.
- Prepared storage for a data storage domain dedicated to the Manager virtual machine. This storage domain is created during the self-hosted engine deployment, and must be at least 74 GiB. Highly available storage is recommended. For more information on preparing storage for your deployment, see the Storage chapter of the Administration Guide.
Warning: If you are using NFS or Gluster storage, do not use the old self-hosted engine storage domain’s mount point to deploy the new storage domain, as you risk losing virtual machine data.
Important: If you are using iSCSI storage, the self-hosted engine storage domain must use its own iSCSI target. Any additional storage domains must use a different iSCSI target.
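For example, you can verify the DNS records from the host with the dig utility, and prepare an alternative appliance extraction directory with the required permissions. The FQDN manager.example.com, the address 10.1.1.10, and the path /var/tmp/appliance are placeholders; the vdsm user and kvm group exist once the host packages are installed:
# dig +short manager.example.com
# dig +short -x 10.1.1.10
# mkdir /var/tmp/appliance
# chown vdsm:kvm /var/tmp/appliance
# chmod 775 /var/tmp/appliance
The first command should return the Manager’s IP address and the second should return its FQDN; if either lookup fails, correct the DNS records before starting the deployment.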
7.1. Restoring the Backup on a New Self-Hosted Engine
Run the hosted-engine script on a new host, and use the --restore-from-file=path/to/file_name option to restore the Manager backup during the deployment.
If you are using iSCSI storage, and your iSCSI target filters connections according to the initiator’s ACL, the deployment may fail with a STORAGE_DOMAIN_UNREACHABLE error. To prevent this, you must update your iSCSI configuration before beginning the self-hosted engine deployment:
- If you are redeploying on an existing host, you must update the host’s iSCSI initiator settings in /etc/iscsi/initiatorname.iscsi. The initiator IQN must be the same as the one previously mapped on the iSCSI target, or updated to a new IQN, if applicable.
- If you are deploying on a fresh host, you must update the iSCSI target configuration to accept connections from that host.
Note that the IQN can be updated on the host side (iSCSI initiator) or on the storage side (iSCSI target).
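For example, if you are redeploying on an existing host, you can check the current initiator IQN and, if it no longer matches the one mapped on the target, edit it and restart the iSCSI daemon. The IQN shown here is a placeholder:
# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:example-host
# vi /etc/iscsi/initiatorname.iscsi
# systemctl restart iscsid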
Procedure
Copy the backup file to the new host. In the following example, host.example.com is the FQDN for the host, and /backup/ is any designated folder or path.
# scp -p file_name host.example.com:/backup/
Log in to the new host. If you are deploying on Red Hat Virtualization Host, the self-hosted engine deployment tool is available by default. If you are deploying on Red Hat Enterprise Linux, you must install the package:
# yum install ovirt-hosted-engine-setup
Red Hat recommends using the screen window manager to run the script to avoid losing the session in case of network or terminal disruption. Install and run screen:
# yum install screen
# screen
In the event of session timeout or connection disruption, run screen -d -r to recover the deployment session.
Run the hosted-engine script, specifying the path to the backup file:
# hosted-engine --deploy --restore-from-file=backup/file_name
To escape the script at any time, use CTRL+D to abort deployment.
- Select Yes to begin the deployment.
- Configure the network. The script detects possible NICs to use as a management bridge for the environment.
- If you want to use a custom appliance for the virtual machine installation, enter the path to the OVA archive. Otherwise, leave this field empty to use the RHV-M Appliance.
- Specify the FQDN for the Manager virtual machine.
- Enter the root password for the Manager.
- Enter an SSH public key that will allow you to log in to the Manager as the root user, and specify whether to enable SSH access for the root user. An example of generating a key pair is shown after this list.
- Enter the virtual machine’s CPU and memory configuration.
- Enter a MAC address for the Manager virtual machine, or accept a randomly generated one. If you want to provide the Manager virtual machine with an IP address via DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script will not configure the DHCP server for you.
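If you choose DHCP, one way to provide the reservation is a static host entry on the DHCP server. The following ISC dhcpd snippet is only an illustration; the host name, MAC address, and IP address are placeholders, and the exact syntax depends on your DHCP server:
host rhvm {
    hardware ethernet 00:16:3e:6c:5a:91;
    fixed-address 10.1.1.10;
}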
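If you do not already have an SSH key pair for the earlier public key prompt, you can generate one on your client machine with standard OpenSSH tools and paste the contents of the public key file at the prompt. The file path below corresponds to an RSA key; it differs for other key types:
# ssh-keygen -t rsa
# cat ~/.ssh/id_rsa.pub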
Enter the virtual machine’s networking details. If you specify Static, enter the IP address of the Manager.
Important: The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Manager virtual machine’s IP must be in the same subnet range (10.1.1.1-254/24).
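To confirm that the static address you plan to enter is in the same subnet as the host, you can list the host’s current addresses and compare the network prefix of the NIC you selected for the management bridge:
# ip addr show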
- Specify whether to add entries for the Manager virtual machine and the base host to the virtual machine’s /etc/hosts file. You must ensure that the host names are resolvable.
- Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications.
Enter a password for the admin@internal user to access the Administration Portal.
The script creates the virtual machine. This can take some time if the RHV-M Appliance needs to be installed.
Select the type of storage to use:
- For NFS, enter the version, full address and path to the storage, and any mount options.
Warning: Do not use the old self-hosted engine storage domain’s mount point for the new storage domain, as you risk losing virtual machine data.
- For iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.
Note: To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. See Red Hat Enterprise Linux DM Multipath for details. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.
- For Gluster storage, enter the full address and path to the storage, and any mount options.
Warning: Do not use the old self-hosted engine storage domain’s mount point for the new storage domain, as you risk losing virtual machine data.
Important: Only replica 3 Gluster storage is supported. Ensure you have the following configuration:
In the /etc/glusterfs/glusterd.vol file on all three Gluster servers, set rpc-auth-allow-insecure to on.
option rpc-auth-allow-insecure on
Configure the volume as follows:
gluster volume set _volume_ cluster.quorum-type auto
gluster volume set _volume_ network.ping-timeout 10
gluster volume set _volume_ auth.allow \*
gluster volume set _volume_ group virt
gluster volume set _volume_ storage.owner-uid 36
gluster volume set _volume_ storage.owner-gid 36
gluster volume set _volume_ server.allow-insecure on
- For Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide.
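Before selecting the storage, it can help to confirm that the host can reach it. The following optional pre-checks use placeholder values: showmount (from the nfs-utils package) lists the exports of an NFS server, and mpathconf enables multipathing if you intend to connect more than one iSCSI target:
# showmount -e nfs.example.com
# yum install device-mapper-multipath
# mpathconf --enable --with_multipathd y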
Enter the Manager disk size.
The script continues until the deployment is complete.
- The deployment process changes the Manager’s SSH keys. To allow client machines to access the new Manager without SSH errors, remove the original Manager’s entry from the .ssh/known_hosts file on any client machines that accessed the original Manager.
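For example, on each client machine you can remove the stale entry with ssh-keygen, where manager.example.com is the Manager’s FQDN:
# ssh-keygen -R manager.example.com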
When the deployment is complete, log in to the new Manager virtual machine and enable the required repositories.
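For example, on a Red Hat Enterprise Linux 7 based Manager the repositories are enabled with subscription-manager. The repository IDs depend on your RHV version, so rhel-7-server-rhv-manager-rpms below is only a placeholder; use the exact list from the Installation Guide:
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rhel-7-server-rhv-manager-rpms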