6.2. Restoring the Self-Hosted Engine Environment
This section explains how to restore a self-hosted engine environment from a backup created with the engine-backup tool.

Warning

The restore process requires the tar backup file produced by the engine-backup tool. If a third-party tool is used, it must create a backup of the tar file.

Restoring the environment involves the following key actions:
- Create a newly installed Red Hat Enterprise Linux host and run the hosted-engine deployment script.
- Restore the Red Hat Virtualization Manager configuration settings and database content in the new Manager virtual machine.
- Remove self-hosted engine nodes in a Non Operational state and re-install them into the restored self-hosted engine environment.
Prerequisites
- To restore a self-hosted engine environment, you must prepare a newly installed Red Hat Enterprise Linux system on a physical host.
- The operating system version of the new host and Manager must be the same as that of the original host and Manager.
- You must have Red Hat Subscription Manager entitlements for your new environment. For a list of the required repositories, see Subscribing to the Required Entitlements in the Installation Guide.
- The fully qualified domain name of the new Manager must be the same fully qualified domain name as that of the original Manager. Forward and reverse lookup records must both be set in DNS.
- You must prepare storage for the new self-hosted engine environment to use as the Manager virtual machine's shared storage domain. This domain must be at least 68 GB. For more information on preparing storage for your deployment, see the Storage chapter of the Administration Guide.
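As an optional check of the DNS prerequisite, both lookups can be verified from the new host before deploying. This sketch is not part of the documented procedure; it assumes the dig utility (bind-utils package) is available, uses the example FQDN from this chapter, and 192.0.2.100 is a placeholder for the new Manager IP address:

# dig +short Manager.example.com
# dig +short -x 192.0.2.100

The forward lookup should return the new Manager IP address, and the reverse lookup should return the fully qualified domain name.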
6.2.1. Creating a New Self-Hosted Engine Environment to be Used as the Restored Environment
Important

The failover host, Host 1, used in Section 6.1, “Backing up the Self-Hosted Engine Manager Virtual Machine”, uses the default hostname of hosted_engine_1, which is also used in this procedure. Due to the nature of the restore process for the self-hosted engine, before the final synchronization of the restored engine can take place, this failover host must be removed, and this can only be done if the host had no virtual load when the backup was taken. You can also restore the backup on separate hardware that was not used in the backed-up environment; in that case, this is not a concern.
Procedure 6.4. Creating a New Self-Hosted Environment to be Used as the Restored Environment
Updating DNS
Update your DNS so that the fully qualified domain name of the Red Hat Virtualization environment correlates to the IP address of the new Manager. In this procedure, the fully qualified domain name is Manager.example.com. The fully qualified domain name provided for the engine must be identical to that given in the engine setup of the original engine that was backed up.

Initiating Hosted Engine Deployment
On the newly installed Red Hat Enterprise Linux host, run the hosted-engine deployment script. To escape the script at any time, use the CTRL+D keyboard combination to abort deployment. If running the hosted-engine deployment script over a network, it is recommended to use the screen window manager to avoid losing the session in case of network or terminal disruption. Install the screen package first if it is not installed.

# screen
# hosted-engine --deploy
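If the screen package is missing, it can be installed with yum; starting a named session also makes it easy to reattach after a disconnection. The session name below is only an example:

# yum install screen
# screen -S hosted-engine-deploy
# screen -r hosted-engine-deploy

The last command reattaches to the named session if the terminal connection drops during deployment.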
Preparing for Initialization
The script begins by requesting confirmation to use the host as a hypervisor in a self-hosted engine environment.

Continuing will configure this host for serving as hypervisor and create a VM where you have to install oVirt Engine afterwards. Are you sure you want to continue? (Yes, No)[Yes]:

Configuring Storage
Select the type of storage to use.

During customization use CTRL-D to abort. Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs3, nfs4)[nfs3]:

- For NFS storage types, specify the full address, using either the FQDN or IP address, and path name of the shared storage domain.
Please specify the full shared storage connection path to use (example: host:/path): storage.example.com:/hosted_engine/nfs
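Optionally, before entering the NFS path, you can confirm from the host that the export is visible. This check is not part of the deployment script; it assumes the nfs-utils package is installed and uses the example storage server from this procedure:

# showmount -e storage.example.com

The /hosted_engine/nfs export should appear in the returned list.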
- For iSCSI, specify the iSCSI portal IP address, port, user name and password, and select a target name from the auto-detected list. You can only select one iSCSI target during the deployment.
Please specify the iSCSI portal IP address:
Please specify the iSCSI portal port [3260]:
Please specify the iSCSI portal user:
Please specify the iSCSI portal password:
Please specify the target name (auto-detected values) [default]:
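Optionally, the targets exposed by the portal can be checked manually before the script's auto-detection. This is a sketch, not part of the deployment script; it assumes the iscsi-initiator-utils package is installed and uses a placeholder portal address:

# iscsiadm -m discovery -t sendtargets -p 192.0.2.20:3260

The command lists the target names available at that portal; only one of them can be selected during deployment.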
- For Gluster storage, specify the full address, using either the FQDN or IP address, and path name of the shared storage domain.
Important

Only replica 3 Gluster storage is supported. Ensure the following configuration has been made:

- In the /etc/glusterfs/glusterd.vol file on all three Gluster servers, set rpc-auth-allow-insecure to on.

option rpc-auth-allow-insecure on

- Configure the volume as follows:
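The volume settings below are an illustrative sketch only (VOLUME_NAME is a placeholder); confirm the exact options required against the Gluster and Red Hat Virtualization documentation for your versions:

# gluster volume set VOLUME_NAME group virt
# gluster volume set VOLUME_NAME storage.owner-uid 36
# gluster volume set VOLUME_NAME storage.owner-gid 36
# gluster volume set VOLUME_NAME cluster.quorum-type auto
# gluster volume set VOLUME_NAME network.ping-timeout 10
# gluster volume set VOLUME_NAME server.allow-insecure on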
Please specify the full shared storage connection path to use (example: host:/path): storage.example.com:/hosted_engine/gluster_volume

- For Fibre Channel, the host bus adapters must be configured and connected, and the hosted-engine script will auto-detect the LUNs available. The LUNs must not contain any existing data.
Configuring the Network
The script detects possible network interface controllers (NICs) to use as a management bridge for the environment. It then checks your firewall configuration and offers to modify it for console (SPICE or VNC) access to the Manager virtual machine. Provide a pingable gateway IP address, to be used by the ovirt-ha-agent, to help determine a host's suitability for running a Manager virtual machine.

Please indicate a nic to set ovirtmgmt bridge on: (eth1, eth0) [eth1]:
iptables was detected on your computer, do you wish setup to configure it? (Yes, No)[Yes]:
Please indicate a pingable gateway IP address [X.X.X.X]:

Configuring the New Manager Virtual Machine
The script creates a virtual machine to be configured as the new Manager virtual machine. Specify the boot device and, if applicable, the path name of the installation media, the image alias, the CPU type, the number of virtual CPUs, and the disk size. Specify a MAC address for the Manager virtual machine, or accept a randomly generated one. The MAC address can be used to update your DHCP server prior to installing the operating system on the Manager virtual machine. Specify the memory size and console connection type for the Manager virtual machine.

Identifying the Name of the Host
Specify the password for the admin@internal user to access the Administration Portal. A unique name must be provided for the host, to ensure that it does not conflict with other resources that will be present when the engine has been restored from the backup. The name hosted_engine_1 can be used in this procedure because this host was placed into maintenance mode before the environment was backed up, enabling removal of this host between the restoring of the engine and the final synchronization of the host and the engine.

Enter engine admin password:
Confirm engine admin password:
Enter the name which will be used to identify this host inside the Administration Portal [hosted_engine_1]:

Configuring the Hosted Engine
Provide the fully qualified domain name for the new Manager virtual machine. This procedure uses the fully qualified domain name Manager.example.com. Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications.

Important

The fully qualified domain name provided for the engine (Manager.example.com) must be the same fully qualified domain name provided when the original Manager was initially set up.

Configuration Preview
Before proceeding, the hosted-engine deployment script displays the configuration values you have entered, and prompts for confirmation to proceed with these values.

Creating the New Manager Virtual Machine

The script creates the virtual machine to be configured as the Manager virtual machine and provides connection details. You must install an operating system on it before the hosted-engine deployment script can proceed with the Hosted Engine configuration. Using the naming convention of this procedure, connect to the virtual machine using VNC with the following command:

/usr/bin/remote-viewer vnc://hosted_engine_1.example.com:5900

Installing the Virtual Machine Operating System
Connect to the Manager virtual machine and install a Red Hat Enterprise Linux 7 operating system.

Synchronizing the Host and the Manager

Return to the host and continue the hosted-engine deployment script by selecting option 1:

(1) Continue setup - VM installation is complete

Installing the Manager
Connect to the new Manager virtual machine, register it with Red Hat Subscription Management, and enable the required repositories. See Subscribing to the Required Entitlements in the Installation Guide. Ensure the latest versions of all installed packages are in use, and install the rhevm packages.

# yum update

Note

Reboot the machine if any kernel-related packages have been updated.

# yum install rhevm
6.2.2. Restoring the Self-Hosted Engine Manager
Use the engine-backup tool to automate the restore of the configuration settings and database content for a backed-up self-hosted engine Manager virtual machine and Data Warehouse. This procedure only applies to components that were configured automatically during the initial engine-setup. If you configured the database(s) manually during engine-setup, follow the instructions in Section 6.2.3, “Restoring the Self-Hosted Engine Manager Manually” to restore the backed-up environment manually.
Procedure 6.5. Restoring the Self-Hosted Engine Manager
- Secure copy the backup files to the new Manager virtual machine. This example copies the files from a network storage server to which the files were copied in Section 6.1, “Backing up the Self-Hosted Engine Manager Virtual Machine”. In this example, Storage.example.com is the fully qualified domain name of the storage server, /backup/EngineBackupFiles is the designated file path for the backup files on the storage server, and /backup/ is the path to which the files will be copied on the new Manager.
# scp -p Storage.example.com:/backup/EngineBackupFiles /backup/

- Use the engine-backup tool to restore a complete backup.
- If you are only restoring the Manager, run:
# engine-backup --mode=restore --file=file_name --log=log_file_name --provision-db --restore-permissions

- If you are restoring the Manager and Data Warehouse, run:
# engine-backup --mode=restore --file=file_name --log=log_file_name --provision-db --provision-dwh-db --restore-permissions

If successful, the following output displays:

You should now run engine-setup. Done.

- Configure the restored Manager virtual machine. This process identifies the existing configuration settings and database content. Confirm the settings. Upon completion, the setup provides an SSH fingerprint and an internal Certificate Authority hash.
# engine-setup

Removing the Host from the Restored Environment
If the deployment of the restored self-hosted engine is on new hardware that has a unique name not present in the backed-up engine, skip this step. This step is only applicable to deployments occurring on the failover host, hosted_engine_1. Because this host was present in the environment at the time the backup was created, it maintains a presence in the restored engine and must first be removed from the environment before final synchronization can take place.

- Log in to the Administration Portal.
- Click the Hosts tab. The failover host, hosted_engine_1, will be in maintenance mode and without a virtual load, as this was how it was prepared for the backup.
- Click Remove to open the remove host confirmation window.
- Click OK.
Note
If the host you are trying to remove becomes non-operational, see Section 6.2.4, “Removing Non-Operational Hosts from a Restored Self-Hosted Engine Environment” for instructions on how to force the removal of a host.

Synchronizing the Host and the Manager
Return to the host and continue the hosted-engine deployment script by selecting option 1:

(1) Continue setup - engine installation is complete

[ INFO ] Engine replied: DB Up!Welcome to Health Status!
[ INFO ] Waiting for the host to become operational in the engine. This may take several minutes...
[ INFO ] Still waiting for VDSM host to become operational...
At this point, hosted_engine_1 will become visible in the Administration Portal with Installing and Initializing states before entering a Non Operational state. The host will continue to wait for the VDSM host to become operational until it eventually times out. This happens because another host in the environment maintains the Storage Pool Manager (SPM) role and hosted_engine_1 cannot interact with the storage domain because the SPM host is in a Non Responsive state. When this process times out, you are prompted to shut down the virtual machine to complete the deployment. When deployment is complete, the host can be manually placed into maintenance mode and activated through the Administration Portal.

[ INFO ] Still waiting for VDSM host to become operational...
[ ERROR ] Timed out while waiting for host to start. Please check the logs.
[ ERROR ] Unable to add hosted_engine_2 to the manager
Please shutdown the VM allowing the system to launch it as a monitored service.
The system will wait until the VM is down.
- Shut down the new Manager virtual machine.

# shutdown -h now

- Return to the host to confirm it has detected that the Manager virtual machine is down.
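One way to confirm this from the host is to query the high-availability services' view of the engine virtual machine:

# hosted-engine --vm-status

The engine virtual machine should be reported as down before you continue.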
- Activate the host.
- Log in to the Administration Portal.
- Click the Hosts tab.
- Select hosted_engine_1 and click Maintenance. The host may take several minutes before it enters maintenance mode.
- Click Activate. Once active, hosted_engine_1 immediately contends for SPM, and the storage domain and data center become active.
- Migrate virtual machines to the active host by manually fencing the Non Responsive hosts. In the Administration Portal, right-click the hosts and select Confirm 'Host has been Rebooted'. Any virtual machines that were running on that host at the time of the backup will now be removed from that host, and move from an Unknown state to a Down state. These virtual machines can now be run on hosted_engine_1. The host that was fenced can now be forcefully removed using the REST API.
hosted_engine_1 is active and is able to run virtual machines in the restored environment. The remaining self-hosted engine nodes in Non Operational state can now be removed by following the steps in Section 6.2.4, “Removing Non-Operational Hosts from a Restored Self-Hosted Engine Environment” and then re-installed into the environment by following the steps in Chapter 7, Installing Additional Hosts to a Self-Hosted Environment.
6.2.3. Restoring the Self-Hosted Engine Manager Manually
Procedure 6.6. Restoring the Self-Hosted Engine Manager
- Manually create an empty database to which the database content in the backup can be restored. The following steps must be performed on the machine where the database is to be hosted.
- If the database is to be hosted on a machine other than the Manager virtual machine, install the postgresql-server package. This step is not required if the database is to be hosted on the Manager virtual machine because this package is included with the rhevm package.
# yum install postgresql-server

- Initialize the postgresql database, start the postgresql service, and ensure this service starts on boot:

# postgresql-setup initdb
# systemctl start postgresql.service
# systemctl enable postgresql.service
- Enter the postgresql command line:

# su postgres
$ psql
- Create the engine user:

postgres=# create role engine with login encrypted password 'password';

If you are also restoring Data Warehouse, create the ovirt_engine_history user on the relevant host:

postgres=# create role ovirt_engine_history with login encrypted password 'password';
- Create the new database:

postgres=# create database database_name owner engine template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';

If you are also restoring the Data Warehouse, create the database on the relevant host:

postgres=# create database database_name owner ovirt_engine_history template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';
- Exit the postgresql command line and log out of the postgres user:

postgres=# \q
$ exit
- Edit the /var/lib/pgsql/data/pg_hba.conf file as follows:
- For each local database, replace the existing directives in the section starting with local at the bottom of the file with the following directives:

host database_name user_name 0.0.0.0/0 md5
host database_name user_name ::0/0 md5
- For each remote database:
- Add the following line immediately underneath the line starting with local at the bottom of the file, replacing X.X.X.X with the IP address of the Manager:

host database_name user_name X.X.X.X/32 md5
- Allow TCP/IP connections to the database. Edit the /var/lib/pgsql/data/postgresql.conf file and add the following line:

listen_addresses='*'

This example configures the postgresql service to listen for connections on all interfaces. You can specify an interface by giving its IP address.
- Open the default port used for PostgreSQL database connections, and save the updated firewall rules:
# iptables -I INPUT 5 -p tcp -s Manager_IP_Address --dport 5432 -j ACCEPT
# service iptables save

- Restart the postgresql service:

# systemctl restart postgresql.service
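Optionally, before running the restore, confirm from the Manager virtual machine that the new database is reachable with the credentials created above. This check is a sketch, not part of the documented procedure; it assumes the postgresql client package is installed on the Manager virtual machine and uses the placeholder names from this procedure:

# psql -h database_location -p 5432 -U engine -d database_name -c 'SELECT 1;'

Enter the engine user's password when prompted; a returned value of 1 confirms connectivity.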
- Secure copy the backup files to the new Manager virtual machine. This example copies the files from a network storage server to which the files were copied in Section 6.1, “Backing up the Self-Hosted Engine Manager Virtual Machine”. In this example, Storage.example.com is the fully qualified domain name of the storage server, /backup/EngineBackupFiles is the designated file path for the backup files on the storage server, and /backup/ is the path to which the files will be copied on the new Manager.
# scp -p Storage.example.com:/backup/EngineBackupFiles /backup/
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Restore a complete backup or a database-only backup with the
--change-db-credentials
parameter to pass the credentials of the new database. The database_location for a database local to the Manager islocalhost
.Note
The following examples use a--*password
option for each database without specifying a password, which will prompt for a password for each database. Passwords can be supplied for these options in the command itself, however this is not recommended as the password will then be stored in the shell history. Alternatively,--*passfile=
password_file options can be used for each database to securely pass the passwords to theengine-backup
tool without the need for interactive prompts.- Restore a complete backup:
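A sketch of the --*passfile approach, assuming the option names expand to --db-passfile and --dwh-db-passfile in line with the pattern described in the note (the password file paths are placeholders):

# echo 'password' > /root/engine-db.pass
# echo 'password' > /root/dwh-db.pass
# chmod 600 /root/engine-db.pass /root/dwh-db.pass
# engine-backup --mode=restore --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-passfile=/root/engine-db.pass --change-dwh-db-credentials --dwh-db-host=database_location --dwh-db-name=database_name --dwh-db-user=ovirt_engine_history --dwh-db-passfile=/root/dwh-db.pass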
- Restore a complete backup:

# engine-backup --mode=restore --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-password
If Data Warehouse is also being restored as part of the complete backup, include the revised credentials for the additional database:

# engine-backup --mode=restore --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-password --change-dwh-db-credentials --dwh-db-host=database_location --dwh-db-name=database_name --dwh-db-user=ovirt_engine_history --dwh-db-password

- Restore a database-only backup restoring the configuration files and the database backup:
# engine-backup --mode=restore --scope=files --scope=db --file=file_name --log=file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-password

The example above restores a backup of the Manager database.

# engine-backup --mode=restore --scope=files --scope=dwhdb --file=file_name --log=file_name --change-dwh-db-credentials --dwh-db-host=database_location --dwh-db-name=database_name --dwh-db-user=ovirt_engine_history --dwh-db-password

The example above restores a backup of the Data Warehouse database.
If successful, the following output displays:

You should now run engine-setup. Done.

- Configure the restored Manager virtual machine. This process identifies the existing configuration settings and database content. Confirm the settings. Upon completion, the setup provides an SSH fingerprint and an internal Certificate Authority hash.
# engine-setup

Removing the Host from the Restored Environment
If the deployment of the restored self-hosted engine is on new hardware that has a unique name not present in the backed-up engine, skip this step. This step is only applicable to deployments occurring on the failover host, hosted_engine_1. Because this host was present in the environment at the time the backup was created, it maintains a presence in the restored engine and must first be removed from the environment before final synchronization can take place.

- Log in to the Administration Portal.
- Click the Hosts tab. The failover host, hosted_engine_1, will be in maintenance mode and without a virtual load, as this was how it was prepared for the backup.
- Click Remove to open the remove host confirmation window.
- Click OK.
Synchronizing the Host and the Manager
Return to the host and continue the hosted-engine deployment script by selecting option 1:

(1) Continue setup - engine installation is complete

[ INFO ] Engine replied: DB Up!Welcome to Health Status!
[ INFO ] Waiting for the host to become operational in the engine. This may take several minutes...
[ INFO ] Still waiting for VDSM host to become operational...
At this point, hosted_engine_1 will become visible in the Administration Portal with Installing and Initializing states before entering a Non Operational state. The host will continue to wait for the VDSM host to become operational until it eventually times out. This happens because another host in the environment maintains the Storage Pool Manager (SPM) role and hosted_engine_1 cannot interact with the storage domain because the SPM host is in a Non Responsive state. When this process times out, you are prompted to shut down the virtual machine to complete the deployment. When deployment is complete, the host can be manually placed into maintenance mode and activated through the Administration Portal.

[ INFO ] Still waiting for VDSM host to become operational...
[ ERROR ] Timed out while waiting for host to start. Please check the logs.
[ ERROR ] Unable to add hosted_engine_2 to the manager
Please shutdown the VM allowing the system to launch it as a monitored service.
The system will wait until the VM is down.
- Shut down the new Manager virtual machine.

# shutdown -h now

- Return to the host to confirm it has detected that the Manager virtual machine is down.
- Activate the host.
- Log in to the Administration Portal.
- Click the Hosts tab.
- Select hosted_engine_1 and click Maintenance. The host may take several minutes before it enters maintenance mode.
- Click Activate. Once active, hosted_engine_1 immediately contends for SPM, and the storage domain and data center become active.
- Migrate virtual machines to the active host by manually fencing the Non Responsive hosts. In the Administration Portal, right-click the hosts and select Confirm 'Host has been Rebooted'. Any virtual machines that were running on that host at the time of the backup will now be removed from that host, and move from an Unknown state to a Down state. These virtual machines can now be run on hosted_engine_1. The host that was fenced can now be forcefully removed using the REST API.
hosted_engine_1 is active and is able to run virtual machines in the restored environment. The remaining self-hosted engine nodes in Non Operational state can now be removed by following the steps in Section 6.2.4, “Removing Non-Operational Hosts from a Restored Self-Hosted Engine Environment” and then re-installed into the environment by following the steps in Chapter 7, Installing Additional Hosts to a Self-Hosted Environment.
6.2.4. Removing Non-Operational Hosts from a Restored Self-Hosted Engine Environment
Fencing the Non-Operational Host
In the Administration Portal, right-click the hosts and select Confirm 'Host has been Rebooted'. Any virtual machines that were running on that host at the time of the backup will now be removed from that host, and move from an Unknown state to a Down state. The host that was fenced can now be forcefully removed using the REST API.

Retrieving the Manager Certificate Authority
Connect to the Manager virtual machine and use the command line to perform the following requests with cURL.

Use a GET request to retrieve the Manager Certificate Authority (CA) certificate for use in all future API requests. In the following example, the --output option is used to designate the file hosted-engine.ca as the output for the Manager CA certificate. The --insecure option means that this initial request is made without verifying the CA certificate.

# curl --output hosted-engine.ca --insecure https://[Manager.example.com]/ca.crt

Retrieving the GUID of the Host to be Removed
Use a GET request on the hosts collection to retrieve the Global Unique Identifier (GUID) for the host to be removed. The following example includes the Manager CA certificate file, and uses the admin@internal user for authentication; you are prompted for the password once the command is executed.

# curl --request GET --cacert hosted-engine.ca --user admin@internal https://[Manager.example.com]/api/hosts

This request returns the details of all of the hosts in the environment. The host GUID is a hexadecimal string associated with the host name. For more information on the Red Hat Virtualization REST API, see the Red Hat Virtualization REST API Guide.
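To pick the GUID out of the returned XML without reading the whole document, a simple text filter can help. This is an illustrative helper, not part of the documented procedure, and assumes the default XML response in which each host element carries an id attribute and a name element:

# curl --silent --request GET --cacert hosted-engine.ca --user admin@internal https://[Manager.example.com]/api/hosts | grep -oE '<host href="[^"]*" id="[^"]*"|<name>[^<]*</name>'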
Removing the Fenced Host

Use a DELETE request with the GUID of the fenced host to remove the host from the environment. In addition to the previously used options, this example specifies headers so that the request is sent and returned using eXtensible Markup Language (XML), and an XML body that sets the force action to true.

# curl --request DELETE --cacert hosted-engine.ca --user admin@internal --header "Content-Type: application/xml" --header "Accept: application/xml" --data "<action><force>true</force></action>" https://[Manager.example.com]/api/hosts/ecde42b0-de2f-48fe-aa23-1ebd5196b4a5

This DELETE request can be used to remove every fenced host in the self-hosted engine environment, as long as the appropriate GUID is specified.

Removing the Self-Hosted Engine Configuration from the Host
Remove the host's self-hosted engine configuration so it can be reconfigured when the host is re-installed into a self-hosted engine environment. Log in to the host and remove the configuration file:

# rm /etc/ovirt-hosted-engine/hosted-engine.conf