Chapter 3. Administering the Environment
3.1. Administering the Self-Hosted Engine
3.1.1. Maintaining the Self-Hosted Engine
3.1.1.1. Self-hosted engine maintenance modes explained
The maintenance modes enable you to start, stop, and modify the Manager virtual machine without interference from the high-availability agents, and to restart and modify the self-hosted engine nodes in the environment without interfering with the Manager.
There are three maintenance modes:
- global: All high-availability agents in the cluster are disabled from monitoring the state of the Manager virtual machine. The global maintenance mode must be applied for any setup or upgrade operations that require the ovirt-engine service to be stopped, such as upgrading to a later version of Red Hat Virtualization.
- local: The high-availability agent on the node issuing the command is disabled from monitoring the state of the Manager virtual machine. The node is exempt from hosting the Manager virtual machine while in local maintenance mode; if it is hosting the Manager virtual machine when placed into this mode, the Manager migrates to another node, provided one is available. Local maintenance mode is recommended when applying system changes or updates to a self-hosted engine node.
- none: Disables maintenance mode, ensuring that the high-availability agents are operating.
3.1.1.2. Setting local maintenance mode
Enabling local maintenance mode stops the high-availability agent on a single self-hosted engine node.
Setting the local maintenance mode from the Administration Portal
Put a self-hosted engine node into local maintenance mode:
- In the Administration Portal, click Compute → Hosts and select a self-hosted engine node.
- Click Management → Maintenance and click OK. Local maintenance mode is automatically triggered for that node.
After you have completed any maintenance tasks, disable the maintenance mode:
- In the Administration Portal, click Compute → Hosts and select the self-hosted engine node.
- Click Management → Activate.
Setting the local maintenance mode from the command line
Log in to a self-hosted engine node and put it into local maintenance mode:
# hosted-engine --set-maintenance --mode=local
After you have completed any maintenance tasks, disable the maintenance mode:
# hosted-engine --set-maintenance --mode=none
3.1.1.3. Setting global maintenance mode
Enabling global maintenance mode stops the high-availability agents on all self-hosted engine nodes in the cluster.
Setting the global maintenance mode from the Administration Portal
Put all of the self-hosted engine nodes into global maintenance mode:
- In the Administration Portal, click Compute → Hosts and select any self-hosted engine node.
- Click More Actions, then click Enable Global HA Maintenance.
After you have completed any maintenance tasks, disable the maintenance mode:
- In the Administration Portal, click Compute → Hosts and select any self-hosted engine node.
- Click More Actions, then click Disable Global HA Maintenance.
Setting the global maintenance mode from the command line
Log in to any self-hosted engine node and put it into global maintenance mode:
# hosted-engine --set-maintenance --mode=global
After you have completed any maintenance tasks, disable the maintenance mode:
# hosted-engine --set-maintenance --mode=none
3.1.2. Administering the Manager Virtual Machine
The hosted-engine utility provides many commands to help administer the Manager virtual machine. You can run hosted-engine on any self-hosted engine node. To see all available commands, run hosted-engine --help. For additional information on a specific command, run hosted-engine --command --help.
3.1.2.1. Updating the Self-Hosted Engine Configuration
To update the self-hosted engine configuration, use the hosted-engine --set-shared-config command. This command updates the self-hosted engine configuration on the shared storage domain after the initial deployment.
To see the current configuration values, use the hosted-engine --get-shared-config command.
To see a list of all available configuration keys and their corresponding types, enter the following command:
# hosted-engine --set-shared-config key --type=type --help
Where type is one of the following:

| Type | Description |
| --- | --- |
| he_local | Sets values in the local instance of /etc/ovirt-hosted-engine/hosted-engine.conf, so only that host uses the new values. |
| he_shared | Sets values in /etc/ovirt-hosted-engine/hosted-engine.conf on shared storage, so all hosts deployed after the configuration change use the new values. |
| broker | Sets values in /var/lib/ovirt-hosted-engine-ha/broker.conf on the local host. |
| ha | Sets values in /var/lib/ovirt-hosted-engine-ha/ha.conf on the local host. |
3.1.2.2. Configuring Email Notifications
You can configure email notifications using SMTP for any HA state transitions on the self-hosted engine nodes. The keys that can be updated include: smtp-server, smtp-port, source-email, destination-emails, and state_transition.
To configure email notifications:
On a self-hosted engine node, set the smtp-server key to the desired SMTP server address:
# hosted-engine --set-shared-config smtp-server smtp.example.com --type=broker
Note: To verify that the self-hosted engine configuration file has been updated, run:
# hosted-engine --get-shared-config smtp-server --type=broker
broker : smtp.example.com, type : broker
Check that the default SMTP port (port 25) has been configured:
# hosted-engine --get-shared-config smtp-port --type=broker
broker : 25, type : broker
Specify an email address you want the SMTP server to use to send out email notifications. Only one address can be specified.
# hosted-engine --set-shared-config source-email source@example.com --type=broker
Specify the destination email address to receive email notifications. To specify multiple email addresses, separate each address by a comma.
# hosted-engine --set-shared-config destination-emails destination1@example.com,destination2@example.com --type=broker
To verify that SMTP has been properly configured for your self-hosted engine environment, change the HA state on a self-hosted engine node and check if email notifications were sent. For example, you can change the HA state by placing HA agents into maintenance mode. See Maintaining the Self-Hosted Engine for more information.
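The four configuration calls above all follow the same pattern. The helper below is a hypothetical dry-run sketch that only prints the hosted-engine commands it would run, so you can review the values before executing anything on a node; the server and addresses are placeholders.

```shell
# Hypothetical dry-run helper: print the hosted-engine commands that would
# configure SMTP notifications, without changing anything on the node.
broker_set() {
  echo "hosted-engine --set-shared-config $1 $2 --type=broker"
}

broker_set smtp-server smtp.example.com
broker_set smtp-port 25
broker_set source-email source@example.com
broker_set destination-emails destination1@example.com,destination2@example.com
```

Once the values are correct for your environment, run the printed commands on a self-hosted engine node.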
3.1.3. Configuring Memory Slots Reserved for the Self-Hosted Engine on Additional Hosts
If the Manager virtual machine shuts down or needs to be migrated, there must be enough memory on a self-hosted engine node for the Manager virtual machine to restart on or migrate to it. This memory can be reserved on multiple self-hosted engine nodes by using a scheduling policy. The scheduling policy checks if enough memory to start the Manager virtual machine will remain on the specified number of additional self-hosted engine nodes before starting or migrating any virtual machines. See Creating a Scheduling Policy in the Administration Guide for more information about scheduling policies.
To add more self-hosted engine nodes to the Red Hat Virtualization Manager, see Adding self-hosted engine nodes to the Manager.
Configuring Memory Slots Reserved for the Self-Hosted Engine on Additional Hosts
- Click Compute → Clusters and select the cluster containing the self-hosted engine nodes.
- Click Edit.
- Click the Scheduling Policy tab.
- Click + and select HeSparesCount.
- Enter the number of additional self-hosted engine nodes that will reserve enough free memory to start the Manager virtual machine.
- Click OK.
3.1.4. Adding Self-Hosted Engine Nodes to the Red Hat Virtualization Manager
Add self-hosted engine nodes in the same way as a standard host, with an additional step to deploy the host as a self-hosted engine node. The shared storage domain is automatically detected and the node can be used as a failover host to host the Manager virtual machine when required. You can also attach standard hosts to a self-hosted engine environment, but they cannot host the Manager virtual machine. Have at least two self-hosted engine nodes to ensure the Manager virtual machine is highly available. You can also add additional hosts using the REST API. See Hosts in the REST API Guide.
Prerequisites
- All self-hosted engine nodes must be in the same cluster.
- If you are reusing a self-hosted engine node, remove its existing self-hosted engine configuration. See Removing a Host from a Self-Hosted Engine Environment.
Procedure
- In the Administration Portal, click Compute → Hosts.
- Click New. For information on additional host settings, see Explanation of Settings and Controls in the New Host and Edit Host Windows in the Administration Guide.
- Use the drop-down list to select the Data Center and Host Cluster for the new host.
- Enter the Name and the Address of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field.
Select an authentication method to use for the Manager to access the host.
- Enter the root user’s password to use password authentication.
- Alternatively, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication.
- Optionally, configure power management, where the host has a supported power management card. For information on power management configuration, see Host Power Management Settings Explained in the Administration Guide.
- Click the Hosted Engine tab.
- Select Deploy.
- Click OK.
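The REST API mentioned above can perform the same addition, including the self-hosted engine deployment step. The sketch below is a dry run under assumed names: the Manager FQDN, credentials, and host details are placeholders, and the deploy_hosted_engine query parameter is how the API requests self-hosted engine deployment.

```shell
# Hypothetical sketch: add a host through the REST API and deploy it as a
# self-hosted engine node. Shown as a dry run; the commented curl line is
# what you would actually send.
api_url="https://manager.example.com/ovirt-engine/api/hosts?deploy_hosted_engine=true"
payload='<host>
  <name>new_host</name>
  <address>new_host.example.com</address>
  <root_password>host_root_password</root_password>
</host>'

# Real call (requires valid credentials and a reachable Manager):
#   curl --cacert ca.pem --user "admin@internal:password" \
#        --header "Content-Type: application/xml" \
#        --data "$payload" --request POST "$api_url"
echo "POST $api_url"
```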
3.1.5. Reinstalling an Existing Host as a Self-Hosted Engine Node
You can convert an existing, standard host in a self-hosted engine environment to a self-hosted engine node capable of hosting the Manager virtual machine.
When installing or reinstalling the host’s operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss.
Procedure
- Click Compute → Hosts and select the host.
- Click Management → Maintenance and click OK.
- Click Installation → Reinstall.
- Click the Hosted Engine tab and select DEPLOY from the drop-down list.
- Click OK.
The host is reinstalled with self-hosted engine configuration, and is flagged with a crown icon in the Administration Portal.
3.1.6. Booting the Manager Virtual Machine in Rescue Mode
This topic describes how to boot the Manager virtual machine into rescue mode when it does not start. For more information, see Booting to Rescue Mode in the Red Hat Enterprise Linux System Administrator’s Guide.
Connect to one of the hosted-engine nodes:
$ ssh root@host_address
Put the self-hosted engine in global maintenance mode:
# hosted-engine --set-maintenance --mode=global
Check if there is already a running instance of the Manager virtual machine:
# hosted-engine --vm-status
If a Manager virtual machine instance is running, connect to its host:
# ssh root@host_address
Shut down the virtual machine:
# hosted-engine --vm-shutdown
Note: If the virtual machine does not shut down, execute the following command:
# hosted-engine --vm-poweroff
Start the Manager virtual machine in pause mode:
# hosted-engine --vm-start-paused
Set a temporary VNC password:
# hosted-engine --add-console-password
The command outputs the necessary information you need to log in to the Manager virtual machine with VNC.
- Log in to the Manager virtual machine with VNC. The Manager virtual machine is still paused, so it appears to be frozen.
Resume the Manager virtual machine with the following command on its host:
Warning: After running the following command, the boot loader menu appears. You must enter rescue mode before the boot loader proceeds with the normal boot process. Read the next step about entering rescue mode before proceeding with this command.
# /usr/bin/virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume HostedEngine
- Boot the Manager virtual machine in rescue mode.
Disable global maintenance mode:
# hosted-engine --set-maintenance --mode=none
You can now run rescue tasks on the Manager virtual machine.
3.1.7. Removing a Host from a Self-Hosted Engine Environment
To remove a self-hosted engine node from your environment, place the node into maintenance mode, undeploy the node, and optionally remove it. The node can be managed as a regular host after the HA services have been stopped, and the self-hosted engine configuration files have been removed.
Procedure
- In the Administration Portal, click Compute → Hosts and select the self-hosted engine node.
- Click Management → Maintenance and click OK.
- Click Installation → Reinstall.
- Click the Hosted Engine tab and select UNDEPLOY from the drop-down list. This action stops the ovirt-ha-agent and ovirt-ha-broker services and removes the self-hosted engine configuration file.
- Click OK.
- Optionally, click Remove. This opens the Remove Host(s) confirmation window.
- Click OK.
3.1.8. Updating a Self-Hosted Engine
To update a self-hosted engine from your current version to the latest version, you must place the environment in global maintenance mode and then follow the standard procedure for updating between minor versions.
Enabling global maintenance mode
You must place the self-hosted engine environment in global maintenance mode before performing any setup or upgrade tasks on the Manager virtual machine.
Procedure
Log in to one of the self-hosted engine nodes and enable global maintenance mode:
# hosted-engine --set-maintenance --mode=global
Confirm that the environment is in global maintenance mode before proceeding:
# hosted-engine --vm-status
You should see a message indicating that the cluster is in global maintenance mode.
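As a sketch, that confirmation can be scripted by searching the hosted-engine --vm-status output for the maintenance banner. The sample line below is an assumption about the banner text printed by recent versions; on a real node, capture live output instead.

```shell
# Minimal sketch: look for the global-maintenance banner in saved
# `hosted-engine --vm-status` output. The sample line is illustrative;
# on a node you would use: vm_status="$(hosted-engine --vm-status)"
vm_status='!! Cluster is in GLOBAL MAINTENANCE mode !!'

if printf '%s\n' "$vm_status" | grep -qi 'GLOBAL MAINTENANCE'; then
  echo "Cluster is in global maintenance mode - safe to proceed"
fi
```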
Updating the Red Hat Virtualization Manager
Procedure
On the Manager machine, check if updated packages are available:
# engine-upgrade-check
Update the setup packages:
# yum update ovirt\*setup\* rh\*vm-setup-plugins
Update the Red Hat Virtualization Manager with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service.
# engine-setup
When the script completes successfully, the following message appears:
Execution of setup completed successfully
Note: The engine-setup script is also used during the Red Hat Virtualization Manager installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update the configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup.
Important: The update process might take some time. Do not stop the process before it completes.
Update the base operating system and any optional packages installed on the Manager:
# yum update --nobest
Important: If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).
Important: If any kernel packages were updated:
- Disable global maintenance mode
- Reboot the machine to complete the update.
Disabling global maintenance mode
Procedure
- Log in to the Manager virtual machine and shut it down.
Log in to one of the self-hosted engine nodes and disable global maintenance mode:
# hosted-engine --set-maintenance --mode=none
When you exit global maintenance mode, ovirt-ha-agent starts the Manager virtual machine, and then the Manager automatically starts. It can take up to ten minutes for the Manager to start.
Confirm that the environment is running:
# hosted-engine --vm-status
The listed information includes Engine status. The value for Engine status should be:
{"health": "good", "vm": "up", "detail": "Up"}
Note: When the virtual machine is still booting and the Manager hasn’t started yet, the Engine status is:
{"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}
If this happens, wait a few minutes and try again.
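That wait-and-retry check can be sketched as a small script. The JSON below is a saved sample mirroring the status strings shown above, not live output; on a node you would extract the Engine status line from hosted-engine --vm-status.

```shell
# Minimal sketch: decide from the "Engine status" JSON whether the Manager
# is fully up. engine_status holds a saved sample for illustration.
engine_status='{"health": "good", "vm": "up", "detail": "Up"}'

if printf '%s\n' "$engine_status" | grep -q '"health": "good"'; then
  echo "Manager is up"
else
  echo "Manager is not ready yet - wait a few minutes and retry"
fi
```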
3.1.9. Changing the FQDN of the Manager in a Self-Hosted Engine
You can use the ovirt-engine-rename command to update records of the fully qualified domain name (FQDN) of the Manager.
For details, see Renaming the Manager with the oVirt Engine Rename Tool.