
2.5. Hosts


2.5.1. Introduction to Hosts

Hosts, also known as hypervisors, are the physical servers on which virtual machines run. Full virtualization is provided by using a loadable Linux kernel module called Kernel-based Virtual Machine (KVM).

KVM can concurrently host multiple virtual machines running either Windows or Linux operating systems. Virtual machines run as individual Linux processes and threads on the host machine and are managed remotely by the Red Hat Virtualization Manager. A Red Hat Virtualization environment has one or more hosts attached to it.
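
For example, you can confirm on a host that the KVM kernel modules are loaded and see running virtual machines as qemu-kvm processes. This is a quick sanity check only; the exact output depends on the host hardware and the virtual machines it is running:

# lsmod | grep kvm
# ps -C qemu-kvm -o pid,user,args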

Red Hat Virtualization supports two methods of installing hosts. You can use the Red Hat Virtualization Host (RHVH) installation media, or install hypervisor packages on a standard Red Hat Enterprise Linux installation.

Note

You can identify the host type of an individual host in the Red Hat Virtualization Manager by selecting the host’s name. This opens the details view. Then look at the OS Description under Software.

Hosts use tuned profiles, which provide virtualization optimizations. For more information on tuned, see TuneD Profiles in the Red Hat Enterprise Linux guide Monitoring and managing system status and performance.

The Red Hat Virtualization Host has security features enabled. Security Enhanced Linux (SELinux) and the firewall are fully configured and on by default. The status of SELinux on a selected host is reported under SELinux mode in the General tab of the details view. The Manager can open required ports on Red Hat Enterprise Linux hosts when it adds them to the environment.
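
For example, you can verify these defaults from a shell on the host using standard RHEL tools; on a correctly configured host the expected output is Enforcing and running:

# getenforce
Enforcing
# firewall-cmd --state
running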

A host is a physical 64-bit server with the Intel VT or AMD-V extensions running Red Hat Enterprise Linux 7 AMD64/Intel 64 version.

A physical host on the Red Hat Virtualization platform:

  • Must belong to only one cluster in the system.
  • Must have CPUs that support the AMD-V or Intel VT hardware virtualization extensions.
  • Must have CPUs that support all functionality exposed by the virtual CPU type selected upon cluster creation.
  • Has a minimum of 2 GB RAM.
  • Can have an assigned system administrator with system permissions.

Administrators can receive the latest security advisories for Red Hat Virtualization products by email by subscribing to the Red Hat Virtualization watch list. Subscribe by completing this form:

https://www.redhat.com/mailman/listinfo/rhsa-announce

2.5.2. Red Hat Virtualization Host

Red Hat Virtualization Host (RHVH) is installed using a special build of Red Hat Enterprise Linux with only the packages required to host virtual machines. It uses an Anaconda installation interface based on the one used by Red Hat Enterprise Linux hosts, and can be updated through the Red Hat Virtualization Manager or via yum. Using the yum command is the only way to install additional packages and have them persist after an upgrade.

RHVH features a Cockpit web interface for monitoring the host’s resources and performing administrative tasks. Direct access to RHVH via SSH or console is not supported, so the Cockpit web interface provides a graphical user interface for tasks that are performed before the host is added to the Red Hat Virtualization Manager, such as configuring networking or running terminal commands via the Terminal sub-tab.

Access the Cockpit web interface at https://HostFQDNorIP:9090 in your web browser. Cockpit for RHVH includes a custom Virtualization dashboard that displays the host’s health status, SSH Host Key, self-hosted engine status, virtual machines, and virtual machine statistics.
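
If the Cockpit page does not load, you can check from the host console that the Cockpit socket is enabled and listening. This is a standard systemd check, and the unit name is the same on RHVH and Red Hat Enterprise Linux hosts:

# systemctl status cockpit.socket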

Starting in Red Hat Virtualization version 4.4 SP1, RHVH uses systemd-coredump to gather, save, and process core dumps. For more information, see the documentation for core dump storage configuration files and the systemd-coredump service.
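
For example, on hosts that use systemd-coredump you can list the collected core dumps and inspect an individual dump with coredumpctl. Replace the PID placeholder with a value from the list:

# coredumpctl list
# coredumpctl info <PID>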

In Red Hat Virtualization 4.4 and earlier, RHVH uses the Automatic Bug Reporting Tool (ABRT) to collect meaningful debug information about application crashes. For more information, see the Red Hat Enterprise Linux System Administrator’s Guide.

Note

Custom boot kernel arguments can be added to Red Hat Virtualization Host using the grubby tool. The grubby tool makes persistent changes to the grub.cfg file. Navigate to the Terminal sub-tab in the host’s Cockpit web interface to use grubby commands. See the Red Hat Enterprise Linux System Administrator’s Guide for more information.
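
As an illustration only, a hypothetical kernel argument could be added persistently to all boot entries and then verified as follows (the argument shown is a placeholder, not a recommended setting):

# grubby --update-kernel=ALL --args="custom_param=1"
# grubby --info=ALL | grep args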

Warning

Do not create untrusted users on RHVH, as this can lead to exploitation of local security vulnerabilities.

2.5.3. Red Hat Enterprise Linux hosts

You can use a Red Hat Enterprise Linux 7 installation on capable hardware as a host. Red Hat Virtualization supports hosts running Red Hat Enterprise Linux 7 Server AMD64/Intel 64 version with Intel VT or AMD-V extensions. To use your Red Hat Enterprise Linux machine as a host, you must also attach the Red Hat Enterprise Linux Server and Red Hat Virtualization subscriptions.

Adding a host can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, and the creation of a bridge. Use the details view to monitor the process as the host and management system establish a connection.

Optionally, you can install a Cockpit web interface for monitoring the host’s resources and performing administrative tasks. The Cockpit web interface provides a graphical user interface for tasks that are performed before the host is added to the Red Hat Virtualization Manager, such as configuring networking or running terminal commands via the Terminal sub-tab.

Important

Third-party watchdogs should not be installed on Red Hat Enterprise Linux hosts, as they can interfere with the watchdog daemon provided by VDSM.

2.5.4. Satellite Host Provider Hosts

Hosts provided by a Satellite host provider can also be used as virtualization hosts by the Red Hat Virtualization Manager. After a Satellite host provider has been added to the Manager as an external provider, any hosts that it provides can be added to and used in Red Hat Virtualization in the same way as Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts.

2.5.5. Host Tasks

2.5.5.1. Adding Standard Hosts to the Red Hat Virtualization Manager

Important

Always use the RHV Manager to modify the network configuration of hosts in your clusters. Otherwise, you might create an unsupported configuration. For details, see Network Manager Stateful Configuration (nmstate).

Adding a host to your Red Hat Virtualization environment can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, and creation of a bridge.

Procedure

  1. From the Administration Portal, click Compute Hosts.
  2. Click New.
  3. Use the drop-down list to select the Data Center and Host Cluster for the new host.
  4. Enter the Name and the Address of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field.
  5. Select an authentication method to use for the Manager to access the host.

    • Enter the root user’s password to use password authentication.
    • Alternatively, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication.
  6. Optionally, click the Advanced Parameters button to change the following advanced host settings:

    • Disable automatic firewall configuration.
    • Add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically.
  7. Optionally configure power management, where the host has a supported power management card. For information on power management configuration, see Host Power Management Settings Explained in the Administration Guide.
  8. Click OK.

The new host displays in the list of hosts with a status of Installing, and you can view the progress of the installation in the Events section of the Notification Drawer. After a brief delay the host status changes to Up.
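
Hosts can also be added through the REST API. The following is a minimal sketch that assumes the Manager is manager.example.com and uses placeholder values for the host name, address, root password, and cluster:

# curl -s -k -u admin@internal:password \
    -H "Content-Type: application/xml" \
    -d '<host><name>host01</name><address>host01.example.com</address><root_password>secret</root_password><cluster><name>Default</name></cluster></host>' \
    https://manager.example.com/ovirt-engine/api/hosts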

2.5.5.2. Adding a Satellite Host Provider Host

The process for adding a Satellite host provider host is almost identical to that of adding a Red Hat Enterprise Linux host except for the method by which the host is identified in the Manager. The following procedure outlines how to add a host provided by a Satellite host provider.

Procedure

  1. Click Compute Hosts.
  2. Click New.
  3. Use the drop-down menu to select the Host Cluster for the new host.
  4. Select the Foreman/Satellite check box to display the options for adding a Satellite host provider host and select the provider from which the host is to be added.
  5. Select either Discovered Hosts or Provisioned Hosts.

    • Discovered Hosts (default option): Select the host, host group, and compute resources from the drop-down lists.
    • Provisioned Hosts: Select a host from the Providers Hosts drop-down list.

      Any details regarding the host that can be retrieved from the external provider are automatically set, and can be edited as desired.

  6. Enter the Name and SSH Port (Provisioned Hosts only) of the new host.
  7. Select an authentication method to use with the host.

    • Enter the root user’s password to use password authentication.
    • Copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication (Provisioned Hosts only).
  8. You have now completed the mandatory steps to add a Red Hat Enterprise Linux host. Click the Advanced Parameters drop-down button to show the advanced host settings.

    1. Optionally disable automatic firewall configuration.
    2. Optionally add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically.
  9. You can configure the Power Management, SPM, Console, and Network Provider using the applicable tabs now; however, as these are not fundamental to adding a Red Hat Enterprise Linux host, they are not covered in this procedure.
  10. Click OK to add the host and close the window.

The new host displays in the list of hosts with a status of Installing, and you can view the progress of the installation in the details view. After installation is complete, the status will update to Reboot. The host must be activated for the status to change to Up.

2.5.5.3. Setting up Satellite errata viewing for a host

In the Administration Portal, you can configure a host to view errata from Red Hat Satellite. After you associate a host with a Red Hat Satellite provider, you can receive updates in the host configuration dashboard about available errata and their importance, and decide when it is practical to apply the updates.

Red Hat Virtualization 4.4 supports viewing errata with Red Hat Satellite 6.6.

Prerequisites

  • The Satellite server must be added as an external provider.
  • The Manager and any hosts on which you want to view errata must be registered in the Satellite server by their respective FQDNs. This ensures that external content host IDs do not need to be maintained in Red Hat Virtualization.

    Important

    Hosts added using an IP address cannot report errata.

  • The Satellite account that manages the host must have Administrator permissions and a default organization set.
  • The host must be registered to the Satellite server.
  • Use Red Hat Satellite remote execution to manage packages on hosts.
Note

The Katello agent is deprecated and will be removed in a future Satellite version. Migrate your processes to use the remote execution feature to update clients remotely.

Procedure

  1. Click Compute Hosts and select the host.
  2. Click Edit.
  3. Select the Use Foreman/Satellite check box.
  4. Select the required Satellite server from the drop-down list.
  5. Click OK.

The host is now configured to show the available errata, and their importance, in the same dashboard used to manage the host’s configuration.


2.5.5.3.1. Configuring a Host for PCI Passthrough
Note

This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV.

Enabling PCI passthrough allows a virtual machine to use a host device as if the device were directly attached to the virtual machine. To enable the PCI passthrough function, you must enable virtualization extensions and the IOMMU function. The following procedure requires you to reboot the host. If the host is attached to the Manager already, ensure you place the host into maintenance mode first.

Prerequisites

  • Ensure that the host hardware meets the requirements for PCI device passthrough and assignment. See PCI Device Requirements for more information.

Configuring a Host for PCI Passthrough

  1. Enable the virtualization extension and IOMMU extension in the BIOS. See Enabling Intel VT-x and AMD-V virtualization hardware extensions in BIOS in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide for more information.
  2. Enable the IOMMU flag in the kernel by selecting the Hostdev Passthrough & SR-IOV check box when adding the host to the Manager or by editing the grub configuration file manually.

  3. For GPU passthrough, you need to run additional configuration steps on both the host and the guest system. See GPU device passthrough: Assigning a host GPU to a single virtual machine in Setting up an NVIDIA GPU for a virtual machine in Red Hat Virtualization for more information.

Enabling IOMMU Manually

  1. Enable IOMMU by editing the grub configuration file.

    Note

    If you are using IBM POWER8 hardware, skip this step as IOMMU is enabled by default.

    • For Intel, boot the machine, and append intel_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file.

      # vi /etc/default/grub
      ...
      GRUB_CMDLINE_LINUX="nofb splash=quiet console=tty0 ... intel_iommu=on"
      ...
    • For AMD, boot the machine, and append amd_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file.

      # vi /etc/default/grub
      ...
      GRUB_CMDLINE_LINUX="nofb splash=quiet console=tty0 ... amd_iommu=on"
      ...
      Note

      If intel_iommu=on or an AMD IOMMU is detected, you can try adding iommu=pt. The pt option only enables IOMMU for devices used in passthrough and provides better host performance. However, the option might not be supported on all hardware. Revert to the previous option if the pt option doesn’t work for your host.

      If the passthrough fails because the hardware does not support interrupt remapping, you can consider enabling the allow_unsafe_interrupts option if the virtual machines are trusted. The allow_unsafe_interrupts is not enabled by default because enabling it potentially exposes the host to MSI attacks from virtual machines. To enable the option:

      # vi /etc/modprobe.d/vfio.conf
      options vfio_iommu_type1 allow_unsafe_interrupts=1
  2. Refresh the grub.cfg file and reboot the host for these changes to take effect:

    # grub2-mkconfig -o /boot/grub2/grub.cfg
    # reboot
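
After the reboot, you can confirm that the kernel enabled IOMMU, for example by checking the boot messages and the sysfs IOMMU entries. The exact messages differ between Intel and AMD hardware:

# dmesg | grep -i -e DMAR -e IOMMU
# ls /sys/class/iommu/
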
2.5.5.3.2. Enabling nested virtualization for all virtual machines
Important

Using hooks to enable nested virtualization is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Red Hat Technology Preview Features Support Scope.

Nested virtualization enables virtual machines to host other virtual machines. For clarity, we will call these the parent virtual machines and the child (nested) virtual machines.

Child virtual machines are only visible to and managed by users who have access to the parent virtual machine. They are not visible to Red Hat Virtualization (RHV) administrators.

By default, nested virtualization is not enabled in RHV. To enable nested virtualization, you install a VDSM hook, vdsm-hook-nestedvt, on all of the hosts in the cluster. Then, all of the virtual machines that run on these hosts can function as parent virtual machines.

You should only run parent virtual machines on hosts that support nested virtualization. If a parent virtual machine migrates to a host that does not support nested virtualization, its child virtual machines fail. To prevent this from happening, configure all of the hosts in the cluster to support nested virtualization. Otherwise, restrict parent virtual machines from migrating to hosts that do not support nested virtualization.

Warning

Take precautions to prevent parent virtual machines from migrating to hosts that do not support nested virtualization.

Procedure

  1. In the Administration Portal, click Compute Hosts.
  2. Select a host in the cluster where you want to enable nested virtualization and click Management Maintenance and OK.
  3. Select the host again, click Host Console, and log into the host console.
  4. Install the VDSM hook:

    # dnf install vdsm-hook-nestedvt
  5. Reboot the host.
  6. Log into the host console again and verify that nested virtualization is enabled:

    $ cat /sys/module/kvm*/parameters/nested

    If this command returns Y or 1, the feature is enabled.

  7. Repeat this procedure for all of the hosts in the cluster.
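
As a further check, you can verify from inside a parent virtual machine that the virtualization extensions are exposed to it. This assumes an x86_64 Linux guest; a non-zero count means vmx or svm is visible to the guest:

$ grep -c -E 'vmx|svm' /proc/cpuinfo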


2.5.5.3.3. Enabling nested virtualization for individual virtual machines
Important

Nested virtualization is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information see Red Hat Technology Preview Features Support Scope.

Nested virtualization enables virtual machines to host other virtual machines. For clarity, we will call these the parent virtual machines and the child (nested) virtual machines.

Child virtual machines are only visible to and managed by users who have access to the parent virtual machine. They are not visible to Red Hat Virtualization (RHV) administrators.

To enable nested virtualization on specific virtual machines, rather than all virtual machines, you configure a host or hosts to support nested virtualization. Then you configure the virtual machine or virtual machines to run on those specific hosts and enable Pass-Through Host CPU. This option lets the virtual machines use the nested virtualization settings you just configured on the host. This option also restricts which hosts the virtual machines can run on and requires manual migration.

Otherwise, to enable nested virtualization for all of the virtual machines in a cluster, see Enabling nested virtualization for all virtual machines.

Only run parent virtual machines on hosts that support nested virtualization. If you migrate a parent virtual machine to a host that does not support nested virtualization, its child virtual machines will fail.

Warning

Do not migrate parent virtual machines to hosts that do not support nested virtualization.

Avoid live migration of parent virtual machines that are running child virtual machines. Even if the source and destination hosts are identical and support nested virtualization, the live migration can cause the child virtual machines to fail. Instead, shut down virtual machines before migration.

Procedure

Configure the hosts to support nested virtualization:

  1. In the Administration Portal, click Compute Hosts.
  2. Select a host in the cluster where you want to enable nested virtualization and click Management Maintenance and OK.
  3. Select the host again and click Edit.
  4. In the Edit Host window, select the Kernel tab.
  5. Under Kernel boot parameters, if the checkboxes are greyed-out, click RESET.
  6. Select Nested Virtualization and click OK.

    This action displays a kvm-<architecture>.nested=1 parameter in Kernel command line. The following steps add this parameter to the Current kernel CMD line.

  7. Click Installation Reinstall.
  8. When the host status returns to Up, click Management Restart under Power Management or SSH Management.
  9. Verify that nested virtualization is enabled. Log into the host console and enter:

    $ cat /sys/module/kvm*/parameters/nested

    If this command returns Y or 1, the feature is enabled.

  10. Repeat this procedure for all of the hosts you need to run parent virtual machines.

Enable nested virtualization in specific virtual machines:

  1. In the Administration Portal, click Compute Virtual Machines.
  2. Select a virtual machine and click Edit
  3. In the Edit Virtual Machine window, click Show Advanced Options and select the Host tab.
  4. Under Start Running On, click Specific Host and select the host or hosts you configured to support nested virtualization.
  5. Under CPU Options, select Pass-Through Host CPU. This action automatically sets the Migration mode to Allow manual migration only.

    Note

    In RHV version 4.2, you can only enable Pass-Through Host CPU when Do not allow migration is selected.


2.5.5.4. Moving a Host to Maintenance Mode

Many common maintenance tasks, including network configuration and deployment of software updates, require that hosts be placed into maintenance mode. Hosts should be placed into maintenance mode before any event that might cause VDSM to stop working properly, such as a reboot, or issues with networking or storage.

When a host is placed into maintenance mode the Red Hat Virtualization Manager attempts to migrate all running virtual machines to alternative hosts. The standard prerequisites for live migration apply, in particular there must be at least one active host in the cluster with capacity to run the migrated virtual machines.

Note

Virtual machines that are pinned to the host and cannot be migrated are shut down. You can check which virtual machines are pinned to the host by clicking Pinned to Host in the Virtual Machines tab of the host’s details view.

Placing a Host into Maintenance Mode

  1. Click Compute Hosts and select the desired host.
  2. Click Management Maintenance. This opens the Maintenance Host(s) confirmation window.
  3. Optionally, enter a Reason for moving the host into maintenance mode, which will appear in the logs and when the host is activated again. Then, click OK.

    Note

    The host maintenance Reason field will only appear if it has been enabled in the cluster settings. See Cluster General Settings Explained for more information.

  4. Optionally, select the required options for hosts that support Gluster.

    Select the Ignore Gluster Quorum and Self-Heal Validations option to avoid the default checks. By default, the Manager checks that the Gluster quorum is not lost when the host is moved to maintenance mode. The Manager also checks that there is no self-heal activity that will be affected by moving the host to maintenance mode. If the Gluster quorum will be lost or if there is self-heal activity that will be affected, the Manager prevents the host from being placed into maintenance mode. Only use this option if there is no other way to place the host in maintenance mode.

    Select the Stop Gluster Service option to stop all Gluster services while moving the host to maintenance mode.

    Note

    These fields will only appear in the host maintenance window when the selected host supports Gluster. See Replacing the Primary Gluster Storage Node in Maintaining Red Hat Hyperconverged Infrastructure for more information.

  5. Click OK to initiate maintenance mode.

All running virtual machines are migrated to alternative hosts. If the host is the Storage Pool Manager (SPM), the SPM role is migrated to another host. The Status field of the host changes to Preparing for Maintenance, and finally Maintenance when the operation completes successfully. VDSM does not stop while the host is in maintenance mode.

Note

If migration fails on any virtual machine, click Management Activate on the host to stop the operation placing it into maintenance mode, then click Cancel Migration on the virtual machine to stop the migration.

2.5.5.5. Activating a Host from Maintenance Mode

A host that has been placed into maintenance mode, or recently added to the environment, must be activated before it can be used. Activation may fail if the host is not ready; ensure that all tasks are complete before attempting to activate the host.

Procedure

  1. Click Compute Hosts and select the host.
  2. Click Management Activate.

The host status changes to Unassigned, and finally Up when the operation is complete. Virtual machines can now run on the host. Virtual machines that were migrated off the host when it was placed into maintenance mode are not automatically migrated back to the host when it is activated, but can be migrated manually. If the host was the Storage Pool Manager (SPM) before being placed into maintenance mode, the SPM role does not return automatically when the host is activated.
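
Maintenance mode and activation can also be triggered through the REST API. The following is a minimal sketch that assumes the Manager is manager.example.com and uses HOST_ID as a placeholder for the host’s ID:

# curl -s -k -u admin@internal:password -H "Content-Type: application/xml" \
    -d '<action/>' https://manager.example.com/ovirt-engine/api/hosts/HOST_ID/deactivate

# curl -s -k -u admin@internal:password -H "Content-Type: application/xml" \
    -d '<action/>' https://manager.example.com/ovirt-engine/api/hosts/HOST_ID/activate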

2.5.5.5.1. Configuring Host Firewall Rules

You can configure the host firewall rules so that they are persistent, using Ansible. The cluster must be configured to use firewalld.

Note

Changing the firewalld zone is not supported.

Configuring Firewall Rules for Hosts

  1. On the Manager machine, edit ovirt-host-deploy-post-tasks.yml.example to add a custom firewall port:

    # vi /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml.example
    ---
    #
    # Any additional tasks required to be executing during host deploy process can
    # be added below
    #
    - name: Enable additional port on firewalld
      firewalld:
        port: "12345/tcp"
        permanent: yes
        immediate: yes
        state: enabled
  2. Save the file to another location as ovirt-host-deploy-post-tasks.yml.

New or reinstalled hosts are configured with the updated firewall rules.

Existing hosts must be reinstalled by clicking Installation Reinstall and selecting Automatically configure host firewall.
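
After a host has been reinstalled with the updated rules, you can verify from the host shell that the additional port is open; 12345/tcp is the example port used in the playbook above:

# firewall-cmd --list-ports
# firewall-cmd --permanent --list-ports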

2.5.5.5.2. Removing a Host

Removing a host from your Red Hat Virtualization environment is sometimes necessary, such as when you need to reinstall a host.

Procedure

  1. Click Compute Hosts and select the host.
  2. Click Management Maintenance.
  3. Once the host is in maintenance mode, click Remove. The Remove Host(s) confirmation window opens.
  4. Select the Force Remove check box if the host is part of a Red Hat Gluster Storage cluster and has volume bricks on it, or if the host is non-responsive.
  5. Click OK.
2.5.5.5.3. Updating Hosts Between Minor Releases

You can update all hosts in a cluster, or update individual hosts.

2.5.5.5.3.1. Updating All Hosts in a Cluster

You can update all hosts in a cluster instead of updating hosts individually. This is particularly useful during upgrades to new versions of Red Hat Virtualization. See oVirt Cluster Upgrade for more information about the Ansible role used to automate the updates.

Update one cluster at a time.

Limitations

  • On RHVH, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update.
  • If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster.
  • In a self-hosted engine environment, the Manager virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts.
  • The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts.
  • You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines are shut down during the update, unless you choose to skip that host instead.

Procedure

  1. In the Administration Portal, click Compute Clusters and select the cluster. The Upgrade status column shows if an upgrade is available for any hosts in the cluster.
  2. Click Upgrade.
  3. Select the hosts to update, then click Next.
  4. Configure the options:

    • Stop Pinned VMs shuts down any virtual machines that are pinned to hosts in the cluster, and is selected by default. You can clear this check box to skip updating those hosts so that the pinned virtual machines stay running, such as when a pinned virtual machine is running important services or processes and you do not want it to shut down at an unknown time during the update.
    • Upgrade Timeout (Minutes) sets the time to wait for an individual host to be updated before the cluster upgrade fails with a timeout. The default is 60. You can increase it for large clusters where 60 minutes might not be enough, or reduce it for small clusters where the hosts update quickly.
    • Check Upgrade checks each host for available updates before running the upgrade process. It is not selected by default, but you can select it if you need to ensure that recent updates are included, such as when you have configured the Manager to check for host updates less frequently than the default.
    • Reboot After Upgrade reboots each host after it is updated, and is selected by default. You can clear this check box to speed up the process if you are sure that there are no pending updates that require a host reboot.
    • Use Maintenance Policy sets the cluster’s scheduling policy to cluster_maintenance during the update. It is selected by default, so activity is limited and virtual machines cannot start unless they are highly available. You can clear this check box if you have a custom scheduling policy that you want to keep using during the update, but this could have unknown consequences. Ensure your custom policy is compatible with cluster upgrade activity before disabling this option.
  5. Click Next.
  6. Review the summary of the hosts and virtual machines that are affected.
  7. Click Upgrade.
  8. A cluster upgrade status screen displays with a progress bar showing the percentage of completion, and a list of steps in the upgrade process that have completed. You can click Go to Event Log to open the log entries for the upgrade. Closing this screen does not interrupt the upgrade process.

You can track the progress of host updates:

  • in the Compute Clusters view, the Upgrade Status column shows a progress bar indicating the percentage of completion.
  • in the Compute Hosts view
  • in the Events section of the Notification Drawer.

You can track the progress of individual virtual machine migrations in the Status column of the Compute Virtual Machines view. In large environments, you may need to filter the results to show a particular group of virtual machines.

2.5.5.5.3.2. Updating Individual Hosts

Use the host upgrade manager to update individual hosts directly from the Administration Portal.

Note

The upgrade manager only checks hosts with a status of Up or Non-operational, but not Maintenance.

Limitations

  • On RHVH, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update.
  • If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster. Update a host when its usage is relatively low.
  • In a self-hosted engine environment, the Manager virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts.
  • The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts.
  • You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines must be shut down before updating the host.

Procedure

  1. Ensure that the correct repositories are enabled. To view a list of currently enabled repositories, run dnf repolist.

    • For Red Hat Virtualization Hosts:

      # subscription-manager repos --enable=rhvh-4-for-rhel-8-x86_64-rpms
    • For Red Hat Enterprise Linux hosts:

      # subscription-manager repos \
          --enable=rhel-8-for-x86_64-baseos-eus-rpms \
          --enable=rhel-8-for-x86_64-appstream-eus-rpms \
          --enable=rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms \
          --enable=advanced-virt-for-rhel-8-x86_64-rpms \
          --enable=fast-datapath-for-rhel-8-x86_64-rpms
      
      # subscription-manager release --set=8.6
  2. In the Administration Portal, click Compute Hosts and select the host to be updated.
  3. Click Installation Check for Upgrade and click OK.

    Open the Notification Drawer and expand the Events section to see the result.

  4. If an update is available, click Installation Upgrade.
  5. Click OK to update the host. Running virtual machines are migrated according to their migration policy. If migration is disabled for any virtual machines, you are prompted to shut them down.

    The details of the host are updated in Compute Hosts and the status transitions through these stages:

    Maintenance > Installing > Reboot > Up

    Note

    If the update fails, the host’s status changes to Install Failed. From Install Failed you can click Installation Upgrade again.

Repeat this procedure for each host in the Red Hat Virtualization environment.

Note

You should update the hosts from the Administration Portal. However, you can update the hosts using dnf upgrade instead.

2.5.5.5.3.3. Manually Updating Hosts
Caution

This information is provided for advanced system administrators who need to update hosts manually, but Red Hat does not support this method. The procedure described in this topic does not include important steps, such as certificate renewal, and assumes advanced knowledge of such tasks. Red Hat supports updating hosts using the Administration Portal. For details, see Updating individual hosts or Updating all hosts in a cluster in the Administration Guide.

You can use the dnf command to update your hosts. Update your systems regularly, to ensure timely application of security and bug fixes.

Limitations

  • On RHVH, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update.
  • If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster. Update a host when its usage is relatively low.
  • In a self-hosted engine environment, the Manager virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts.
  • The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts.
  • You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines must be shut down before updating the host.

Procedure

  1. Ensure the correct repositories are enabled. You can check which repositories are currently enabled by running dnf repolist.

    • For Red Hat Virtualization Hosts:

      # subscription-manager repos --enable=rhvh-4-for-rhel-8-x86_64-rpms
    • For Red Hat Enterprise Linux hosts:

      # subscription-manager repos \
          --enable=rhel-8-for-x86_64-baseos-eus-rpms \
          --enable=rhel-8-for-x86_64-appstream-eus-rpms \
          --enable=rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms \
          --enable=advanced-virt-for-rhel-8-x86_64-rpms \
          --enable=fast-datapath-for-rhel-8-x86_64-rpms
      
      # subscription-manager release --set=8.6
  2. In the Administration Portal, click Compute Hosts and select the host to be updated.
  3. Click Management Maintenance and OK.
  4. For Red Hat Enterprise Linux hosts:

    1. Identify the current version of Red Hat Enterprise Linux:

      # cat /etc/redhat-release
    2. Check which version of the redhat-release package is available:

      # dnf --refresh info --available redhat-release

      This command shows any available updates. For example, when upgrading from Red Hat Enterprise Linux 8.2.z to 8.3, compare the version of the package with the currently installed version:

      Available Packages
      Name         : redhat-release
      Version      : 8.3
      Release      : 1.0.el8
      …​
      Caution

      The Red Hat Enterprise Linux Advanced Virtualization module is usually released later than the Red Hat Enterprise Linux y-stream. If no new Advanced Virtualization module is available yet, or if there is an error enabling it, stop here and cancel the upgrade. Otherwise you risk corrupting the host.

    3. If the Advanced Virtualization stream is available for Red Hat Enterprise Linux 8.3 or later, reset the virt module:

      # dnf module reset virt
      Note

      If this module is already enabled in the Advanced Virtualization stream, this step is not necessary, but it has no negative impact.

      You can see the value of the stream by entering:

      # dnf module list virt
    4. Enable the virt module in the Advanced Virtualization stream with the following command:

      • For RHV 4.4.2:

        # dnf module enable virt:8.2
      • For RHV 4.4.3 to 4.4.5:

        # dnf module enable virt:8.3
      • For RHV 4.4.6 to 4.4.10:

        # dnf module enable virt:av
      • For RHV 4.4.11 and later:

        # dnf module enable virt:rhel
        Note

        Starting with RHEL 8.6, the Advanced Virtualization packages use the standard virt:rhel module. For RHEL 8.4 and 8.5, only one Advanced Virtualization stream is used, virt:av.

  5. Enable version 14 of the nodejs module:

    # dnf module -y enable nodejs:14
  6. Update the host:

    # dnf upgrade --nobest
  7. Reboot the host to ensure all updates are correctly applied.

    Note

    Check the imgbased logs to see if any additional package updates have failed for a Red Hat Virtualization Host. If some packages were not successfully reinstalled after the update, check that the packages are listed in /var/imgbased/persisted-rpms. Add any missing packages then run rpm -Uvh /var/imgbased/persisted-rpms/*.

Repeat this process for each host in the Red Hat Virtualization environment.

2.5.5.5.4. Reinstalling Hosts

Reinstall Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts from the Administration Portal. The procedure includes stopping and restarting the host.

Warning

When installing or reinstalling the host’s operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss.

Prerequisites

  • If the cluster has migration enabled, virtual machines can automatically migrate to another host in the cluster. Therefore, reinstall a host while its usage is relatively low.
  • Ensure that the cluster has sufficient memory for its hosts to perform maintenance. If a cluster lacks memory, migration of virtual machines will hang and then fail. To reduce memory usage, shut down some or all of the virtual machines before moving the host to maintenance.
  • Ensure that the cluster contains more than one host before performing a reinstall. Do not attempt to reinstall all the hosts at the same time. One host must remain available to perform Storage Pool Manager (SPM) tasks.

Procedure

  1. Click Compute Hosts and select the host.
  2. Click Management Maintenance and OK.
  3. Click Installation Reinstall. This opens the Install Host window.
  4. Click OK to reinstall the host.

After a host has been reinstalled and its status returns to Up, you can migrate virtual machines back to the host.

Important

After you register a Red Hat Virtualization Host to the Red Hat Virtualization Manager and reinstall it, the Administration Portal may erroneously display its status as Install Failed. Click Management Activate, and the host will change to an Up status and be ready for use.

2.5.5.6. Viewing Host Errata

Errata for each host can be viewed after the host has been configured to receive errata information from the Red Hat Satellite server. For more information on configuring a host to receive errata information, see Setting up Satellite errata viewing for a host.

Procedure

  1. Click Compute Hosts.
  2. Click the host’s name. This opens the details view.
  3. Click the Errata tab.

2.5.5.7. Viewing the Health Status of a Host

Hosts have an external health status in addition to their regular Status. The external health status is reported by plug-ins or external systems, or set by an administrator, and appears to the left of the host’s Name as one of the following icons:

  • OK: no icon
  • Info: the Info icon
  • Warning: the Warning icon
  • Error: the Error icon
  • Failure: the Failure icon

To view further details about the host’s health status, click the host’s name to open the details view, and then click the Events tab.

The host’s health status can also be viewed using the REST API. A GET request on a host will include the external_status element, which contains the health status.

You can set a host’s health status in the REST API via the events collection. For more information, see Adding Events in the REST API Guide.
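
For example, a GET request similar to the following returns the host representation, which includes the external_status element. The Manager FQDN manager.example.com and HOST_ID are placeholders:

# curl -s -k -u admin@internal:password \
    https://manager.example.com/ovirt-engine/api/hosts/HOST_ID | grep external_status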

2.5.5.8. Viewing Host Devices

You can view the host devices for each host in the Host Devices tab in the details view. If the host has been configured for direct device assignment, these devices can be directly attached to virtual machines for improved performance.

For more information on the hardware requirements for direct device assignment, see Additional Hardware Considerations for Using Device Assignment in Hardware Considerations for Implementing SR-IOV.

For more information on configuring the host for direct device assignment, see Configuring a Host for PCI Passthrough.

For more information on attaching host devices to virtual machines, see Host Devices in the Virtual Machine Management Guide.

Procedure

  1. Click Compute Hosts.
  2. Click the host’s name. This opens the details view.
  3. Click the Host Devices tab.

This tab lists the details of the host devices, including whether the device is attached to a virtual machine and whether it is currently in use by that virtual machine.

2.5.5.9. Accessing Cockpit from the Administration Portal

Cockpit is available by default on Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts. You can access the Cockpit web interface by typing the address into a browser, or through the Administration Portal.

Procedure

  1. In the Administration Portal, click Compute Hosts and select a host.
  2. Click Host Console.

The Cockpit login page opens in a new browser window.

2.5.5.9.1. Setting a Legacy SPICE Cipher

SPICE consoles use FIPS-compliant encryption by default, with a cipher string. The default SPICE cipher string is: kECDHE+FIPS:kDHE+FIPS:kRSA+FIPS:!eNULL:!aNULL

This string is generally sufficient. However, if you have a virtual machine with an older operating system or SPICE client, where either one or the other does not support FIPS-compliant encryption, you must use a weaker cipher string. Otherwise, a connection security error may occur if you install a new cluster or a new host in an existing cluster and try to connect to that virtual machine.

You can change the cipher string by using an Ansible playbook.

Changing the cipher string

  1. On the Manager machine, create a file in the directory /usr/share/ovirt-engine/playbooks. For example:

    # vim /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml
  2. Enter the following in the file and save it:

    - name: oVirt - setup weaker SPICE encryption for old clients
      hosts: hostname
      vars:
        host_deploy_spice_cipher_string: 'DEFAULT:-RC4:-3DES:-DES'
      roles:
        - ovirt-host-deploy-spice-encryption
  3. Run the file you just created:

    # ansible-playbook -l hostname /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml

Alternatively, you can reconfigure the host with the Ansible playbook ovirt-host-deploy using the --extra-vars option with the variable host_deploy_spice_cipher_string:

# ansible-playbook -l hostname \
  --extra-vars host_deploy_spice_cipher_string="DEFAULT:-RC4:-3DES:-DES" \
  /usr/share/ovirt-engine/playbooks/ovirt-host-deploy.yml

2.5.5.10. Configuring Host Power Management Settings

Configure your host power management device settings to perform host life-cycle operations (stop, start, restart) from the Administration Portal.

You must configure host power management in order to utilize host high availability and virtual machine high availability. For more information about power management devices, see Power Management in the Technical Reference.

Procedure

  1. Click Compute Hosts and select a host.
  2. Click Management Maintenance, and click OK to confirm.
  3. When the host is in maintenance mode, click Edit.
  4. Click the Power Management tab.
  5. Select the Enable Power Management check box to enable the fields.
  6. Select the Kdump integration check box to prevent the host from fencing while performing a kernel crash dump.

    Important

    If you enable or disable Kdump integration on an existing host, you must reinstall the host for kdump to be configured.

  7. Optionally, select the Disable policy control of power management check box if you do not want your host’s power management to be controlled by the Scheduling Policy of the host’s cluster.
  8. Click the plus (+) button to add a new power management device. The Edit fence agent window opens.
  9. Enter the User Name and Password of the power management device into the appropriate fields.
  10. Select the power management device Type in the drop-down list.
  11. Enter the IP address in the Address field.
  12. Enter the SSH Port number used by the power management device to communicate with the host.
  13. Enter the Slot number used to identify the blade of the power management device.
  14. Enter the Options for the power management device. Use a comma-separated list of 'key=value' entries.

    • If both IPv4 and IPv6 IP addresses can be used (default), leave the Options field blank.
    • If only IPv4 IP addresses can be used, enter inet4_only=1.
    • If only IPv6 IP addresses can be used, enter inet6_only=1.
  15. Select the Secure check box to enable the power management device to connect securely to the host.
  16. Click Test to ensure the settings are correct. Test Succeeded, Host Status is: on will display upon successful verification.
  17. Click OK to close the Edit fence agent window.
  18. In the Power Management tab, optionally expand the Advanced Parameters and use the up and down buttons to specify the order in which the Manager will search the host’s cluster and dc (datacenter) for a fencing proxy.
  19. Click OK.
Note
  • For IPv6, Red Hat Virtualization supports only static addressing.
  • Dual-stack IPv4 and IPv6 addressing is not supported.

The Management Power Management drop-down menu is now enabled in the Administration Portal.
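
Power management operations are also exposed through the REST API fence action. The following sketch queries the power status of a host; it assumes the Manager is manager.example.com, uses HOST_ID as a placeholder, and requires that a fence agent is already configured on the host:

# curl -s -k -u admin@internal:password -H "Content-Type: application/xml" \
    -d '<action><fence_type>status</fence_type></action>' \
    https://manager.example.com/ovirt-engine/api/hosts/HOST_ID/fence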

2.5.5.11. Configuring Host Storage Pool Manager Settings

The Storage Pool Manager (SPM) is a management role given to one of the hosts in a data center to maintain access control over the storage domains. The SPM must always be available, and the SPM role will be assigned to another host if the SPM host becomes unavailable. As the SPM role uses some of the host’s available resources, it is important to prioritize hosts that can afford the resources.

The Storage Pool Manager (SPM) priority setting of a host alters the likelihood of the host being assigned the SPM role: a host with high SPM priority will be assigned the SPM role before a host with low SPM priority.

Procedure

  1. Click Compute Hosts.
  2. Click Edit.
  3. Click the SPM tab.
  4. Use the radio buttons to select the appropriate SPM priority for the host.
  5. Click OK.
2.5.5.11.1. Migrating a self-hosted engine host to a different cluster

You cannot migrate a host that is configured as a self-hosted engine host to a data center or cluster other than the one in which the self-hosted engine virtual machine is running. All self-hosted engine hosts must be in the same data center and cluster.

You need to disable the host from being a self-hosted engine host by undeploying the self-hosted engine configuration from the host.

Procedure

  1. Click Compute Hosts and select the host.
  2. Click Management Maintenance. The host’s status changes to Maintenance.
  3. Under Reinstall, select Hosted Engine UNDEPLOY.
  4. Click Reinstall.

    Tip

    Alternatively, you can use the REST API undeploy_hosted_engine parameter.

  5. Click Edit.
  6. Select the target data center and cluster.
  7. Click OK.
  8. Click Management Activate.

2.5.6. Explanation of Settings and Controls in the New Host and Edit Host Windows

2.5.6.1. Host General Settings Explained

These settings apply when editing the details of a host or adding new Red Hat Enterprise Linux hosts and Satellite host provider hosts.

The General settings table contains the information required on the General tab of the New Host or Edit Host window.

Table 2.20. General settings
Field Name | Description

Host Cluster

The cluster and data center to which the host belongs.

Use Foreman/Satellite

Select or clear this check box to view or hide options for adding hosts provided by Satellite host providers. The following options are also available:

Discovered Hosts

  • Discovered Hosts - A drop-down list that is populated with the name of Satellite hosts discovered by the engine.
  • Host Groups - A drop-down list of host groups available.
  • Compute Resources - A drop-down list of hypervisors to provide compute resources.

Provisioned Hosts

  • Providers Hosts - A drop-down list that is populated with the name of hosts provided by the selected external provider. The entries in this list are filtered in accordance with any search queries that have been input in the Provider search filter.
  • Provider search filter - A text field that allows you to search for hosts provided by the selected external provider. This option is provider-specific; see provider documentation for details on forming search queries for specific providers. Leave this field blank to view all available hosts.

Name

The name of the host. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.

Comment

A field for adding plain text, human-readable comments regarding the host.

Hostname

The IP address or resolvable host name of the host. If a resolvable hostname is used, you must ensure that all addresses the hostname resolves to, both IPv4 and IPv6, match the IP addresses used by the management network of the host.

Password

The password of the host’s root user. Set the password when adding the host. The password cannot be edited afterwards.

Activate host after install

Select this checkbox to activate the host after successful installation. This is enabled by default and required for the hypervisors to be activated successfully.

After successful installation, you can clear this checkbox to switch the host status to Maintenance. This allows the administrator to perform additional configuration tasks on the hypervisors.

Reboot host after install

Select this checkbox to reboot the host after it is installed. This is enabled by default.

Note

Changing the kernel command line parameters of the host, or changing the firewall type of the cluster also require you to reboot the host.

SSH Public Key

Copy the contents of the text box to the /root/.ssh/authorized_keys file on the host to use the Manager’s SSH key instead of a password to authenticate with the host.

Automatically configure host firewall

When adding a new host, the Manager can open the required ports on the host’s firewall. This is enabled by default. This is an Advanced Parameter.

SSH Fingerprint

You can fetch the host’s SSH fingerprint, and compare it with the fingerprint you expect the host to return, ensuring that they match. This is an Advanced Parameter.

2.5.6.2. Host Power Management Settings Explained

The Power Management settings table contains the information required on the Power Management tab of the New Host or Edit Host windows. You can configure power management if the host has a supported power management card.

Table 2.21. Power Management Settings
Field Name | Description

Enable Power Management

Enables power management on the host. Select this check box to enable the rest of the fields in the Power Management tab.

Kdump integration

Prevents the host from fencing while performing a kernel crash dump, so that the crash dump is not interrupted. In Red Hat Enterprise Linux 7.1 and later, kdump is available by default. If kdump is available on the host, but its configuration is not valid (the kdump service cannot be started), enabling Kdump integration will cause the host (re)installation to fail. If you enable or disable Kdump integration on an existing host, you must reinstall the host.

Disable policy control of power management

Power management is controlled by the Scheduling Policy of the host’s cluster. If power management is enabled and the defined low utilization value is reached, the Manager will power down the host machine, and restart it again when load balancing requires or there are not enough free hosts in the cluster. Select this check box to disable policy control.

Agents by Sequential Order

Lists the host’s fence agents. Fence agents can be sequential, concurrent, or a mix of both.

  • If fence agents are used sequentially, the primary agent is used first to stop or start a host, and if it fails, the secondary agent is used.
  • If fence agents are used concurrently, both fence agents have to respond to the Stop command for the host to be stopped; if one agent responds to the Start command, the host will go up.

Fence agents are sequential by default. Use the up and down buttons to change the sequence in which the fence agents are used.

To make two fence agents concurrent, select one fence agent from the Concurrent with drop-down list next to the other fence agent. Additional fence agents can be added to the group of concurrent fence agents by selecting the group from the Concurrent with drop-down list next to the additional fence agent.

Add Fence Agent

Click the + button to add a new fence agent. The Edit fence agent window opens. See the table below for more information on the fields in this window.

Power Management Proxy Preference

By default, specifies that the Manager will search for a fencing proxy within the same cluster as the host, and if no fencing proxy is found, the Manager will search in the same dc (data center). Use the up and down buttons to change the sequence in which these resources are used. This field is available under Advanced Parameters.

The following table contains the information required in the Edit fence agent window.

Table 2.22. Edit fence agent Settings
Field Name | Description

Address

The address to access your host’s power management device. Either a resolvable hostname or an IP address.

User Name

User account with which to access the power management device. You can set up a user on the device, or use the default user.

Password

Password for the user accessing the power management device.

Type

The type of power management device in your host. Choose one of the following:

  • apc - APC MasterSwitch network power switch. Not for use with APC 5.x power switch devices.
  • apc_snmp - Use with APC 5.x power switch devices.
  • bladecenter - IBM Bladecenter Remote Supervisor Adapter.
  • cisco_ucs - Cisco Unified Computing System.
  • drac5 - Dell Remote Access Controller for Dell computers.
  • drac7 - Dell Remote Access Controller for Dell computers.
  • eps - ePowerSwitch 8M+ network power switch.
  • hpblade - HP BladeSystem.
  • ilo, ilo2, ilo3, ilo4 - HP Integrated Lights-Out.
  • ipmilan - Intelligent Platform Management Interface and Sun Integrated Lights Out Management devices.
  • rsa - IBM Remote Supervisor Adapter.
  • rsb - Fujitsu-Siemens RSB management interface.
  • wti - WTI Network Power Switch.

For more information about power management devices, see Power Management in the Technical Reference.

Port

The port number used by the power management device to communicate with the host.

Slot

The number used to identify the blade of the power management device.

Service Profile

The service profile name used to identify the blade of the power management device. This field appears instead of Slot when the device type is cisco_ucs.

Options

Power management device-specific options. Enter these as a comma-separated list of 'key=value' entries. See the documentation of your host’s power management device for the available options.

For Red Hat Enterprise Linux 7 hosts, if you are using cisco_ucs as the power management device, you also need to append ssl_insecure=1 to the Options field.

Secure

Select this check box to allow the power management device to connect securely to the host. This can be done via ssh, ssl, or other authentication protocols depending on the power management agent.
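
Most of the device types listed above correspond to standard fence agent commands, so you can verify the values you plan to enter before saving them. As an illustrative sketch (assuming the fence-agents packages are installed on a host you can run commands from, and using placeholder address and credentials), an IPMI-based device can usually be queried with a command similar to the following:

# fence_ipmilan -a mgmt.example.com -l admin -p examplepassword -o status

If the command reports the expected power status, the same address, user name, password, and options should work in the Edit fence agent window.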

2.5.6.3. SPM Priority Settings Explained

The SPM settings table details the information required on the SPM tab of the New Host or Edit Host window.

Table 2.23. SPM settings
Field Name | Description

SPM Priority

Defines the likelihood that the host will be given the role of Storage Pool Manager (SPM). The options are Low, Normal, and High priority. Low priority means that there is a reduced likelihood of the host being assigned the role of SPM, and High priority means there is an increased likelihood. The default setting is Normal.

2.5.6.4. Host Console Settings Explained

The Console settings table details the information required on the Console tab of the New Host or Edit Host window.

Table 2.24. Console settings
Field Name | Description

Override display address

Select this check box to override the display addresses of the host. This feature is useful when the hosts are defined by an internal IP address and are behind a NAT firewall. When a user connects to a virtual machine from outside the internal network, instead of returning the private address of the host on which the virtual machine is running, the host returns a public IP address or FQDN (which is resolved in the external network to that public IP address).

Display address

The display address specified here will be used for all virtual machines running on this host. The address must be in the format of a fully qualified domain name or IP.

vGPU Placement

Specifies the preferred placement of vGPUs:

  • Consolidated: Select this option if you prefer to run more vGPUs on available physical cards.
  • Separated: Select this option if you prefer to run each vGPU on a separate physical card.

2.5.6.5. Network Provider Settings Explained

The Network Provider settings table details the information required on the Network Provider tab of the New Host or Edit Host window.

Table 2.25. Network Provider settings
Field Name | Description

External Network Provider

If you have added an external network provider and want the host’s network to be provisioned by the external network provider, select one from the list.

2.5.6.6. Kernel Settings Explained

The Kernel settings table details the information required on the Kernel tab of the New Host or Edit Host window. Common kernel boot parameter options are listed as check boxes so you can easily select them.

For more complex changes, use the free text entry field next to Kernel command line to add in any additional parameters required. If you change any kernel command line parameters, you must reinstall the host.

Important

If the host is attached to the Manager, you must place the host into maintenance mode before making changes. After making the changes, reinstall the host to apply the changes.

Table 2.26. Kernel Settings
Field Name | Description

Hostdev Passthrough & SR-IOV

Enables the IOMMU flag in the kernel so a virtual machine can use a host device as if it is attached directly to the virtual machine. The host hardware and firmware must also support IOMMU. The virtualization extension and IOMMU extension must be enabled on the hardware. See Configuring a Host for PCI Passthrough. IBM POWER8 has IOMMU enabled by default.

Nested Virtualization

Enables the vmx or svm flag so that virtual machines can run within virtual machines. This option is a Technology Preview feature: it is intended only for evaluation purposes and is not supported for production use. To use this setting, you must install the vdsm-hook-nestedvt hook on the host. For details, see Enabling nested virtualization for all virtual machines and Enabling nested virtualization for individual virtual machines.

Unsafe Interrupts

If IOMMU is enabled but the passthrough fails because the hardware does not support interrupt remapping, you can consider enabling this option. Note that you should only enable this option if the virtual machines on the host are trusted; having the option enabled potentially exposes the host to MSI attacks from the virtual machines. This option is only intended to be used as a workaround when using uncertified hardware for evaluation purposes.

PCI Reallocation

If your SR-IOV NIC is unable to allocate virtual functions because of memory issues, consider enabling this option. The host hardware and firmware must also support PCI reallocation. This option is only intended to be used as a workaround when using uncertified hardware for evaluation purposes.

Blacklist Nouveau

Blocks the nouveau driver. Nouveau is a community driver for NVIDIA GPUs that conflicts with vendor-supplied drivers. The nouveau driver should be blocked when vendor drivers take precedence.

SMT Disabled

Disables Simultaneous Multi Threading (SMT). Disabling SMT can mitigate security vulnerabilities, such as L1TF or MDS.

FIPS mode

Enables FIPS mode. For details, see Enabling FIPS using the Manager.

Kernel command line

This field allows you to append more kernel parameters to the default parameters.

Note

If the kernel boot parameters are grayed out, click the reset button to make the options available.
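
As an illustration only (the exact strings each check box appends depend on the host’s CPU vendor and the Red Hat Virtualization version), selecting Hostdev Passthrough & SR-IOV, Nested Virtualization, and Blacklist Nouveau on an Intel host results in kernel parameters along these lines:

intel_iommu=on kvm-intel.nested=1 rd.driver.blacklist=nouveau

Any extra parameters typed into the Kernel command line field are appended in the same way. For Nested Virtualization, remember to also install the vdsm-hook-nestedvt hook on the host, for example with yum install vdsm-hook-nestedvt.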

2.5.6.7. Hosted Engine Settings Explained

The Hosted Engine settings table details the information required on the Hosted Engine tab of the New Host or Edit Host window.

Table 2.27. Hosted Engine Settings
Field Name | Description

Choose hosted engine deployment action

Three options are available:

  • None - No actions required.
  • Deploy - Select this option to deploy the host as a self-hosted engine node.
  • Undeploy - For a self-hosted engine node, you can select this option to undeploy the host and remove self-hosted engine related configurations.

2.5.7. Host Resilience

2.5.7.1. Host High Availability

The Red Hat Virtualization Manager uses fencing to keep hosts in a cluster responsive. A Non Responsive host is different from a Non Operational host. The Manager can communicate with Non Operational hosts, but they have an incorrect configuration, for example a missing logical network. The Manager cannot communicate with Non Responsive hosts.

Fencing allows a cluster to react to unexpected host failures and enforce power saving, load balancing, and virtual machine availability policies. You should configure the fencing parameters for your host’s power management device and test their correctness from time to time. In a fencing operation, a non-responsive host is rebooted, and if the host does not return to an active status within a prescribed time, it remains non-responsive pending manual intervention and troubleshooting.

Note

To automatically check the fencing parameters, you can configure the PMHealthCheckEnabled (false by default) and PMHealthCheckIntervalInSec (3600 sec by default) engine-config options.

When PMHealthCheckEnabled is set to true, the Manager checks all host agents at the interval specified by PMHealthCheckIntervalInSec and raises warnings if it detects issues. See Syntax for the engine-config Command for more information about configuring engine-config options.
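
For example, both options can be set with engine-config (the interval shown is the default and is illustrative); restart the ovirt-engine service afterwards so the change takes effect:

# engine-config -s PMHealthCheckEnabled=true
# engine-config -s PMHealthCheckIntervalInSec=3600
# systemctl restart ovirt-engine.service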

Power management operations can be performed by Red Hat Virtualization Manager after it reboots, by a proxy host, or manually in the Administration Portal. All the virtual machines running on the non-responsive host are stopped, and highly available virtual machines are started on a different host. At least two hosts are required for power management operations.

After the Manager starts up, it automatically attempts to fence non-responsive hosts that have power management enabled after the quiet time (5 minutes by default) has elapsed. The quiet time can be configured by updating the DisableFenceAtStartupInSec engine-config option.

Note

The DisableFenceAtStartupInSec engine-config option helps prevent a scenario where the Manager attempts to fence hosts while they boot up. This can occur after a data center outage because a host’s boot process is normally longer than the Manager boot process.
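
For example, to lengthen the quiet time to 10 minutes (the value is illustrative), run the following and restart the ovirt-engine service:

# engine-config -s DisableFenceAtStartupInSec=600
# systemctl restart ovirt-engine.service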

Hosts can be fenced automatically by the proxy host using the power management parameters, or manually by right-clicking on a host and using the options on the menu.

Important

If a host runs virtual machines that are highly available, power management must be enabled and configured.

2.5.7.2. Power Management by Proxy in Red Hat Virtualization

The Red Hat Virtualization Manager does not communicate directly with fence agents. Instead, the Manager uses a proxy to send power management commands to a host power management device. The Manager uses VDSM to execute power management device actions, so another host in the environment is used as a fencing proxy.

You can select between:

  • Any host in the same cluster as the host requiring fencing.
  • Any host in the same data center as the host requiring fencing.

A viable fencing proxy host has a status of either UP or Maintenance.

2.5.7.3. Setting Fencing Parameters on a Host

The parameters for host fencing are set using the Power Management fields on the New Host or Edit Host windows. Power management enables the system to fence a troublesome host using an additional interface such as a Remote Access Card (RAC).

All power management operations are done using a proxy host, as opposed to directly by the Red Hat Virtualization Manager. At least two hosts are required for power management operations.

Procedure

  1. Click Compute Hosts and select the host.
  2. Click Edit.
  3. Click the Power Management tab.
  4. Select the Enable Power Management check box to enable the fields.
  5. Select the Kdump integration check box to prevent the host from fencing while performing a kernel crash dump.

    Important

    If you enable or disable Kdump integration on an existing host, you must reinstall the host.

  6. Optionally, select the Disable policy control of power management check box if you do not want your host’s power management to be controlled by the Scheduling Policy of the host’s cluster.
  7. Click the + button to add a new power management device. The Edit fence agent window opens.
  8. Enter the Address, User Name, and Password of the power management device.
  9. Select the power management device Type from the drop-down list.
  10. Enter the SSH Port number used by the power management device to communicate with the host.
  11. Enter the Slot number used to identify the blade of the power management device.
  12. Enter the Options for the power management device. Use a comma-separated list of 'key=value' entries.
  13. Select the Secure check box to enable the power management device to connect securely to the host.
  14. Click the Test button to ensure the settings are correct. The message Test Succeeded, Host Status is: on is displayed upon successful verification.

    Warning

Power management parameters (user ID, password, options, and so on) are tested by Red Hat Virtualization Manager only during setup, and manually after that. If you choose to ignore alerts about incorrect parameters, or if the parameters are changed on the power management hardware without making the corresponding change in Red Hat Virtualization Manager, fencing is likely to fail when it is most needed.

  15. Click OK to close the Edit fence agent window.
  16. In the Power Management tab, optionally expand the Advanced Parameters and use the up and down buttons to specify the order in which the Manager will search the host’s cluster and dc (datacenter) for a fencing proxy.
  17. Click OK.

You are returned to the list of hosts. Note that the exclamation mark next to the host’s name has now disappeared, signifying that power management has been successfully configured.

2.5.7.4. fence_kdump Advanced Configuration

kdump

Click the name of a host to view the status of the kdump service in the General tab of the details view:

  • Enabled: kdump is configured properly and the kdump service is running.
  • Disabled: the kdump service is not running (in this case kdump integration will not work properly).
  • Unknown: displayed only for hosts with an earlier VDSM version that does not report kdump status.

For more information on installing and using kdump, see the Red Hat Enterprise Linux 7 Kernel Crash Dump Guide.
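
You can also check the service directly on the host, for example:

# systemctl status kdump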

fence_kdump

Enabling Kdump integration in the Power Management tab of the New Host or Edit Host window configures a standard fence_kdump setup. If the environment’s network configuration is simple and the Manager’s FQDN is resolvable on all hosts, the default fence_kdump settings are sufficient for use.

However, there are some cases where advanced configuration of fence_kdump is necessary. Environments with more complex networking may require manual changes to the configuration of the Manager, fence_kdump listener, or both. For example, if the Manager’s FQDN is not resolvable on all hosts with Kdump integration enabled, you can set a proper host name or IP address using engine-config:

engine-config -s FenceKdumpDestinationAddress=A.B.C.D

The following example cases may also require configuration changes:

  • The Manager has two NICs, where one of these is public-facing, and the second is the preferred destination for fence_kdump messages.
  • You need to execute the fence_kdump listener on a different IP or port.
  • You need to set a custom interval for fence_kdump notification messages, to prevent possible packet loss.

Customized fence_kdump detection settings are recommended for advanced users only, as changes to the default configuration are only necessary in more complex networking setups.

2.5.7.5. fence_kdump listener Configuration

Edit the configuration of the fence_kdump listener. This is only necessary in cases where the default configuration is not sufficient.

Procedure

  1. Create a new file (for example, my-fence-kdump.conf) in /etc/ovirt-engine/ovirt-fence-kdump-listener.conf.d/.
  2. Enter your customization with the syntax OPTION=value and save the file. See the example after this procedure.

    Important

    The edited values must also be changed in engine-config as outlined in the fence_kdump Listener Configuration Options table in Configuring fence-kdump on the Manager.

  3. Restart the fence_kdump listener:

    # systemctl restart ovirt-fence-kdump-listener.service
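
For example, a my-fence-kdump.conf that binds the listener to a specific address might contain only the following line (the address is illustrative and must match FenceKdumpDestinationAddress in engine-config, as noted above):

LISTENER_ADDRESS=192.0.2.10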

The following options can be customized if required:

Table 2.28. fence_kdump Listener Configuration Options
LISTENER_ADDRESS

Description: Defines the IP address to receive fence_kdump messages on.
Default: 0.0.0.0
Note: If the value of this parameter is changed, it must match the value of FenceKdumpDestinationAddress in engine-config.

LISTENER_PORT

Description: Defines the port to receive fence_kdump messages on.
Default: 7410
Note: If the value of this parameter is changed, it must match the value of FenceKdumpDestinationPort in engine-config.

HEARTBEAT_INTERVAL

Description: Defines the interval in seconds of the listener’s heartbeat updates.
Default: 30
Note: If the value of this parameter is changed, it must be half the size or smaller than the value of FenceKdumpListenerTimeout in engine-config.

SESSION_SYNC_INTERVAL

Description: Defines the interval in seconds to synchronize the listener’s in-memory host kdumping sessions to the database.
Default: 5
Note: If the value of this parameter is changed, it must be half the size or smaller than the value of KdumpStartedTimeout in engine-config.

REOPEN_DB_CONNECTION_INTERVAL

Description: Defines the interval in seconds to reopen a database connection that was previously unavailable.
Default: 30
Note: -

KDUMP_FINISHED_TIMEOUT

Description: Defines the maximum timeout in seconds after the last received message from kdumping hosts after which the host kdump flow is marked as FINISHED.
Default: 60
Note: If the value of this parameter is changed, it must be double the size or higher than the value of FenceKdumpMessageInterval in engine-config.

2.5.7.6. Configuring fence_kdump on the Manager

Edit the Manager’s kdump configuration. This is only necessary in cases where the default configuration is not sufficient. The current configuration values can be found using:

# engine-config -g OPTION

Procedure

  1. Edit kdump’s configuration using the engine-config command:

    # engine-config -s OPTION=value

    Important

    The edited values must also be changed in the fence_kdump listener configuration file as outlined in the Kdump Configuration Options table. See fence_kdump listener configuration.

  2. Restart the ovirt-engine service:

    # systemctl restart ovirt-engine.service
  3. Reinstall all hosts with Kdump integration enabled, if required (see the table below).

The following options can be configured using engine-config:

Table 2.29. Kdump Configuration Options
FenceKdumpDestinationAddress

Description: Defines the hostname(s) or IP address(es) to send fence_kdump messages to. If empty, the Manager’s FQDN is used.
Default: Empty string (Manager FQDN is used)
Note: If the value of this parameter is changed, it must match the value of LISTENER_ADDRESS in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled.

FenceKdumpDestinationPort

Description: Defines the port to send fence_kdump messages to.
Default: 7410
Note: If the value of this parameter is changed, it must match the value of LISTENER_PORT in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled.

FenceKdumpMessageInterval

Description: Defines the interval in seconds between messages sent by fence_kdump.
Default: 5
Note: If the value of this parameter is changed, it must be half the size or smaller than the value of KDUMP_FINISHED_TIMEOUT in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled.

FenceKdumpListenerTimeout

Description: Defines the maximum timeout in seconds since the last heartbeat to consider the fence_kdump listener alive.
Default: 90
Note: If the value of this parameter is changed, it must be double the size or higher than the value of HEARTBEAT_INTERVAL in the fence_kdump listener configuration file.

KdumpStartedTimeout

Description: Defines the maximum timeout in seconds to wait until the first message from the kdumping host is received (to detect that the host kdump flow has started).
Default: 30
Note: If the value of this parameter is changed, it must be double the size or higher than the value of SESSION_SYNC_INTERVAL in the fence_kdump listener configuration file, and FenceKdumpMessageInterval.
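
The following example, with illustrative values, keeps the settings consistent with the constraints above: an interval of 10 seconds is no more than half of the default KDUMP_FINISHED_TIMEOUT of 60, and the destination address matches the LISTENER_ADDRESS example shown in the listener configuration procedure:

# engine-config -s FenceKdumpDestinationAddress=192.0.2.10
# engine-config -s FenceKdumpMessageInterval=10
# systemctl restart ovirt-engine.service

After making these changes, reinstall all hosts with Kdump integration enabled, as the notes above indicate.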

2.5.7.7. Soft-Fencing Hosts

Hosts can sometimes become non-responsive due to an unexpected problem, and though VDSM is unable to respond to requests, the virtual machines that depend upon VDSM remain alive and accessible. In these situations, restarting VDSM returns VDSM to a responsive state and resolves this issue.

"SSH Soft Fencing" is a process where the Manager attempts to restart VDSM via SSH on non-responsive hosts. If the Manager fails to restart VDSM via SSH, the responsibility for fencing falls to the external fencing agent if an external fencing agent has been configured.

Soft-fencing over SSH works as follows. Fencing must be configured and enabled on the host, and a valid proxy host (a second host, in an UP state, in the data center) must exist. When the connection between the Manager and the host times out, the following happens:

  1. On the first network failure, the status of the host changes to "connecting".
  2. The Manager then makes three attempts to ask VDSM for its status, or waits for an interval determined by the load on the host, whichever gives VDSM more time to respond. The interval is calculated as TimeoutToResetVdsInSeconds (default 60 seconds) + DelayResetPerVmInSeconds (default 0.5 seconds) × (the number of virtual machines running on the host) + DelayResetForSpmInSeconds (default 20 seconds) if the host runs as SPM, or 0 if it does not. For example, with the default values, a host that runs 10 virtual machines and holds the SPM role gives an interval of 60 + (0.5 × 10) + 20 = 85 seconds. These configuration values can be inspected with engine-config, as shown in the example after this list.
  3. If the host does not respond when that interval has elapsed, vdsm restart is executed via SSH.
  4. If vdsm restart does not succeed in re-establishing the connection between the host and the Manager, the status of the host changes to Non Responsive and, if power management is configured, fencing is handed off to the external fencing agent.
Note

Soft-fencing over SSH can be executed on hosts that have no power management configured. This is distinct from "fencing": fencing can be executed only on hosts that have power management configured.
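
The values used in the interval formula above are engine-config options, so they can typically be inspected and, if necessary, tuned in the same way as the other options in this chapter (the value shown is illustrative; restart the ovirt-engine service after changing it):

# engine-config -g TimeoutToResetVdsInSeconds
# engine-config -s TimeoutToResetVdsInSeconds=90
# systemctl restart ovirt-engine.service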

2.5.7.8. Using Host Power Management Functions

When power management has been configured for a host, you can access a number of options from the Administration Portal interface. While each power management device has its own customizable options, they all support the basic options to start, stop, and restart a host.

Procedure

  1. Click Compute Hosts and select the host.
  2. Click the Management drop-down menu and select one of the following Power Management options:

    • Restart: This option stops the host and waits until the host’s status changes to Down. When the agent has verified that the host is down, the highly available virtual machines are restarted on another host in the cluster. The agent then restarts this host. When the host is ready for use, its status displays as Up.
    • Start: This option starts the host and lets it join a cluster. When it is ready for use, its status displays as Up.
    • Stop: This option powers off the host. Before using this option, ensure that the virtual machines running on the host have been migrated to other hosts in the cluster. Otherwise the virtual machines will crash and only the highly available virtual machines will be restarted on another host. When the host has been stopped, its status displays as Non-Operational.

      Note

      If Power Management is not enabled, you can restart or stop the host by selecting it, clicking the Management drop-down menu, and selecting an SSH Management option, Restart or Stop.

      Important

      When two fencing agents are defined on a host, they can be used concurrently or sequentially. For concurrent agents, both agents have to respond to the Stop command for the host to be stopped; and when one agent responds to the Start command, the host will go up. For sequential agents, to start or stop a host, the primary agent is used first; if it fails, the secondary agent is used.

  3. Click OK.

2.5.7.9. Manually Fencing or Isolating a Non-Responsive Host

If a host unpredictably goes into a non-responsive state, for example, due to a hardware failure, it can significantly affect the performance of the environment. If you do not have a power management device, or if it is incorrectly configured, you can reboot the host manually.

Warning

Do not select Confirm 'Host has been Rebooted' unless you have manually rebooted the host. Using this option while the host is still running can lead to virtual machine image corruption.

Procedure

  1. In the Administration Portal, click Compute Hosts and confirm the host’s status is Non Responsive.
  2. Manually reboot the host. This could mean physically entering the lab and rebooting the host.
  3. In the Administration Portal, select the host and click More Actions, then click Confirm 'Host has been Rebooted'.
  4. Select the Approve Operation check box and click OK.
  5. If your hosts take an unusually long time to boot, you can set ServerRebootTimeout to specify how many seconds to wait before determining that the host is Non Responsive:

    # engine-config --set ServerRebootTimeout=integer
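
For example, to allow hosts up to 10 minutes to boot before they are considered Non Responsive (the value is illustrative), run the following and then restart the ovirt-engine service, as with other engine-config changes:

# engine-config --set ServerRebootTimeout=600
# systemctl restart ovirt-engine.service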