Chapter 6. Installing Hosts for Red Hat Virtualization
Red Hat Virtualization supports two types of hosts: Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts. Depending on your environment, you may want to use one type only, or both. At least two hosts are required for features such as migration and high availability.
See Recommended practices for configuring host networks for networking information.
SELinux is in enforcing mode upon installation. To verify, run getenforce. SELinux must be in enforcing mode on all hosts and Managers for your Red Hat Virtualization environment to be supported.
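For example, a quick check on a host looks like the following sketch (getenforce, setenforce, and /etc/selinux/config are standard SELinux tooling; Enforcing is the expected output):

# getenforce
Enforcing
# setenforce 1                          # only needed if the mode was changed at runtime
# grep '^SELINUX=' /etc/selinux/config  # should read SELINUX=enforcing to persist across reboots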
| Host Type | Other Names | Description |
|---|---|---|
| Red Hat Virtualization Host | RHVH, thin host | This is a minimal operating system based on Red Hat Enterprise Linux. It is distributed as an ISO file from the Customer Portal and contains only the packages required for the machine to act as a host. |
| Red Hat Enterprise Linux host | RHEL host, thick host | Red Hat Enterprise Linux systems with the appropriate subscriptions attached can be used as hosts. |
Host Compatibility
When you create a new data center, you can set the compatibility version. Select the compatibility version that suits all the hosts in the data center. Once set, version regression is not allowed. For a fresh Red Hat Virtualization installation, the latest compatibility version is set in the default data center and default cluster; to use an earlier compatibility version, you must create additional data centers and clusters. For more information about compatibility versions, see Red Hat Virtualization Manager Compatibility in Red Hat Virtualization Life Cycle.
6.1. Red Hat Virtualization Hosts
6.1.1. Installing Red Hat Virtualization Hosts
Red Hat Virtualization Host (RHVH) is a minimal operating system based on Red Hat Enterprise Linux that is designed to provide a simple method for setting up a physical machine to act as a hypervisor in a Red Hat Virtualization environment. The minimal operating system contains only the packages required for the machine to act as a hypervisor, and features a Cockpit web interface for monitoring the host and performing administrative tasks. See Running Cockpit for the minimum browser requirements.
RHVH supports NIST 800-53 partitioning requirements to improve security. RHVH uses a NIST 800-53 partition layout by default.
The host must meet the minimum host requirements.
When installing or reinstalling the host’s operating system, Red Hat strongly recommends that you first detach any existing non-OS storage attached to the host, to avoid accidental initialization of these disks and potential data loss.
Procedure
- Go to the Get Started with Red Hat Virtualization page on the Red Hat Customer Portal and log in.
- Click Download Latest to access the product download page.
- Choose the appropriate Hypervisor Image for RHV from the list and click Download Now.
- Start the machine on which you are installing RHVH, booting from the prepared installation media.
- From the boot menu, select Install RHVH 4.4 and press Enter.
  Note: You can also press the Tab key to edit the kernel parameters. Kernel parameters must be separated by a space, and you can boot the system using the specified kernel parameters by pressing the Enter key. Press the Esc key to clear any changes to the kernel parameters and return to the boot menu.
- Select a language, and click Continue.
- Select a keyboard layout from the Keyboard Layout screen and click Done.
- Select the device on which to install RHVH from the Installation Destination screen. Optionally, enable encryption. Click Done.
  Important: Use the Automatically configure partitioning option.
- Select a time zone from the Time & Date screen and click Done.
- Select a network from the Network & Host Name screen and click Configure… to configure the connection details.
  Note: To use the connection every time the system boots, select the Connect automatically with priority check box. For more information, see Configuring network and host name options in the Red Hat Enterprise Linux 8 Installation Guide.
  Enter a host name in the Host Name field, and click Done.
- Optional: Configure Security Policy and Kdump. See Customizing your RHEL installation using the GUI in Performing a standard RHEL installation for Red Hat Enterprise Linux 8 for more information on each of the sections in the Installation Summary screen.
- Click Begin Installation.
- Set a root password and, optionally, create an additional user while RHVH installs.
  Warning: Do not create untrusted users on RHVH, as this can lead to exploitation of local security vulnerabilities.
- Click Reboot to complete the installation.
  Note: When RHVH restarts, nodectl check performs a health check on the host and displays the result when you log in on the command line. The message node status: OK or node status: DEGRADED indicates the health status. Run nodectl check to get more information.
  Note: If necessary, you can prevent kernel modules from loading automatically.
6.1.2. Enabling the Red Hat Virtualization Host Repository
Register the system to receive updates. Red Hat Virtualization Host only requires one repository. This section provides instructions for registering RHVH with the Content Delivery Network, or with Red Hat Satellite 6.
Registering RHVH with the Content Delivery Network
Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:
# subscription-manager register

Enable the Red Hat Virtualization Host 8 repository to allow later updates to the Red Hat Virtualization Host:

# subscription-manager repos --enable=rhvh-4-for-rhel-8-x86_64-rpms
Registering RHVH with Red Hat Satellite 6
- Log in to the Cockpit web interface at https://HostFQDNorIP:9090.
- Click Terminal.
- Register RHVH with Red Hat Satellite 6:
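  A minimal sketch of a typical Satellite 6 registration follows; the Satellite FQDN, organization, and activation key are placeholder assumptions to replace with your own values, and the repository ID matches the one used in the Content Delivery Network procedure above:

  # rpm -Uvh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm
  # subscription-manager register --org="example_org" --activationkey="rhvh_key"
  # subscription-manager repos --disable='*' --enable=rhvh-4-for-rhel-8-x86_64-rpms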
You can also configure virtual machine subscriptions in Red Hat Satellite using virt-who. See Using virt-who to manage host-based subscriptions.
6.1.3. Advanced Installation
6.1.3.1. Custom Partitioning
Custom partitioning on Red Hat Virtualization Host (RHVH) is not recommended. Use the Automatically configure partitioning option in the Installation Destination window.
If your installation requires custom partitioning, select the I will configure partitioning option during the installation, and note that the following restrictions apply:
- Ensure the default LVM Thin Provisioning option is selected in the Manual Partitioning window.
- The following directories are required and must be on thin provisioned logical volumes:
  - root (/)
  - /home
  - /tmp
  - /var
  - /var/crash
  - /var/log
  - /var/log/audit
  Important: Do not create a separate partition for /usr. Doing so will cause the installation to fail. /usr must be on a logical volume that is able to change versions along with RHVH, and therefore should be left on root (/).
  For information about the required storage sizes for each partition, see Storage Requirements.
- The /boot directory should be defined as a standard partition.
- The /var directory must be on a separate volume or disk.
- Only XFS or Ext4 file systems are supported.
Configuring Manual Partitioning in a Kickstart File
The following example demonstrates how to configure manual partitioning in a Kickstart file.
If you use logvol --thinpool --grow, you must also include volgroup --reserved-space or volgroup --reserved-percent to reserve space in the volume group for the thin pool to grow.
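A minimal sketch of such a layout follows. The volume group name (HostVG), thin pool name (HostPool), and the sizes are illustrative assumptions, not prescribed values; the volgroup --reserved-percent option reserves room in the volume group because the thin pool uses --grow:

# Illustrative manual partitioning for RHVH; adjust sizes to the Storage Requirements
clearpart --all
part /boot --fstype=xfs --size=1024
part pv.01 --size=42000 --grow
volgroup HostVG pv.01 --reserved-percent=20
logvol swap --vgname=HostVG --name=swap --fstype=swap --recommended
logvol none --vgname=HostVG --name=HostPool --thinpool --size=40000 --grow
logvol / --vgname=HostVG --name=root --thin --poolname=HostPool --fstype=ext4 --size=6000 --grow
logvol /home --vgname=HostVG --name=home --thin --poolname=HostPool --fstype=ext4 --size=1024
logvol /tmp --vgname=HostVG --name=tmp --thin --poolname=HostPool --fstype=ext4 --size=1024
logvol /var --vgname=HostVG --name=var --thin --poolname=HostPool --fstype=ext4 --size=15360
logvol /var/crash --vgname=HostVG --name=var_crash --thin --poolname=HostPool --fstype=ext4 --size=10240
logvol /var/log --vgname=HostVG --name=var_log --thin --poolname=HostPool --fstype=ext4 --size=8192
logvol /var/log/audit --vgname=HostVG --name=var_audit --thin --poolname=HostPool --fstype=ext4 --size=2048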
6.1.3.2. Installing a DUD driver on a host without installer support
There are times when installing Red Hat Virtualization Host (RHVH) requires a Driver Update Disk (DUD), such as when using a hardware RAID device that is not supported by the default configuration of RHVH. In contrast with Red Hat Enterprise Linux hosts, RHVH does not fully support using a DUD. Consequently, the host fails to boot normally after installation because it does not see the RAID device; instead, it boots into emergency mode.
Example output:
Warning: /dev/test/rhvh-4.4-20210202.0+1 does not exist
Warning: /dev/test/swap does not exist
Entering emergency mode. Exit the shell to continue.
In such a case you can manually add the drivers before finishing the installation.
Prerequisites
- A machine onto which you are installing RHVH.
- A DUD.
- If you are using a USB drive for the DUD and RHVH, you must have at least two available USB ports.
Procedure
- Load the DUD on the host machine.
- Install RHVH. See Installing Red Hat Virtualization Hosts in Installing Red Hat Virtualization as a self-hosted engine using the command line.
  Important: When installation completes, do not reboot the system.
  Tip: If you want to access the DUD using SSH, do the following:
  - Add the string inst.sshd to the kernel command line:

    <kernel_command_line> inst.sshd

  - Enable networking during the installation.
- Enter console mode by pressing Ctrl + Alt + F3. Alternatively, you can connect to it using SSH.
- Mount the DUD:

  # mkdir /mnt/dud
  # mount -r /dev/<dud_device> /mnt/dud

- Copy the RPM file inside the DUD to the target machine’s disk:
  # cp /mnt/dud/rpms/<path>/<rpm_file>.rpm /mnt/sysroot/root/

  For example:

  # cp /mnt/dud/rpms/x86_64/kmod-3w-9xxx-2.26.02.014-5.el8_3.elrepo.x86_64.rpm /mnt/sysroot/root/

- Change the root directory to /mnt/sysroot:

  # chroot /mnt/sysroot

- Back up the current initrd images. For example:

  # cp -p /boot/initramfs-4.18.0-240.15.1.el8_3.x86_64.img /boot/initramfs-4.18.0-240.15.1.el8_3.x86_64.img.bck1
  # cp -p /boot/rhvh-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img /boot/rhvh-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img.bck1

- Install the RPM file for the driver from the copy you made earlier.
  For example:

  # dnf install /root/kmod-3w-9xxx-2.26.02.014-5.el8_3.elrepo.x86_64.rpm

  Note: This package is not visible on the system after you reboot into the installed environment, so if you need it, for example, to rebuild the initramfs, you need to install that package once again, after which the package remains.
  If you update the host using dnf, the driver update persists, so you do not need to repeat this process.
  Tip: If you do not have an internet connection, use the rpm command instead of dnf:

  # rpm -ivh /root/kmod-3w-9xxx-2.26.02.014-5.el8_3.elrepo.x86_64.rpm

- Create a new image, forcefully adding the driver:
  # dracut --force --add-drivers <module_name> --kver <kernel_version>

  For example:

  # dracut --force --add-drivers 3w-9xxx --kver 4.18.0-240.15.1.el8_3.x86_64

- Check the results. The new image should be larger and include the driver. For example, compare the sizes of the original, backed-up image file and the new image file.
  In this example, the new image file is 88739013 bytes, larger than the original 88717417 bytes:

  # ls -ltr /boot/initramfs-4.18.0-240.15.1.el8_3.x86_64.img*
  -rw-------. 1 root root 88717417 Jun  2 14:29 /boot/initramfs-4.18.0-240.15.1.el8_3.x86_64.img.bck1
  -rw-------. 1 root root 88739013 Jun  2 17:47 /boot/initramfs-4.18.0-240.15.1.el8_3.x86_64.img

  The new drivers should be part of the image file. For example, the 3w-9xxx module should be included:

  # lsinitrd /boot/initramfs-4.18.0-240.15.1.el8_3.x86_64.img | grep 3w-9xxx
  drwxr-xr-x   2 root  root      0 Feb 22 15:57 usr/lib/modules/4.18.0-240.15.1.el8_3.x86_64/weak-updates/3w-9xxx
  lrwxrwxrwx   1 root  root     55 Feb 22 15:57 usr/lib/modules/4.18.0-240.15.1.el8_3.x86_64/weak-updates/3w-9xxx/3w-9xxx.ko -> ../../../4.18.0-240.el8.x86_64/extra/3w-9xxx/3w-9xxx.ko
  drwxr-xr-x   2 root  root      0 Feb 22 15:57 usr/lib/modules/4.18.0-240.el8.x86_64/extra/3w-9xxx
  -rw-r--r--   1 root  root  80121 Nov 10  2020 usr/lib/modules/4.18.0-240.el8.x86_64/extra/3w-9xxx/3w-9xxx.ko

- Copy the image to the directory under /boot that contains the kernel to be used in the layer being installed, for example:

  # cp -p /boot/initramfs-4.18.0-240.15.1.el8_3.x86_64.img /boot/rhvh-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img

- Exit chroot.
- Exit the shell.
- If you used Ctrl + Alt + F3 to access a virtual terminal, then move back to the installer by pressing Ctrl + Alt + F<n>, usually F1 or F5.
- At the installer screen, reboot.
Verification
The machine should reboot successfully.
6.1.3.3. Automating Red Hat Virtualization Host deployment
You can install Red Hat Virtualization Host (RHVH) without a physical media device by booting from a PXE server over the network with a Kickstart file that contains the answers to the installation questions.
When installing or reinstalling the host’s operating system, Red Hat strongly recommends that you first detach any existing non-OS storage attached to the host, to avoid accidental initialization of these disks and potential data loss.
General instructions for installing from a PXE server with a Kickstart file are available in the Red Hat Enterprise Linux Installation Guide, as RHVH is installed in much the same way as Red Hat Enterprise Linux. RHVH-specific instructions, with examples for deploying RHVH with Red Hat Satellite, are described below.
The automated RHVH deployment has three stages, described in the following sections: preparing the installation environment, configuring the PXE server and the boot loader, and creating and running a Kickstart file.
6.1.3.3.1. Preparing the installation environment
- Go to the Get Started with Red Hat Virtualization page on the Red Hat Customer Portal and log in.
- Click Download Latest to access the product download page.
- Choose the appropriate Hypervisor Image for RHV from the list and click Download Now.
- Make the RHVH ISO image available over the network. See Installation Source on a Network in the Red Hat Enterprise Linux Installation Guide.
- Extract the squashfs.img hypervisor image file from the RHVH ISO:

  # mount -o loop /path/to/RHVH-ISO /mnt/rhvh
  # cp /mnt/rhvh/Packages/redhat-virtualization-host-image-update* /tmp
  # cd /tmp
  # rpm2cpio redhat-virtualization-host-image-update* | cpio -idmv

  Note: This squashfs.img file, located in the /tmp/usr/share/redhat-virtualization-host/image/ directory, is called redhat-virtualization-host-version_number_version.squashfs.img. It contains the hypervisor image for installation on the physical machine. It should not be confused with the /LiveOS/squashfs.img file, which is used by the Anaconda inst.stage2 option.
6.1.3.3.2. Configuring the PXE server and the boot loader
- Configure the PXE server. See Preparing for a Network Installation in the Red Hat Enterprise Linux Installation Guide.
- Copy the RHVH boot images to the /tftpboot directory:

  # cp /mnt/rhvh/images/pxeboot/{vmlinuz,initrd.img} /var/lib/tftpboot/pxelinux/

- Create a rhvh label specifying the RHVH boot images in the boot loader configuration:

  LABEL rhvh
  MENU LABEL Install Red Hat Virtualization Host
  KERNEL /var/lib/tftpboot/pxelinux/vmlinuz
  APPEND initrd=/var/lib/tftpboot/pxelinux/initrd.img inst.stage2=URL/to/RHVH-ISO

  RHVH boot loader configuration example for Red Hat Satellite: if you are using information from Red Hat Satellite to provision the host, you must create a global or host group level parameter called rhvh_image and populate it with the directory URL where the ISO is mounted or extracted.

- Make the content of the RHVH ISO locally available and export it to the network, for example, using an HTTPD server:

  # cp -a /mnt/rhvh/ /var/www/html/rhvh-install
  # curl URL/to/RHVH-ISO/rhvh-install
6.1.3.3.3. Creating and running a Kickstart file
- Create a Kickstart file and make it available over the network. See Kickstart Installations in the Red Hat Enterprise Linux Installation Guide.
Ensure that the Kickstart file meets the following RHV-specific requirements:
- The %packages section is not required for RHVH. Instead, use the liveimg option and specify the redhat-virtualization-host-version_number_version.squashfs.img file from the RHVH ISO image:

  liveimg --url=example.com/tmp/usr/share/redhat-virtualization-host/image/redhat-virtualization-host-version_number_version.squashfs.img

- Autopartitioning is highly recommended, but use caution: ensure that the local disk is detected first, include the ignoredisk command, and specify the local disk to ignore, such as sda. To ensure that a particular drive is used, Red Hat recommends using ignoredisk --only-use=/dev/disk/<path> or ignoredisk --only-use=/dev/disk/<ID>:

  autopart --type=thinp
  ignoredisk --only-use=sda
  ignoredisk --only-use=/dev/disk/<path>
  ignoredisk --only-use=/dev/disk/<ID>

  Note: Autopartitioning requires thin provisioning.
  The --no-home option does not work in RHVH because /home is a required directory.
- If your installation requires manual partitioning, see Custom Partitioning for a list of limitations that apply to partitions and an example of manual partitioning in a Kickstart file.
- A %post section that calls the nodectl init command is required:

  %post
  nodectl init
  %end

  Note: Ensure that the nodectl init command is at the very end of the %post section but before the reboot code, if any.

Kickstart example for deploying RHVH on its own
This Kickstart example shows you how to deploy RHVH. You can include additional commands and options as required.
Warning: This example assumes that all disks are empty and can be initialized. If you have attached disks with data, either remove them or add them to the ignoredisk property.
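A minimal sketch of such a Kickstart file follows, combining the directives described above (liveimg, thin-provisioned autopartitioning, and the mandatory %post section). The liveimg URL, root password, and time zone are illustrative assumptions:

# Illustrative standalone RHVH Kickstart; replace the URL and credentials with your own values
liveimg --url=http://example.com/tmp/usr/share/redhat-virtualization-host/image/redhat-virtualization-host-version_number_version.squashfs.img
clearpart --all
autopart --type=thinp
rootpw --plaintext ovirt
timezone --utc America/Phoenix
zerombr
text

reboot

%post --erroronfail
nodectl init
%end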
Kickstart example for deploying RHVH with registration and network configuration from Satellite
This Kickstart example uses information from Red Hat Satellite to configure the host network and register the host to the Satellite server. You must create a global or host group level parameter called rhvh_image and populate it with the directory URL to the squashfs.img file. ntp_server1 is also a global or host group level variable.
Warning: This example assumes that all disks are empty and can be initialized. If you have attached disks with data, either remove them or add them to the ignoredisk property.
- Add the Kickstart file location to the boot loader configuration file on the PXE server:

  APPEND initrd=/var/tftpboot/pxelinux/initrd.img inst.stage2=URL/to/RHVH-ISO inst.ks=URL/to/RHVH-ks.cfg

- Install RHVH following the instructions in Booting from the Network Using PXE in the Red Hat Enterprise Linux Installation Guide.
6.2. Red Hat Enterprise Linux hosts
6.2.1. Installing Red Hat Enterprise Linux hosts
A Red Hat Enterprise Linux host is based on a standard basic installation of Red Hat Enterprise Linux 8 on a physical server, with the Red Hat Enterprise Linux Server and Red Hat Virtualization subscriptions attached.
For detailed installation instructions, see Performing a standard RHEL installation.
The host must meet the minimum host requirements.
When installing or reinstalling the host’s operating system, Red Hat strongly recommends that you first detach any existing non-OS storage attached to the host, to avoid accidental initialization of these disks and potential data loss.
Virtualization must be enabled in your host’s BIOS settings. For information on changing your host’s BIOS settings, refer to your host’s hardware documentation.
Do not install third-party watchdogs on Red Hat Enterprise Linux hosts. They can interfere with the watchdog daemon provided by VDSM.
6.2.2. Enabling the Red Hat Enterprise Linux host Repositories
To use a Red Hat Enterprise Linux machine as a host, you must register the system with the Content Delivery Network, attach the Red Hat Enterprise Linux Server and Red Hat Virtualization subscriptions, and enable the host repositories.
Procedure
- Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:

  # subscription-manager register

- Find the Red Hat Enterprise Linux Server and Red Hat Virtualization subscription pools and record the pool IDs:

  # subscription-manager list --available

- Use the pool IDs to attach the subscriptions to the system:

  # subscription-manager attach --pool=poolid

  Note: To view currently attached subscriptions:

  # subscription-manager list --consumed

  To list all enabled repositories:

  # dnf repolist

- Configure the repositories:
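  The command for this step typically takes the following form on a RHEL 8 host; the exact repository IDs depend on the RHV 4.4 minor release you are installing, so treat this list as an illustrative assumption rather than an authoritative set:

  # subscription-manager repos \
      --disable='*' \
      --enable=rhel-8-for-x86_64-baseos-eus-rpms \
      --enable=rhel-8-for-x86_64-appstream-eus-rpms \
      --enable=rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms \
      --enable=fast-datapath-for-rhel-8-x86_64-rpms \
      --enable=advanced-virt-for-rhel-8-x86_64-rpms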
- Set the RHEL version to 8.6:

  # subscription-manager release --set=8.6

- Ensure that all packages currently installed are up to date:

  # dnf upgrade --nobest

- Reboot the machine.
Note: If necessary, you can prevent kernel modules from loading automatically.
6.2.3. Installing Cockpit on Red Hat Enterprise Linux hosts
You can install Cockpit for monitoring the host’s resources and performing administrative tasks.
Procedure
- Install the dashboard packages:

  # dnf install cockpit-ovirt-dashboard

- Enable and start the cockpit.socket service:

  # systemctl enable cockpit.socket
  # systemctl start cockpit.socket

- Check if Cockpit is an active service in the firewall:

  # firewall-cmd --list-services

  You should see cockpit listed. If it is not, enter the following with root permissions to add cockpit as a service to your firewall:

  # firewall-cmd --permanent --add-service=cockpit

  The --permanent option keeps the cockpit service active after rebooting.
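  If cockpit was not already listed, reloading the firewall applies the new permanent rule to the running configuration; this extra step is a sketch assuming the default firewalld setup:

  # firewall-cmd --reload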
You can log in to the Cockpit web interface at https://HostFQDNorIP:9090.
6.3. Recommended Practices for Configuring Host Networks
Always use the RHV Manager to modify the network configuration of hosts in your clusters. Otherwise, you might create an unsupported configuration. For details, see Network Manager Stateful Configuration (nmstate).
If your network environment is complex, you may need to configure a host network manually before adding the host to the Red Hat Virtualization Manager.
Consider the following practices for configuring a host network:
- Configure the network with Cockpit. Alternatively, you can use nmtui or nmcli.
- If a network is not required for a self-hosted engine deployment or for adding a host to the Manager, configure the network in the Administration Portal after adding the host to the Manager. See Creating a New Logical Network in a Data Center or Cluster.
- Use the following naming conventions:
  - VLAN devices: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
  - VLAN interfaces: physical_device.VLAN_ID (for example, eth0.23, eth1.128, enp3s0.50)
  - Bond interfaces: bondnumber (for example, bond0, bond1)
  - VLANs on bond interfaces: bondnumber.VLAN_ID (for example, bond0.50, bond1.128)
- Use network bonding. Network teaming is not supported in Red Hat Virtualization and will cause errors if the host is used to deploy a self-hosted engine or added to the Manager.
- Use recommended bonding modes:
  - If the ovirtmgmt network is not used by virtual machines, the network may use any supported bonding mode.
  - If the ovirtmgmt network is used by virtual machines, see Which bonding modes work when used with a bridge that virtual machine guests or containers connect to?.
  - Red Hat Virtualization’s default bonding mode is (Mode 4) Dynamic Link Aggregation. If your switch does not support Link Aggregation Control Protocol (LACP), use (Mode 1) Active-Backup. See Bonding Modes for details.
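  As a quick verification sketch (assuming a bond named bond0), you can confirm which bonding mode is active; the output line shown corresponds to (Mode 4) Dynamic Link Aggregation:

  # grep -i "bonding mode" /proc/net/bonding/bond0
  Bonding Mode: IEEE 802.3ad Dynamic link aggregation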
- Configure a VLAN on a physical NIC as in the following example (although nmcli is used, you can use any tool):

  # nmcli connection add type vlan con-name vlan50 ifname eth0.50 dev eth0 id 50
  # nmcli con mod vlan50 +ipv4.dns 8.8.8.8 +ipv4.addresses 123.123.0.1/24 +ipv4.gateway 123.123.0.254

- Configure a VLAN on a bond as in the following example (although nmcli is used, you can use any tool):

  # nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,miimon=100" ipv4.method disabled ipv6.method ignore
  # nmcli connection add type ethernet con-name eth0 ifname eth0 master bond0 slave-type bond
  # nmcli connection add type ethernet con-name eth1 ifname eth1 master bond0 slave-type bond
  # nmcli connection add type vlan con-name vlan50 ifname bond0.50 dev bond0 id 50
  # nmcli con mod vlan50 +ipv4.dns 8.8.8.8 +ipv4.addresses 123.123.0.1/24 +ipv4.gateway 123.123.0.254
- Do not disable firewalld.
- Customize the firewall rules in the Administration Portal after adding the host to the Manager. See Configuring Host Firewall Rules.
6.4. Adding Self-Hosted Engine Nodes to the Red Hat Virtualization Manager
Add self-hosted engine nodes in the same way as a standard host, with an additional step to deploy the host as a self-hosted engine node. The shared storage domain is automatically detected and the node can be used as a failover host to host the Manager virtual machine when required. You can also attach standard hosts to a self-hosted engine environment, but they cannot host the Manager virtual machine. Have at least two self-hosted engine nodes to ensure the Manager virtual machine is highly available. You can also add additional hosts using the REST API. See Hosts in the REST API Guide.
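For the REST API route, the request is roughly the following sketch; the Manager FQDN, credentials, host details, and the deploy_hosted_engine query parameter are assumptions to verify against Hosts in the REST API Guide, and ca.pem is the Manager CA certificate downloaded in advance:

# curl --cacert ca.pem \
    --user admin@internal:password \
    --request POST \
    --header "Content-Type: application/xml" \
    --header "Accept: application/xml" \
    --data '<host><name>host2</name><address>host2.example.com</address><root_password>secret</root_password><cluster><name>Default</name></cluster></host>' \
    "https://manager.example.com/ovirt-engine/api/hosts?deploy_hosted_engine=true"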
Prerequisites
- All self-hosted engine nodes must be in the same cluster.
- If you are reusing a self-hosted engine node, remove its existing self-hosted engine configuration. See Removing a Host from a Self-Hosted Engine Environment.
Procedure
- In the Administration Portal, click Compute → Hosts.
- Click New.
For information on additional host settings, see Explanation of Settings and Controls in the New Host and Edit Host Windows in the Administration Guide.
- Use the drop-down list to select the Data Center and Host Cluster for the new host.
- Enter the Name and the Address of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field.
- Select an authentication method to use for the Manager to access the host.
- Enter the root user’s password to use password authentication.
- Alternatively, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication.
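  A minimal way to add the key on the host is sketched below; the key value itself comes from the SSH PublicKey field, and the placeholder must be replaced with that value:

  # mkdir -p /root/.ssh && chmod 700 /root/.ssh
  # echo "<public_key_from_manager>" >> /root/.ssh/authorized_keys
  # chmod 600 /root/.ssh/authorized_keys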
- Optionally, configure power management, if the host has a supported power management card. For information on power management configuration, see Host Power Management Settings Explained in the Administration Guide.
- Click the Hosted Engine tab.
- Select Deploy.
- Click OK.
6.5. Adding Standard Hosts to the Red Hat Virtualization Manager
Always use the RHV Manager to modify the network configuration of hosts in your clusters. Otherwise, you might create an unsupported configuration. For details, see Network Manager Stateful Configuration (nmstate).
Adding a host to your Red Hat Virtualization environment can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, and creation of a bridge.
Procedure
- From the Administration Portal, click Compute → Hosts.
- Click New.
- Use the drop-down list to select the Data Center and Host Cluster for the new host.
- Enter the Name and the Address of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field.
- Select an authentication method to use for the Manager to access the host.
- Enter the root user’s password to use password authentication.
- Alternatively, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication.
- Optionally, click the Advanced Parameters button to change the following advanced host settings:
- Disable automatic firewall configuration.
- Add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically.
- Optionally, configure power management, if the host has a supported power management card. For information on power management configuration, see Host Power Management Settings Explained in the Administration Guide.
- Click OK.
The new host displays in the list of hosts with a status of Installing, and you can view the progress of the installation in the Events section of the Notification Drawer. After a brief delay the host status changes to Up.