Chapter 5. Installing the Red Hat Virtualization Manager

5.1. Manually installing the RHV-M Appliance

When you deploy the self-hosted engine, the following sequence of events takes place:

  1. The installer installs the RHV-M Appliance to the deployment host.
  2. The appliance installs the Manager virtual machine.
  3. The appliance installs the Manager on the Manager virtual machine.

However, you can install the appliance manually on the deployment host beforehand if you need to. Because the appliance is large, network connectivity issues might cause the installation to take a long time or to fail.

Procedure

  1. On Red Hat Enterprise Linux hosts:

    1. Reset the virt module:

      # dnf module reset virt
      Note

      If this module is already enabled in the Advanced Virtualization stream, this step is not necessary, but it has no negative impact.

      You can see the value of the stream by entering:

      # dnf module list virt
    2. Enable the virt module in the Advanced Virtualization stream with the following command:
  • For RHV 4.4.2:

    # dnf module enable virt:8.2
  • For RHV 4.4.3 to 4.4.5:

    # dnf module enable virt:8.3
  • For RHV 4.4.6 to 4.4.10:

    # dnf module enable virt:av
  • For RHV 4.4.11 and later:

    # dnf module enable virt:rhel
    Note

    Starting with RHEL 8.6, the Advanced Virtualization packages use the standard virt:rhel module. For RHEL 8.4 and 8.5, only one Advanced Virtualization stream is used, virt:av.

    3. Synchronize installed packages to update them to the latest available versions:

      # dnf distro-sync --nobest
    4. Install the RHV-M Appliance to the host manually:

      # dnf install rhvm-appliance

Now, when you deploy the self-hosted engine, the installer detects that the appliance is already installed.
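
To confirm that the appliance package is present before you deploy, you can query the RPM database (an optional check, not part of the documented procedure):

  # rpm -qi rhvm-appliance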

5.2. Enabling and configuring the firewall

firewalld must be installed and running before you run the self-hosted engine deployment script. You must also have an active zone with an interface configured.

Prerequisites

  • firewalld is installed. The hosted-engine-setup package requires firewalld, so no additional steps are needed.

Procedure

  1. Start firewalld:

    # systemctl unmask firewalld
    # systemctl start firewalld

    To ensure firewalld starts automatically at system start, enter the following command as root:

    # systemctl enable firewalld
  2. Ensure that firewalld is running:

    # systemctl status firewalld
  3. Ensure that your management interface is in a firewall zone:

    # firewall-cmd --get-active-zones
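
    If the management interface is not listed in any active zone, you can assign it to one before deploying. A sketch, assuming the interface is ens1 and the target zone is public:

    # firewall-cmd --permanent --zone=public --change-interface=ens1
    # firewall-cmd --reload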

Now you are ready to deploy the self-hosted engine.

5.3. Deploying the self-hosted engine using the command line

You can deploy a self-hosted engine from the command line. After installing the setup package, you run the command hosted-engine --deploy, and a script collects the details of your environment and uses them to configure the host and the Manager.

You can customize the Manager virtual machine during deployment, either manually, by pausing the deployment, or using automation.

  • Setting the variable he_pause_host to true pauses deployment after installing the Manager and adding the deployment host to the Manager.
  • Setting the variable he_pause_before_engine_setup to true pauses the deployment before installing the Manager and before restoring the Manager when using he_restore_from_file.

    Note

    When the he_pause_host or he_pause_before_engine_setup variable is set to true, a lock file is created at /tmp with the suffix _he_setup_lock on the deployment host. You can then manually customize the virtual machine as needed. The deployment continues after you delete the lock file, or after 24 hours, whichever comes first.

  • Adding an Ansible playbook to any of the following directories on the deployment host runs the playbook automatically during deployment. Place the playbook in one of these directories under /usr/share/ansible/collections/ansible_collections/redhat/rhv/roles/hosted_engine_setup/hooks/, as shown in the sketch after this list:

    • enginevm_before_engine_setup
    • enginevm_after_engine_setup
    • after_add_host
    • after_setup
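
    For example, a minimal hook that updates the appliance packages before engine-setup runs (see the prerequisite below). This is a sketch: the file name is arbitrary, and recent versions of the hosted_engine_setup role include hook files as Ansible task lists, so check the role documentation for the exact format your version expects:

      ---
      # enginevm_before_engine_setup/update_packages.yml
      # Update all installed packages on the Manager virtual machine
      - name: Update all installed packages
        ansible.builtin.dnf:
          name: "*"
          state: latest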

Prerequisites

  • Upgrade the appliance content to the latest product version before performing engine-setup.

    • To do this manually, pause the deployment using he_pause_before_engine_setup and perform a dnf update.
    • To do this automatically, apply the enginevm_before_engine_setup hook.
  • FQDNs prepared for your Manager and the host. Forward and reverse lookup records must both be set in the DNS.
  • When using a block storage domain, either FCP or iSCSI, a single target LUN is the only supported setup for a self-hosted engine.
  • Optional: If you want to customize the Manager virtual machine during deployment using automation, an Ansible playbook must be added. See Customizing the Engine virtual machine using automation during deployment.
  • The self-hosted engine setup script requires SSH public key access using 2048-bit RSA keys from the engine virtual machine to the root account of its bare-metal host. In /etc/ssh/sshd_config, these values must be set as follows:

    • PubkeyAcceptedKeyTypes must allow 2048-bit RSA keys or stronger.

      By default, this setting uses system-wide crypto policies. For more information, see the manual page crypto-policies(7).

      Note

      RHVH hosts that are registered with the Manager in versions earlier than 4.4.5.5 require RSA 2048 for backward compatibility until all the keys are migrated.

      RHVH hosts registered for 4.4.5.5 and later use the strongest algorithm that is supported by both the Manager and RHVH. The PubkeyAcceptedKeyTypes setting helps determine which algorithm is used.

    • PermitRootLogin is set to without-password or yes
    • PubkeyAuthentication is set to yes
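
      To confirm the effective values before starting the deployment, you can dump the running sshd configuration (an optional check; sshd -T prints option names in lowercase):

      # sshd -T | grep -iE 'pubkeyacceptedkeytypes|permitrootlogin|pubkeyauthentication'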

Procedure

  1. Install the deployment tool:

    # dnf install ovirt-hosted-engine-setup
  2. Use the tmux terminal multiplexer to run the script, to avoid losing the session if the network or terminal connection is disrupted.

    Install and run tmux:

    # dnf -y install tmux
    # tmux
  3. Start the deployment script:

    # hosted-engine --deploy

    Alternatively, to pause the deployment after adding the deployment host to the Manager, use the command line option --ansible-extra-vars=he_pause_host=true:

    # hosted-engine --deploy --ansible-extra-vars=he_pause_host=true
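
    Similarly, to pause the deployment before engine-setup runs on the Manager virtual machine (for example, to perform the package update described in the prerequisites), pass he_pause_before_engine_setup the same way:

    # hosted-engine --deploy --ansible-extra-vars=he_pause_before_engine_setup=true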
    Note

    To abort the deployment at any time, press Ctrl+D. If the session times out or the connection is disrupted, run tmux attach to recover the deployment session.

  4. When prompted, enter Yes to begin the deployment:

    Continuing will configure this host for serving as hypervisor and will create a local VM with a running engine.
    The locally running engine will be used to configure a new storage domain and create a VM there.
    At the end the disk of the local VM will be moved to the shared storage.
    Are you sure you want to continue? (Yes, No)[Yes]:
  5. Configure the network. Check that the gateway shown is correct and press Enter. Enter a pingable address on the same subnet so the script can check the host’s connectivity.

    Please indicate a pingable gateway IP address [X.X.X.X]:
  6. The script detects possible NICs to use as a management bridge for the environment. Enter one of them or press Enter to accept the default.

    Please indicate a nic to set ovirtmgmt bridge on: (ens1, ens0) [ens1]:
  7. Specify how to check network connectivity. The default is dns.

    Please specify which way the network connectivity should be checked (ping, dns, tcp, none) [dns]:
    ping: Attempts to ping the gateway.
    dns: Checks the connection to the DNS server.
    tcp: Creates a TCP connection to a host and port combination. You need to specify a destination IP address and port. Once the connection is successfully created, the network is considered to be alive. Ensure that the given host can accept incoming TCP connections on the given port.
    none: The network is always considered connected.
  8. Enter a name for the data center in which to deploy the host for the self-hosted engine. The default name is Default.

    Please enter the name of the data center where you want to deploy this hosted-engine host.
    Data center [Default]:
  9. Enter a name for the cluster in which to deploy the host for the self-hosted engine. The default name is Default.

    Please enter the name of the cluster where you want to deploy this hosted-engine host.
    Cluster [Default]:
  10. To deploy with a custom engine appliance image, specify the path to the OVA archive. Otherwise, leave this field empty to use the RHV-M Appliance.

    If you want to deploy with a custom engine appliance image, please specify the path to the OVA archive you would like to use.
     Entering no value will use the image from the rhvm-appliance rpm, installing it if needed.
     Appliance image path []:
  11. Enter the CPU and memory configuration for the Manager virtual machine:

    Please specify the number of virtual CPUs for the VM. The default is the appliance OVF value [4]:
    Please specify the memory size of the VM in MB. The default is the maximum available [6824]:
  12. Specify the FQDN for the Manager virtual machine, such as manager.example.com:

    Please provide the FQDN you would like to use for the engine.
    Note: This will be the FQDN of the engine VM you are now going to launch,
    it should not point to the base host or to any other existing machine.
    Engine VM FQDN []:
  13. Specify the domain of the Manager virtual machine. For example, if the FQDN is manager.example.com, then enter example.com.

    Please provide the domain name you would like to use for the engine appliance.
    Engine VM domain: [example.com]
  14. Create the root password for the Manager, and reenter it to confirm:

    Enter root password that will be used for the engine appliance:
    Confirm appliance root password:
  15. Optional: Enter an SSH public key to enable you to log in to the Manager virtual machine as the root user without entering a password, and specify whether to enable SSH access for the root user:

    You may provide an SSH public key, that will be added by the deployment script to the authorized_keys file of the root user in the engine appliance.
    This should allow you passwordless login to the engine machine after deployment.
    If you provide no key, authorized_keys will not be touched.
    SSH public key []:
    
    Do you want to enable ssh access for the root user (yes, no, without-password) [yes]:
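
    If you do not already have a key pair, you can generate one on your workstation and paste the public key at this prompt. A sketch using a 2048-bit RSA key (the file path and comment are examples):

    $ ssh-keygen -t rsa -b 2048 -C "rhvm-admin" -f ~/.ssh/id_rsa_rhvm
    $ cat ~/.ssh/id_rsa_rhvm.pub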
  16. Optional: Apply the DISA STIG security profile to the Manager virtual machine. The DISA STIG profile is the default OpenSCAP profile.

    Do you want to apply a default OpenSCAP security profile? (Yes, No) [No]:
  17. Enter a MAC address for the Manager virtual machine, or accept a randomly generated one. If you want to provide the Manager virtual machine with an IP address via DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script will not configure the DHCP server for you.

    You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:3d:34:47]:
  18. Enter the Manager virtual machine’s networking details:

    How should the engine VM network be configured (DHCP, Static)[DHCP]?

    If you specified Static, enter the IP address of the Manager virtual machine:

    Important
    • The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Manager virtual machine’s IP must be in the same subnet range (10.1.1.1-254/24).
    • For IPv6, Red Hat Virtualization supports only static addressing.
    Please enter the IP address to be used for the engine VM [x.x.x.x]:
    Please provide a comma-separated list (max 3) of IP addresses of domain name servers for the engine VM
    Engine VM DNS (leave it empty to skip):
  19. Specify whether to add entries for the Manager virtual machine and the base host to the virtual machine’s /etc/hosts file. You must ensure that the host names are resolvable.

    Add lines for the appliance itself and for this host to /etc/hosts on the engine VM?
    Note: ensuring that this host could resolve the engine VM hostname is still up to you.
    Add lines to /etc/hosts? (Yes, No)[Yes]:
  20. Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications. Alternatively, press Enter to accept the defaults:

    Please provide the name of the SMTP server through which we will send notifications [localhost]:
    Please provide the TCP port number of the SMTP server [25]:
    Please provide the email address from which notifications will be sent [root@localhost]:
    Please provide a comma-separated list of email addresses which will get notifications [root@localhost]:
  21. Create a password for the admin@internal user to access the Administration Portal and reenter it to confirm:

    Enter engine admin password:
    Confirm engine admin password:
  22. Specify the hostname of the deployment host:

    Please provide the hostname of this host on the management network [hostname.example.com]:

    The script creates the virtual machine. By default, the script first downloads and installs the RHV-M Appliance, which increases the installation time.

  23. Optional: If you set the variable he_pause_host to true, the deployment pauses after adding the deployment host to the Manager. You can now log in from the deployment host to the Manager virtual machine to customize it. You can log in with either the FQDN or the IP address of the Manager. For example, if the FQDN of the Manager is manager.example.com:

    $ ssh root@manager.example.com
    Tip

    In the installation log, the IP address is in local_vm_ip. The installation log is the most recent instance of /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-bootstrap_local_vm*.
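
    For example, to pull the address from the most recent bootstrap log (using the log path given above):

    # grep local_vm_ip /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-bootstrap_local_vm*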

    1. Customize the Manager virtual machine as needed.
    2. When you are done, log in to the Administration Portal using a browser with the Manager FQDN and make sure that the host’s state is Up.
    3. Delete the lock file and the deployment script automatically continues, configuring the Manager virtual machine.
  24. Select the type of storage to use:

    Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
    • For NFS, enter the version, full address and path to the storage, and any mount options:

      Please specify the nfs version you would like to use (auto, v3, v4, v4_1)[auto]:
      Please specify the full shared storage connection path to use (example: host:/path): storage.example.com:/hosted_engine/nfs
      If needed, specify additional mount options for the connection to the hosted-engine storage domain []:
    • For iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.

      Note

      To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. See Red Hat Enterprise Linux DM Multipath for details. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.

      Please specify the iSCSI portal IP address:
      Please specify the iSCSI portal port [3260]:
      Please specify the iSCSI discover user:
      Please specify the iSCSI discover password:
      Please specify the iSCSI portal login user:
      Please specify the iSCSI portal login password:
      
      The following targets have been found:
      	[1]	iqn.2017-10.com.redhat.example:he
      		TPGT: 1, portals:
      			192.168.1.xxx:3260
      			192.168.2.xxx:3260
      			192.168.3.xxx:3260
      
      Please select a target (1) [1]: 1
      
      The following luns have been found on the requested target:
        [1] 360003ff44dc75adcb5046390a16b4beb   199GiB  MSFT   Virtual HD
            status: free, paths: 1 active
      
      Please select the destination LUN (1) [1]:
    • For Gluster storage, enter the full address and path to the storage, and any mount options:

      Important

      Only replica 1 and replica 3 Gluster storage are supported. Ensure you configure the volume as follows:

      gluster volume set VOLUME_NAME group virt
      gluster volume set VOLUME_NAME performance.strict-o-direct on
      gluster volume set VOLUME_NAME network.remote-dio off
      gluster volume set VOLUME_NAME storage.owner-uid 36
      gluster volume set VOLUME_NAME storage.owner-gid 36
      gluster volume set VOLUME_NAME network.ping-timeout 30
      Please specify the full shared storage connection path to use (example: host:/path): storage.example.com:/hosted_engine/gluster_volume
      If needed, specify additional mount options for the connection to the hosted-engine storage domain []:
    • For Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide.

      The following luns have been found on the requested target:
      [1] 3514f0c5447600351   30GiB   XtremIO XtremApp
      		status: used, paths: 2 active
      
      [2] 3514f0c5447600352   30GiB   XtremIO XtremApp
      		status: used, paths: 2 active
      
      Please select the destination LUN (1, 2) [1]:
  25. Enter the disk size of the Manager virtual machine:

    Please specify the size of the VM disk in GB: [50]:

    When the deployment completes successfully, one data center, cluster, host, storage domain, and the Manager virtual machine are already running. You can log in to the Administration Portal to add any other resources.

  26. Optional: Install and configure Red Hat Single Sign-On so that you can add additional users to the environment. For more information, see Installing and Configuring Red Hat Single Sign-On in the Administration Guide.
  27. Optional: Deploy Grafana so you can monitor and display reports from your RHV environment. For more information, see Configuring Grafana in the Administration Guide.

The Manager virtual machine, the host running it, and the self-hosted engine storage domain are flagged with a gold crown in the Administration Portal.

Note

Both the Manager’s I/O scheduler and the hypervisor that hosts the Manager reorder I/O requests. This double reordering might delay I/O requests to the storage layer, impacting performance.

Depending on your data center, you might improve performance by changing the I/O scheduler to none. For more information, see Available disk schedulers in Monitoring and managing system status and performance for RHEL.
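
A quick way to inspect and switch the scheduler at runtime (the device name and example output are illustrative; the active scheduler is shown in brackets, and the change does not persist across reboots):

  # cat /sys/block/sda/queue/scheduler
  [mq-deadline] kyber bfq none
  # echo none > /sys/block/sda/queue/scheduler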

The next step is to enable the Red Hat Virtualization Manager repositories.

5.4. Enabling the Red Hat Virtualization Manager Repositories

Log in to the Manager machine, register it with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories.

Procedure

  1. Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:

    # subscription-manager register
    Note

    If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager.

  2. Find the Red Hat Virtualization Manager subscription pool and record the pool ID:

    # subscription-manager list --available
  3. Use the pool ID to attach the subscription to the system:

    # subscription-manager attach --pool=pool_id
    Note

    To view currently attached subscriptions:

    # subscription-manager list --consumed

    To list all enabled repositories:

    # dnf repolist
  4. Configure the repositories:

    # subscription-manager repos \
        --disable='*' \
        --enable=rhel-8-for-x86_64-baseos-eus-rpms \
        --enable=rhel-8-for-x86_64-appstream-eus-rpms \
        --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \
        --enable=fast-datapath-for-rhel-8-x86_64-rpms \
        --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \
        --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \
        --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms \
        --enable=rhel-8-for-x86_64-appstream-tus-rpms \
        --enable=rhel-8-for-x86_64-baseos-tus-rpms
  5. Set the RHEL version to 8.6:

    # subscription-manager release --set=8.6
  6. Enable the pki-deps module:

    # dnf module -y enable pki-deps
  7. Enable version 12 of the postgresql module:

    # dnf module -y enable postgresql:12
  8. Enable version 14 of the nodejs module:

    # dnf module -y enable nodejs:14
  9. Update the Self-Hosted Engine using the procedure Updating a Self-Hosted Engine in the Upgrade Guide.
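
Before updating, you can verify that the module streams enabled in the previous steps are active (enabled streams are flagged with [e] in the output):

  # dnf module list --enabled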

Additional resources

For information on modules and module streams, see Installing, managing, and removing user-space components.

Next, log in to the Administration Portal, where you can add hosts and storage to the environment.

5.5. Connecting to the Administration Portal

Access the Administration Portal using a web browser.

  1. In a web browser, navigate to https://manager-fqdn/ovirt-engine, replacing manager-fqdn with the FQDN that you provided during installation.

    Note

    You can access the Administration Portal using alternate host names or IP addresses. To do so, you need to add a configuration file under /etc/ovirt-engine/engine.conf.d/. For example:

    # vi /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf
    SSO_ALTERNATE_ENGINE_FQDNS="alias1.example.com alias2.example.com"

    The list of alternate host names needs to be separated by spaces. You can also add the IP address of the Manager to the list, but using IP addresses instead of DNS-resolvable host names is not recommended.
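
    For the change to take effect, restart the ovirt-engine service on the Manager virtual machine:

    # systemctl restart ovirt-engine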

  2. Click Administration Portal. An SSO login page displays. SSO login enables you to log in to the Administration and VM Portal at the same time.
  3. Enter your User Name and Password. If you are logging in for the first time, use the user name admin along with the password that you specified during installation.
  4. Select the Domain to authenticate against. If you are logging in using the internal admin user name, select the internal domain.
  5. Click Log In.
  6. You can view the Administration Portal in multiple languages. The default selection is chosen based on the locale settings of your web browser. If you want to view the Administration Portal in a language other than the default, select your preferred language from the drop-down list on the welcome page.

To log out of the Red Hat Virtualization Administration Portal, click your user name in the header bar and click Sign Out. You are logged out of all portals and the Manager welcome screen displays.
