
Install Red Hat Update Infrastructure


Red Hat Update Infrastructure 5

List of requirements, setting up nodes, configuring storage, and installing RHUI 5

Red Hat Customer Content Services

Abstract

This document lists the installation requirements and provides detailed instructions to help cloud providers install RHUI 5.

The RHUI 5 installer, while essentially maintaining the same Ansible playbooks as RHUI 4, looks different from the previous version of the installer.

  • It is launched as a container from any RHEL host capable of running containers.
  • It requires --target-host to deploy the RHUA image. Compare this to the previous behaviour of the installer, where it installed the RHUA on the machine running the installer itself.
  • It requires only one mandatory parameter, --target-host.
  • It requires some additional command line arguments to pass the user-supplied certificate files to the installer. For example, you can supply volume mounts using the -v podman option.
  • It has improved parameter default assignment logic.

Chapter 2. Red Hat Update Infrastructure install types

Standard install

In the standard mode, invoking the RHUI 5 installer initially deploys the RHUA container image onto the --target-host. In this mode of operation, --remote-fs-server is also required.

Maintenance or upgrade of an existing RHUI 5 installation

Once you have deployed the RHUA container image on the target host, you can invoke the installer with --rerun switch to change some of its settings (image version included). In this case, --remote-fs-server is not required, as it will be inferred from the configuration.
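
As a minimal sketch, a maintenance re-run might look like the following; the hostname is a placeholder, and the remaining settings are inferred from the existing configuration on the target host:

```shell
# Sketch of a maintenance re-run (hostname is a placeholder); --rerun makes the
# installer reuse the settings already stored on the target host.
podman run -it --rm -v ~/.ssh/id_rsa:/ssh-keyfile:Z \
  registry.redhat.io/rhui5/installer-rhel9 rhui-installer \
  --target-host rhua.example.com \
  --rerun
```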

In-place migration of a RHUI 5 installation

If the --migrate-from-rhui-4 installation flag is provided, the installer performs an in-place migration of the existing RHUI 4 RHUA installation on the --target-host, and stops the installation if it does not find RHUI 4. In this mode --remote-fs-server is not required, as it will be inferred from the existing RHUI 4 configuration files.

During installation, RHUI 4 services are shut down and the PostgreSQL database files are copied (thus doubling the space requirement for the database files) to a location reachable by the RHUI 5 container. The ownership of the Pulp content files, residing on the shared storage, is changed to match the UIDs/GIDs used by the RHUI 5 container.

Migration of a RHUI 4 installation to another machine

If --source-host is provided in addition to --migrate-from-rhui-4, the --source-host is checked for an existing RHUI 4 installation. If found, its configuration, together with the database files, is transferred to the --target-host, and the RHUI 5 RHUA container is deployed there. RHUI RHUA services on the --source-host are shut down prior to the migration, and the Pulp content files on the shared storage will have a different owner, but will be otherwise intact. The same filesystem share is then mounted on the --target-host.

Note

RHUI 5 moves to the latest version of PostgreSQL, ensuring the latest security updates. This requires existing RHUI 4 installations to be on the latest version, and to update their PostgreSQL to version 15, prior to migrating to RHUI 5.

It is worth noting that in this scenario the hostname of the RHUA is changed, and therefore the RHUI 5 configuration and the SSL certificate for Pulp’s Nginx are adjusted accordingly.

Migration can be targeted not only to a different system but also to a different remote file share. This is indicated by the --migration-fs-server parameter, which denotes the remote file share that will be mounted by the --target-host.

Note

The content of the file share that includes the Pulp artifacts, namely the pulp3, symlinks, and repo-notes directories, must be copied independently, before the migration process.
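
A minimal sketch of that independent copy, assuming both the old and the new file shares are mounted on a helper host at placeholder paths:

```shell
# Sketch: copy the Pulp artifact directories to the new file share before
# migrating (the /mnt/old_share and /mnt/new_share mount paths are placeholders).
rsync -a /mnt/old_share/pulp3 /mnt/old_share/symlinks /mnt/old_share/repo-notes \
  /mnt/new_share/
```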

Cloning an existing RHUI 5 installation

It is now possible to clone an existing RHUI 5 installation, with some limitations. The main limitation is that the Pulp content must be cloned beforehand, independent of the installation process. Once that is done, the installer can be invoked with --clone flag, which triggers the cloning process. The --clone flag requires both --source-host and --migration-fs-server to be provided, in addition to the standard --target-host argument which is required by default.
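
A sketch of a clone invocation, using only the flags described above; the hostnames and the NFS share are placeholders, and the Pulp content must already have been copied to the new file share:

```shell
# Sketch of cloning an existing RHUI 5 installation (hostnames and NFS share
# are placeholders); --clone requires --source-host and --migration-fs-server.
podman run -it --rm -v ~/.ssh/id_rsa:/ssh-keyfile:Z \
  registry.redhat.io/rhui5/installer-rhel9 rhui-installer \
  --clone \
  --source-host old-rhua.example.com \
  --migration-fs-server new-nfs.example.com:/export/rhui \
  --target-host new-rhua.example.com
```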

Common elements for both types of RHUI 4 migration

Per-artifact sync policies are no longer supported.

For example, the following configuration parameters are no longer valid:

  • rpm_sync_policy
  • debug_sync_policy
  • source_sync_policy

The parameter default_sync_policy is still valid. To support different sync policies depending on the artifact type, as well as to provide additional flexibility into selecting the sync policy based on the content in question, two new configuration parameters are available:

  • immediate_repoid_regex
  • on_demand_repoid_regex

Whenever a sync task is submitted, the repoid of the repository is checked against the regex in immediate_repoid_regex first. If it matches, a sync with the 'immediate' policy is requested. If not, a match is tested against on_demand_repoid_regex; a match there produces an on_demand sync task. If there is no match, the sync is performed with the policy pointed to by the default_sync_policy configuration parameter.
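
The selection logic can be sketched as a small shell function; the regex values and repository IDs below are illustrative assumptions, not shipped defaults:

```shell
# Sketch of the sync-policy selection: immediate_repoid_regex is checked first,
# then on_demand_repoid_regex, then the configured default applies.
# The regex values and repo IDs are illustrative assumptions, not defaults.
immediate_repoid_regex='^rhel-9-.*-rpms$'
on_demand_repoid_regex='^rhel-.*-debug-rpms$'
default_sync_policy='on_demand'

select_sync_policy() {
    local repoid="$1"
    if [[ "$repoid" =~ $immediate_repoid_regex ]]; then
        echo immediate
    elif [[ "$repoid" =~ $on_demand_repoid_regex ]]; then
        echo on_demand
    else
        echo "$default_sync_policy"
    fi
}

select_sync_policy rhel-9-baseos-rpms    # prints: immediate
```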

In both migration types, no CDS or HAPROXY information is migrated. It is the duty of the RHUI admin to add new CDS and HAPROXY nodes using the RHUI 5 RHUA (through either the TUI or the CLI). Further, the CDS and HAPROXY nodes of the existing RHUI 4 installation are left intact, with their services fully operational. Again, it is the duty of the RHUI admin to shut down those nodes once they are no longer needed. Until then, they still have access to the filesystem share with the Pulp content and can serve RHUI content that has been synced previously and symlinked. After migration, those legacy RHUI 4 CDS nodes will not be able to serve on-demand content that has not been fetched yet, because their configuration points to the RHUI 4 RHUA that has been shut down.

Chapter 3. Providing installation parameters

There are several ways to provide parameters pertaining to RHUI 5 installation. They are, in descending order of priority:

  • Parameters supplied on the command line take absolute precedence over any other parameter provision methods. However, not all installation parameters are supported this way, as we do not want to force users to create an unwieldy and counterintuitive installation command line.
  • Parameters can be provided through an answers file. This method can accommodate a larger set of installation parameters.
  • The installer checks for the existence of the required parameters, namely --target-host and --remote-fs-server, and exits if they are not provided.
  • If rhui-tools.conf already exists on the target host, its content is parsed and the values provided there are preserved unless a matching key is provided via command line or the answers file.
  • Some parameters have defaults that are hardcoded in the installer.

Shared storage management

The RHUI 5 installer supports NFS only; therefore, --remote-fs-type is no longer supported. In addition, providing the literal none as the --remote-fs-server argument skips the shared NFS storage setup completely. This can come in handy in situations where shared storage is managed at some other level or by another product such as OpenShift. It is worth noting that --remote-fs-mountpoint is still supported, but it refers to the filesystem layout on the host, not the container side. Basically, it determines where you want to mount the filesystem. Remember that the RHUI containers run in rootless mode, so any NFS filesystem mount needs to happen on the host.
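
Because the containers run rootless, the NFS mount has to be performed on the host before the RHUA container can use the share. A minimal sketch, with a placeholder server and mountpoint:

```shell
# Sketch: mount the NFS share on the host (server and mountpoint are
# placeholders); rootless containers cannot perform the mount themselves.
sudo mkdir -p /var/lib/rhui/remote_share
sudo mount -t nfs nfs.example.com:/export/rhui /var/lib/rhui/remote_share
```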

Chapter 4. RHUI 5 install procedure

Before you begin

For RHUI 5, only the container images will be published, and not the individual RPMs. There are separate images for:

  • installer
  • RHUA
  • CDS
  • HAPROXY

Providing local files to the installer

In RHUI 4, the installer accepted local file paths as arguments to some command line switches. This is no longer an option with containerized installations, since the running container has no access to arbitrary files on the host filesystem. Therefore, the RHUI 5 installer looks into a set of hardcoded file paths to source some files, and those paths can be provided as volume mounts through the podman command line. Note that those paths cannot be provided through the answers file, because the container has already been started by the time the answers file is parsed.

The list of special file paths, local to the container, that the installer will reference:

  • /ssh-keyfile - The private SSH key used to log in to the target host.
  • /rhua-image.tar - The RHUA container image file, in case you want to explicitly transfer it to the target host. The image file must be in the format created by the podman save command. In this case, the --rhua-container-image and --rhua-container-registry installation parameters are not allowed.
  • /answers.yaml - The answers file, which will look similar to the following:

    rhua:
        certs_country: HR
        certs_city: Zadar
        certs_org: RHUI devs
        certs_org_unit: Containerization efforts
        certs_ca_common_name: rhui5-development.example.net
        default_sync_policy: on_demand
  • /rhui-ca.crt and /rhui-ca.key - The RHUI CA certificate and its key.
  • /client-ssl-ca.crt and /client-ssl-ca.key - The CA Certificate for CDS SSL traffic and its key.
  • /client-entitlement-ca.crt and /client-entitlement-ca.key - The CA certificate for client certificate management and its key.
Important

Whenever providing the volume mounts to the container, make sure you have proper SELinux labels for the container, providing either :z or :Z as a volume mount option.
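
A sketch combining the hardcoded container paths listed above with :Z volume mounts; the local file paths and the hostname are placeholders:

```shell
# Sketch: volume-mount user-supplied files into the hardcoded container paths
# (local paths and hostname are placeholders); :Z applies a private SELinux label.
podman run -it --rm \
  -v ~/.ssh/id_rsa:/ssh-keyfile:Z \
  -v ~/certs/rhui-ca.crt:/rhui-ca.crt:Z \
  -v ~/certs/rhui-ca.key:/rhui-ca.key:Z \
  -v ~/answers.yaml:/answers.yaml:Z \
  registry.redhat.io/rhui5/installer-rhel9 rhui-installer \
  --target-host rhua.example.com
```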

Running the installer image for RHUI 5

To run the installer image, you need access to the public Red Hat registry, registry.redhat.io, which is protected by credentials. You must also be logged in to a machine that has Podman installed (called the control node), so that you can log in to the registry and subsequently run the installer image against the target host, as shown in the following:

Note

The following examples assume that you are using RHEL 9.

$ sudo dnf -y install podman
[...]
$ podman login --username <CCSP_login> --password '<CCSP_password>' registry.redhat.io
Login Succeeded!

After you have logged in to the registry, you can check the available RHUI container images:

$ podman search registry.redhat.io/rhui5
NAME                                      DESCRIPTION
registry.redhat.io/rhui5/cds-rhel9        Red Hat Update Infrastructure 5 Content Deli...
registry.redhat.io/rhui5/installer-rhel9  Red Hat Update Infrastructure 5 Installer
registry.redhat.io/rhui5/rhua-rhel9       Red Hat Update Infrastructure 5 Appliance
registry.redhat.io/rhui5/haproxy-rhel9    Red Hat Update Infrastructure 5 Load Balance...

At this point you are ready to start the installation process assuming all of the following is provided:

  • The target host you want to install RHUA on. This is the --target-host installation parameter.
  • The target host must meet or exceed the following requirements:
  • It must run RHEL 9 or 10 and already be registered with Red Hat.
Note

The target host needs to be registered using the following command: subscription-manager register. When prompted, enter your CCSP user name and password.

  • Hardware should be a minimum of: x86_64, 8+ CPU cores, 8+ GB RAM, 128+ GB disk.
  • The NFS fileshare used for storing Pulp content. This is the --remote-fs-server installation parameter.
  • The target host has accepted your SSH authentication.

Assuming you have launched the target host and it is configured to accept your SSH key, you can run the following commands in Podman:

  • -it This means an interactive session is needed with a proper terminal output.
  • --rm This will remove the container after the operation is finished.
  • -v ~/.ssh/id_rsa:/ssh-keyfile:Z This will volume mount your SSH private key so that the installer container has access to it.

    Note

    Do not forget to supply your SSH passphrase if you have set up your SSH key with a passphrase.

    $ podman run -it --rm -v ~/.ssh/id_rsa:/ssh-keyfile:Z  \
      registry.redhat.io/rhui5/installer-rhel9 rhui-installer  \
      --target-user <target-user> --rhua-container-registry registry.redhat.io \
      --podman-username <CCSP_login> --podman-password '<CCSP_password>' \
      --remote-fs-server <nfs-host:/path> \
      --target-host <rhua-hostname>
    
    Trying to pull registry.redhat.io/rhui5/installer-rhel9:latest...
    ...
    Getting image source signatures
    Copying blob 92efcdccd105 done   |
    Copying blob 19f9949dbedd done   |
    Copying blob 467b1cd556e7 done   |
    Copying blob 5c6a65a8d3b9 done   |
    Copying config be3b9592ab done   |
    Writing manifest to image destination
    
    PLAY [RHUI 5 installation RHUA installation playbook executing on the *target* host] *****************************************************************************************
    
    TASK [Populate service facts] ***********************************************************
    Enter passphrase for key '/ssh-keyfile':
    ok: [<rhua-hostname>]
    
    TASK [Stop the RHUA container that might be running already] *****************************************************************************************
    skipping: [<rhua-hostname>]
    
    TASK [Prepare the dictionary for holding the rhui-tools.conf values] *****************************************************************************************
    ok: [<rhua-hostname>]
    
    TASK [Check whether we have rhui-tools.conf in the designated location] *****************************************************************************************
    ok: [<rhua-hostname>]
    
    [...]
    
    TASK [Enable and start RHUA container as a systemd service] *****************************************************************************************
    changed: [<rhua-hostname>]
    
    PLAY RECAP ******************************************************************************
    <rhua-hostname> : ok=69   changed=43   unreachable=0
    	failed=0	skipped=43   rescued=0	ignored=0
    
    
    PLAY [Attempt to copy the installer log file onto the managed node] *****************************************************************************************
    
    TASK [Copy the log file] ****************************************************************
    changed: [<rhua-hostname>]
    
    PLAY RECAP ******************************************************************************
    <rhua-hostname>: ok=1	changed=1	unreachable=0
          failed=0	skipped=0	rescued=0	ignored=0

Installation Verification

Your RHUA container is now ready and running on the target host. So how do you access it? During the installation, a shell function named rhua was created to save you from typing the Podman exec invocation. Assuming you are root on the target host, enter the following:

[root@rhua ~]# which rhua
rhua ()
{
    default_arg="";
    [ $# -eq 0 ] && default_arg=bash;
    [ "$1" = "-h" ] && echo -e "rhua: executes commands in the RHUA container environment.\n   Usage: rhua command [args ...]" && return 1;
    ( cd /var/lib/rhui;
    sudo -u rhui podman exec -it rhui5-rhua "${default_arg}${@}" )
}
[root@rhua ~]# rhua bash
bash-5.1# cat /etc/rhui/rhui-subscription-sync.conf
[auth]
username = admin
password = <generated_password>
bash-5.1# rhui-manager
Logging into the RHUI.

It is recommended to change the user's password
in the User Management section of RHUI Tools.

RHUI Username: admin
RHUI Password: <generated_password>

Using SSH agent for authentication (Optional)

If you want to use ssh-agent for passing your SSH key, you must run the installer container in the --privileged mode to allow using the ssh-agent sockets inside the container. Additionally, ensure you have ssh-agent working and you have unlocked your SSH private key. Then, run the following command:

$ ssh-add
Enter passphrase for /home/<username>/.ssh/id_rsa:
Identity added: /home/<username>/.ssh/id_rsa (/home/<username>/.ssh/id_rsa)

Next, in your installer invocation, replace:

-v ~/.ssh/id_rsa:/ssh-keyfile:Z

with the following:

--privileged -v $SSH_AUTH_SOCK:$SSH_AUTH_SOCK:Z -e SSH_AUTH_SOCK=$SSH_AUTH_SOCK
  • --privileged This is so the container has access to the ssh-agent sockets.
  • -v $SSH_AUTH_SOCK:$SSH_AUTH_SOCK:Z This is to pass the SSH authentication socket to the container filesystem, so that the container can access your SSH key.
  • -e SSH_AUTH_SOCK=$SSH_AUTH_SOCK This is to set the environment variable in the container runtime pointing to the location of the SSH authentication socket.
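
Putting the pieces together, the ssh-agent variant of the installer invocation might look like the following sketch; the hostnames and NFS share are placeholders:

```shell
# Sketch: full ssh-agent variant of the installer invocation (hostnames and
# NFS share are placeholders); requires a running, unlocked ssh-agent.
podman run -it --rm --privileged \
  -v $SSH_AUTH_SOCK:$SSH_AUTH_SOCK:Z -e SSH_AUTH_SOCK=$SSH_AUTH_SOCK \
  registry.redhat.io/rhui5/installer-rhel9 rhui-installer \
  --target-user <target-user> --rhua-container-registry registry.redhat.io \
  --remote-fs-server <nfs-host:/path> \
  --target-host <rhua-hostname>
```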

Legal Notice

Copyright © Red Hat.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.