Install Red Hat Update Infrastructure
List of requirements, setting up nodes, configuring storage, and installing RHUI 5
Abstract
The RHUI 5 installer, while essentially retaining the same Ansible playbooks as RHUI 4, looks different compared to the previous version of the installer.
- It is launched as a container image from any RHEL host capable of running containers.
- It requires --target-host to deploy the RHUA image. Compare this to the RHUI 4 behaviour, where the installer installs the RHUA on the machine running the installer itself.
- It requires only one mandatory parameter, --target-host.
- It requires some additional command line arguments to pass the user-supplied certificate files to the installer. For example, you can supply volume mounts using the podman -v option.
- It has improved parameter default assignment logic.
Chapter 2. Red Hat Update Infrastructure install types
Standard install
In the standard mode, invoking the RHUI 5 installer deploys the RHUA container image onto the --target-host. In this mode of operation, --remote-fs-server is also required.
Maintenance or upgrade of an existing RHUI 5 installation
Once you have deployed the RHUA container image on the target host, you can invoke the installer with the --rerun switch to change some of its settings (including the image version). In this case, --remote-fs-server is not required, as it is inferred from the configuration.
In-place migration of a RHUI 5 installation
If the --migrate-from-rhui-4 installation flag is provided, the installer performs an in-place migration of the existing RHUI 4 RHUA installation on the --target-host, and stops the installation if it does not find RHUI 4. In this mode --remote-fs-server is not required, as it will be inferred from the existing RHUI 4 configuration files.
During the installation steps, RHUI 4 services are shut down and the PostgreSQL database files are copied (thus doubling the space requirement for the database files) to a location reachable by the RHUI 5 container. The ownership of the Pulp content files, residing on the shared storage, is changed to match the UIDs/GIDs used by the RHUI 5 container.
Migration of a RHUI 4 installation to another machine
If --source-host is provided in addition to --migrate-from-rhui-4, the --source-host is checked for an existing RHUI 4 installation. If found, its configuration, together with the database files, is transferred to the --target-host, and the RHUI 5 RHUA container is deployed there. The RHUA services on the --source-host are shut down prior to the migration, and the Pulp content files on the shared storage will have a different owner but will otherwise be intact. The same filesystem share is then mounted on the --target-host.
RHUI 5 moves to the latest version of PostgreSQL, ensuring the latest security updates. This requires existing RHUI 4 installations to be on the latest version and to update their PostgreSQL to version 15 prior to migrating to RHUI 5.
It is worth noting that in this scenario the hostname of the RHUA is changed, and therefore the RHUI 5 configuration and the SSL certificate for Pulp’s Nginx are adjusted accordingly.
Migration can target not only a different system but also a different remote file share. This is indicated by the --migration-fs-server parameter, which denotes the remote file share that will be mounted by the --target-host.
The content of the file share that includes the Pulp artifacts, namely the pulp3, symlinks, and repo-notes directories, needs to be copied independently, before the migration process.
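Putting these flags together, a migration to another machine and file share might be invoked as in the following sketch; the installer image reference and the hostnames are placeholders, not values taken from this document:

```
podman run -it --rm \
    -v ~/.ssh/id_rsa:/ssh-keyfile:Z \
    <installer-image> \
    --migrate-from-rhui-4 \
    --source-host old-rhua.example.com \
    --target-host new-rhua.example.com \
    --migration-fs-server new-nfs.example.com:/export/rhui
```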
Cloning an existing RHUI 5 installation
It is now possible to clone an existing RHUI 5 installation, with some limitations. The main limitation is that the Pulp content must be cloned beforehand, independent of the installation process. Once that is done, the installer can be invoked with --clone flag, which triggers the cloning process. The --clone flag requires both --source-host and --migration-fs-server to be provided, in addition to the standard --target-host argument which is required by default.
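A cloning invocation would then combine these flags, for example as in the following sketch; the installer image reference and the hostnames are placeholders:

```
podman run -it --rm \
    -v ~/.ssh/id_rsa:/ssh-keyfile:Z \
    <installer-image> \
    --clone \
    --source-host rhua-old.example.com \
    --migration-fs-server nfs.example.com:/export/rhui \
    --target-host rhua-new.example.com
```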
Common elements for both types of RHUI 4 migration
Per-artifact sync policies are no longer supported.
For example, the following configuration parameters are no longer valid:
- rpm_sync_policy
- debug_sync_policy
- source_sync_policy
The parameter default_sync_policy is still valid. To support different sync policies depending on the artifact type, as well as to provide additional flexibility into selecting the sync policy based on the content in question, two new configuration parameters are available:
- immediate_repoid_regex
- on_demand_repoid_regex
Whenever a sync task is submitted, the repoid of the repository is first checked against the regex in immediate_repoid_regex. If it matches, a sync with the 'immediate' policy is requested. If not, a match is tested against on_demand_repoid_regex; a match there produces an 'on_demand' sync task. If there is no match at all, the sync is performed with the policy specified by the default_sync_policy configuration parameter.
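The selection logic can be sketched as a small shell function. The regex values and the function name below are illustrative only; RHUI reads these settings from its configuration file rather than from shell variables:

```shell
# Illustrative values; in RHUI these come from the configuration file.
immediate_repoid_regex='^rhel-10-'
on_demand_repoid_regex='^rhel-9-'
default_sync_policy='on_demand'

# Return the sync policy for a given repoid, checking the
# immediate regex first, then the on_demand regex, then the default.
select_sync_policy() {
    local repoid=$1
    if [[ $repoid =~ $immediate_repoid_regex ]]; then
        echo immediate
    elif [[ $repoid =~ $on_demand_repoid_regex ]]; then
        echo on_demand
    else
        echo "$default_sync_policy"
    fi
}

select_sync_policy rhel-10-baseos-rpms   # prints "immediate"
```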
In both migration types, no CDS or HAPROXY information is migrated. It is a duty of the RHUI admin to add new CDS and HAPROXY nodes using the RHUI 5 RHUA (either through TUI or CLI). Further, CDS and HAPROXY nodes of the existing RHUI 4 installation are left intact, with their services fully operational. Again, it is a duty of the RHUI admin to shut down those nodes once they are no longer needed. Until then, they still have access to the filesystem share with the Pulp content and they are able to serve RHUI content that has been synced previously and symlinked. After migration, those legacy RHUI 4 CDS nodes will not be able to serve on-demand content not fetched yet, as their configuration points to the RHUI 4 RHUA that has been shut down.
Chapter 3. Providing installation parameters
There are several ways to provide parameters pertaining to RHUI 5 installation. They are, in descending order of priority:
- Parameters supplied on the command line take absolute precedence over any other parameter provision methods. However, not all installation parameters are supported this way, as we do not want to force users to create an unwieldy and counterintuitive installation command line.
- Parameters can be provided through an answers file. This method can accommodate a larger set of installation parameters.
- The installer checks for the existence of the required parameters, namely --target-host and --remote-fs-server, and exits if they are not provided.
- If rhui-tools.conf already exists on the target host, its content is parsed and the values provided there are preserved unless a matching key is provided via the command line or the answers file.
- Some parameters have defaults that are hardcoded in the installer.
Shared storage management
The RHUI 5 installer supports NFS only; therefore, --remote-fs-type is no longer supported. In addition, providing the literal value none as the --remote-fs-server argument skips the shared NFS storage setup completely. This can come in handy in situations where shared storage is managed at some other level or by another product such as OpenShift. It is worth noting that --remote-fs-mountpoint is still supported, but it refers to the filesystem layout on the host, not the container side. Basically, it determines where you want to mount the filesystem. Remember that the RHUI containers run in rootless mode, so any NFS filesystem mount needs to happen on the host.
Chapter 4. RHUI 5 install procedure
Before you begin
For RHUI 5, only the container images will be published, and not the individual RPMs. There are separate images for:
- installer
- RHUA
- CDS
- HAPROXY
Providing local files to the installer
In RHUI 4, the installer accepted local file paths as arguments to some command line switches. This is no longer an option with containerized installations, because the running container has no access to arbitrary files on the host filesystem. Therefore, the RHUI 5 installer looks at a set of hardcoded file paths to source certain files, and those paths can be provided as volume mounts through the podman command line. Those paths cannot be provided through the answers file, as the container has already been started by the time the answers file is parsed.
The list of special file paths, local to the container, that the installer will reference:
- /ssh-keyfile - The private SSH key used to log in to the target host.
- /rhua-image.tar - The RHUA container image file, in case you want to transfer it to the target host explicitly. The image file must be in the format created by the podman save command. In this case, the --rhua-container-image and --rhua-container-registry installation parameters are not allowed.
- /answers.yaml - The answers file.
- /rhui-ca.crt and /rhui-ca.key - The RHUI CA certificate and its key.
- /client-ssl-ca.crt and /client-ssl-ca.key - The CA certificate for CDS SSL traffic and its key.
- /client-entitlement-ca.crt and /client-entitlement-ca.key - The CA certificate for client certificate management and its key.
Whenever providing the volume mounts to the container, make sure you have proper SELinux labels for the container, providing either :z or :Z as a volume mount option.
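For example, the RHUI CA certificate and key listed above could be mounted with labels like this; the host-side directory is a placeholder, only the container-side paths come from the list above:

```
-v ~/rhui-files/rhui-ca.crt:/rhui-ca.crt:Z \
-v ~/rhui-files/rhui-ca.key:/rhui-ca.key:Z
```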
Running the installer image for RHUI 5
To run the installer image, you need access to the public Red Hat registry, registry.redhat.io. The registry is protected by credentials. Also, you must be logged in to a machine that has Podman installed (called the control node), so that you can log in to the registry and subsequently run the installer image against the target host, as shown in the following:
The following examples assume that you are using RHEL 9.
$ sudo dnf -y install podman
[...]
$ podman login --username <CCSP_login> --password '<CCSP_password>' registry.redhat.io
Login Succeeded!
After you have logged in to the registry, you can check the available RHUI container images:
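One generic way to query the registry for images is podman search; the search term below is an assumption, not a value taken from this document:

```
podman search registry.redhat.io/rhui
```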
At this point you are ready to start the installation process assuming all of the following is provided:
- The target host you want to install RHUA on. This is the --target-host installation parameter. The target host must meet or exceed the following requirements:
  - It should run RHEL 9 or 10 and already be registered with Red Hat. Register the target host using the subscription-manager register command; when prompted, enter your CCSP user name and password.
  - Hardware minimum: x86_64, 8+ CPU cores, 8+ GB RAM, 128+ GB disk.
- The NFS fileshare used for storing Pulp content. This is the --remote-fs-server installation parameter.
- The target host has accepted your SSH authentication.
Assuming you have launched the target host and it is configured to accept your SSH key, you can run the following commands in Podman:
- -it - Runs an interactive session with proper terminal output.
- --rm - Removes the container after the operation is finished.
- -v ~/.ssh/id_rsa:/ssh-keyfile:Z - Volume mounts your SSH private key so that the installer container has access to it. Note: do not forget to supply your SSH passphrase if you have set up your SSH key with a passphrase.
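Putting the options above together, a complete invocation might look like the following sketch; the installer image reference and the hostnames are placeholders, not values taken from this document:

```
podman run -it --rm \
    -v ~/.ssh/id_rsa:/ssh-keyfile:Z \
    <installer-image> \
    --target-host rhua.example.com \
    --remote-fs-server nfs.example.com:/export/rhui
```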
Installation Verification
Your RHUA container is now ready and running on the target host. So how do you access it? During the installation, a shell function named rhua was created to save you from typing the Podman exec invocation. Assuming you are root on the target host, simply enter rhua.
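While the exact definition of the generated rhua function is not shown here, it is roughly equivalent to the following sketch; the container name and the shell invoked are assumptions, and the real function may differ:

```
# Hypothetical approximation of the generated helper function.
rhua() {
    podman exec -it rhua /bin/bash
}
```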
Using SSH agent for authentication (Optional)
If you want to use ssh-agent for passing your SSH key, you must run the installer container in the --privileged mode to allow using the ssh-agent sockets inside the container. Additionally, ensure you have ssh-agent working and you have unlocked your SSH private key. Then, run the following command:
$ ssh-add
Enter passphrase for /home/<username>/.ssh/id_rsa:
Identity added: /home/<username>/.ssh/id_rsa (/home/<username>/.ssh/id_rsa)
Next, in your installer invocation, replace:
-v ~/.ssh/id_rsa:/ssh-keyfile:Z
with the following:
--privileged -v $SSH_AUTH_SOCK:$SSH_AUTH_SOCK:Z -e SSH_AUTH_SOCK=$SSH_AUTH_SOCK
- --privileged - Gives the container access to the ssh-agent sockets.
- -v $SSH_AUTH_SOCK:$SSH_AUTH_SOCK:Z - Passes the SSH authentication socket to the container filesystem, so that the container can access your SSH key.
- -e SSH_AUTH_SOCK=$SSH_AUTH_SOCK - Sets the environment variable in the container runtime pointing to the location of the SSH authentication socket.