Installation and Management Guide
List of requirements, setting up nodes, configuring storage, and installing RHUI 5
Abstract
Chapter 1. About RHUI 5
1.1. About Red Hat Update Infrastructure 5
RHUI 5 is a highly scalable, highly redundant framework that enables you to manage repositories and content. It also enables cloud providers to deliver content and updates to RHUI instances. Based on the upstream Pulp project, RHUI allows cloud providers to locally mirror Red Hat-hosted repository content, create custom repositories with their own content, and make those repositories available to a large group of end users through a load-balanced content delivery system.
As a system administrator, you can prepare your infrastructure for participation in the Red Hat Certified Cloud and Service Provider program by installing and configuring the Red Hat Update Appliance (RHUA), content delivery servers (CDS), repositories, shared storage, and load balancing.
Configuring RHUI comprises the following tasks:
- Adding, enabling and synchronizing a Red Hat repository
- Creating client entitlement certificates and client configuration RPMs
- Creating client profiles for the RHUI servers
Experienced RHEL system administrators are the target audience. System administrators with limited RHEL skills should consider engaging Red Hat Consulting to provide a Red Hat Certified Cloud Provider Architecture Service.
Learn about configuring, managing, and updating RHUI with the following topics:
- the RHUI components
- content provider types
- the command line interface (CLI) used to manage the components
- utility commands
- certificate management
- content management
CDS nodes provide content to RHUI clients.
You can use the Content Delivery Server (CDS) Management screen to list, add, delete, and reinstall CDS nodes.
1.2. Notable differences between RHUI 4 installer and RHUI 5
The RHUI 5 installer, while essentially using the same Ansible playbooks as RHUI 4, differs from the previous version of the installer in several ways:
- It is launched as a container image from any RHEL host capable of running containers.
- It requires --target-host to deploy the RHUA image. Compare this to the RHUI 4 installer, which installs the RHUA on the machine running the installer itself.
- It requires some additional command-line arguments supplied to the installer to pass the user-supplied certificate files. For example, you can supply volume mounts using the podman -v option.
- It has improved parameter default assignment logic.
1.3. RHUI 5 components
Understanding how each RHUI component interacts with other components will make your job as a system administrator a little easier.
1.3.1. Red Hat Update Appliance
There is one RHUA per RHUI installation, though in many cloud environments there will be one RHUI installation per region or data center. For example, Amazon’s EC2 cloud comprises several regions; in every region, there is a separate RHUI set up with its own RHUA node.
The RHUA allows you to perform the following tasks:
- Download new packages from the Red Hat content delivery network (CDN).
- Copy new packages to the shared network storage.
- Verify the RHUI installation’s health and write the results to a file located on the RHUA. Monitoring solutions use this file to determine the RHUI installation’s health.
- Provide a human-readable view of the RHUI installation’s health through a CLI tool.
RHUI uses two main configuration files: /etc/rhui/rhui-tools.conf and /etc/rhui/rhui-subscription-sync.conf.
The /etc/rhui/rhui-tools.conf configuration file contains general options used by the RHUA, such as the default file locations for certificates, and default configuration parameters for the Red Hat CDN synchronization. This file normally does not require editing.
The /etc/rhui/rhui-subscription-sync.conf configuration file contains the credentials for the Pulp database. These credentials must be used when logging in to the rhui-manager interface.
The RHUA employs several services to synchronize, organize, and distribute content for easy delivery.
RHUA services
- Pulp
- The service that manages the repositories.
- PostgreSQL
- The database that Pulp uses to keep track of currently synchronized repositories, packages, and other crucial metadata.
1.3.2. Content delivery server
The CDS nodes provide the repositories that clients connect to for the updated content. Because RHUI provides a load-balancer with failover capabilities, we recommend that you use multiple CDS nodes.
The CDS nodes serve content to end-user RHEL systems. While there is no required number of systems, the CDS nodes work in a round-robin, load-balanced fashion (A, B, C, A, B, C) to deliver content to end-user systems. The CDS serves content to end-user systems over HTTPS via dnf repositories.
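The round-robin order described above can be sketched as a small shell function. This is purely illustrative: the node names are hypothetical, and in a real deployment the rotation is handled by the load-balancer, not by a script.

```shell
# Illustrative sketch of round-robin selection over CDS nodes (A, B, C, A, B, C).
# Node names below are hypothetical examples.
CDS_NODES="cds01 cds02 cds03"

pick_cds() {
    n=$1                  # 1-based request number
    set -- $CDS_NODES     # load the node list into positional parameters
    idx=$(( (n - 1) % $# + 1 ))
    eval "echo \${$idx}"  # print the node chosen for this request
}

pick_cds 1   # prints "cds01"
pick_cds 2   # prints "cds02"
pick_cds 4   # prints "cds01" (wraps around)
```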
During configuration, you specify the CDS directory where packages are synchronized. Similar to the RHUA, the only requirement is that you mount the directory on the CDS. It is up to the cloud provider to determine the best course of action when allocating the necessary devices. The Red Hat Update Infrastructure Management Tool configuration RPM links the package directory with the NGINX configuration to serve it.
Currently, RHUI supports the following shared storage solution:
- NFS
If NFS is used, rhui-installer can configure an NFS share on the RHUA to store the content, as well as a directory on the CDS nodes to mount the NFS share. The following rhui-installer options control these settings:
- --remote-fs-mountpoint is the file system location where the remote file system share should be mounted (default: /var/lib/rhui/remote_share)
- --remote-fs-server is the remote mount point for a shared file system to use, for example, nfs.example.com:/path/to/share (no default value)
The expected usage is that you use one shared network file system on the RHUA and all CDS nodes, for example, NFS. It is possible the cloud provider will use some form of shared storage that the RHUA writes packages to and each CDS reads from.
The storage solution must provide an NFS endpoint for mounting on the RHUA and CDS nodes. Do not set up the shared file storage on any of the RHUI nodes. You must use an independent storage server.
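As a sketch of what the independent storage server might export, an /etc/exports entry could look like the following. The host names and export path here are hypothetical; the exact export options depend on your environment.

```
# /etc/exports on the independent NFS server (hypothetical hosts and path)
/export/rhui  rhua.example.com(rw,sync) cds01.example.com(rw,sync) cds02.example.com(rw,sync)
```

The RHUA and CDS nodes would then mount this export at the location given by --remote-fs-mountpoint (default: /var/lib/rhui/remote_share).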
The only nonstandard logic that takes place on each CDS is the entitlement certificate checking. This checking ensures that the client making requests on the dnf repositories is authorized by the cloud provider to access those repositories. The check ensures the following conditions:
- The entitlement certificate was signed by the cloud provider’s Certificate Authority (CA) Certificate. The CA Certificate is installed on the CDS as part of its configuration to facilitate this verification.
- The requested URI matches an entitlement found in the client’s entitlement certificate.
If the CA verification fails, the client sees an SSL error. See the CDS node’s NGINX logs under /var/log/nginx/ for more information.
[root@cds01 ~]# ls -1 /var/log/nginx/
access.log
error.log
gunicorn-auth.log
gunicorn-content_manager.log
gunicorn-mirror.log
ssl-access.log
The NGINX configuration is handled through the /etc/nginx/conf.d/ssl.conf file, which is created during the CDS installation.
1.3.3. HAProxy load-balancer
A load-balancing solution must be in place to spread client HTTPS requests across all CDS servers. RHUI uses HAProxy by default, but it is up to you to choose what load-balancing solution (for example, the one from the cloud provider) to use during the installation. If HAProxy is used, you must also decide how many nodes to bring in.
Clients are not configured to go directly to a CDS; their repository files are configured to point to HAProxy, the RHUI load-balancer. HAProxy is a TCP/HTTP reverse proxy particularly suited for high-availability environments.
If you use an existing load-balancer, ensure port 443 is configured in the load-balancer and that all CDSs in the cluster are in the load-balancer’s pool.
The exact configuration depends on the particular load-balancer software you use. See the following configuration, taken from a typical HAProxy setup, to understand how you should configure your load-balancer:
[root@rhui5proxy ~]# cat /etc/haproxy/haproxy.cfg
global
chroot /var/lib/haproxy
daemon
group haproxy
log <HAProxy IP Address> local0
maxconn 4000
pidfile /run/haproxy.pid
stats socket /var/lib/haproxy/stats
user haproxy
defaults
log global
maxconn 8000
option redispatch
retries 3
stats enable
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout check 10s
listen https00
bind <HAProxy IP Address>:443
balance roundrobin
option tcplog
option tcp-check
server cds01.example.com cds01.example.com:443 check
server cds02.example.com cds02.example.com:443 check
Keep in mind that when clients fail to connect, it is important to review the NGINX logs on the CDS under /var/log/nginx/ to verify whether the requests reached the CDS. If requests do not reach the CDS, issues such as DNS or general network connectivity may be at fault.
1.3.4. Repositories and content
A repository is a storage location for software packages (RPMs). RHEL uses dnf commands to search a repository, download, install, and update the RPMs. The RPMs contain all the dependencies needed to run an application.
Content, as it relates to RHUI, is the software (such as RPMs) that you download from the Red Hat CDN for use on the RHUA and the CDS nodes. The RPMs provide the files necessary to run specific applications and tools. Clients are granted access by a set of SSL content certificates and keys provided by an rpm package, which also provides a set of generated dnf repository files.
1.3.5. Content provider types
There are three types of cloud computing environments:
- public cloud
- private cloud
- hybrid cloud
This guide focuses on public and private clouds. We assume the audience understands the implications of using public, private, and hybrid clouds.
1.4. Component communications
All RHUI components use the HTTPS communication protocol over port 443.
| Source | Destination | Protocol | Purpose |
|---|---|---|---|
| Red Hat Update Appliance | Red Hat Content Delivery Network | HTTPS | Downloads packages from Red Hat |
| Load-Balancer | Content Delivery Server | HTTPS | Forwards the clients' requests for repository metadata and packages |
| Client | Load-Balancer | HTTPS | Used by dnf on the clients to download content |
| Content Delivery Server | Red Hat Update Appliance | HTTPS | Might request information from Pulp API about content |
RHUI nodes require the following network access to communicate with each other.
Make sure that the network port is open and that network access is restricted to only those nodes that you plan to use.
| Connection | Port | Usage |
|---|---|---|
| RHUA to CDS | 22/TCP | SSH Configuration and access |
| RHUA to HAProxy servers | 22/TCP | SSH configuration and access |
| Clients to HAProxy | 443/TCP | Access to content |
| HAProxy to CDS | 443/TCP | Load balancing |
| NFS ports open for CDS and RHUA | 2049/TCP | File system |
| CDS to RHUA | 443/TCP | Retrieve content that has not been symlinked |
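One way to open the required ports is with firewalld. The following command sketch assumes firewalld is in use on the nodes; adjust it per node role according to the table above.

```
# Hypothetical firewalld commands on a CDS node (run as root);
# adapt the services/ports for RHUA and HAProxy nodes per the table.
firewall-cmd --permanent --add-service=ssh     # 22/TCP from the RHUA
firewall-cmd --permanent --add-service=https   # 443/TCP from HAProxy
firewall-cmd --reload
```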
Chapter 2. Installing, Migrating, and Upgrading RHUI
2.1. RHUI Installation options
The following table presents the various RHUI 5 components.
| Component | Acronym | Function | Alternative |
|---|---|---|---|
| Red Hat Update Appliance | RHUA | Downloads content from the Red Hat content delivery network and stores it on the shared storage | None |
| Content Delivery Server | CDS | Provides the repositories that clients connect to for the updated packages | None |
| HAProxy | None | Provides load balancing across CDS nodes | Existing load balancing solution |
| Shared storage | None | Provides shared storage | Existing storage solution |
The following table describes how to perform installation tasks.
| Installation Task | Performed on |
|---|---|
| Install RHEL 9 or higher | RHUA, CDS, and HAProxy |
| Register the system | RHUA, CDS and HAProxy |
| Install | RHUA |
| Install | control node |
| Run | RHUA |
Option 1: Full installation
- A RHUA with shared storage
- Two or more CDS nodes with this shared storage
- One or more HAProxy load-balancers
Option 2: Installation with an existing storage solution
- A RHUA with an existing storage solution
- Two or more CDS nodes with this existing storage solution
- One or more HAProxy load-balancers
Option 3: Installation with an existing load-balancer solution
- A RHUA with shared storage
- Two or more CDS nodes with this shared storage
- An existing load-balancer
Option 4: Installation with existing storage and load-balancer solutions
- A RHUA with an existing storage solution
- Two or more CDS nodes with this existing shared storage
- An existing load-balancer
Red Hat Update Infrastructure must be used with at least two CDS nodes and a load-balancer node. Installation without any load-balancer node and with a single CDS node is unsupported.
The following figure depicts a high-level view of how the various RHUI 5 components interact.
Figure 2.1. Red Hat Update Infrastructure 5 overview
Install the RHUA and CDS nodes on separate x86_64 servers (bare metal or virtual machines). Ensure all the servers and networks that connect to RHUI can access the Red Hat subscription management service.
2.2. Red Hat Update Infrastructure install types
Standard install
When you invoke the RHUI 5 installer, the standard mode is to initially deploy the RHUA container image onto the --target-host. In this mode of operation, --remote-fs-server is also required.
Maintenance or upgrade of an existing RHUI 5 installation
Once you have deployed the RHUA container image on the target host, you can invoke the installer with the --rerun switch to change some of its settings (including the image version). In this case, --remote-fs-server is not required, as it will be inferred from the configuration.
Cloning an existing RHUI 5 installation
It is now possible to clone an existing RHUI 5 installation, with some limitations. The main limitation is that the Pulp content must be cloned beforehand, independently of the installation process. Once that is done, the installer can be invoked with the --clone flag, which triggers the cloning process. The --clone flag requires both --source-host and --migration-fs-server to be provided, in addition to the standard --target-host argument, which is required by default.
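Assuming the Pulp content has already been copied to the new share, the cloning-related flags added to a standard installer invocation might look like the following. The host names are placeholders, and the surrounding podman options are the same as for a regular install.

```
rhui-installer --clone \
  --source-host <old-rhua-hostname> \
  --migration-fs-server <new-nfs-host:/path/to/share> \
  --target-host <new-rhua-hostname>
```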
2.3. Common elements for both types of RHUI 4 migration
Per-artifact sync policies are no longer supported.
For example, the following configuration parameters are no longer valid:
- rpm_sync_policy
- debug_sync_policy
- source_sync_policy
The parameter default_sync_policy is still valid. To support different sync policies depending on the artifact type, as well as to provide additional flexibility into selecting the sync policy based on the content in question, two new configuration parameters are available:
- immediate_repoid_regex
- on_demand_repoid_regex
Whenever a sync task is submitted, the repoid of the repository is checked against the regex in immediate_repoid_regex first. If it matches, a sync with the 'immediate' policy is requested. If not, a match is tested against on_demand_repoid_regex; a match there produces an on_demand sync task. If there is no match at all, the sync is performed with the policy specified by the default_sync_policy configuration parameter.
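The selection order described above can be sketched in shell. This is only an illustration of the logic: the regex values and repo IDs below are hypothetical examples, while the real parameters (immediate_repoid_regex, on_demand_repoid_regex, default_sync_policy) live in the RHUI configuration.

```shell
# Sketch of the sync-policy selection order; regexes and repo IDs are
# hypothetical examples, not shipped defaults.
immediate_re='^rhel-9-.*-rpms$'   # stands in for immediate_repoid_regex
on_demand_re='^rhel-8-'           # stands in for on_demand_repoid_regex
default_policy='on_demand'        # stands in for default_sync_policy

select_policy() {
    repoid=$1
    if printf '%s\n' "$repoid" | grep -Eq "$immediate_re"; then
        echo immediate
    elif printf '%s\n' "$repoid" | grep -Eq "$on_demand_re"; then
        echo on_demand
    else
        echo "$default_policy"
    fi
}

select_policy rhel-9-baseos-rpms      # prints "immediate"
select_policy rhel-8-appstream-rpms   # prints "on_demand"
select_policy my-custom-repo          # prints "on_demand" (the default)
```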
In both migration types, no CDS or HAProxy information is migrated. It is the duty of the RHUI administrator to add new CDS and HAProxy nodes using the RHUI 5 RHUA (either through the TUI or the CLI). Furthermore, the CDS and HAProxy nodes of the existing RHUI 4 installation are left intact, with their services fully operational. Again, it is the administrator's duty to shut down those nodes once they are no longer needed. Until then, they still have access to the filesystem share with the Pulp content, and they are able to serve RHUI content that has been synced previously and symlinked. After migration, those legacy RHUI 4 CDS nodes will not be able to serve on-demand content that has not been fetched yet, as their configuration points to the RHUI 4 RHUA that has been shut down.
2.4. Red Hat Update Infrastructure migrating existing installation
In-place migration of a RHUI 4 installation
If the --migrate-from-rhui-4 installation flag is provided, the installer performs an in-place migration of the existing RHUI 4 RHUA installation on the --target-host, and stops the installation if it does not find RHUI 4. In this mode --remote-fs-server is not required, as it will be inferred from the existing RHUI 4 configuration files.
During the installation steps, RHUI 4 services are shut down and the PostgreSQL database files are copied (thus doubling the space requirement for the database files) to a location reachable by the RHUI 5 container. The files are copied to /var/lib/rhui/postgres, so you need enough space for the copy of the database in the volume where the root directory is located. You can use the du -sh /var/lib/pgsql/data command to determine the size of your database and, therefore, the amount of space that the copy will need. The ownership of the Pulp content files, residing on the shared storage, is changed to match the UIDs/GIDs used by the RHUI 5 container.
Migration of a RHUI 4 installation to another machine
If --source-host is provided in addition to --migrate-from-rhui-4, the --source-host is checked for an existing RHUI 4 installation. If found, its configuration, together with the database files, is transferred to the --target-host, and the RHUI 5 RHUA container is deployed there. RHUI RHUA services on the --source-host are shut down prior to the migration, and the Pulp content files on the shared storage will have a different owner, but will be otherwise intact. The same filesystem share is then mounted on the --target-host.
RHUI 5 moves to the latest version of PostgreSQL, ensuring the latest security updates. This requires the existing RHUI 4 installation to be on the latest version and to have its PostgreSQL updated to version 15 prior to migrating to RHUI 5.
It is worth noting that in this scenario the hostname of the RHUA is changed, and therefore the RHUI 5 configuration and the SSL certificate for Pulp’s Nginx are adjusted accordingly.
Migration can be targeted not only to a different system but also to a different remote file share. This is indicated by the --migration-fs-server option, which denotes the remote file share that will be mounted by the --target-host.
The content of the file share that includes the Pulp artifacts, namely the pulp3, symlinks, and repo-notes directories, needs to be copied independently, before the migration process.
2.5. Providing installation parameters
There are several ways to provide parameters pertaining to RHUI 5 installation. They are, in descending order of priority:
- Parameters supplied on the command line take absolute precedence over any other parameter provision methods. However, not all installation parameters are supported this way, as we do not want to force users to create an unwieldy and counterintuitive installation command line.
- Parameters can be provided through an answers file. This method can accommodate a larger set of installation parameters.
- The installer checks for the existence of the required parameters, namely --target-host and --remote-fs-server, and exits if they are not provided.
- If rhui-tools.conf already exists on the target host, its content is parsed and the values provided there are preserved, unless a matching key is provided via the command line or the answers file.
- Some parameters have defaults that are hardcoded in the installer.
Shared storage management
The RHUI 5 installer supports NFS only; therefore, --remote-fs-type is no longer supported. In addition, providing the literal value none as the --remote-fs-server argument skips the shared NFS storage setup completely. This can come in handy in situations where shared storage is managed on some other level or by another product such as OpenShift. It is worth noting that --remote-fs-mountpoint is still supported, but it refers to the filesystem layout on the host, not the container side; it determines where you want to mount the filesystem. Remember that the RHUI containers run in rootless mode, so any NFS filesystem mount needs to happen on the host.
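Because the mount happens on the host, the host-side NFS mount could be expressed, as one sketch, with an /etc/fstab entry like the following. The server and export path are hypothetical; the mountpoint is the default from --remote-fs-mountpoint.

```
# /etc/fstab on the RHUA host (hypothetical NFS server and export path)
nfs.example.com:/export/rhui  /var/lib/rhui/remote_share  nfs  defaults  0 0
```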
2.6. RHUI 5 install procedure
Before you begin
For RHUI 5, only the container images will be published, and not the individual RPMs. There are separate images for:
- installer
- RHUA
- CDS
- HAPROXY
Providing local files to the installer
In RHUI 4, the installer would accept local file paths as arguments to some command-line switches. This is no longer an option with containerized installations, since the running container has no access to arbitrary files on the host filesystem. Therefore, the RHUI 5 installer looks in a set of hardcoded file paths to source certain files, and those paths can be provided as volume mounts through the podman command line. Unfortunately, those paths cannot be provided through the answers file, because the container has already been started by the time the answers file is parsed.
The list of special file paths, local to the container, that the installer will reference:
- /ssh-keyfile - The private SSH key used to log in to the target host.
- /rhua-image.tar - The RHUA container image file, in case you want to explicitly transfer it to the target host. The image file must be in the format created by the podman save command. In this case, the --rhua-container-image and --rhua-container-registry installation parameters are not allowed.
- /answers.yaml - The answers file, which will look similar to the following:

  rhua:
    certs_country: HR
    certs_city: Zadar
    certs_org: RHUI devs
    certs_org_unit: Containerization efforts
    certs_ca_common_name: rhui5-development.example.net
    default_sync_policy: on demand

- /rhui-ca.crt and /rhui-ca.key - The RHUI CA certificate and its key.
- /client-ssl-ca.crt and /client-ssl-ca.key - The CA certificate for CDS SSL traffic and its key.
- /client-entitlement-ca.crt and /client-entitlement-ca.key - The CA certificate for client certificate management and its key.
Whenever providing the volume mounts to the container, make sure you have proper SELinux labels for the container, providing either :z or :Z as a volume mount option.
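For example, the volume-mount options for user-supplied certificate files might look like the following fragment of a podman run command line. The local paths on the left are hypothetical; the container paths on the right are the hardcoded ones the installer references.

```
-v /path/to/my-rhui-ca.crt:/rhui-ca.crt:Z \
-v /path/to/my-rhui-ca.key:/rhui-ca.key:Z \
-v /path/to/my-answers.yaml:/answers.yaml:Z
```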
Running the installer image for RHUI 5
To run the installer image, you need access to the public Red Hat registry, registry.redhat.io, which is protected by credentials. You must also be logged in to a machine that has Podman installed (we call it the control node), so that you can log in to the registry and subsequently run the installer image against the target host, as shown in the following:
The following examples assume that you are using RHEL 9.
$ sudo dnf -y install podman
[...]
$ podman login --username <CCSP_login> registry.redhat.io
Password:
Login Succeeded!
After you have logged in to the registry, you can check the available RHUI container images:
$ podman search registry.redhat.io/rhui5
NAME DESCRIPTION
registry.redhat.io/rhui5/cds-rhel9 Red Hat Update Infrastructure 5 Content Deli...
registry.redhat.io/rhui5/installer-rhel9 Red Hat Update Infrastructure 5 Installer
registry.redhat.io/rhui5/rhua-rhel9 Red Hat Update Infrastructure 5 Appliance
registry.redhat.io/rhui5/haproxy-rhel9 Red Hat Update Infrastructure 5 Load Balance...
At this point you are ready to start the installation process, assuming all of the following is provided:
- The target host you want to install the RHUA on. This is the --target-host installation parameter.
- The target host meets or exceeds the following requirements:
  - It runs RHEL 9 or 10 and is already registered with Red Hat. Register the target host using the subscription-manager register command; when prompted, enter your CCSP user name and password.
  - For the RHUA, the hardware should be a minimum of: x86_64, 16 CPU cores, 64 GB RAM, 256+ GB disk.
  - For CDS and HAProxy nodes, the hardware should be a minimum of: x86_64, 8+ CPU cores, 8+ GB RAM, 128+ GB disk.
- The NFS file share used for storing Pulp content. This is the --remote-fs-server installation parameter.
- The target host has accepted your SSH authentication.
- The target user, that is, the user name used when connecting to the remote host. The target user is authenticated by the SSH key that is authorized in the target user's home directory.
Assuming you have launched the target host and it is configured to accept your SSH key, you can run the installer with Podman. The following options are used:
- -it - Runs an interactive session with a proper terminal output.
- --rm - Removes the container after the operation is finished.
- -v ~/.ssh/id_rsa:/ssh-keyfile:Z - Volume-mounts your SSH private key so that the installer container has access to it.
Note: Do not forget to supply your SSH passphrase if you have set up your SSH key with a passphrase.
$ podman run -it --rm -v ~/.ssh/id_rsa:/ssh-keyfile:Z \
    registry.redhat.io/rhui5/installer-rhel9 rhui-installer \
    --target-user <target-user> --rhua-container-registry registry.redhat.io \
    --podman-username <CCSP_login> --podman-password '<CCSP_password>' \
    --remote-fs-server <nfs-host:/path> \
    --target-host <rhua-hostname>
Trying to pull registry.redhat.io/rhui5/installer-rhel9:latest...
...
Getting image source signatures
Copying blob 92efcdccd105 done   |
Copying blob 19f9949dbedd done   |
Copying blob 467b1cd556e7 done   |
Copying blob 5c6a65a8d3b9 done   |
Copying config be3b9592ab done   |
Writing manifest to image destination

PLAY [RHUI 5 installation RHUA installation playbook executing on the *target* host] ****

TASK [Populate service facts] ***********************************************************
Enter passphrase for key '/ssh-keyfile':
ok: [<rhua-hostname>]

TASK [Stop the RHUA container that might be running already] ****************************
skipping: [<rhua-hostname>]

TASK [Prepare the dictionary for holding the rhui-tools.conf values] ********************
ok: [<rhua-hostname>]

TASK [Check whether we have rhui-tools.conf in the designated location] *****************
ok: [<rhua-hostname>]

[...]

TASK [Enable and start RHUA container as a systemd service] *****************************
changed: [<rhua-hostname>]

PLAY RECAP ******************************************************************************
<rhua-hostname> : ok=69 changed=43 unreachable=0 failed=0 skipped=43 rescued=0 ignored=0

PLAY [Attempt to copy the installer log file onto the managed node] *********************

TASK [Copy the log file] ****************************************************************
changed: [<rhua-hostname>]

PLAY RECAP ******************************************************************************
<rhua-hostname>: ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Installation Verification
Your RHUA container is ready and running on the target host. So how do you access it? During the installation, a shell function named rhua has been created to save you from typing the Podman exec invocation. Enter the following:
[root@rhua ~]# which rhua
rhua ()
{
default_arg="";
[ $# -eq 0 ] && default_arg=bash;
[ "$1" = "-h" ] && echo -e "rhua: executes commands in the RHUA container environment.\n Usage: rhua command [args ...]" && return 1;
( cd /var/lib/rhui;
sudo -u rhui podman exec -it rhui5-rhua "${default_arg}${@}" )
}
[root@rhua ~]# rhua bash
bash-5.1# cat /etc/rhui/rhui-subscription-sync.conf
[auth]
username = admin
password = <generated_password>
bash-5.1# rhui-manager
Logging into the RHUI.
It is recommended to change the user's password
in the User Management section of RHUI Tools.
RHUI Username: admin
RHUI Password: <generated_password>
Using SSH agent for authentication (Optional)
If you want to use ssh-agent for passing your SSH key, you must run the installer container in the --privileged mode to allow using the ssh-agent sockets inside the container. Additionally, ensure you have ssh-agent working and you have unlocked your SSH private key. Then, run the following command:
$ ssh-add
Enter passphrase for /home/<username>/.ssh/id_rsa:
Identity added: /home/<username>/.ssh/id_rsa (/home/<username>/.ssh/id_rsa)
Next, in your installer invocation, replace:
-v ~/.ssh/id_rsa:/ssh-keyfile:Z
with the following:
--privileged -v $SSH_AUTH_SOCK:$SSH_AUTH_SOCK:Z -e SSH_AUTH_SOCK=$SSH_AUTH_SOCK
- --privileged - Gives the container access to the ssh-agent sockets.
- -v $SSH_AUTH_SOCK:$SSH_AUTH_SOCK:Z - Passes the SSH authentication socket to the container filesystem, so that the container can access your SSH key.
- -e SSH_AUTH_SOCK=$SSH_AUTH_SOCK - Sets the environment variable in the container runtime pointing to the location of the SSH authentication socket.
2.7. Changing the admin password
The rhui-installer sets the initial RHUI login password. It is also written in the /etc/rhui/rhui-subscription-sync.conf file. You can override the initial password with the --rhui-manager-password option.
If you want to change the initial password later, you can change it through the rhui-manager tool or through rhui-installer. Run the rhui-installer --help command to see the full list of rhui-installer options.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager

- Press u to select manage RHUI users.
- From the User Manager screen, press p to select change admin's password (followed by logout):

  -= User Manager =-

  p   change admin's password (followed by logout)

  rhui (users) => p
  Warning: After password change you will be logged out.
  Use ctrl-c to cancel password change.
  New Password:

- Enter your new password; reenter it to confirm the change:

  New Password:
  Re-enter Password:
  [localhost] env PULP_SETTINGS=/etc/pulp/settings.py /usr/bin/pulpcore-manager reset-admin-password -p ********
Verification
The following message displays after you change the admin password:
Password successfully updated. For security reasons you have been logged out.
2.8. Upgrading Red Hat Update Infrastructure (RHUI) 5
Before you begin
To update RHUI, you will need to rerun the RHUI installer image. This will require you to be logged in to a machine that has Podman installed.
First you will need to check to see if you are logged in to the public Red Hat registry by running the following command:
$ podman login --get-login registry.redhat.io
This command will print the user name that is logged in. If you are not logged in, you will receive an error. If you are logged in, you can move to the upgrade steps.
If you are not logged in, run the following command to log in:
$ podman login --username <CCSP_login> --password '<CCSP_password>' registry.redhat.io
Once you have logged in, you can continue the upgrade process.
Procedure
To upgrade to the latest version of RHUI, rerun the RHUI installer with the following command:
$ podman run --pull=always -it --rm -v ~/.ssh/id_rsa:/ssh-keyfile:Z \
    registry.redhat.io/rhui5/installer-rhel9 rhui-installer \
    --target-user <target-user> --target-host <rhua-hostname> --rerun

Next, upgrade the CDS and HAProxy images by running the following commands on the RHUA:
# rhua rhui-manager --noninteractive cds reinstall --all
# rhua rhui-manager --noninteractive haproxy reinstall --all
Verification
To verify that you have upgraded to the latest version of RHUI, run the following command:
# rhua rpm -q rhui-tools
Chapter 3. Managing Repositories
3.1. Available repositories
Certified Cloud and Service Provider (CCSP) partners control what repositories and packages are delivered through their service. For the most current list of repositories that are available for the various operating system versions but are not yet added to your RHUI, run the following command on the RHUA:
# rhua rhui-manager --noninteractive repo unused --by_repo_id
3.2. Adding a new Red Hat content repository
Your CCSP account enables you to access selected Red Hat repositories and make them available in your Red Hat Update Infrastructure environment.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press r to select manage repositories.
From the Repository Management screen, press a to select add a new Red Hat content repository. Wait for the Red Hat Update Infrastructure Management Tool to determine the entitled repositories. This might take several minutes:

rhui (repo) => a
Loading latest entitled products from Red Hat...
... listings loaded
Determining undeployed products...
... product list calculated

The Red Hat Update Infrastructure Management Tool prompts for a selection method:

Import Repositories:
  1 - All in Certificate
  2 - By Product
  3 - By Repository
Enter value (1-3) or 'b' to abort:
To add several repositories bundled together as a product (usually all of its minor versions in one step), press 2 to select the By Product method. Alternatively, you can add particular repositories by using the By Repository method.
Select which repositories to add by typing the number of the repository at the prompt. You can also choose a range of repositories, for instance, by entering 1-5.

Enter value (1-620) to toggle selection, 'c' to confirm selections, or '?' for more commands:

Continue until all repositories you want to add are checked.
Press c when you are finished selecting the repositories. The Red Hat Update Infrastructure Management Tool displays the repositories for deployment and prompts for confirmation:

The following products will be deployed:
  Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI
  Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (RPMs) from RHUI
Proceed? (y/n)

Press y to proceed. A message indicates each successful deployment:

Importing Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI...
Importing product repository Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI (10.1)...
Importing product repository Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI (10.0)...
Importing product repository Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI (10)...
Importing Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (RPMs) from RHUI...
Importing product repository Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (RPMs) from RHUI (10.1)...
Importing product repository Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (RPMs) from RHUI (10.0)...
Importing product repository Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (RPMs) from RHUI (10)...

Content will not be downloaded to the newly imported repositories until the next sync is run.
Verification
From the Repository Management screen, press l to check that the correct repositories have been installed.
3.3. Listing repositories currently managed by RHUI 5
A repository contains downloadable software for a Linux distribution. You use dnf to search for, install, or only download RPMs from the repository.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press r to select manage repositories. From the Repository Management screen, press l to select list repositories currently managed by the RHUI:

...
Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10)
Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10.0)
Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10.1)
Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Source RPMs) from RHUI (10)
Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Source RPMs) from RHUI (10.0)
Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Source RPMs) from RHUI (10.1)
Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI (10)
Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI (10.0)
Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI (10.1)
...
3.4. Displaying detailed information on a repository
You can use the Repository Management screen to display information about a particular repository.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press r to select manage repositories. From the Repository Management screen, press i:

Enter value (1-1631) to toggle selection, 'c' to confirm selections, or '?' for more commands:

Select the repository by entering the value beside the repository name. Enter one repository selection at a time before confirming your selection.
Press c to confirm:

Name:                Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Debug RPMs) from RHUI (10.1)
ID:                  rhel-10-for-aarch64-appstream-debug-rhui-rpms-8.4
Type:                Red Hat
Version:             0
Relative Path:       content/dist/rhel10/rhui/10.1/aarch64/appstream/debug
GPG Check:           Yes
Custom GPG Keys:     (None)
Red Hat GPG Key:     Yes
Content Unit Count:
Last Sync:           2026-11-15 15:56:06
Next Sync:           2026-11-15 22:00:00

Name:                Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10.1)
ID:                  rhel-10-for-aarch64-appstream-rhui-rpms-8.4
Type:                Red Hat
Version:             0
Relative Path:       content/dist/rhel10/rhui/10.1/aarch64/appstream/os
GPG Check:           Yes
Custom GPG Keys:     (None)
Red Hat GPG Key:     Yes
Content Unit Count:
Last Sync:           2026-11-15 19:50:20
Next Sync:           2026-11-16 01:55:00

Name:                Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Source RPMs) from RHUI (10.1)
ID:                  rhel-10-for-aarch64-appstream-source-rhui-rpms-8.4
Type:                Red Hat
Version:             0
Relative Path:       content/dist/rhel10/rhui/10.1/aarch64/appstream/source/SRPMS
GPG Check:           Yes
Custom GPG Keys:     (None)
Red Hat GPG Key:     Yes
Content Unit Count:
Last Sync:           2026-11-15 15:56:51
Next Sync:           2026-11-15 22:00:00
Verification
- Similar output displays for your selections.
3.5. Setting Up On-Demand Syncing of Repositories
RHUI allows you to minimize the amount of content downloaded to storage in advance by setting certain repositories to on_demand sync mode. This way, RHUI will only download and store content when it is requested by client machines, which can result in reduced storage usage and lower costs. However, the downside of this approach is that RHUI’s performance will depend on the connection speed to the Red Hat CDN network.
Setting the Sync Policy
To support different sync policies depending on the artifact type, and to provide additional flexibility in selecting the sync policy based on the content in question, two configuration parameters are available:
- immediate_repoid_regex
- on_demand_repoid_regex
Whenever a sync task is submitted, the repository ID (repoid) is checked against the regex in immediate_repoid_regex first. If it matches, a sync with the immediate policy is requested. If not, the repoid is tested against on_demand_repoid_regex; a match produces an on_demand sync task. If there is no match at all, the sync is performed with the policy specified by the default_sync_policy configuration parameter.
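The selection logic described above can be sketched in Python. This is an illustrative model of the documented behavior, not RHUI source code; in particular, whether RHUI anchors the regex or matches it anywhere in the repoid is an assumption here (re.search is used), and the default values mirror the example configuration shown later in this section.

```python
import re

# Illustrative model of RHUI's documented sync-policy selection (not RHUI code).
def choose_sync_policy(repo_id,
                       immediate_repoid_regex=r"^$",
                       on_demand_repoid_regex=r"debug|source",
                       default_sync_policy="immediate"):
    # 1. immediate_repoid_regex is checked first.
    if re.search(immediate_repoid_regex, repo_id):
        return "immediate"
    # 2. Then on_demand_repoid_regex.
    if re.search(on_demand_repoid_regex, repo_id):
        return "on_demand"
    # 3. Otherwise, fall back to default_sync_policy.
    return default_sync_policy
```

With these defaults, a repoid containing "debug" or "source" selects the on_demand policy, while any other repoid falls through to the immediate default, matching the example configuration below.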
Applying the Policy
After updating the configuration file, the next repository synchronization will apply the new policy.
If you switch from on_demand to immediate, the next sync will begin downloading all content for the specified type.
If you switch from immediate to on_demand, the next sync will only download repository metadata. RHUI will then download content as requested by client machines.
Example of setting policy using both types of policy
To set up your repository to use both types of policy, you will need to edit your /etc/rhui/rhui-tools.conf file to the following configuration:
[rhui]
on_demand_repoid_regex: debug|source
immediate_repoid_regex: ^$
default_sync_policy: immediate
This configuration syncs your debug and source repositories using the on-demand policy and your regular repositories using the immediate policy.
Tips and Tricks
- Setting all repositories to on_demand right after installing RHUI can lead to faster deployment and quicker delivery for end-users, as only metadata needs to be initially synced.
-
Utilizing a cache priming strategy can be beneficial if you have a new installation and do not need to support older versions of RHEL clients. By using a client that mirrors end-user configurations and running dnf update, you can pre-download content to RHUI's storage.
3.6. Adding a new Red Hat content repository using an input file
In Red Hat Update Infrastructure 5 and later, you can add repositories using a configured YAML input file. You can find an example template of the YAML file on the RHUA container at /usr/share/rhui-tools/examples/repo_add_by_file.yaml.
This functionality is only available in the command-line interface (CLI).
Prerequisites
- Ensure that you have root access to the RHUA node.
Procedure
On the RHUA node, create a YAML input file in the following format:

# cat /root/example.yaml
name: Example_YAML_File
repo_ids:
  - rhel-10-for-x86_64-baseos-eus-rhui-rpms-10.0

Add the repositories listed in the input file using the rhui-manager utility:

# rhua rhui-manager repo add_by_file --file /root/example.yaml --sync_now
The name of the repos being added: Example_YAML_File
Loading latest entitled products from Red Hat...
... listings loaded
Successfully added Red Hat Enterprise Linux 10 for x86_64 - BaseOS - Extended Update Support from RHUI (RPMs) (10.0) (Yum)
... successfully scheduled for the next available timeslot.
Verification
In the CLI, use the following command to list all the installed repositories and check whether the correct repositories have been installed:

# rhua rhui-manager repo list

Alternatively, in the RHUI Management Tool, on the Repository Management screen, press l to list all the installed repositories.
3.7. Creating a new custom repository (RPM content only)
You can create custom repositories that can be used to distribute updated client configuration packages or other non-Red Hat software to the RHUI clients. A protected repository for 64-bit RHUI servers (for example, client-rhui-x86_64) is the preferred vehicle for distributing new non-Red Hat packages, such as an updated client configuration package, to the RHUI clients.
Like Red Hat content repositories, all of which are protected, protected custom repositories that differ only in processor architecture (i386 versus AMD64) are consolidated into a single entitlement within an entitlement certificate, using the $basearch dnf variable.
In the event of certificate problems, an unprotected repository for RHUI servers can be used as a fallback method for distributing updated RPMs to the RHUI clients.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press r to select manage repositories.
From the Repository Management screen, press c to select create a new custom repository (RPM content only).
Enter a unique ID for the repository. Only alphanumeric characters, _ (underscore), and - (hyphen) are permitted. You cannot use spaces in the unique ID. For example, repo1, repo_1, and repo-1 are valid entries.

Unique ID for the custom repository (alphanumerics, _, and - only):

Enter a display name for the repository. This name can contain spaces and other characters that cannot be used in the ID. The name defaults to the ID.

Display name for the custom repository [repo_1]:

Specify the path that will host the repository. The path must be unique across all repositories hosted by RHUI. For example, if you specify the path at this step as internal/rhel/9/repo_1, then the repository will be located at https://<yourLB>/pulp/content/protected/internal/rhel/9/repo_1.

Unique path at which the repository will be served [repo_1]:

Choose whether to protect the new repository. If you answer no to this question, any client can access the repository. If you answer yes, only clients with an appropriate entitlement certificate can access the repository.
Warning: As the name implies, the content in an unprotected repository is available to any system that requests it, without any need for a client entitlement certificate. Be careful when using an unprotected repository to distribute any content, particularly content such as updated client configuration RPMs, which will then provide access to protected repositories.
Answer yes or no to the following questions as they appear:
Should the repository require clients to perform a GPG check and verify packages are signed by a GPG key? (y/n)
Will the repository be used to host any Red Hat GPG signed content? (y/n)
Will the repository be used to host any custom GPG signed content? (y/n)
Enter the absolute path to the public key of the GPG key pair:
Would you like to enter another public key? (y/n)
Enter the absolute path to the public key of the GPG key pair:
Would you like to enter another public key? (y/n)

The details of the new repository display. Press y at the prompt to confirm the information and create the repository.
Verification
From the Repository Management screen, press l to check that the correct repositories have been installed.
3.8. Deleting a repository from RHUI 5
When the Red Hat Update Infrastructure Management Tool deletes a Red Hat repository, it deletes the repository from the RHUA and all applicable CDS nodes.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press r to select manage repositories.
From the Repository Management screen, press d at the prompt to delete a Red Hat repository. A list of all repositories currently being managed by RHUI displays.
Select which repositories to delete by typing the number of the repository at the prompt. Typing the number of a repository places a checkmark next to the name of that repository. You can also choose a range of repositories, for instance, by entering 1-5.
Continue until all repositories you want to delete are checked.
Press c at the prompt to confirm.

Note: After you delete a repository, the content of the repository (such as repodata and packages) remains in the file system and can still be consumed by clients. The repository contents are deleted only after the next orphan cleanup task, which means that clients may then see 404 errors. In RHUI 5, orphaned units are deleted weekly at 4 AM on Wednesday. An administrator can delete orphaned units at any time. You must update your client configuration RPM to avoid 404 errors.
Repository RPMs are deduplicated at sync time, so the least amount of space possible is used at any time. When you remove a repository (especially a minor-version-specific repository), it is likely that its RPMs are shared with another repository.
For example, if you remove the RHEL 9.5 AppStream repository but keep the RHEL 9.6 AppStream repository, you will not see any change in the amount of disk space used, because the RHEL 9.6 repository contains all the same RPMs as the RHEL 9.5 repository. If you instead remove the RHEL 9.6 AppStream repository and keep the RHEL 9.5 AppStream repository enabled, the difference between RHEL 9.5 and 9.6 is removed during an orphan cleanup task, because RHEL 9.6 contains all the same packages from 9.0 through 9.6.
For more information about removing orphaned artifacts, see Removing orphaned artifacts.
3.9. Uploading content to a custom repository (RPM content only)
You can upload multiple packages, and you can upload to more than one repository at a time. Packages are uploaded to the RHUA immediately but are not available on the CDS node until the next time the CDS node synchronizes.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press r to select manage repositories. From the Repository Management screen, press u:

Select the repositories to upload the package into:
  - 1: test

Enter the value (1-1) to toggle the selection.
Press c to confirm your selection.
Enter the location of the packages to upload. If the location is an RPM, the file will be uploaded. If the location is a directory, all RPMs in that directory will be uploaded:

/root/bear-4.1-1.noarch.rpm
The following RPMs will be uploaded:
  bear-4.1-1.noarch.rpm

Press y to proceed or n to cancel:

Copying RPMs to a temporary directory: /tmp/rhui.rpmupload.jsqdub22.tmp
.. 1 RPMs copied.
Creating repository metadata for 1 packages ...
.. repository metadata created for 1 packages.
The packages upload task for repo: client-config-rhel-10-x86_64 has been queued:
/pulp/api/v3/tasks/01937826-8654-77c1-84f7-e9e07c7a7aeb/
You can inspect its progress via (S)ync screen/(RR) menu option in rhui-manager TUI.
3.10. Uploading content from a remote web site (RPM content only)
You can upload packages that are stored on a remote server without having to manually download them first. The packages must be accessible by HTTP, HTTPS, or FTP.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager-
Press r to select manage repositories. From the Repository Management screen, press ur:

Select the repositories to upload the package into:
  - 1: test

Enter the value (1-1) to toggle the selection.
Press c to confirm your selection:

### WARNING ### WARNING ### WARNING ### WARNING ### WARNING ### WARNING ###
#                                                                         #
#   Content retrieved from non-Red Hat arbitrary places can contain       #
#   unsupported or malicious software. Proceed at your own risk.          #
#                                                                         #
###########################################################################

Enter the remote URL of the packages to upload. If the location is an RPM, the file will be uploaded. If the location is a web page, all RPMs linked off that page will be uploaded:

https://repos.fedorapeople.org/pulp/pulp/demo_repos/zoo/bear-4.1-1.noarch.rpm
Retrieving https://repos.fedorapeople.org/pulp/pulp/demo_repos/zoo/bear-4.1-1.noarch.rpm
The following RPMs will be uploaded:
  bear-4.1-1.noarch.rpm

Press y to proceed or n to cancel:

Copying RPMs to a temporary directory: /tmp/rhui.rpmupload.dwux8rq7.tmp
.. 1 RPMs copied.
Creating repository metadata for 1 packages ...
.. repository metadata created for 1 packages.
The packages upload task for repo: test has been queued:
/pulp/api/v3/tasks/0193770c-6523-7363-ae5e-8c6429728b4f/
You can inspect its progress via (S)ync screen/(RR) menu option in rhui-manager TUI.
3.11. Importing package group metadata to a custom repository
To allow RHUI users to view and install package groups or language packs from a custom repository, you can import a comps.xml or a comps.xml.gz file to the custom repository.
Red Hat repositories contain these files provided by Red Hat; you cannot override them. You can only upload these files to your custom repositories.
This functionality is only available in the command-line interface.
Prerequisites
- Ensure that you have a valid comps.xml or comps.xml.gz file relevant to the custom repository.
- Ensure you have root access to the RHUA node.
Procedure
On the RHUA node, import data from a comps file to your custom repository using the rhui-manager utility:

# rhua rhui-manager repo add_comps --repo_id Example_Custom_Repo --comps /root/Example-Comps.xml
Verification
On a client system that uses the custom repository:
Refresh the repository data:
# dnf clean metadata

List the repository data and verify that the comps file has been updated:

# dnf grouplist
3.12. Removing content from a custom repository (Custom RPM content only)
You can remove packages from custom repositories using RHUI’s Text User Interface (TUI).
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Enter r to select manage repositories. On the Repository Management screen, enter r to select packages to remove from a repository (Custom RPM content only):

-= Repository Management =-

   l   list repositories currently managed by RHUI
   i   display detailed information on a repository
   a   add a new Red Hat content repository
   ac  add a new Red Hat container
   c   create a new custom repository (RPM content only)
   d   delete a repository from RHUI
   u   upload content to a custom repository (RPM content only)
   ur  upload content from a remote web site (RPM content only)
   p   list packages in a repository (RPM content only)
   r   select packages to remove from a repository (Custom RPM content only)

Enter the value to select the repository:

Choose a repository to delete packages from:
  1 - Test-RPM-1
  2 - Test-RPM-2

Enter the value to select the packages to delete.

Select the packages to remove:
  - 1: example-package-1.noarch.rpm
  - 2: example-package-2.noarch.rpm

Enter c to confirm the selection.

The following packages will be removed:
  example-package-1.noarch.rpm

Enter y to proceed or n to cancel:

Removed example-package-1.noarch.rpm
3.13. Listing the packages in a repository (RPM content only)
When listing repositories within the Red Hat Update Infrastructure Management Tool, only repositories that contain fewer than 100 packages display their contents. Results with more than 100 packages only display a package count.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press r to select manage repositories.
From the Repository Management screen, press p.
Select the number of the repository you want to view. The Red Hat Update Infrastructure Management Tool asks if you want to filter the results. Leave the line blank to see the results without a filter.

Enter value (1-1631) or 'b' to abort: 1
Enter the first few characters (case insensitive) of an RPM to filter the results (blank line for no filter):

Only filtered results that contain fewer than 100 packages will have their contents displayed. Results with more than 100 packages will display a package count only.

Packages:
  bear-4.1-1.noarch.rpm
Verification
One of three types of messages displays:
Packages:
  bear-4.1-1.noarch.rpm

Package Count: 8001

No packages in the repository.
3.14. Limiting the number of repository versions
In Pulp 3, which is used in Red Hat Update Infrastructure 4 and later, repositories are versioned. When a repository is updated in the Red Hat CDN and synchronized in Red Hat Update Infrastructure, Pulp creates a new version.
By default, repositories added using Red Hat Update Infrastructure version 4.6 and earlier were configured to retain all repository versions. This resulted in data accumulating in the database indefinitely, taking up disk space and, in the worst case, making it impossible to delete a repository. With version 4.7 and newer, repositories are added with a version limit of 5. This means only the latest five versions are kept at all times, and any older version is automatically deleted. However, you may want to set the version limit for existing repositories added earlier and have any older versions deleted. You can do this for all your repositories at once or process one repository at a time.
The command to do this is as follows:
[root@rhua ~]# rhua rhui-manager repo set_retain_versions [--repo_id <ID> or --all] --versions <NUMBER>

For example, to limit the number of versions for all repositories to 5, run:
[root@rhua ~]# rhua rhui-manager repo set_retain_versions --all --versions 5
Depending on the number of repositories and existing repository versions, it can take more than an hour for all the necessary tasks to be scheduled, and up to a few days for the versions older than the limit to be deleted. You can watch the progress in the rhui-manager text user interface, on the synchronization screen, under running tasks.
3.15. Removing orphaned artifacts
RPM packages, repodata files, and other related files are kept on the disk even if they are no longer part of a repository; for example, after a repository is deleted and its files do not belong to another repository, or when an update is made available and a new set of repodata files is synchronized.
To remove this obsolete content, run the following command:
[root@rhua ~]# rhua rhui-manager repo orphan_cleanup
Depending on the number of files, it can take up to several days for this task to complete. You can watch the progress in the rhui-manager text user interface, on the synchronization screen, under running tasks.
3.16. Generating a status file for RHUI repositories
You can use the rhui-manager command to obtain the status of each repository in a machine-readable format.
Procedure
On the RHUA node, run the following command:

# rhua rhui-manager --noninteractive status --repo_json <output_file>

A JSON file is generated containing a list of dictionaries for all custom and Red Hat repositories. To view the content of the file, run the following command:

# rhua cat <output_file>

If you would like to view the JSON file on the host, create the file in /root using the following command:

# rhua rhui-manager --noninteractive status --repo_json /root/<output_file>

You can then access your output file on the host machine as /var/lib/rhui/root/<output_file>.
3.17. List of dictionary keys in the repository status JSON file
A machine-readable JSON file is created when you run the command to get the status of each RHUI repository. The JSON file contains a list of dictionaries with one dictionary for each repository.
List of dictionary keys for custom repositories
| Key | Description |
|---|---|
| base_path | The path of the repository. |
| description | The name of the repository. |
| group | The group the repository belongs to. It is always set to the string, |
| id | The repository ID. |
| name | The name of the repository. It is the same as the repository ID. |
List of dictionary keys for Red Hat repositories
| Key | Description |
|---|---|
| base_path | The path of the repository. |
| description | The name of the repository. |
| group | The group the repository belongs to. It is always set to the string, |
| id | The repository ID. |
| last_sync_date | The date and time the repository was last synchronized. The value is |
| last_sync_exception | The exception raised if the repository failed to synchronize. The value is |
| last_sync_result | The result of the synchronization task. The values are: |
| last_sync_traceback | The traceback that was logged if the repository failed to synchronize. The value is |
| metadata_available | A boolean value denoting whether metadata is available for the repository. |
| name | The name of the repository. It is the same as the repository ID. |
| next_sync_date | The date and time of the next scheduled synchronization of the repository. If a synchronization task is currently running, the value is |
| repo_published | A boolean value denoting whether this repository has been published in RHUI. Note that, by default, RHUI is configured to automatically publish repositories. |
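As a sketch of how this status file might be consumed, the following Python snippet (an illustration, not part of RHUI) loads the JSON list of dictionaries and pairs each repository ID with its last sync result using the keys documented above; the file path argument is whatever output file you generated in the previous section.

```python
import json

# Illustrative consumer of the repository status JSON file (not part of RHUI).
# The file is a list of dictionaries, one per repository, with the keys
# documented above (id, name, base_path, last_sync_result, and so on).
def sync_summary(status_file):
    with open(status_file) as f:
        repos = json.load(f)
    # Report each repository's ID together with its last sync result, if any;
    # custom repositories have no sync keys, so .get() returns None for them.
    return [(repo.get("id"), repo.get("last_sync_result")) for repo in repos]
```

For example, if you wrote the status file to /root on the RHUA, you could run this on the host against /var/lib/rhui/root/<output_file>.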
Chapter 4. Managing Containers
4.1. Managing containers
You can automate the deployment of applications inside Linux containers using RHUI. Using containers offers the following advantages:
- Requires less storage and in-memory space than VMs: Because the containers hold only what is needed to run an application, saving and sharing is more efficient with containers than it is with VMs that include entire operating systems.
- Improved performance: Because you are not running an entirely separate operating system, a container typically runs faster than an application that carries the overhead of a new VM.
- Secure: Because a container typically has its own network interfaces, file system, and memory, the application running in that container can be isolated and secured from other activities on a host computer.
- Flexible: With an application’s runtime requirements included with the application in the container, a container can run in multiple environments.
4.1.1. Understanding containers in Red Hat Update Infrastructure
A container is an application sandbox. Each container is based on an image that holds necessary configuration data. When you launch a container from an image, a writable layer is added on top of this image. Every time you commit a container, a new image layer is added to store your changes.
An image is a read-only layer that is never modified. All changes are made in the top-most writable layer, and the changes can be saved only by creating a new image. Each image depends on one or more parent images.
A platform image is an image that has no parent. Platform images define the runtime environment, packages, and utilities necessary for a containerized application to run. The platform image is read-only, so any changes are reflected in the copied images stacked on top of it.
4.1.2. Adding a container to Red Hat Update Infrastructure
You can use the rhua rhui-manager tool to add containers using the Repository Management section.
Procedure
To enable container support in the RHUI environment, edit the /etc/rhui/rhui-tools.conf file and enable container support as follows:

[container]
container_support_enabled: True

If you want to save your credentials for the Red Hat container registry in the RHUI configuration, add the following lines to the container section:

[container]
registry_username: your_RH_login
registry_password: your_RH_password

To apply this new configuration to all of your CDS nodes, run the following command:

# rhua rhui-manager --noninteractive cds reinstall --all

If you normally synchronize from a registry different from registry.redhat.io, also change the values of the registry_url and registry_auth options accordingly.
On the RHUA node, run rhui-manager:

# rhua rhui-manager

Press r to access the Repository Management screen.

-= Red Hat Update Infrastructure Management Tool =-
-= Repository Management =-

   l   list repositories currently managed by the RHUI
   i   display detailed information on a repository
   a   add a new Red Hat content repository
   ac  add a new Red Hat container
   c   create a new custom repository (RPM content only)
   d   delete a repository from the RHUI
   u   upload content to a custom repository (RPM content only)
   ur  upload content from a remote web site (RPM content only)
   p   list packages in a repository (RPM content only)

Connected: rhua.example.com

Press ac to add a new Red Hat container.

rhui (repo) => ac
Specify URL of registry [https://registry.redhat.io]:
If the container you want to add exists in a non-default registry, enter the registry URL. Press
Enterwithout entering anything to use the default registry. Enter the name of the container in the registry:
jboss-eap-6/eap64-openshiftEnter a unique ID for the container.
rhui-managerconverts the name of the container from the registry to the format that is usable in Pulp by replacing slashes and dots with underscores. You can use such a converted name by pressing Enter or by entering a name of your choice.Enter a display name for the container.
jboss-eap-6_eap64-openshift- Optional: Set your login and password in the RHUI configuration if prompted.
Verify the displayed summary.
The following container will be added: Registry URL: https://registry.redhat.io Container Id: jboss-eap-6_eap64-openshift Display Name: jboss-eap-6_eap64-openshift Upstream Container Name: jboss-eap-6/eap64-openshift Proceed? (y/n)Press
yto proceed and add the container.y Successfully added container jboss-eap-6_eap64-openshift
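The ID conversion described in the procedure can be sketched in Python. This is an illustrative approximation of the rule (slashes and dots become underscores), not the actual rhui-manager implementation:

```python
# Illustrative sketch of how rhui-manager derives a Pulp-usable container ID
# from the upstream container name: slashes and dots become underscores.
# This is an approximation, not the actual rhui-manager code.
def container_id(upstream_name: str) -> str:
    return upstream_name.replace("/", "_").replace(".", "_")

print(container_id("jboss-eap-6/eap64-openshift"))
```

For the example used in this procedure, the function yields the same ID that rhui-manager suggests at the prompt.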
4.1.3. Synchronizing container repositories
After you add your container to Red Hat Update Infrastructure, you can use the rhui-manager tool to synchronize the container.
Procedure
1. On the RHUA node, run rhui-manager:

    # rhua rhui-manager

2. Press s to access the synchronization status and scheduling screen.

3. Press sr to synchronize an individual repository immediately.

4. Enter the number of the repository that you wish to synchronize.

5. Press c to confirm the selection.

6. Verify the repository and press y to synchronize or n to cancel:

    The following repositories will be scheduled for synchronization:
      jboss-eap-6_eap64-openshift
    Proceed? (y/n) y
    Scheduling sync for jboss-eap-6_eap64-openshift...
    ... successfully scheduled for the next available timeslot.
4.1.4. Generating container client configurations
RHUI clients can pull containers from RHUI by using a client configuration RPM. The RPM contains the load balancer's certificate, and you can use it to add the load balancer to the container registry configuration and to modify the container configuration.
Procedure
1. On the RHUA node, run rhui-manager:

    # rhua rhui-manager

2. Press e to access the entitlement certificates and client configuration RPMs screen.

3. Press d to create a container client configuration RPM.

4. Enter the full path of a local directory where you want to save the configuration files:

    /root/

5. Enter the name of the RPM:

    containertest

6. Enter the version number of the configuration RPM. The default is 2.0.

7. Enter the release number of the configuration RPM. The default is 1.

8. Enter the number of days the certificate should be valid. The default is 365.

    Successfully created client configuration RPM.
    Location: /root/containertest-2.0/build/RPMS/noarch/containertest-2.0-1.noarch.rpm
4.1.5. Installing a container configuration RPM on the client
After generating the container configuration RPM, you can install it on a client. First retrieve the RPM to your local machine, and then transfer it to the client.
Procedure
1. Retrieve the RPM from the RHUA node to your local machine:

    # scp root@rhua.example.com:/var/lib/rhui/root/containertest-2.0/build/RPMS/noarch/containertest-2.0-1.noarch.rpm .

2. Transfer the RPM from the local machine to the client:

    # scp containertest-2.0-1.noarch.rpm root@cli01.example.com:.

3. Switch to the client and install the RPM:

    [root@cli01 ~]# dnf install containertest-2.0-1.noarch.rpm
4.1.6. Testing the podman pull command on the client
You can use the podman pull command to verify that the container content is available to clients.
Procedure
1. Run the podman pull command:

    [root@cli01 ~]# podman pull jboss-eap-6_eap64-openshift
    Resolving "jboss-eap-6_eap64-openshift" using unqualified-search registries (/etc/containers/registries.conf)
    Trying to pull cds.example.com/jboss-eap-6_eap64-openshift:latest...
    Getting image source signatures
    Copying blob b0e0b761a531 done
    Copying blob aa23ac04e287 done
    Copying blob 0d30ea1353f9 done
    Copying config 3d0728c907 done
    Writing manifest to image destination
    Storing signatures
    3d0728c907d55d9faedc4d19de003f21e2a1ebdf3533b3d670a4e2f77c6b35d2

2. If the podman pull command fails, check the rhui-manager status. The synchronization probably has not been performed yet, and you have to wait until it completes:

    Resolving "jboss-eap-6_eap64-openshift" using unqualified-search registries (/etc/containers/registries.conf)
    Trying to pull cds.example.com/jboss-eap-6_eap64-openshift:latest...
    Error: initializing source docker://cds.example.com/jboss-eap-6_eap64-openshift:latest: reading manifest latest in cds.example.com/jboss-eap-6_eap64-openshift: manifest unknown: Manifest not found.
Chapter 5. Creating Entitlement Certificates and Client Configuration RPM
5.1. Creating an entitlement certificate and a client configuration RPM
RHUI uses entitlement certificates to ensure that the client making requests on the repositories is authorized by the cloud provider to access those repositories. The entitlement certificate must be signed by the cloud provider’s Certificate Authority (CA) Certificate. The CA Certificate is installed on the CDS as part of its configuration.
5.2. Creating a client entitlement certificate with the Red Hat Update Infrastructure Management Tool
When Red Hat issues the original entitlement certificate, it grants access to the repositories you requested. When you create client entitlement certificates, you decide how to subdivide your clients and create a separate certificate for each one. Each certificate can then be used to create individual RPMs.
Prerequisites
- The entitlement certificate must be signed by the cloud provider’s CA Certificate.
Procedure
1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager

2. Press e to select create entitlement certificates and client configuration RPMs.

3. Press e to select generate an entitlement certificate.

4. Select which repositories to include in the entitlement certificate by typing the number of the repository at the prompt. Typing the number of a repository places an x next to the name of that repository. Continue until all repositories you want to add have been checked.

    Important: Include only repositories for a single RHEL version in a single entitlement. Adding repositories for multiple RHEL versions leads to an unusable dnf configuration file.

5. Press c at the prompt to confirm.

6. Enter a name for the certificate. This name helps identify the certificate within the Red Hat Update Infrastructure Management Tool and generates the names of the certificate and key files.

    Name of the certificate. This will be used as the name of the certificate file (name.crt) and its associated private key (name.key). Choose something that will help identify the products contained with it.

7. Enter a path to save the certificate. Leave the field blank to save it to the current working directory.

8. Enter the number of days the certificate should be valid for. Leave the field blank for 365 days. The details of the repositories to be included in the certificate are displayed:

    Repositories to be included in the entitlement certificate:
    Red Hat Repositories
      Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Debug RPMs) from RHUI
      Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI
      Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Source RPMs) from RHUI
    Proceed? (y/n)

9. Press y at the prompt to confirm the information and create the entitlement certificate.
Verification
You will see a similar message if the entitlement certificate was created:
..........................+++++ ....+++++ Entitlement certificate created at ./rhel10-for-rhui5.crt ------------------------------------------------------------------------------
5.3. Creating a client entitlement certificate with the CLI
When Red Hat issues the original entitlement certificate, it grants access to the repositories you requested. When you create client entitlement certificates, you decide how to subdivide your clients and create a separate certificate for each one. Each certificate can then be used to create individual RPMs.
Prerequisites
- The entitlement certificate must be signed by the cloud provider’s CA Certificate.
Procedure
Use the following command to create an entitlement certificate from the RHUI CLI:
    # rhua rhui-manager client cert --repo_label rhel-10-for-x86_64-appstream-eus-rhui-source-rpms --name rhuiclientexample --days 365 --dir /root/clientcert
    .............................................+++++
    ...............................................................................+++++
    Entitlement certificate created at /root/clientcert/rhuiclientexample.crt

Note: Use Red Hat repository labels, not IDs. To get a list of all labels, run the rhui-manager client labels command. If you include a protected custom repository in the certificate, use the repository's ID instead.
Verification
A similar message is displayed if you successfully created an entitlement certificate:
Entitlement certificate created at /root/clientcert/rhuiclientexample.crt
5.4. Creating a client configuration RPM with the CLI
When Red Hat issues the original entitlement certificate, it grants access to the repositories you requested. When you create client entitlement certificates, you need to decide how to subdivide your clients and create a separate certificate for each one. You can then use each certificate to create individual RPMs for installation on the appropriate guest images.
Use this procedure to create RPMs with the CLI.
Procedure
Use the following command to create an RPM with the RHUI CLI:
    # rhua rhui-manager client rpm --entitlement_cert /root/clientcert/rhuiclientexample.crt --private_key /root/clientcert/rhuiclientexample.key --rpm_name clientrpmtest --dir /root --unprotected_repos unprotected_repo1
    Successfully created client configuration RPM.
    Location: /root/clientrpmtest-2.0/build/RPMS/noarch/clientrpmtest-2.0-1.noarch.rpm

Note: When using the CLI, you can also specify the URL of the proxy server to use with RHUI repositories, or you can use _none_ (including the underscores) to override any global dnf settings on a client machine. To specify a proxy, use the --proxy parameter.
Verification
A similar message displays if you successfully created a client configuration RPM:
Successfully created client configuration RPM. Location: /root/clientrpmtest-2.0/build/RPMS/noarch/clientrpmtest-2.0-1.noarch.rpm
5.5. Creating a client configuration RPM with the Red Hat Update Infrastructure Management Tool
When Red Hat issues the original entitlement certificate, it grants access to the repositories you requested. When you create client entitlement certificates, you need to decide how to subdivide your clients and create a separate certificate for each one. You can then use each certificate to create individual RPMs for installation on the appropriate guest images.
Use this procedure to create RPMs with the RHUI Management Tool.
The following procedure creates RPMs in the RHUA container. For best results, use a directory in your container that is also available on the host as a mount point.
Procedure
1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager

2. Press e to select create entitlement certificates and client configuration RPMs.

3. From the Client Entitlement Management screen, press c to select create a client configuration RPM from an entitlement certificate.

4. Enter the full path of a local directory to save the configuration files to:

    Full path to local directory in which the client configuration files generated by this tool should be stored (if this directory does not exist, it will be created):

5. Enter the name of the RPM.

6. Enter the version of the configuration RPM. The default version is 2.0.

7. Enter the release of the configuration RPM. The default release is 1.

8. Enter the full path to the entitlement certificate authorizing the client to access specific repositories.

9. Enter the full path to the private key for the entitlement certificate.

10. Select any unprotected custom repositories to be included in the client configuration.

11. Press c to confirm selections or ? for more commands.
Verification
A similar message displays if the RPM was successfully created:
Successfully created client configuration RPM. Location: /root/clientrpmtest-2.0/build/RPMS/noarch/clientrpmtest-2.0-1.noarch.rpm
5.6. Changing the repository ID prefix in a client configuration RPM using the CLI
When creating RPMs, you can either set a custom repository ID prefix or remove it entirely. This is set by editing the main configuration file /etc/rhui/rhui-tools.conf in the RHUA container. By default, the prefix is rhui-.
Procedure
On the RHUA node, edit the prefix in the main configuration file:

- To set a custom prefix:

    [rhui]
    client_repo_prefix: myrhui-

- If you don't want to use any prefix, set an empty value:

    [rhui]
    client_repo_prefix:
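The effect of this setting can be illustrated with a short sketch. It assumes that the client-side dnf repository ID is simply the configured prefix followed by the repository label; this is an illustrative assumption, not the actual rhui-manager implementation:

```python
# Sketch: the repository ID that ends up in the client's dnf configuration
# is assumed here to be the configured prefix followed by the repository
# label. Illustrative only, not the actual rhui-manager code.
def client_repo_id(prefix: str, label: str) -> str:
    return f"{prefix}{label}"

# Default prefix:
print(client_repo_id("rhui-", "rhel-10-for-x86_64-baseos-rhui-rpms"))
# Empty prefix (client_repo_prefix left blank):
print(client_repo_id("", "rhel-10-for-x86_64-baseos-rhui-rpms"))
```

With the default prefix, clients see repository IDs starting with rhui-; with an empty value, the IDs match the plain repository labels.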
5.7. Typical client RPM workflow
As a CCSP, you can offer various versions of RHEL and a variety of layered products available on top of it. In addition to the Red Hat repositories that provide this content, you will need custom repositories to provide updates to client configuration RPMs for these RHEL versions and layered products. You must create a custom repository for each RHEL version and each layered product sold separately. For example, you will need separate custom repositories for the base RHEL 10 offering and for SAP on RHEL. These custom repositories will store the corresponding client configuration RPMs. Whenever you update these RPMs—for example, to add a new repository or to update an expiring certificate—you will upload newer versions to the corresponding custom repositories.
It is good practice to sign all RPMs with a GPG key, ensuring that users are installing official packages from you that have not been tampered with. However, signing packages is outside the scope of RHUI, so you need to sign your client configuration RPMs using tools available in your company. To create the custom repository, you only need the public GPG key on the RHUA to configure it for use with the custom repository. Note that rhui-manager will automatically include the key in the client configuration RPM and use it for the custom repository in dnf configuration.
Procedure
In the following example, you will create a custom repository for the client configuration RPM for base RHEL 10 on the x86_64 architecture:
    # rhua rhui-manager repo create_custom --protected --repo_id client-config-rhel-10-x86_64 --display_name "RHUI Client Configuration for RHEL 10 on x86_64" --gpg_public_keys /root/RPM-GPG-KEY-my-cloud

You can use a different repository ID and display name if desired, and ensure you specify the actual GPG key file.
Add the relevant Red Hat repositories. The following YAML file contains the typical set of repositories for base RHEL 10 on the x86_64 architecture, using unversioned repositories:
    # cat rhel-10-x86_64.yaml
    name: Red Hat Enterprise Linux 10 on x86_64
    repo_ids:
      - codeready-builder-for-rhel-10-x86_64-rhui-debug-rpms
      - codeready-builder-for-rhel-10-x86_64-rhui-rpms
      - codeready-builder-for-rhel-10-x86_64-rhui-source-rpms
      - rhel-10-for-x86_64-appstream-rhui-debug-rpms
      - rhel-10-for-x86_64-appstream-rhui-rpms
      - rhel-10-for-x86_64-appstream-rhui-source-rpms
      - rhel-10-for-x86_64-baseos-rhui-debug-rpms
      - rhel-10-for-x86_64-baseos-rhui-rpms
      - rhel-10-for-x86_64-baseos-rhui-source-rpms
      - rhel-10-for-x86_64-supplementary-rhui-debug-rpms
      - rhel-10-for-x86_64-supplementary-rhui-rpms
      - rhel-10-for-x86_64-supplementary-rhui-source-rpms

To add and synchronize all these repositories using the YAML file above, run the following command:
    # rhua rhui-manager repo add_by_file --file rhel-10-x86_64.yaml --sync_now

Create an entitlement certificate. You will need a list of repository labels that are to be allowed in the certificate. Repository labels are often identical to repository IDs, except when the repository ID contains a specific RHEL minor version, in which case the label contains only the major version. In the case of base RHEL repositories, the IDs and labels are identical, so you can extract them from the YAML file above using the following Python code:
    import yaml

    with open("rhel-10-x86_64.yaml") as repoyaml:
        repodata = yaml.safe_load(repoyaml)
    print(",".join(repodata["repo_ids"]))

Copy the output to the clipboard and store it as an environment variable; for example, $labels:
    # labels=<paste the contents of the clipboard here>

In addition to the RHEL repository labels, you also need to add the custom repository to the comma-separated list of labels when creating the entitlement certificate. Run the following command to create the entitlement certificate allowing access to both the RHEL repositories and the custom repository:
    # rhua rhui-manager client cert --name rhel-10-x86_64 --dir /root --days 3650 --repo_label $labels,client-config-rhel-10-x86_64

If your company's policy allows certificates to be valid for only one year, two years, etc., change the value of the --days argument accordingly.

This command creates the files /root/rhel-10-x86_64.crt and /root/rhel-10-x86_64.key. You will need them in the next step.

Create a client configuration RPM:

    # rhua rhui-manager client rpm --dir /tmp --rpm_name rhui-client-rhel-10-x86_64 --rpm_version 1.0 --entitlement_cert /root/rhel-10-x86_64.crt --private_key /root/rhel-10-x86_64.key

Use an RPM name or version of your choice. With the values above, the command creates the RPM and prints its location, which is /tmp/rhui-client-rhel-10-x86_64-1.0/build/RPMS/noarch/rhui-client-rhel-10-x86_64-1.0-1.noarch.rpm.

Transfer this RPM from the RHUA to your system and sign it with the appropriate GPG key: the private key that corresponds to the public key that you used as the --gpg_public_keys parameter when you created the custom repository. You can then, for example, have the signed RPM preinstalled on RHEL 10 x86_64 images in your cloud environment. You also need to transfer the signed RPM back to the RHUA and upload it to the custom repository for RHEL 10 on x86_64:

    # rhua rhui-manager packages upload --repo_id client-config-rhel-10-x86_64 --packages /root/signed/rhui-client-rhel-10-x86_64-1.0-1.noarch.rpm
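As a side note on the label rule mentioned above (a repository ID containing a specific RHEL minor version maps to a label with only the major version), the rule can be sketched in Python. This is an illustration of the stated rule, not rhui-manager's actual logic, and the versioned EUS repository ID below is only a hypothetical example; the authoritative list comes from the rhui-manager client labels command:

```python
import re

# Sketch: derive a repository label from a repository ID by stripping a
# trailing minor-version suffix such as "-8.6". Illustrative only; the
# "rhui-manager client labels" command is the authoritative source.
def repo_label(repo_id: str) -> str:
    return re.sub(r"-\d+\.\d+$", "", repo_id)

# Hypothetical versioned (EUS) repository ID loses its minor version:
print(repo_label("rhel-8-for-x86_64-baseos-eus-rhui-rpms-8.6"))
# An unversioned repository ID is returned unchanged:
print(repo_label("rhel-10-for-x86_64-baseos-rhui-rpms"))
```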
Verification
Check the contents of the custom repository:
    # rhua rhui-manager packages list --repo_id client-config-rhel-10-x86_64

This command should list the RPM file that you have uploaded.
Once you have configured your CDS and HAProxy nodes, which is described later in this guide, you can also install the client configuration RPM on a test VM and verify access to all the relevant repositories by running the following command on the test VM:
    # dnf -v repolist

This command should list the configured RHEL 10 repositories and the custom repository for client configuration RPMs.
Updating the client configuration RPM
When it is necessary to rebuild the client configuration RPM, increase the version number.
If you used 1.0 in the previous invocation, use, for example, 2.0 now, and keep the rest of the parameters:

    # rhua rhui-manager client rpm --dir /tmp --rpm_name rhui-client-rhel-10-x86_64 --rpm_version 2.0 ...

Then, again, sign the newer RPM, transfer it to the RHUA, and upload it to the custom repository:

    # rhua rhui-manager packages upload --repo_id client-config-rhel-10-x86_64 --packages /root/signed/rhui-client-rhel-10-x86_64-2.0-1.noarch.rpm

Client VMs on which the previous version of the RPM is installed will now be able to update to the newer version. Note that it may be necessary to clean the dnf cache on the client VM to make dnf reload the repodata, which was updated when the newer RPM was uploaded.
Do not combine x86_64 and ARM64 repositories in one entitlement certificate. The client configuration RPM created by rhui-manager using such a certificate would provide access to both architectures on the target client VM, which might cause conflicts. You would have to modify the rh-cloud.repo file and rebuild the RPM outside of rhui-manager. Note that, as long as you used --dir /tmp when creating the client configuration RPM, the artifacts are now stored in /tmp/rhui-client-rhel-10-x86_64-1.0/build/. For detailed information about rebuilding RPMs, see Packaging and distributing software in the RHEL documentation.
It is currently impossible to make rhui-manager create the rh-cloud.repo file with certain repositories—for example, -debug and -source repositories—disabled by default. You would have to modify the rh-cloud.repo file and rebuild the RPM outside of rhui-manager. This issue is tracked in BZ#1772156.
Chapter 6. Managing Red Hat Certificates
6.1. Red Hat Update Appliance certificates
The RHUA in RHUI uses the following certificates and keys:
- Content certificate and private key
- Entitlement certificate and private key
- SSL certificate and private key
- Cloud provider’s CA certificate
The RHUA is configured with the content certificate and the entitlement certificate. The RHUA uses the content certificate to connect to the Red Hat CDN. It also uses the Red Hat CA certificate to verify the connection to the Red Hat CDN. As the RHUA is the only component that connects to the Red Hat CDN, it is the only RHUI component that has this certificate deployed. It should be noted that multiple RHUI installations can use the same content certificate. For instance, the Amazon EC2 cloud runs multiple RHUI installations (one per region), but each RHUI installation uses the same content certificate.
Clients use the entitlement certificate to permit access to packages in RHUI. To perform an environment health check, the RHUA attempts a dnf request against each CDS. To succeed, the dnf request must specify a valid entitlement certificate.
6.2. Content delivery server certificates
Each CDS node in RHUI uses the following certificates and keys:
- SSL certificate and private key
- Cloud provider’s CA certificate
The only certificate necessary for the CDS is an SSL certificate, which permits HTTPS communications between the client and the CDS. The SSL certificates are scoped to a specific hostname, so a unique SSL certificate is required for each CDS node. If SSL errors occur when connecting to a CDS, verify that the certificate’s common name is set to the fully qualified domain name (FQDN) of the CDS on which it is installed.
The CA certificate is used to verify that the entitlement certificate sent by the client as part of a dnf request was signed by the cloud provider. This prevents a rogue instance from generating its own entitlement certificate for unauthorized use within RHUI.
6.3. Client certificates
Each client in the RHUI uses an entitlement certificate and private key as well as the cloud provider’s CA certificate.
The entitlement certificate and its private key enable information encryption from the CDS back to the client. Each client uses the entitlement certificate when connecting to the CDS to prove it has permission to download its packages. All clients use a single entitlement certificate.
The cloud provider’s CA certificate is used to verify the CDS’s SSL certificate when connecting to it. This ensures that a rogue instance is not impersonating the CDS and introducing potentially malicious packages into the client.
On the client, the CA certificate verifies the SSL certificate, not the entitlement certificate; on the CDS node, the reverse is true. The SSL certificate and private key are used to encrypt data from the client to the CDS. The CA certificate present on the CDS verifies that the CDS node should trust the entitlement certificate sent by the client.
6.4. Listing the entitled products for a certificate
The Entitlements Manager screen is used to list entitled products in the current Red Hat content certificates and to upload new certificates.
Procedure
1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager

2. Press n to select manage Red Hat entitlement certificates.

3. From the Entitlements Manager screen, press l to list data about the current content certificate:

    rhui (entitlements) => l

    Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Debug RPMs) from RHUI
      Expiration: 02-27-2027   Certificate: c885597492374720bb5d398c3f65d1ed.pem
    Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI
      Expiration: 02-27-2027   Certificate: c885597492374720bb5d398c3f65d1ed.pem
    Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Source RPMs) from RHUI
      Expiration: 02-27-2027   Certificate: c885597492374720bb5d398c3f65d1ed.pem
    Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI
      Expiration: 02-27-2027   Certificate: c885597492374720bb5d398c3f65d1ed.pem
    Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (RPMs) from RHUI
      Expiration: 02-27-2027   Certificate: c885597492374720bb5d398c3f65d1ed.pem
    Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Source RPMs) from RHUI
      Expiration: 02-27-2027   Certificate: c885597492374720bb5d398c3f65d1ed.pem
Verification
You will see a list of the entitled products in the current Red Hat content certificates.
Chapter 7. Managing Content Delivery Servers
7.1. Managing content delivery servers
CDS nodes provide content to RHUI clients.
You can use the Content Delivery Server (CDS) Management screen to list, add, delete, and reinstall CDS nodes.
7.2. Registering a new CDS
The Red Hat Update Infrastructure Management Tool provides several options for configuring a CDS within the RHUI.
Prerequisites
- Make sure sshd is running on the CDS node and that port 443 is open.

Note: Answering yes (y) to the question "Update instance after registering? (y/n):" results in a dnf update being run on the instance after it is registered. This may require a reboot of the instance. Answering no (n) means the dnf update is not run.
Procedure
1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager

2. Press c to select manage content delivery servers (CDS).

3. From the Content Delivery Server (CDS) Management screen, press a to add a new CDS instance.

4. Enter the hostname of the CDS to add:

    Hostname of the CDS instance to register: cds1.example.com

5. Enter the user name that has SSH access to the CDS and sudo privileges:

    Username with SSH access to <cds1.example.com> and sudo privileges: <cloud-user>

6. Enter the absolute path to the SSH private key for logging in to the CDS and press Enter:

    Absolute path to an SSH private key to log into <cds1.example.com> as <cloud-user>: /home/<cloud-user>/.ssh/id_rsa_rhua

7. Update the instance with the latest versions of available packages:

    Update instance after registering? (y/n): y

8. Optional: If you wish to use custom SSL certificates, enter the absolute paths to the custom SSL key and SSL crt files.

    Note: If you do not provide an SSL certificate, it will be automatically generated.

    Optional absolute path to user supplied SSL key file: /home/<cloud-user>/custom_ssl.key
    Optional absolute path to user supplied SSL crt file: /home/<cloud-user>/custom_ssl.crt
    .........................................................................
    The following CDS has been successfully added:
      Hostname: <cds1.example.com>
      SSH Username: <cloud-user>
      SSH Private Key: /home/<cloud-user>/.ssh/id_rsa_rhua
    The CDS will now be configured:
    ....................................................................
    The CDS was successfully configured.

9. If adding the content delivery server fails, check that the firewall rules permit access between the RHUA and the CDS. Run the mount command to see if shared storage is mounted as read-write:

    [root@rhua ~]# mount | grep rhui
    nfs.example.com:/export on /var/lib/rhui/remote_share type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.8.41.163,local_lock=none,addr=10.8.41.163)

10. After successful configuration, repeat these steps for all remaining CDS nodes.
7.3. Listing all known CDS instances managed by RHUI 5
You can use the Content Delivery Server (CDS) Management screen to list all CDS nodes managed by RHUI 5.
Procedure
1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager

2. Press c to select manage content delivery servers (CDS).

3. From the Content Delivery Server (CDS) Management screen, press l to list all known CDS nodes that RHUI 5 manages:

    Hostname: <cds1.example.com>
    SSH Username: <cloud-user>
    SSH Private Key: /<cloud-user>/.ssh/id_rsa_rhua
7.4. Reinstalling and reapplying configuration to a CDS
You may encounter a situation where you need to reinstall and reapply the configuration for a CDS. The Red Hat Update Infrastructure Management Tool provides an easy way to accomplish this task.
Prerequisites
- At least one installed CDS
Note: Answering yes (y) to the question "Update instance(s) after reinstalling? (y/n):" results in a dnf update being run on the instance after it is reinstalled. This may require a reboot of the instance. Answering no (n) means the dnf update is not run.
Procedure
1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager

2. Press c to select manage content delivery servers (CDS).

3. From the Content Delivery Server (CDS) Management screen, press r to select reinstall and reapply configuration to an existing CDS instance. The Red Hat Update Infrastructure Management Tool automatically performs all reinstallation and reconfiguration tasks.

4. Select the CDS to reinstall:

    1 - Hostname: <cds1.example.com>
        SSH Username: <cloud-user>
        SSH Private Key: /<cloud-user>/.ssh/id_rsa_rhua

5. Enter a value or b to abort: 1

    Update instance(s) after reinstalling? (y/n): y

    Checking that the RHUA services are reachable from the instance... Done.
    Installing and configuring the CDS...
    PLAY [Registering a CDS instance] **********************************************
    ...
    TASK [Update CDS instance] *****************************************************
    ok: [cds1.example.com]
    PLAY RECAP *********************************************************************
    cloud-user@cds1.example.com : ok=24 changed=10 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
    Done.
Verification
Check that you successfully reinstalled and reconfigured the CDS by viewing the command output:
Ensuring that instance ports are reachable ...
Done.
7.5. Configuring a CDS to accept legacy CAs
A CDS node normally accepts only entitlement certificates signed by the Certificate Authority (CA) that is currently configured on RHUI 5. You may want to accept other previously created CAs so that clients can continue to work if you change your main CA or when the CA certificate expires. RHUI 5 supports the concept of legacy CAs, where you can install other CA certificates on CDS nodes and make them usable.
Prerequisites
- Make sure all your RHUI nodes are running version 5.0 or later. If you originally installed RHUI from an older version, reinstall your CDS nodes in rhui-manager first.
Procedure
1. Transfer your legacy CA certificate to your CDS nodes and save it in the /etc/pki/rhui/legacy-ca/ directory.

2. Get the subject hash value from the certificate and keep it in a shell variable:

    # hash=`openssl x509 -hash -noout -in /etc/pki/rhui/legacy-ca/YOUR_CERT.crt`

3. Create a symbolic link to the certificate file in the /etc/pki/tls/certs/ directory, using the hash and an unused number, starting from 0, as the symbolic link name:

    # ln -s /etc/pki/rhui/legacy-ca/YOUR_CERT.crt /etc/pki/tls/certs/$hash.0

    This action takes effect immediately.
If you decide to stop accepting the certificate, delete the symbolic link and the certificate file; restart the httpd service.
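The hash-and-symlink steps above can be exercised end to end without touching a live CDS. The following sketch generates a throwaway self-signed certificate in a temporary directory; the paths and the CN are illustrative stand-ins for the real `/etc/pki/rhui/legacy-ca/` and `/etc/pki/tls/certs/` directories:

```shell
# Demo of the legacy-CA linking steps in a temporary directory.
# On a real CDS, use /etc/pki/rhui/legacy-ca/ and /etc/pki/tls/certs/
# and run the commands as root.
workdir=$(mktemp -d)
mkdir -p "$workdir/legacy-ca" "$workdir/certs"

# Stand-in for the legacy CA certificate transferred to the CDS:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=legacy-ca.example.com" \
  -keyout "$workdir/legacy-ca/legacy.key" \
  -out "$workdir/legacy-ca/legacy.crt" 2>/dev/null

# Step 1: compute the subject hash of the certificate.
hash=$(openssl x509 -hash -noout -in "$workdir/legacy-ca/legacy.crt")

# Step 2: link the certificate as <hash>.0 so OpenSSL can find it
# during chain verification (use .1, .2, ... if .0 is already taken).
ln -s "$workdir/legacy-ca/legacy.crt" "$workdir/certs/$hash.0"

ls -l "$workdir/certs/"
```

The `<hash>.N` naming is what OpenSSL's CA-directory lookup expects, which is why the symlink name matters more than the certificate's original file name.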
7.6. Configuring a CDS to stop accepting legacy CAs
To stop your content delivery server (CDS) nodes from accepting legacy certificate authorities (CAs), remove the respective CA certificates.
Prerequisites
- Clients are no longer using the CA.
Procedure
- On the CDS node, navigate to the `/etc/pki/rhui/legacy/` directory:
  # cd /etc/pki/rhui/legacy/
- Optional: Back up the existing CA certificates.
- Delete the CA certificate that corresponds to the CA you want to stop accepting:
  # rm example-legacy.crt
Verification
- The CDS node stops accepting legacy CAs as soon as you delete the CA certificate.
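The removal can be sketched as follows. The demo operates on a temporary directory and a dummy certificate file so it is safe to run anywhere; on a real CDS, substitute the actual legacy-CA directory and run the commands as root:

```shell
# Sketch of retiring a legacy CA, shown against a temporary directory.
legacy_dir=$(mktemp -d)
backup_dir=$(mktemp -d)
touch "$legacy_dir/example-legacy.crt"   # dummy stand-in certificate

# Optional: keep a backup in case clients still depend on the CA.
cp -a "$legacy_dir/example-legacy.crt" "$backup_dir/"

# Delete the certificate; the CDS stops accepting the CA immediately.
rm "$legacy_dir/example-legacy.crt"

# On a real CDS, also remove the matching <hash>.N symlink from
# /etc/pki/tls/certs/ and restart httpd:
#   systemctl restart httpd
```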
7.7. Unregistering a CDS
You can unregister (delete) a CDS instance that you are not going to use.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press `c` to select manage content delivery servers (CDS).
From the Content Delivery Server (CDS) Management screen, press `d` to delete a CDS instance.
Enter the hostname of the CDS to delete:
Hostname of the CDS instance to unregister: cds1.example.com
Chapter 8. Managing HAProxy load-balancer instances
8.1. Managing an HAProxy load-balancer instance
A load-balancing solution must be in place to spread client HTTPS requests across all CDS servers. Red Hat Update Infrastructure 5 uses HAProxy by default, but you choose which load-balancing solution (for example, the one from the cloud provider) to use during the installation. If HAProxy is used, you must also decide how many nodes to bring in.
8.2. Registering a new HAProxy load-balancer
RHUI 5 uses DNS to reach the CDN. In most cases, your instance should be preconfigured to talk to the proper DNS servers hosted as part of the cloud's infrastructure. If you run your own DNS servers or update your client DNS configuration, you might see errors from dnf similar to `Could not contact any CDS load balancers`. In these cases, check that your DNS server is forwarding the request to the cloud's DNS servers, or that your DNS client is configured to fall back to the cloud's DNS server for name resolution.
Using more than one HAProxy node requires a round-robin DNS entry for the hostname used as the value of the --cds-lb-hostname parameter when rhui-installer is run (cds.example.com in this guide) that resolves to the IP addresses of all HAProxy nodes. How to Configure DNS Round Robin presents one way to configure a round-robin DNS.
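One way to satisfy this requirement is a set of round-robin A records in the zone for the load-balancer hostname; the hostnames and addresses below are illustrative:

```text
; Round-robin A records for the hostname passed to rhui-installer as
; --cds-lb-hostname (cds.example.com in this guide). Resolvers rotate
; through the answers, spreading clients across the HAProxy nodes.
cds  IN  A  192.0.2.10   ; haproxy1.example.com
cds  IN  A  192.0.2.11   ; haproxy2.example.com
cds  IN  A  192.0.2.12   ; haproxy3.example.com
```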
Answering yes (y) to the question `Update instance(s) after reinstalling? (y/n):` results in a dnf update being run on the instance after it is registered. This may require a reboot of the instance. Answering no (n) means the dnf update is not run.
Prerequisites
- Make sure `sshd` is running on the HAProxy load-balancer node and that port 443 is open.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press `l` to select manage HAProxy load-balancer instances.
From the Load-balancer (HAProxy) Management screen, press `a` to add a new load-balancer instance.
Enter the hostname of the new load-balancer:
Hostname of the HAProxy Load-balancer instance to register: <haproxy1.example.com>
Enter the username that has SSH access to the load-balancer and sudo privileges:
Username with SSH access to cds.example.com and sudo privileges: <cloud-user>
Enter the absolute path to the SSH private key for logging in to the load-balancer instance and press Enter:
Absolute path to an SSH private key to log into cds.example.com as <cloud-user>: /<cloud-user>/.ssh/id_rsa_rhua
Update the instance with the latest versions of available packages:
Update instance after registering? (y/n): y
Optional: Enter the absolute path to a user-supplied HAProxy configuration file and press Enter. If you do not specify the path to a custom configuration file, the default file, /usr/share/rhui-tools/templates/haproxy.cfg, is used instead.
Optional absolute path to user supplied HAProxy config file:
.........................................................................
The following load-balancer has been successfully added:
Hostname: <haproxy1.example.com> SSH Username: <cloud-user> SSH Private Key: /<cloud-user>/.ssh/id_rsa_rhua
The load-balancer will now be configured.
If the load-balancer fails to add, check that the firewall rules permit access between the RHUA and the load-balancer.
- After successful configuration, repeat these steps for any remaining load-balancer instances.
Verification
The following message displays:
The HAProxy Load-balancer was successfully configured.
8.3. Listing all known HAProxy load-balancer instances managed by RHUI 5
You can use the Load-balancer (HAProxy) Management screen to list all known HAProxy load-balancer instances that RHUI 5 manages.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press `l` to select manage HAProxy load-balancer instances.
From the Load-balancer (HAProxy) Management screen, press `l` to list the load-balancer instances that RHUI manages:
Hostname: <haproxy1.example.com> SSH Username: <cloud-user> SSH Private Key: /<cloud-user>/.ssh/id_rsa_rhua
8.4. Reinstalling and reapplying the configuration to an HAProxy load-balancer
You may encounter a situation where you need to reinstall and reapply the configuration for an HAProxy load-balancer. The Red Hat Update Infrastructure Management Tool provides an easy way to accomplish this task.
Prerequisites
- Make sure `sshd` is running on the HAProxy load-balancer node and that port 443 is open.
It is crucial that the files included in the restore retain their current attributes.
Answering yes (y) to the question `Update instance(s) after reinstalling? (y/n):` results in a dnf update being run on the instance after it is reinstalled. This may require a reboot of the instance. Answering no (n) means the dnf update is not run.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press `l` to select manage HAProxy load-balancer instances.
From the Load-balancer (HAProxy) Management screen, press `r` to reinstall and reapply the configuration to a load-balancer instance. The Red Hat Update Infrastructure Management Tool automatically performs all reinstallation and reconfiguration tasks.
Select the load-balancer to reinstall:
1 - Hostname: <haproxy1.example.com> SSH Username: <cloud-user> SSH Private Key: /<cloud-user>/.ssh/id_rsa_rhua
Enter a value or `b` to abort: 1
Update instance(s) after reinstalling? (y/n): y
Installing and configuring the HAProxy Load-balancer...
PLAY [Registering a load balancer instance] ************************************
...
TASK [Update load balancer instance] *******************************************
ok: [haproxy1.example.com]
PLAY RECAP *********************************************************************
cloud-user@haproxy1.example.com : ok=8 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Done.
Verification
Check that you successfully reinstalled and reconfigured the load-balancer by viewing the command output:
Ensuring that HAProxy is available...
Done.
8.5. Unregistering an HAProxy load-balancer
You can unregister (delete) an HAProxy load-balancer instance that you are not going to use.
Prerequisites
- Make sure `sshd` is running on the HAProxy load-balancer node and that port 443 is open.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press `l` to select manage HAProxy load-balancer instances.
From the Load-balancer (HAProxy) Management screen, press `d` to delete a load-balancer instance.
Enter the hostname of the load-balancer to delete:
Hostname of the load-balancer instance to unregister: <haproxy1.example.com>
Chapter 9. Synchronization Status and Scheduling
9.1. Checking synchronization status and scheduling
A repository is a storage location for software packages (RPMs). RHEL uses dnf commands to search repositories and to download, install, and update RPMs. The RPMs specify all the dependencies needed to run an application.
The length of the initial synchronization of Red Hat content can vary. If you choose to synchronize repositories as soon as possible, you can synchronize all repositories in Red Hat Update Infrastructure 5 by running rhui-manager repo sync_all in the CLI.
9.2. Displaying repository synchronization summary
You can use the Synchronization Status screen to display information about a particular repository.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press `s` to select synchronization status and scheduling.
From the Synchronization Status screen, press `dr`:
-= Repository Summary Synchronization Status =-
Last Refreshed: 02:01:22 (updated every 5 seconds, ctrl+c to exit)
                                                  Last Sync             Last Result
-------------------------------------------------
Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI (10)    Never    None
....
....
Red Hat Enterprise Linux 10 for x86_64 - AppStream from RHUI (Debug RPMs) (10.1)    2026-07-29 17:45:41    Running
Associating Content: 11001 (97%)
Downloading Artifacts: 7376
9.3. Displaying running synchronizations
You can use the Synchronization Status screen to check the status of running synchronization tasks.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press `s` to select synchronization status and scheduling.
From the Synchronization Status screen, press `rr`:
Last Refreshed: 02:06:46 (updated every 5 seconds, ctrl+c to exit)
                                                  Current Sync          Result
-------------------------------------------------
Red Hat Enterprise Linux 10 for x86_64 - AppStream from RHUI (Debug RPMs) (10.0)    2026-07-29 17:45:41    Running
Associating Content: 11001 (97%)
Downloading Artifacts: 7376
9.4. Viewing the details of the last repository synchronization
You can use the Synchronization Status screen to view the details of the last repository synchronization.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press `s` to select synchronization status and scheduling.
From the Synchronization Status screen, press `vr`.
Enter the number for the repository that you want to see details for:
Enter value (1-66) or 'b' to abort:
Verification
A similar message displays if the selected repository has not been synchronized:
Repo: Red Hat Enterprise Linux 8 for x86_64 - AppStream from RHUI (Debug RPMs) (8.2)
No syncs have been completed for this repository.
9.5. Synchronizing an individual repository immediately
The initial synchronization of content can take a while, typically 10 to 20 minutes. If you choose to synchronize repositories as soon as possible, you can synchronize all repositories in RHUI 5 by running `rhui-manager repo sync_all` in the CLI.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press `s` to select synchronization status and scheduling.
From the Synchronization Status screen, press `sr`:
Select one or more repositories to schedule to be synchronized before their scheduled time. The sync will happen as soon as possible, depending on other tasks that may be executing in the RHUI. Sync requests for repositories with tasks in a running or pending state will be ignored.
Last Result   Next Sync   Repository
-------------------------------------------------
Select the repository by entering the value beside the repository name. Enter one repository selection at a time before confirming your selection:
x 714: Error   2026-11-17 20:30:00   Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10.0)
Press `c` to confirm:
The following repositories will be scheduled for synchronization:
Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10.0)
Proceed? (y/n) y
Press `y` to proceed:
Scheduling sync for Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10.0)...
... successfully scheduled for the next available timeslot.
Note: This message displays if a task for the selected repository is running.
Ignoring sync request for Red Hat Enterprise Linux 10 for x86_64 - AppStream from RHUI (Debug RPMs) (10.0) as the repo is currently reserved by a running task.
9.6. Canceling active synchronization tasks
Most environments synchronize repositories on a scheduled basis. You may encounter a situation where you need to cancel active synchronization tasks.
Prerequisites
- There are existing repositories.
- There are active synchronization tasks.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press `s` to select synchronization status and scheduling.
From the Synchronization Status screen, press `ca` to select cancel active sync tasks.
Enter the value for the task or tasks that you want to cancel:
Select one or more repositories for which you want to cancel their active tasks.
- 1: Red Hat Enterprise Linux 10 for x86_64 - AppStream from RHUI (Debug RPMs) (10.0)
Enter value (1-1) to toggle selection, 'c' to confirm selections, or '?' for more commands:
Press `c` to confirm your selection.
Press `y` to cancel the synchronization task or tasks:
The active tasks will be canceled for the following repositories:
Red Hat Enterprise Linux 10 for x86_64 - AppStream from RHUI (Debug RPMs) (10.0)
Proceed? (y/n)
Verification
A similar message displays if you cancel an active synchronization task:
Canceling active task for repo Red Hat Enterprise Linux 10 for x86_64 - AppStream from RHUI (Debug RPMs) (10.0) ...
... done
9.7. Viewing and changing a repository's auto-publish status
You can use the Synchronization Status screen to view and change a repository's auto-publish status.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press `s` to select synchronization status and scheduling.
From the Synchronization Status screen, press `ap`:
rhui (sync) => ap
Select one or more repositories to toggle the auto-publish status. The operation will be executed as soon as possible, depending on other tasks that may be executing in the RHUI.
Status | Repository
--------------------------------------------------------------------------
Select one or more repositories:
Custom Repositories
Red Hat Repositories:
dnf
- 713: AUTO Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10)
- 714: AUTO Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10.1)
Enter a value (1-1631) to toggle the selection, `c` to confirm selections, or `?` for more commands:
The following repositories will have their auto-publish status changed:
Red Hat Repositories
dnf
Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10.0)
Press `c` to confirm your selection.
Press `y` to proceed.
Verification
A similar message displays when you make and confirm a selection:
Scheduling a task to turn off auto-publish status of repository Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10)
9.8. Viewing and advancing repository workflow
You can use the Synchronization Status screen to view and advance a repository's workflow.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press `s` to select synchronization status and scheduling.
From the Synchronization Status screen, press `wf`.
Enter a value (1-1631) to toggle the selection, `c` to confirm selections, or `?` for more commands:
The following repositories will be scheduled for workflow push:
Red Hat Repositories
dnf
Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10)
Press `y` to proceed.
Verification
A similar message displays if the scheduling was successful:
Scheduling a task for generating metadata version 0 for repo Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10) ...
... task scheduled.
9.9. Exporting a repository to the file system
Repositories are exported automatically after any synchronization that updates their contents.
You can use the Synchronization Status screen to forcibly export a repository to a file system at any time.
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
[root@rhua ~]# rhua rhui-manager
Press `s` to select synchronization status and scheduling.
From the Synchronization Status screen, press `ex`.
Enter a value to toggle the selection.
Press `c` to confirm the selection:
The following repositories will be exported:
Red Hat Repositories
dnf
Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Source RPMs) from RHUI (10)
Press `y` to proceed.
Verification
A similar message displays if the repository is exported to a file system:
[1/1] Exporting version 1 of the repo Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Source RPMs) from RHUI (10).
9.10. RHUI systemd timers
Systemd timers are systemd unit files ending in `.timer` that control `.service` units or events. Several repetitive tasks are automatically scheduled and run in Red Hat Update Infrastructure, on the RHUA node in particular.
| Timer file | Purpose | Frequency | Log file |
|---|---|---|---|
| | export synchronized content | every 5 minutes; randomized | |
| | delete orphaned content | weekly, at 4am every Wednesday | |
| | delete temporary files left behind by Pulp | weekly, at 3am every Tuesday | |
| | clean up temporary directories from uploads to custom repositories | every 5 minutes; randomized | |
| rhui-repo-sync-timer | synchronize repositories, if they are due | every 5 minutes; randomized | (none; see the systemd journal instead) |
| rhui-symlinks-cleanup-deepscan.timer | delete all broken symlinks to deleted artifacts | yearly, at 4am on the Tuesday in January that is between the 16th and the 22nd | /root/.rhui/rhui.log |
| rhui-synchronize-subscriptions.timer | check for changes to the entitlement certificate, and import a new one if needed | hourly; randomized | |
| rhui-update-mappings.timer | update the information about available minor versions | every six hours; randomized | |
Notes on the systemd timers:
- Timer files are stored in `/usr/lib/systemd/system` in the RHUA container.
- All the log file paths are in the RHUA container. Note that `/var/log/rhui` is also available on the RHUA host as `/var/lib/rhui/log`, and `/root/.rhui` is available as `/var/lib/rhui/root/.rhui`.
- Timers are randomized when the RHUA container starts or is restarted.
- To view all the RHUI timers, run the following command in the RHUA container:
  systemctl list-timers --all rhui\*
Chapter 10. Backing up RHUI
10.1. Backing up Red Hat Update Infrastructure
After you have installed and configured your RHUI servers, you might want to back them up. Backing up RHUI is useful if you encounter any problems with RHUI. In such cases, you can return to a previous working configuration by restoring RHUI.
To successfully back up RHUI, you must back up your Red Hat Update Appliance (RHUA).
10.2. Backing up Red Hat Update Appliance
To back up Red Hat Update Appliance (RHUA), you must back up all the associated files and storage.
To back up the RHUA, you must stop the associated services. Stopping these services does not prevent client instances from updating or installing packages, because clients connect only to the content delivery servers (CDSs). However, if you have an automated monitoring solution in place, your monitoring may report failures while the services are stopped.
Procedure
Stop RHUA services:
# systemctl stop rhui_rhua
Verify that the services have stopped:
# systemctl status rhui_rhua
Back up the following files:
# rsync -av --exclude .local --exclude remote_share /var/lib/rhui/ /BACKUP/DIRECTORY/
Important: Ensure that the files retain their current attributes when you back them up.
Back up any generated client entitlement certificates and client configuration RPMs.
Optional: If you want to back up the remote share from the RHUA without using a different backup solution for the file server, use the following command:
# rsync -av /var/lib/rhui/remote_share/ /ANOTHER/BACKUP/DIRECTORY/
Restart RHUI services:
# systemctl start rhui_rhua
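The effect of the rsync excludes can be verified against a scratch copy of the data directory. The sketch below uses temporary directories, so it does not require a running RHUA; on a real system you would back up `/var/lib/rhui` after stopping the `rhui_rhua` service:

```shell
# Demonstrates the backup rsync from the procedure above against a
# scratch directory tree instead of the real /var/lib/rhui.
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/.local" "$src/remote_share"
echo "config" > "$src/rhui-tools.conf"
echo "cache"  > "$src/.local/cache"
echo "shared" > "$src/remote_share/repo"

# -a preserves permissions, ownership, and timestamps, which a later
# restore requires; .local and remote_share are deliberately excluded.
rsync -a --exclude .local --exclude remote_share "$src/" "$dst/"

ls -A "$dst"
```

Excluding `remote_share` keeps the (usually large) shared storage out of the RHUA backup; back it up separately, as the optional step above describes.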
10.3. Backing up content delivery servers
To back up CDSs, you must back up all the associated files and storage.
To avoid complete loss of service, back up a single CDS node at a time. Clients will automatically switch to other running CDS nodes.
Procedure
Stop the `nginx` service:
# systemctl stop nginx
Verify that the `nginx` service has stopped:
# systemctl status nginx
Back up the following files:
# cp -a <source_files_path> <destination_files_path>
Important: Ensure that the files retain their current attributes when you back them up.
List of files:
- /etc/nginx/*
- /var/log/nginx/*
- /etc/pki/rhui/*
Restart RHUI services:
# rhui-services-restart
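The attribute-preservation requirement can be checked quickly; the sketch below shows that `cp -a` keeps the file mode (the paths and file name are temporary stand-ins for the CDS files):

```shell
# Shows that cp -a preserves file attributes, as the backup requires.
src=$(mktemp -d)
dst=$(mktemp -d)
echo "certificate data" > "$src/entitlement.crt"
chmod 600 "$src/entitlement.crt"       # restrictive mode, as on a CDS

cp -a "$src/entitlement.crt" "$dst/"   # archive copy keeps mode/times

stat -c '%a' "$dst/entitlement.crt"    # prints 600
```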
Chapter 11. Configuration Files, Exit Codes, Log Files
11.1. Configuration Files
The following configuration files, RHUI manager exit codes, and log files are used in RHUI 5.
| Component | File or Directory | Usage |
|---|---|---|
| Red Hat Update Appliance | | Pulp config files |
| | | rhui-manager config files |
| | | |
| | | Certificates for Red Hat Update Infrastructure |
| | | Configuration for the subscription synchronization script |
| Content Delivery Server | | Certificates for CDS |
| Content Delivery Server | | SSL configuration file |
| HAProxy | | HAProxy configuration file |
11.2. RHUI Manager Exit Codes
RHUI Manager uses the following codes to indicate the result of running the `rhui-manager status` command and other `rhui-manager` CLI commands.
| Status Code | Description |
|---|---|
| 0 | Success |
| 1 | General error or a repository synchronization error |
| 2 | SSL certificate error on a CDS |
| 32 | Entitlement CA or SSL certificate expiration warning |
| 64 | Entitlement CA or SSL certificate expiration error |
| 128 | One or more RHUI services is not running on the RHUA, CDS, or HAProxy nodes |
| 238 | No packages to upload to the specified custom repository were found. |
| 239 | A repository could not be deleted because it does not exist. |
| 240 | There was an issue with a required resource. For example, it was impossible to build a client configuration RPM because no valid repository was found. |
| 241 | A synchronization task could not be scheduled because an unknown repository was specified. To troubleshoot: check the spelling, add the repository first, or check the logs for Pulp issues. |
| 242 | A custom repository could not be created due to a Pulp issue. Check the message and logs for details. |
| 243 | Red Hat repositories could not be added because some of them already exist in RHUI and some of them were not available in the entitlement. |
| 244 | A custom repository could not be created because it already exists in RHUI. |
| 245 | A Red Hat repository could not be added because it already exists in RHUI. |
| 246 | A Red Hat repository could not be added because it is not available in the entitlement. Check the spelling, or remove the repository mapping cache using the appropriate command. |
| 247 | A Red Hat repository could not be added due to a Pulp issue. Check the message and logs for details. |
| 248 | Migration from RHUI 3 to RHUI 4 was stopped because one or more Red Hat repositories are already present in RHUI 4. You must remove the repositories or use the appropriate option. |
| 249 | The RHUI configuration, |
| 250 | The entitlement certificate is not writable. |
| 251 | The entitlement certificate has expired. |
| 252 | The entitlement certificate is invalid because it does not contain RHUI repositories. |
| 253 | The entitlement certificate file is not a valid certificate. |
| 254 | Command-line Error: The RHUI CLI could not run due to a network issue. |
| 255 | Argument Error: A required argument was not supplied. |
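A monitoring script can branch on these codes. In the sketch below, the `rhui_manager_status` function is a hypothetical stand-in that fakes the exit code, so the example is self-contained; in a real script you would call `rhui-manager status` instead:

```shell
# Branching on rhui-manager exit codes. The function below is a fake
# stand-in for `rhui-manager status`; replace it with the real call.
rhui_manager_status() {
  return 128   # pretend a RHUI service is down
}

rc=0
rhui_manager_status || rc=$?

case $rc in
  0)   msg="all services healthy" ;;
  32)  msg="warning: a certificate expires soon" ;;
  64)  msg="error: a certificate has expired" ;;
  128) msg="error: a RHUI service is not running on a node" ;;
  *)   msg="unexpected exit code: $rc" ;;
esac
echo "$msg"
```

Capturing the exit code with `|| rc=$?` keeps the script safe to run under `set -e`.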
11.3. Log Files
The paths in this table are valid in the RHUI containers.
| Component | File or Directory | Usage |
|---|---|---|
| Red Hat Update Appliance | | Red Hat Update Infrastructure Management Tool logs |
| | | Pulp logs; for example, repository synchronization |
| | | nginx logs |
| | | CDS and HAProxy management log, service status log |
| | | Subscription synchronization log |
| | | Repository export log |
| | | Temporary directory cleanup log |
| | | Repository version mapping log |
| Content Delivery Server | | nginx logs |
| | | clients' requests for content |
| | | CDS authorizer plug-in logs; by default, requests without an entitlement certificate |
| | | CDS content manager plug-in logs; for example, on-demand package downloads |
| | | CDS mirror plug-in logs; by default, only logs from starting and stopping the plug-in |
| Client | | dnf command logs |
| Client | | dnf command logs |
| | | Client syslog |
See also older logs saved with a number or a time stamp as an extension, possibly compressed by gzip.
Chapter 12. Certified CCSP Certification Workflow
12.1. Certified Cloud and Service Provider certification workflow
The Certified Cloud Provider Agreement requires that Red Hat certifies the images (templates) from which tenant instances are created to ensure a fully supported configuration for end customers.
There are two methods for certifying the images for Red Hat Enterprise Linux. The preferred method is to use the Certified Cloud and Service Provider (CCSP) image certification workflow.
After Red Hat reviews the certifications, a pass or fail result is assigned, and the certification is posted to the public Red Hat certification website at the Red Hat Ecosystem Catalog.
Chapter 13. Changing Proxy Settings
13.1. Changing proxy settings
RHUI can sync Red Hat content through a proxy server. If no proxy server is specified when you install RHUI, none is used. Otherwise, the proxy server is used with all RHUI repositories that you add. This chapter describes how to change the proxy server configuration.
13.2. Configuring a new proxy server or unconfiguring an existing proxy server
Follow these steps if you wish to:
- start using a proxy server in a RHUI environment that was installed with no proxy server configuration
- edit the current proxy server configuration, for example, if the server hostname has changed
- stop using the proxy server that a RHUI environment was installed with
Procedure
To configure (or unconfigure) proxy server settings, create (or edit) the local overrides file, `/etc/rhui/rhui-tools.conf`, so that it contains:
[proxy]
proxy_protocol: <PROTOCOL>
proxy_host: <HOSTNAME>
proxy_port: <PORT>
proxy_user: <USERNAME>
proxy_pass: <PASSWORD>
The parameters are as follows:
- `PROTOCOL` is either `http` or `https` if configuring the proxy server; if unconfiguring it, leave the value empty in the local file.
- `HOSTNAME` is the new proxy server hostname; if clearing the configuration, leave the value empty in the local file.
- `PORT` is the TCP port where the proxy server is listening, typically `3128`; if clearing the configuration, leave the value empty in the local file.
- `USERNAME` is an optional parameter. Use it only if the proxy server requires credentials. If it does not, or you are clearing the configuration, leave the value empty or omit the `proxy_user:` option entirely.
- `PASSWORD` is an optional parameter. Use it only if the proxy server requires credentials. If it does not, or you are clearing the configuration, leave the value empty or omit the `proxy_pass:` option entirely.
All commands must be run in the RHUA container.
Important: This new configuration affects only Red Hat repositories added after the configuration is updated. To apply it to existing repositories, you must remove, re-add, and re-synchronize the repositories.
This causes an outage that lasts from the moment you remove the repositories until you re-sync them. However, already synchronized packages do not have to be re-downloaded from the Red Hat CDN; RHUI mainly has to parse all the repodata files and determine which package belongs where. This can take up to several hours.
Although there are technical means outside of `rhui-manager` whereby the proxy fields can be modified for the existing repositories (or rather, for the so-called remotes), using such means is unsupported.
- Make sure you have a list (or lists) of your repositories so that you can add them again. If you do not have such a list, you can use `rhui-manager` to generate a file with all your currently added Red Hat repositories.
- To generate a list of Red Hat repositories, first create a raw list with one ID per line:
rhui-manager --noninteractive repo list --redhat_only --ids_only > /root/rawlist
Then create a YAML file with repositories. Start by creating a stub:
echo -e "name: all Red Hat repositories\nrepo_ids:" > /root/repo_list.yml
Next, append the repositories from the raw list as YAML list items:
sed "s/^/ - /" /root/rawlist >> /root/repo_list.yml
Delete all Red Hat repositories from your RHUI:
Use the text user interface, or delete them one by one on the command line. For the latter, you can use the raw list created earlier:
while read repo; do rhui-manager --noninteractive repo delete --repo_id $repo; done < /root/rawlist
Note: Repositories are deleted in asynchronous background tasks, queued and executed by available Pulp workers. It may take tens of minutes, or even hours, to delete all the repositories. Be patient.
When the repositories have been deleted, re-add them. They will be added with the new proxy settings (or with no proxy URL) this time. It is also necessary to re-synchronize the repositories. You can add and re-synchronize them in one step on the command line:
rhui-manager --noninteractive repo add_by_file --file /root/repo_list.yml --sync_now
Alternatively, use your own methods to synchronize the repositories, for example, in a specific order. Lastly, you can also simply wait for the synchronization to start automatically: in six hours, or in any other time defined as `repo_sync_frequency` in `/etc/rhui/rhui-tools.conf`.
Important: In any case, the repositories will not be available in the meantime.
Examples:
Start using a proxy server that requires no credentials. With the local file:
[proxy]
proxy_protocol: http
proxy_host: squid.example.com
proxy_port: 3128
Change the proxy server hostname; everything else remains the same. With the local file:
[proxy]
proxy_host: newsquid.example.com
Stop using the proxy server. With the local file:
[proxy]
proxy_protocol:
proxy_host:
proxy_port:
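The rawlist-to-YAML conversion used in the procedure above can be tried locally without a RHUA; the repository IDs below are illustrative samples:

```shell
# Rebuilds repo_list.yml from a raw ID list, as in the procedure,
# but in a temporary directory and with sample repository IDs.
cd "$(mktemp -d)"
printf '%s\n' \
  rhel-10-for-x86_64-baseos-rhui-rpms-10 \
  rhel-10-for-x86_64-appstream-rhui-rpms-10 > rawlist

# Stub with the list name and the key the IDs are appended under:
printf 'name: all Red Hat repositories\nrepo_ids:\n' > repo_list.yml

# Turn each raw ID into a YAML list item:
sed "s/^/ - /" rawlist >> repo_list.yml

cat repo_list.yml
```

The result is the YAML shape that `rhui-manager repo add_by_file` expects: a `name` key and a `repo_ids` sequence with one ID per item.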
Verification
The rhui-manager tool does not display information about the proxy server that is used with a repository. However, you can use the pulpcore-manager tool as outlined below:
sudo -u pulp env PULP_SETTINGS=/etc/pulp/settings.py /usr/bin/pulpcore-manager shell << EOM
from pulpcore.app.models import Remote
rem = Remote.objects.get(name="rhel-10-for-x86_64-baseos-rhui-rpms-8")
print(rem.proxy_url)
EOM
The output should look like this for a configured proxy server:
http://squid.example.com:3128
or None if no proxy server is configured with the specified repository.