Installation and Management Guide


Red Hat Update Infrastructure 5

List of requirements, setting up nodes, configuring storage, and installing RHUI 5

Red Hat Customer Content Services

Abstract

This document lists the installation requirements and provides detailed instructions to help cloud providers install RHUI 5.

Chapter 1. About RHUI 5

1.1. About Red Hat Update Infrastructure 5

RHUI 5 is a highly scalable, highly redundant framework that enables you to manage repositories and content. It also enables cloud providers to deliver content and updates to RHUI instances. Based on the upstream Pulp project, RHUI allows cloud providers to locally mirror Red Hat-hosted repository content, create custom repositories with their own content, and make those repositories available to a large group of end users through a load-balanced content delivery system.

As a system administrator, you can prepare your infrastructure for participation in the Red Hat Certified Cloud and Service Provider program by installing and configuring the Red Hat Update Appliance (RHUA), content delivery servers (CDS), repositories, shared storage, and load balancing.

Configuring RHUI comprises the following tasks:

  • Adding, enabling, and synchronizing a Red Hat repository
  • Creating client entitlement certificates and client configuration RPMs
  • Creating client profiles for the RHUI servers

Experienced RHEL system administrators are the target audience. System administrators with limited RHEL skills should consider engaging Red Hat Consulting to provide a Red Hat Certified Cloud Provider Architecture Service.

Learn about configuring, managing, and updating RHUI with the following topics:

  • the RHUI components
  • content provider types
  • the command line interface (CLI) used to manage the components
  • utility commands
  • certificate management
  • content management

CDS nodes provide content to RHUI clients.

You can use the Content Delivery Server (CDS) Management screen to list, add, delete, and reinstall CDS nodes.

While the RHUI 5 installer uses essentially the same Ansible playbooks as RHUI 4, it differs from the previous installer in several ways:

  • It is launched as a container image from any RHEL host capable of running containers.
  • It requires the --target-host option to specify the host where the RHUA image is deployed. In contrast, the RHUI 4 installer installs the RHUA on the machine running the installer itself.
  • It requires additional command-line arguments to pass user-supplied certificate files to the installer. For example, you can supply volume mounts using the -v podman option.
  • It has improved parameter default assignment logic.
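For example, a user-supplied CA certificate and key could be passed to the installer by adding volume mounts to the podman invocation. The host paths below are placeholders; the in-container paths /rhui-ca.crt and /rhui-ca.key are among the hardcoded paths the installer references:

```text
-v /path/to/your/ca.crt:/rhui-ca.crt:Z \
-v /path/to/your/ca.key:/rhui-ca.key:Z
```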

1.3. RHUI 5 components

Understanding how each RHUI component interacts with other components will make your job as a system administrator a little easier.

1.3.1. Red Hat Update Appliance

Each RHUI installation has exactly one RHUA. In many cloud environments, there is one RHUI installation per region or data center. For example, Amazon’s EC2 cloud comprises several regions, and each region has a separate RHUI setup with its own RHUA node.

The RHUA allows you to perform the following tasks:

  • Download new packages from the Red Hat content delivery network (CDN).
  • Copy new packages to the shared network storage.
  • Verify the RHUI installation’s health and write the results to a file located on the RHUA. Monitoring solutions use this file to determine the RHUI installation’s health.
  • Provide a human-readable view of the RHUI installation’s health through a CLI tool.

RHUI uses two main configuration files: /etc/rhui/rhui-tools.conf and /etc/rhui/rhui-subscription-sync.conf.

The /etc/rhui/rhui-tools.conf configuration file contains general options used by the RHUA, such as the default file locations for certificates, and default configuration parameters for the Red Hat CDN synchronization. This file normally does not require editing.

The /etc/rhui/rhui-subscription-sync.conf configuration file contains the credentials for the Pulp database. These credentials must be used when logging in to the rhui-manager interface.

The RHUA employs several services to synchronize, organize, and distribute content for easy delivery.

RHUA services

Pulp
The service that manages the repositories.
PostgreSQL
The database that Pulp uses to keep track of currently synchronized repositories, packages, and other crucial metadata.

1.3.2. Content delivery server

The CDS nodes provide the repositories that clients connect to for the updated content. Because RHUI provides a load-balancer with failover capabilities, we recommend that you use multiple CDS nodes.

The CDS nodes host content for end-user RHEL systems. There is no required number of CDS nodes; content is delivered to end-user systems in a round-robin, load-balanced fashion (A, B, C, A, B, C). The CDS serves content to end-user systems over HTTPS via dnf repositories.
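The round-robin rotation above can be sketched as follows. This is a toy illustration with hypothetical hostnames; in a real deployment, the distribution is performed by the load-balancer, not by a script:

```shell
# Rotate through the CDS pool: each call returns the next node, wrapping
# around at the end (A, B, C, A, B, C).
CDS_POOL="cds01.example.com cds02.example.com cds03.example.com"
POOL_SIZE=$(echo "$CDS_POOL" | wc -w)
turn=0

next_cds() {
    # Pick the (turn mod pool-size)-th node, then advance the counter.
    index=$(( turn % POOL_SIZE ))
    turn=$(( turn + 1 ))
    set -- $CDS_POOL
    shift "$index"
    echo "$1"
}

next_cds   # cds01.example.com
next_cds   # cds02.example.com
next_cds   # cds03.example.com
next_cds   # cds01.example.com -- the cycle wraps around
```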

During configuration, you specify the CDS directory where packages are synchronized. Similar to the RHUA, the only requirement is that you mount the directory on the CDS. It is up to the cloud provider to determine the best course of action when allocating the necessary devices. The Red Hat Update Infrastructure Management Tool configuration RPM links the package directory with the NGINX configuration to serve it.

Currently, RHUI supports the following shared storage solution:

NFS

If NFS is used, rhui-installer can configure an NFS share on the RHUA to store the content as well as a directory on the CDS nodes to mount the NFS share. The following rhui-installer options control these settings:

  • --remote-fs-mountpoint is the file system location where the remote file system share should be mounted (default: /var/lib/rhui/remote_share)
  • --remote-fs-server is the remote mount point for a shared file system to use, for example, nfs.example.com:/path/to/share (no default value)

The expected usage is that you use one shared network file system on the RHUA and all CDS nodes, for example, NFS. It is possible the cloud provider will use some form of shared storage that the RHUA writes packages to and each CDS reads from.

Note

The storage solution must provide an NFS endpoint for mounting on the RHUA and CDS nodes. Do not set up the shared file storage on any of the RHUI nodes. You must use an independent storage server.
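For example, with the default mount point and a --remote-fs-server value like the one shown above, the share exported by an independent storage server could be mounted on the RHUA and on each CDS node with an /etc/fstab entry like the following. The server name and export path are placeholders:

```text
# /etc/fstab on the RHUA and each CDS node (illustrative values)
nfs.example.com:/path/to/share  /var/lib/rhui/remote_share  nfs  defaults  0 0
```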

The only nonstandard logic that takes place on each CDS is the entitlement certificate checking. This checking ensures that the client making requests on the dnf repositories is authorized by the cloud provider to access those repositories. The check ensures the following conditions:

  • The entitlement certificate was signed by the cloud provider’s Certificate Authority (CA) Certificate. The CA Certificate is installed on the CDS as part of its configuration to facilitate this verification.
  • The requested URI matches an entitlement found in the client’s entitlement certificate.
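The two checks can be sketched with openssl and a path-prefix match. This is a self-contained illustration using a throwaway CA generated on the fly; all names and paths are hypothetical, and the real CDS performs these checks inside its NGINX/gunicorn stack:

```shell
# Work in a scratch directory.
tmp=$(mktemp -d)
cd "$tmp" || exit 1

# Create a throwaway CA, standing in for the cloud provider's CA certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
    -subj "/CN=Example Provider CA" -days 1

# Issue a client entitlement certificate signed by that CA.
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr \
    -subj "/CN=rhui-client"
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -out client.crt -days 1

# Check 1: the entitlement certificate verifies against the provider CA.
openssl verify -CAfile ca.crt client.crt   # prints: client.crt: OK

# Check 2 (simplified): the requested URI must match an entitled path prefix.
check_uri() {
    case "$2" in
        "$1"*) echo authorized ;;
        *)     echo denied ;;
    esac
}
check_uri /pulp/content/myrepo/ /pulp/content/myrepo/Packages/b/bash-5.1.rpm  # authorized
check_uri /pulp/content/myrepo/ /pulp/content/otherrepo/secret.rpm            # denied
```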

If the CA verification fails, the client sees an SSL error. See the CDS node’s NGINX logs under /var/log/nginx/ for more information.

[root@cds01 ~]# ls -1 /var/log/nginx/
access.log
error.log
gunicorn-auth.log
gunicorn-content_manager.log
gunicorn-mirror.log
ssl-access.log
Note

The NGINX configuration is handled through the /etc/nginx/conf.d/ssl.conf file, which is created during the CDS installation.
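The exact contents of ssl.conf are generated by the installation, but the general shape of such an NGINX TLS server block looks like the following sketch. All paths and values here are illustrative placeholders, not the generated configuration:

```nginx
# Illustrative sketch only; the real /etc/nginx/conf.d/ssl.conf is generated
# during the CDS installation and its paths and directives may differ.
server {
    listen 443 ssl;
    server_name cds01.example.com;

    # Server certificate presented to clients.
    ssl_certificate     /etc/pki/rhui/certs/cds.crt;
    ssl_certificate_key /etc/pki/rhui/private/cds.key;

    # CA used to verify client entitlement certificates.
    ssl_client_certificate /etc/pki/rhui/certs/client-ssl-ca.crt;
    ssl_verify_client on;
}
```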

1.3.3. HAProxy load-balancer

A load-balancing solution must be in place to spread client HTTPS requests across all CDS servers. RHUI uses HAProxy by default, but it is up to you to choose what load-balancing solution (for example, the one from the cloud provider) to use during the installation. If HAProxy is used, you must also decide how many nodes to bring in.

Clients are not configured to go directly to a CDS; their repository files are configured to point to HAProxy, the RHUI load-balancer. HAProxy is a TCP/HTTP reverse proxy particularly suited for high-availability environments.

Note

If you use an existing load-balancer, ensure port 443 is configured in the load-balancer and that all CDSs in the cluster are in the load-balancer’s pool.

The exact configuration depends on the particular load-balancer software you use. See the following configuration, taken from a typical HAProxy setup, to understand how you should configure your load-balancer:

[root@rhui5proxy ~]# cat /etc/haproxy/haproxy.cfg
global
  chroot  /var/lib/haproxy
  daemon
  group  haproxy
  log  <HAProxy IP Address> local0
  maxconn  4000
  pidfile  /run/haproxy.pid
  stats  socket /var/lib/haproxy/stats
  user  haproxy

defaults
  log  global
  maxconn  8000
  option  redispatch
  retries  3
  stats  enable
  timeout  http-request 10s
  timeout  queue 1m
  timeout  connect 10s
  timeout  client 1m
  timeout  server 1m
  timeout  check 10s

listen https00
  bind <HAProxy IP Address>:443
  balance roundrobin
  option tcplog
  option tcp-check
    server cds01.example.com cds01.example.com:443 check
    server cds02.example.com cds02.example.com:443 check

When clients fail to connect, review the NGINX logs on the CDS under /var/log/nginx/ to verify whether the requests reached the CDS at all. If requests do not reach the CDS, issues such as DNS or general network connectivity may be at fault.

1.3.4. Repositories and content

A repository is a storage location for software packages (RPMs). RHEL uses dnf commands to search a repository and to download, install, and update the RPMs. Each RPM contains the files and metadata, including dependency information, needed to install and run an application.

Content, as it relates to RHUI, is the software (such as RPMs) that you download from the Red Hat CDN for use on the RHUA and the CDS nodes. The RPMs provide the files necessary to run specific applications and tools. Clients are granted access by a set of SSL content certificates and keys provided by an RPM package, which also provides a set of generated dnf repository files.

1.3.5. Content provider types

There are three types of cloud computing environments:

  • public cloud
  • private cloud
  • hybrid cloud

This guide focuses on public and private clouds. We assume the audience understands the implications of using public, private, and hybrid clouds.

1.4. Component communications

All RHUI components use the HTTPS communication protocol over port 443.

Table 1.1. Red Hat Update Infrastructure communication protocols

Source                   | Destination                      | Protocol | Purpose
Red Hat Update Appliance | Red Hat Content Delivery Network | HTTPS    | Downloads packages from Red Hat
Load-Balancer            | Content Delivery Server          | HTTPS    | Forwards the clients' requests for repository metadata and packages
Client                   | Load-Balancer                    | HTTPS    | Used by dnf on the clients to download content
Content Delivery Server  | Red Hat Update Appliance         | HTTPS    | Might request information from the Pulp API about content

RHUI nodes require the following network access to communicate with each other.

Note

Make sure that the network port is open and that network access is restricted to only those nodes that you plan to use.

Table 1.2. Red Hat Update Infrastructure network access

Connection                      | Port     | Usage
RHUA to CDS                     | 22/TCP   | SSH configuration and access
RHUA to HAProxy servers         | 22/TCP   | SSH configuration and access
Clients to HAProxy              | 443/TCP  | Access to content
HAProxy to CDS                  | 443/TCP  | Load balancing
NFS ports open for CDS and RHUA | 2049/TCP | File system
CDS to RHUA                     | 443/TCP  | Retrieval of content that has not been symlinked
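On RHEL systems that use firewalld, the ports from the table above can be opened with commands like the following. This is an illustrative sketch only; open each port only on the nodes that need it per the table, and restrict the source addresses with zones or rich rules as appropriate:

```shell
# Example for a CDS node: SSH from the RHUA, HTTPS from HAProxy,
# and NFS traffic to the storage server.
sudo firewall-cmd --permanent --add-port=22/tcp
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --permanent --add-port=2049/tcp
sudo firewall-cmd --reload
```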

2.1. RHUI installation options

The following table presents the various RHUI 5 components.

Table 2.1. Red Hat Update Infrastructure components and functions

Component                | Acronym | Function                                                                                        | Alternative
Red Hat Update Appliance | RHUA    | Downloads content from the Red Hat content delivery network and stores it on the shared storage | None
Content Delivery Server  | CDS     | Provides the repositories that clients connect to for the updated packages                      | None
HAProxy                  | None    | Provides load balancing across CDS nodes                                                        | Existing load-balancing solution
Shared storage           | None    | Provides shared storage                                                                         | Existing storage solution

The following table describes how to perform installation tasks.

Table 2.2. Red Hat Update Infrastructure installation tasks

Installation task                  | Performed on
Install RHEL 9 or later            | RHUA, CDS, and HAProxy
Register the system                | RHUA, CDS, and HAProxy
Install rhui-installer             | RHUA
Install podman on the control node | Control node
Run rhui-installer                 | RHUA

Option 1: Full installation

  • A RHUA with shared storage
  • Two or more CDS nodes with this shared storage
  • One or more HAProxy load-balancers

Option 2: Installation with an existing storage solution

  • A RHUA with an existing storage solution
  • Two or more CDS nodes with this existing storage solution
  • One or more HAProxy load-balancers

Option 3: Installation with an existing load-balancer solution

  • A RHUA with shared storage
  • Two or more CDS nodes with this shared storage
  • An existing load-balancer

Option 4: Installation with existing storage and load-balancer solutions

  • A RHUA with an existing storage solution
  • Two or more CDS nodes with this existing shared storage
  • An existing load-balancer
Important

Red Hat Update Infrastructure must be used with at least two CDS nodes and a load-balancer node. Installation without any load-balancer node and with a single CDS node is unsupported.

The following figure depicts a high-level view of how the various RHUI 5 components interact.

Figure 2.1. Red Hat Update Infrastructure 5 overview


Install the RHUA and CDS nodes on separate x86_64 servers (bare metal or virtual machines). Ensure all the servers and networks that connect to RHUI can access the Red Hat subscription management service.

2.2. Red Hat Update Infrastructure install types

Standard install

When you invoke the RHUI 5 installer in standard mode, it deploys the RHUA container image onto the --target-host. In this mode of operation, the --remote-fs-server option is also required.

Maintenance or upgrade of an existing RHUI 5 installation

Once you have deployed the RHUA container image on the target host, you can invoke the installer with the --rerun switch to change some of its settings, including the image version. In this case, --remote-fs-server is not required because it is inferred from the existing configuration.

Cloning an existing RHUI 5 installation

It is now possible to clone an existing RHUI 5 installation, with some limitations. The main limitation is that the Pulp content must be cloned beforehand, independently of the installation process. Once that is done, invoke the installer with the --clone flag, which triggers the cloning process. The --clone flag requires both --source-host and --migration-fs-server to be provided, in addition to the standard --target-host argument, which is required by default.
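Assembled from the flags described above, a cloning invocation might look like the following sketch. The host names are placeholders, and the SSH key volume mount follows the standard installer invocation shown later in this guide:

```shell
$ podman run -it --rm -v ~/.ssh/id_rsa:/ssh-keyfile:Z \
    registry.redhat.io/rhui5/installer-rhel9 rhui-installer \
    --target-user <target-user> --target-host <new-rhua-hostname> \
    --clone --source-host <old-rhua-hostname> \
    --migration-fs-server <nfs-host:/path/to/cloned/share>
```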

Per-artifact sync policies are no longer supported.

For example, the following configuration parameters are no longer valid:

  • rpm_sync_policy
  • debug_sync_policy
  • source_sync_policy

The default_sync_policy parameter is still valid. To support different sync policies depending on the artifact type, and to provide additional flexibility in selecting the sync policy based on the content in question, two new configuration parameters are available:

  • immediate_repoid_regex
  • on_demand_repoid_regex

Whenever a sync task is submitted, the repoid of the repository is checked against the regex in immediate_repoid_regex first. If it matches, a sync with the 'immediate' policy is requested. If not, the repoid is tested against on_demand_repoid_regex; a match produces an on_demand sync task. If there is no match, the sync is performed with the policy specified by the default_sync_policy configuration parameter.
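The selection order above can be sketched as a small shell function. The regexes and repo IDs here are made-up examples, not shipped defaults, and the real matching happens inside RHUI, not in a script:

```shell
# Example values; in RHUI these come from the configuration file.
immediate_repoid_regex='rhel-9'
on_demand_repoid_regex='rhel-8'
default_sync_policy='on_demand'

pick_sync_policy() {
    repoid=$1
    if echo "$repoid" | grep -qE "$immediate_repoid_regex"; then
        echo immediate              # immediate_repoid_regex matched first
    elif echo "$repoid" | grep -qE "$on_demand_repoid_regex"; then
        echo on_demand              # fell through to on_demand_repoid_regex
    else
        echo "$default_sync_policy" # no match: use default_sync_policy
    fi
}

pick_sync_policy rhui-rhel-9-baseos-rpms      # immediate
pick_sync_policy rhui-rhel-8-appstream-rpms   # on_demand
pick_sync_policy my-custom-repo               # on_demand (the default)
```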

In both migration types, no CDS or HAProxy information is migrated. It is the RHUI administrator's duty to add new CDS and HAProxy nodes using the RHUI 5 RHUA, either through the TUI or the CLI. Further, the CDS and HAProxy nodes of the existing RHUI 4 installation are left intact, with their services fully operational; it is likewise the administrator's duty to shut down those nodes once they are no longer needed. Until then, they still have access to the file system share with the Pulp content, and they can serve RHUI content that has been synced previously and symlinked. After migration, those legacy RHUI 4 CDS nodes cannot serve on-demand content that has not been fetched yet, because their configuration points to the RHUI 4 RHUA that has been shut down.

In-place migration of a RHUI 4 installation

If the --migrate-from-rhui-4 installation flag is provided, the installer performs an in-place migration of the existing RHUI 4 RHUA installation on the --target-host; if it does not find RHUI 4, it stops the installation. In this mode, --remote-fs-server is not required because it is inferred from the existing RHUI 4 configuration files.

During the installation steps, the RHUI 4 services are shut down and the PostgreSQL database files are copied (thus temporarily doubling the space requirement for the database files) to a location reachable by the RHUI 5 container, /var/lib/rhui/postgres. You need enough space for the copy of the database in the volume where the root directory is located. You can use the du -sh /var/lib/pgsql/data command to determine the size of your database and therefore the amount of space that the copy will need. The ownership of the Pulp content files residing on the shared storage is changed to match the UIDs/GIDs used by the RHUI 5 container.
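The space check implied above can be sketched as follows: the copy lands on the root volume, so the free space there should be at least the size that du reports. The path defaults to the RHUI 4 location but is parameterized here for illustration:

```shell
# Compare the PostgreSQL data directory size with the free space on /.
dbdir=${DBDIR:-/var/lib/pgsql/data}
db_kb=$(du -sk "$dbdir" 2>/dev/null | awk '{print $1}')
db_kb=${db_kb:-0}                       # 0 if the directory does not exist
free_kb=$(df -Pk / | awk 'NR==2 {print $4}')
if [ "$free_kb" -ge "$db_kb" ]; then
    echo "OK: ${free_kb} kB free on /, database copy needs ${db_kb} kB"
else
    echo "WARNING: only ${free_kb} kB free on /, database copy needs ${db_kb} kB"
fi
```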

Migration of a RHUI 4 installation to another machine

If --source-host is provided in addition to --migrate-from-rhui-4, the --source-host is checked for an existing RHUI 4 installation. If found, its configuration, together with the database files, is transferred to the --target-host, and the RHUI 5 RHUA container is deployed there. RHUI RHUA services on the --source-host are shut down prior to the migration, and the Pulp content files on the shared storage will have a different owner, but will be otherwise intact. The same filesystem share is then mounted on the --target-host.

Note

RHUI 5 moves to the latest version of PostgreSQL, ensuring the latest security updates. This requires existing RHUI 4 installations to be on the latest RHUI 4 version and to update their PostgreSQL to version 15 prior to migrating to RHUI 5.

It is worth noting that in this scenario the hostname of the RHUA is changed, and therefore the RHUI 5 configuration and the SSL certificate for Pulp’s Nginx are adjusted accordingly.

Migration can be targeted not only to a different system but also to a different remote file share. This is indicated by the --migration-fs-server option, which denotes the remote file share that will be mounted by the --target-host.

Note

The content of the file share that includes the Pulp artifacts, namely the pulp3, symlinks, and repo-notes directories, needs to be copied independently and before the migration process.

2.5. Providing installation parameters

There are several ways to provide parameters pertaining to the RHUI 5 installation. They are, in descending order of priority:

  • Parameters supplied on the command line take absolute precedence over any other parameter provision methods. However, not all installation parameters are supported this way, to avoid forcing users to create an unwieldy and counterintuitive installation command line.
  • Parameters can be provided through an answers file. This method can accommodate a larger set of installation parameters.
  • If rhui-tools.conf already exists on the target host, its content is parsed and the values provided there are preserved unless a matching key is provided via the command line or the answers file.
  • Some parameters have defaults that are hardcoded in the installer.

The installer checks for the existence of the required parameters, namely --target-host and --remote-fs-server, and exits if they are not provided.
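The precedence order can be sketched as a small function that takes the candidate values from highest to lowest priority and returns the first one that is set. The function and values are illustrative, not installer internals:

```shell
# Return the first non-empty value among: command line, answers file,
# existing rhui-tools.conf, hardcoded default.
resolve_param() {
    for value in "$1" "$2" "$3" "$4"; do
        if [ -n "$value" ]; then
            echo "$value"
            return 0
        fi
    done
    return 1
}

# No command-line or answers-file value: the existing configuration wins.
resolve_param "" "" "/var/lib/rhui/remote_share" "/default/mountpoint"
# -> /var/lib/rhui/remote_share

# A command-line value overrides everything else.
resolve_param "/cli/mountpoint" "" "/var/lib/rhui/remote_share" "/default/mountpoint"
# -> /cli/mountpoint
```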

Shared storage management

The RHUI 5 installer supports NFS only; therefore, the --remote-fs-type option is no longer supported. In addition, providing the literal value none as the --remote-fs-server argument skips the shared NFS storage setup completely. This can come in handy in situations where shared storage is managed at some other level or by another product, such as OpenShift. Note that --remote-fs-mountpoint is still supported, but it refers to the file system layout on the host, not inside the container; it determines where you want to mount the file system. Remember that the RHUI containers run in rootless mode, so any NFS file system mount needs to happen on the host.

2.6. RHUI 5 install procedure

Before you begin

For RHUI 5, only the container images will be published, and not the individual RPMs. There are separate images for:

  • installer
  • RHUA
  • CDS
  • HAPROXY

Providing local files to the installer

In RHUI 4, the installer accepted local file paths as arguments to some command-line switches. This is no longer an option with containerized installations, because the running container has no access to arbitrary files on the host file system. Therefore, the RHUI 5 installer looks for files at certain hardcoded paths, and those paths can be provided as volume mounts through the podman command line. Those paths cannot be provided through the answers file, because the container has already been started by the time the answers file is parsed.

The installer references the following special file paths, local to the container:

  • /ssh-keyfile - The private SSH key used to log in to the target host.
  • /rhua-image.tar - The RHUA container image file, in case you want to explicitly transfer it to the target host. The image file must be in the format created by the podman save command. In this case, the --rhua-container-image and --rhua-container-registry installation parameters are not allowed.
  • /answers.yaml - The answers file, which will look similar to the following:

    rhua:
        certs_country: HR
        certs_city: Zadar
        certs_org: RHUI devs
        certs_org_unit: Containerization efforts
        certs_ca_common_name: rhui5-development.example.net
        default_sync_policy: on_demand
  • /rhui-ca.crt and /rhui-ca.key - The RHUI CA certificate and its key.
  • /client-ssl-ca.crt and /client-ssl-ca.key - The CA Certificate for CDS SSL traffic and its key.
  • /client-entitlement-ca.crt and /client-entitlement-ca.key - The CA certificate for client certificate management and its key.
Important

Whenever you provide volume mounts to the container, make sure the container has proper SELinux labels by providing either :z or :Z as a volume mount option.

Running the installer image for RHUI 5

To run the installer image, you need access to the public Red Hat registry, registry.redhat.io, which is protected by credentials. You must also be logged in to a machine that has Podman installed (the control node), so that you can log in to the registry and subsequently run the installer image against the target host, as shown in the following example:

Note

The following examples assume that you are using RHEL 9.

$ sudo dnf -y install podman
[...]
$ podman login --username <CCSP_login> registry.redhat.io
Password:
Login Succeeded!

After you have logged in to the registry, you can check the available RHUI container images:

$ podman search registry.redhat.io/rhui5
NAME                                      DESCRIPTION
registry.redhat.io/rhui5/cds-rhel9        Red Hat Update Infrastructure 5 Content Deli...
registry.redhat.io/rhui5/installer-rhel9  Red Hat Update Infrastructure 5 Installer
registry.redhat.io/rhui5/rhua-rhel9       Red Hat Update Infrastructure 5 Appliance
registry.redhat.io/rhui5/haproxy-rhel9    Red Hat Update Infrastructure 5 Load Balance...

At this point you are ready to start the installation process, assuming all of the following are provided:

  • The target host you want to install the RHUA on. This is the --target-host installation parameter. The target host must meet or exceed the following requirements:
      • It runs RHEL 9 or 10 and is already registered with Red Hat.
      • RHUA hardware: a minimum of x86_64, 16 CPU cores, 64 GB RAM, and a 256+ GB disk.
      • CDS and HAProxy hardware: a minimum of x86_64, 8+ CPU cores, 8+ GB RAM, and a 128+ GB disk.
  • The NFS file share used for storing Pulp content. This is the --remote-fs-server installation parameter.
  • The target host has accepted your SSH authentication.
  • The target user, that is, the user name used when connecting to the remote host. The target user is authenticated by the SSH key that is authorized in the target user’s home directory.
Note

The target host needs to be registered using the following command: subscription-manager register. When prompted, enter your CCSP user name and password.

Assuming you have launched the target host and it is configured to accept your SSH key, you can run the installer with podman run. The following options are used:

  • -it runs an interactive session with proper terminal output.
  • --rm removes the container after the operation is finished.
  • -v ~/.ssh/id_rsa:/ssh-keyfile:Z volume mounts your SSH private key so that the installer container has access to it.

    Note

    Do not forget to supply your SSH passphrase if you have set up your SSH key with a passphrase.

    $ podman run -it --rm -v ~/.ssh/id_rsa:/ssh-keyfile:Z  \
      registry.redhat.io/rhui5/installer-rhel9 rhui-installer  \
      --target-user <target-user> --rhua-container-registry registry.redhat.io \
      --podman-username <CCSP_login> --podman-password '<CCSP_password>' \
      --remote-fs-server <nfs-host:/path> \
      --target-host <rhua-hostname>
    
    Trying to pull registry.redhat.io/rhui5/installer-rhel9:latest...
    ...
    Getting image source signatures
    Copying blob 92efcdccd105 done   |
    Copying blob 19f9949dbedd done   |
    Copying blob 467b1cd556e7 done   |
    Copying blob 5c6a65a8d3b9 done   |
    Copying config be3b9592ab done   |
    Writing manifest to image destination
    
    PLAY [RHUI 5 installation RHUA installation playbook executing on the *target* host] *****************************************************************************************
    
    TASK [Populate service facts] ***********************************************************
    Enter passphrase for key '/ssh-keyfile':
    ok: [<rhua-hostname>]
    
    TASK [Stop the RHUA container that might be running already] *****************************************************************************************
    skipping: [<rhua-hostname>]
    
    TASK [Prepare the dictionary for holding the rhui-tools.conf values] *****************************************************************************************
    ok: [<rhua-hostname>]
    
    TASK [Check whether we have rhui-tools.conf in the designated location] *****************************************************************************************
    ok: [<rhua-hostname>]
    
    [...]
    
    TASK [Enable and start RHUA container as a systemd service] *****************************************************************************************
    changed: [<rhua-hostname>]
    
    PLAY RECAP ******************************************************************************
    <rhua-hostname> : ok=69   changed=43   unreachable=0
    	failed=0	skipped=43   rescued=0	ignored=0
    
    
    PLAY [Attempt to copy the installer log file onto the managed node] *****************************************************************************************
    
    TASK [Copy the log file] ****************************************************************
    changed: [<rhua-hostname>]
    
    PLAY RECAP ******************************************************************************
    <rhua-hostname>: ok=1	changed=1	unreachable=0
          failed=0	skipped=0	rescued=0	ignored=0

Installation Verification

Your RHUA container is now ready and running on the target host. To access it, use the shell function named rhua that was created during the installation to save you from typing the podman exec invocation. Enter the following:

[root@rhua ~]# which rhua
rhua ()
{
    default_arg="";
    [ $# -eq 0 ] && default_arg=bash;
    [ "$1" = "-h" ] && echo -e "rhua: executes commands in the RHUA container environment.\n   Usage: rhua command [args ...]" && return 1;
    ( cd /var/lib/rhui;
    sudo -u rhui podman exec -it rhui5-rhua "${default_arg}${@}" )
}
[root@rhua ~]# rhua bash
bash-5.1# cat /etc/rhui/rhui-subscription-sync.conf
[auth]
username = admin
password = <generated_password>
bash-5.1# rhui-manager
Logging into the RHUI.

It is recommended to change the user's password
in the User Management section of RHUI Tools.

RHUI Username: admin
RHUI Password: <generated_password>

Using SSH agent for authentication (Optional)

If you want to use ssh-agent for passing your SSH key, you must run the installer container in --privileged mode to allow use of the ssh-agent sockets inside the container. Additionally, ensure that ssh-agent is working and that you have unlocked your SSH private key. Then, run the following command:

$ ssh-add
Enter passphrase for /home/<username>/.ssh/id_rsa:
Identity added: /home/<username>/.ssh/id_rsa (/home/<username>/.ssh/id_rsa)

Next, in your installer invocation, replace:

-v ~/.ssh/id_rsa:/ssh-keyfile:Z

with the following:

--privileged -v $SSH_AUTH_SOCK:$SSH_AUTH_SOCK:Z -e SSH_AUTH_SOCK=$SSH_AUTH_SOCK

  • --privileged gives the container access to the ssh-agent sockets.
  • -v $SSH_AUTH_SOCK:$SSH_AUTH_SOCK:Z passes the SSH authentication socket into the container file system so that the container can access your SSH key.
  • -e SSH_AUTH_SOCK=$SSH_AUTH_SOCK sets an environment variable in the container runtime pointing to the location of the SSH authentication socket.

2.7. Changing the admin password

The rhui-installer sets the initial RHUI login password. It is also written in the /etc/rhui/rhui-subscription-sync.conf file. You can override the initial password with the --rhui-manager-password option.

If you want to change the initial password later, you can change it through the rhui-manager tool or through rhui-installer. Run the rhui-installer --help command to see the full list of rhui-installer options.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press u to select manage RHUI users.
  3. From the User Manager screen, press p to select change admin’s password (followed by logout):

    -= User Manager =-
    
       p   change admin's password (followed by logout)
    
       rhui (users) => p
    
       Warning: After password change you will be logged out.
       Use ctrl-c to cancel password change.
       New Password:
  4. Enter your new password; reenter it to confirm the change.

    New Password:
    Re-enter Password:
    
    [localhost] env PULP_SETTINGS=/etc/pulp/settings.py /usr/bin/pulpcore-manager reset-admin-password -p ********

Verification

  1. The following message displays after you change the admin password:

    Password successfully updated. For security reasons you have been logged out.

Before you begin

To update RHUI, you will need to rerun the RHUI installer image. This will require you to be logged in to a machine that has Podman installed.

First you will need to check to see if you are logged in to the public Red Hat registry by running the following command:

$ podman login --get-login registry.redhat.io

This command will print the user name that is logged in. If you are not logged in, you will receive an error. If you are logged in, you can move to the upgrade steps.

If you are not logged in, run the following command to log in:

$ podman login --username <CCSP_login> --password '<CCSP_password>' registry.redhat.io

Once you have logged in, you can continue the upgrade process.
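The login check and conditional login above can be combined into a small helper. This is a sketch only; the function name and the CCSP_LOGIN variable are assumptions, not part of RHUI or Podman:

```shell
# Hypothetical helper combining the checks above. `podman login
# --get-login` exits non-zero when no login is cached for the registry.
# CCSP_LOGIN is an assumed environment variable holding your CCSP login.
ensure_login() {
  registry=$1
  if podman login --get-login "$registry" >/dev/null 2>&1; then
    echo "already logged in to $registry"
  else
    podman login --username "$CCSP_LOGIN" "$registry"
  fi
}
```

Calling, for example, `ensure_login registry.redhat.io` then prompts for a password only when no cached login exists.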

Procedure

  1. To upgrade to the latest version of RHUI, rerun the RHUI installer with the following command:

    $ podman run --pull=always -it --rm -v ~/.ssh/id_rsa:/ssh-keyfile:Z \
        registry.redhat.io/rhui5/installer-rhel9 rhui-installer \
        --target-user <target-user> --target-host <rhua-hostname> --rerun
  2. Upgrade the CDS and HAProxy images by running the following commands on the RHUA:

    # rhua rhui-manager --noninteractive cds reinstall --all
    # rhua rhui-manager --noninteractive haproxy reinstall --all

Verification

To verify that you have upgraded to the latest version of RHUI, run the following command:

# rhua rpm -q rhui-tools

Chapter 3. Managing Repositories

3.1. Available repositories

Certified Cloud and Service Provider (CCSP) partners control which repositories and packages are delivered through their service. To list the repositories that are available for the various operating system versions but have not yet been added to your RHUI, run the following command on the RHUA:

# rhua rhui-manager --noninteractive repo unused --by_repo_id

3.2. Adding a new Red Hat content repository

Your CCSP account enables you to access selected Red Hat repositories and make them available in your Red Hat Update Infrastructure environment.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press r to select manage repositories.
  3. From the Repository Management screen, press a to select add a new Red Hat content repository.
  4. Wait for the Red Hat Update Infrastructure Management Tool to determine the entitled repositories. This might take several minutes:

    rhui (repo) => a
    
    Loading latest entitled products from Red Hat...
    ... listings loaded
    Determining undeployed products...
    ... product list calculated
  5. The Red Hat Update Infrastructure Management Tool prompts for a selection method:

    Import Repositories:
        1 - All in Certificate
        2 - By Product
        3 - By Repository
    Enter value (1-3) or 'b' to abort:
  6. To add several repositories bundled together as a product, typically all of its minor versions in one step, press 2 to select the By Product method. Alternatively, you can add individual repositories by using the By Repository method.
  7. Select which repositories to add by typing the number of the repository at the prompt. You can also select a range of repositories, for example, by entering 1 - 5.

    Enter value (1-620) to toggle selection, 'c' to confirm selections, or '?' for more commands:
  8. Continue until all repositories you want to add are checked.
  9. Press c when you are finished selecting the repositories. The Red Hat Update Infrastructure Management Tool displays the repositories for deployment and prompts for confirmation:

    The following products will be deployed:
      Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI
      Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (RPMs) from RHUI
    Proceed? (y/n)
  10. Press y to proceed. A message indicates each successful deployment:

    Importing Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI...
      Importing product repository Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI (10.1)...
      Importing product repository Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI (10.0)...
      Importing product repository Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI (10)...
    
    Importing Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (RPMs) from RHUI...
      Importing product repository Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (RPMs) from RHUI (10.1)...
      Importing product repository Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (RPMs) from RHUI (10.0)...
      Importing product repository Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (RPMs) from RHUI (10)...
    
    Content will not be downloaded to the newly imported repositories
    until the next sync is run.

Verification

  1. From the Repository Management screen, press l to check that the correct repositories have been installed.

A repository contains downloadable software for a Linux distribution. You use dnf to search for, install, or only download RPMs from the repository.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press r to select manage repositories.
  3. From the Repository Management screen, press l to select list repositories currently managed by the RHUI:

    ...
    
    Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10)
    Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10.0)
    Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10.1)
    Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Source RPMs) from RHUI (10)
    Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Source RPMs) from RHUI (10.0)
    Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Source RPMs) from RHUI (10.1)
    Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI (10)
    Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI (10.0)
    Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI (10.1)
    
    
    ...

You can use the Repository Management screen to display information about a particular repository.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press r to select manage repositories.
  3. From the Repository Management screen, press i:

    Enter value (1-1631) to toggle selection, 'c' to confirm selections, or '?' for more commands:
  4. Select a repository by entering the value beside the repository name. Enter one repository selection at a time before confirming your selection.
  5. Press c to confirm:

    Name:                Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Debug RPMs) from RHUI (10.1)
    ID:                  rhel-10-for-aarch64-appstream-debug-rhui-rpms-10.1
    Type:                Red Hat
    Version:             0
    Relative Path:       content/dist/rhel10/rhui/10.1/aarch64/appstream/debug
    GPG Check:           Yes
    Custom GPG Keys:     (None)
    Red Hat GPG Key:     Yes
    Content Unit Count:
    Last Sync:           2026-11-15 15:56:06
    Next Sync:           2026-11-15 22:00:00
    
    Name:                Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10.1)
    ID:                  rhel-10-for-aarch64-appstream-rhui-rpms-10.1
    Type:                Red Hat
    Version:             0
    Relative Path:       content/dist/rhel10/rhui/10.1/aarch64/appstream/os
    GPG Check:           Yes
    Custom GPG Keys:     (None)
    Red Hat GPG Key:     Yes
    Content Unit Count:
    Last Sync:           2026-11-15 19:50:20
    Next Sync:           2026-11-16 01:55:00
    
    Name:                Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Source RPMs) from RHUI (10.1)
    ID:                  rhel-10-for-aarch64-appstream-source-rhui-rpms-10.1
    Type:                Red Hat
    Version:             0
    Relative Path:       content/dist/rhel10/rhui/10.1/aarch64/appstream/source/SRPMS
    GPG Check:           Yes
    Custom GPG Keys:     (None)
    Red Hat GPG Key:     Yes
    Content Unit Count:
    Last Sync:           2026-11-15 15:56:51
    Next Sync:           2026-11-15 22:00:00

Verification

  1. A similar output displays for your selections.

3.5. Setting up on-demand syncing of repositories

RHUI allows you to minimize the amount of content downloaded to storage in advance by setting certain repositories to the on_demand sync mode. This way, RHUI downloads and stores content only when client machines request it, which can reduce storage usage and lower costs. The downside of this approach is that RHUI’s performance then depends on the connection speed to the Red Hat CDN.

Setting the Sync Policy

To support different sync policies depending on the artifact type, and to provide additional flexibility in selecting the sync policy based on the content in question, two configuration parameters are available:

  • immediate_repoid_regex
  • on_demand_repoid_regex

Whenever a sync task is submitted, the repository ID is checked against the regex in immediate_repoid_regex first. If it matches, a sync with the immediate policy is requested. If not, the ID is tested against on_demand_repoid_regex; a match produces an on_demand sync task. If neither regex matches, the sync is performed with the policy specified by the default_sync_policy configuration parameter.
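The selection order can be sketched as a small shell function. The function name and repository IDs are illustrative, not part of rhui-manager:

```shell
# Sketch of the policy selection order described above; names are
# illustrative. The immediate regex wins, then on_demand, then the default.
select_policy() {
  repoid=$1
  immediate_regex=$2
  on_demand_regex=$3
  default_policy=$4
  if printf '%s\n' "$repoid" | grep -qE "$immediate_regex"; then
    echo immediate
  elif printf '%s\n' "$repoid" | grep -qE "$on_demand_regex"; then
    echo on_demand
  else
    echo "$default_policy"
  fi
}

select_policy rhel-10-for-aarch64-baseos-debug-rhui-rpms-10.1 '^$' 'debug|source' immediate
# prints "on_demand": the ID does not match '^$' but does match 'debug|source'
```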

Applying the Policy

After updating the configuration file, the next repository synchronization will apply the new policy.

If you switch from on_demand to immediate, the next sync will begin downloading all content for the specified type.

If you switch from immediate to on_demand, the next sync will only download repository metadata. RHUI will then download content as requested by client machines.

Example of setting policy using both types of policy

To set up your repositories to use both types of policy, add the following configuration to your /etc/rhui/rhui-tools.conf file:

[rhui]
on_demand_repoid_regex: debug|source
immediate_repoid_regex: ^$
default_sync_policy: immediate

This configuration syncs your debug and source repositories using the on-demand policy and your regular repositories using the immediate policy.

Tips and Tricks

  1. Setting all repositories to on_demand right after installing RHUI can lead to faster deployment and quicker delivery for end-users, as only metadata needs to be initially synced.
  2. Utilizing a cache priming strategy can be beneficial if you have a new installation and do not need to support older versions of RHEL clients. By using a client that mirrors end-user configurations and running dnf update, you can pre-download content to RHUI’s storage.

In Red Hat Update Infrastructure 5 and later, you can add custom repositories using a configured YAML input file. You can find an example template of the YAML file on the RHUA container at /usr/share/rhui-tools/examples/repo_add_by_file.yaml.

This functionality is only available in the command-line interface (CLI).

Prerequisites

  • Ensure that you have root access to the RHUA node.

Procedure

  1. On the RHUA node, create a YAML input file in the following format:

    # cat /root/example.yaml
    name: Example_YAML_File
    repo_ids:
        - rhel-10-for-x86_64-baseos-eus-rhui-rpms-10.0
  2. Add the repositories listed in the input file using the rhui-manager utility:

    # rhua rhui-manager repo add_by_file --file /root/example.yaml --sync_now
    The name of the repos being added: Example_YAML_File
    Loading latest entitled products from Red Hat...
    ... listings loaded
    Successfully added Red Hat Enterprise Linux 10 for x86_64 - BaseOS - Extended Update Support from RHUI (RPMs) (10.0) (Yum)
    ... successfully scheduled for the next available timeslot.
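The YAML input file itself is simple enough to generate from a script. This sketch, with an illustrative function name, emits a file in the format shown in step 1:

```shell
# Hypothetical generator for a repo_add_by_file-style YAML input.
# The function name is illustrative; only the output format matters.
make_repo_yaml() {
  name=$1; shift
  printf 'name: %s\nrepo_ids:\n' "$name"
  for id in "$@"; do
    printf '    - %s\n' "$id"
  done
}

make_repo_yaml Example_YAML_File rhel-10-for-x86_64-baseos-eus-rhui-rpms-10.0 > /tmp/example.yaml
```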

Verification

  • In the CLI, use the following command to list all the installed repositories and check whether the correct repositories have been installed:

    # rhua rhui-manager repo list
  • In the RHUI Management Tool, on the Repository Management screen, press l to list all the installed repositories and check whether the correct repositories have been installed.

You can create custom repositories to distribute updated client configuration packages or other non-Red Hat software to the RHUI clients. A protected repository for 64-bit RHUI servers (for example, client-rhui-x86_64) is the preferred vehicle for distributing new non-Red Hat packages, such as an updated client configuration package, to the RHUI clients.

As with Red Hat content repositories, all of which are protected, protected custom repositories that differ only in processor architecture (for example, i386 versus AMD64) are consolidated into a single entitlement within an entitlement certificate by using the $basearch dnf variable.
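As an illustration of this consolidation, a client repository definition can address both architectures with a single baseurl. This is a hypothetical fragment; the repository path and the <yourLB> hostname are placeholders, not values generated by RHUI:

```ini
# Hypothetical client repository definition; <yourLB> and the path are
# placeholders. dnf expands $basearch to the client's architecture.
[custom-protected]
name=Custom protected repository
baseurl=https://<yourLB>/pulp/content/protected/custom/$basearch
enabled=1
gpgcheck=1
```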

In the event of certificate problems, an unprotected repository for RHUI servers can be used as a fallback method for distributing updated RPMs to the RHUI clients.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press r to select manage repositories.
  3. From the Repository Management screen, press c to select create a new custom repository (RPM content only).
  4. Enter a unique ID for the repository. Only alphanumeric characters, _ (underscore), and - (hyphen) are permitted. You cannot use spaces in the unique ID. For example, repo1, repo_1, and repo-1 are valid entries.

    Unique ID for the custom repository (alphanumerics, _, and - only):
  5. Enter a display name for the repository. This name can contain spaces and other characters that cannot be used in the ID. The name defaults to the ID.

    Display name for the custom repository [repo_1]:
  6. Specify the path that will host the repository. The path must be unique across all repositories hosted by RHUI. For example, if you specify the path at this step as internal/rhel/9/repo_1, then the repository will be located at: https://<yourLB>/pulp/content/protected/internal/rhel/9/repo_1.

    Unique path at which the repository will be served [repo_1]:
  7. Choose whether to protect the new repository. If you answer no to this question, any client can access the repository. If you answer yes, only clients with an appropriate entitlement certificate can access the repository.

    Warning

    As the name implies, the content in an unprotected repository is available to any system that requests it, without any need for a client entitlement certificate. Be careful when using an unprotected repository to distribute any content, particularly content such as updated client configuration RPMs, which will then provide access to protected repositories.

  8. Answer yes or no to the following questions as they appear:

    Should the repository require clients to perform a GPG check and verify packages are signed by a GPG key? (y/n)
    
    Will the repository be used to host any Red Hat GPG signed content? (y/n)
    
    Will the repository be used to host any custom GPG signed content? (y/n)
    
    Enter the absolute path to the public key of the GPG key pair:
    
    Would you like to enter another public key? (y/n)
    
    Enter the absolute path to the public key of the GPG key pair:
    
    Would you like to enter another public key? (y/n)
  9. The details of the new repository display. Press y at the prompt to confirm the information and create the repository.

Verification

  1. From the Repository Management screen, press l to check that the correct repositories have been installed.

3.8. Deleting a repository from RHUI 5

When the Red Hat Update Infrastructure Management Tool deletes a Red Hat repository, it deletes the repository from the RHUA and all applicable CDS nodes.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press r to select manage repositories.
  3. From the Repository Management screen, press d at the prompt to delete a Red Hat repository. A list of all repositories currently being managed by RHUI displays.
  4. Select which repositories to delete by typing the number of the repository at the prompt. Typing the number of a repository places a checkmark next to the name of that repository. You can also choose the range of repositories, for instance, by entering 1 - 5.
  5. Continue until all repositories you want to delete are checked.
  6. Press c at the prompt to confirm.

    Note

    After you delete a repository, its content (such as repodata and packages) remains in the file system and can still be consumed by clients. The repository contents are deleted only by the next orphan cleanup task, after which clients may see 404 errors. In RHUI 5, orphaned units are deleted weekly at 4:00 AM on Wednesday; an administrator can also delete orphaned units at any time. You must update your client configuration RPM to avoid 404 errors.

Repository RPMs are deduplicated at sync time, so the minimum amount of space is used at any time. When you remove a repository (especially a minor-version-specific repository), its RPMs are likely to be shared with another repository.

For example, if you remove the RHEL 9.5 AppStream repository but keep the RHEL 9.6 AppStream repository, you will not see any change in the amount of disk space used, because the RHEL 9.6 repository contains all the same RPMs as the RHEL 9.5 repository. If you instead remove the RHEL 9.6 AppStream repository but keep the RHEL 9.5 AppStream repository enabled, the packages unique to RHEL 9.6 are removed during the next orphan cleanup task, because the RHEL 9.6 repository contains all the same packages from 9.0 through 9.6.
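The space accounting above amounts to simple set arithmetic: removing a repository frees only the packages that no other repository references. A sketch with invented package names:

```shell
# Sketch of the deduplication arithmetic described above.
# Package names are invented for illustration.
unique_to() {
  candidates=$1
  keep=$2
  for p in $candidates; do
    # Emit packages present in $candidates but absent from $keep.
    case " $keep " in *" $p "*) ;; *) echo "$p" ;; esac
  done
}

repo_95="pkgA-9.5 pkgB-9.5"
repo_96="pkgA-9.5 pkgB-9.5 pkgA-9.6 pkgB-9.6"

# Space freed by removing the 9.6 repo while keeping 9.5:
unique_to "$repo_96" "$repo_95"
# prints "pkgA-9.6" and "pkgB-9.6"; removing 9.5 while keeping 9.6 frees nothing.
```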

For more information about removing orphaned artifacts, see Removing orphaned artifacts.

You can upload multiple packages at once, and you can upload to more than one repository at a time. Packages are uploaded to the RHUA immediately but are not available on the CDS node until the next time the CDS node synchronizes.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press r to select manage repositories.
  3. From the Repository Management screen, press u:

    Select the repositories to upload the package into:
      -    1: test
  4. Enter the value (1-1) to toggle the selection.
  5. Press c to confirm your selection.
  6. Enter the location of the packages to upload. If the location is an RPM, the file will be uploaded. If the location is a directory, all RPMs in that directory will be uploaded:

    /root/bear-4.1-1.noarch.rpm
    
    The following RPMs will be uploaded:
      bear-4.1-1.noarch.rpm
  7. Press y to proceed or n to cancel:

    Copying RPMs to a temporary directory: /tmp/rhui.rpmupload.jsqdub22.tmp
    .. 1 RPMs copied.
    Creating repository metadata for 1 packages ...
    .. repository metadata created for 1 packages.
    The packages upload task for repo: client-config-rhel-10-x86_64 has been queued: /pulp/api/v3/tasks/01937826-8654-77c1-84f7-e9e07c7a7aeb/
    You can inspect its progress via (S)ync screen/(RR) menu option in rhui-manager TUI.

You can upload packages that are stored on a remote server without having to manually download them first. The packages must be accessible by HTTP, HTTPS, or FTP.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press r to select manage repositories.
  3. From the Repository Management screen, press ur:

    Select the repositories to upload the package into:
      -    1: test
  4. Enter the value (1-1) to toggle the selection.
  5. Press c to confirm your selection:

    ### WARNING ### WARNING ### WARNING ### WARNING ### WARNING ### WARNING ###
    #                                                                         #
    #   Content retrieved from non-Red Hat arbitrary places can contain       #
    #   unsupported or malicious software.  Proceed at your own risk.         #
    #                                                                         #
    ###########################################################################
  6. Enter the remote URL of the packages to upload. If the location is an RPM, the file will be uploaded. If the location is a web page, all RPMs linked off that page will be uploaded:

    https://repos.fedorapeople.org/pulp/pulp/demo_repos/zoo/bear-4.1-1.noarch.rpm
    Retrieving https://repos.fedorapeople.org/pulp/pulp/demo_repos/zoo/bear-4.1-1.noarch.rpm
    
    The following RPMs will be uploaded:
      bear-4.1-1.noarch.rpm
  7. Press y to proceed or n to cancel:

    Copying RPMs to a temporary directory: /tmp/rhui.rpmupload.dwux8rq7.tmp
    .. 1 RPMs copied.
    Creating repository metadata for 1 packages ...
    .. repository metadata created for 1 packages.
    The packages upload task for repo: test has been queued: /pulp/api/v3/tasks/0193770c-6523-7363-ae5e-8c6429728b4f/
    You can inspect its progress via (S)ync screen/(RR) menu option in rhui-manager TUI.

To allow RHUI users to view and install package groups or language packs from a custom repository, you can import a comps.xml or a comps.xml.gz file to the custom repository.

Note

Red Hat repositories contain these files, which are provided by Red Hat and cannot be overridden. You can upload these files only to your custom repositories.

This functionality is only available in the command-line interface.

Prerequisites

  • Ensure that you have a valid comps.xml or comps.xml.gz file relevant to the custom repository.
  • Ensure you have root access to the RHUA node.

Procedure

  • On the RHUA node, import data from a comps file to your custom repository using the rhui-manager utility:

    # rhua rhui-manager repo add_comps --repo_id Example_Custom_Repo --comps /root/Example-Comps.xml

Verification

  • On a client system that uses the custom repository:

    1. Refresh the repository data:

      # dnf clean metadata
    2. List the repository data and verify that the comps file has been updated:

      # dnf grouplist

You can remove packages from custom repositories using RHUI’s Text User Interface (TUI).

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Enter r to select manage repositories.
  3. On the Repository Management screen, enter r to select packages to remove from a repository (Custom RPM content only):

    -= Repository Management =-
    
       l   list repositories currently managed by RHUI
       i   display detailed information on a repository
       a   add a new Red Hat content repository
       ac  add a new Red Hat container
       c   create a new custom repository (RPM content only)
       d   delete a repository from RHUI
       u   upload content to a custom repository (RPM content only)
       ur  upload content from a remote web site (RPM content only)
       p   list packages in a repository (RPM content only)
       r   select packages to remove from a repository (Custom RPM content only)
  4. Enter the value to select the repository:

    Choose a repository to delete packages from:
        1 - Test-RPM-1
        2 - Test-RPM-2
  5. Enter the value to select the packages to delete.

    Select the packages to remove:
      -    1: example-package-1.noarch.rpm
      -    2: example-package-2.noarch.rpm
  6. Enter c to confirm selection.

    The following packages will be removed:
      example-package-1.noarch.rpm
  7. Enter y to proceed or n to cancel:

    Removed example-package-1.noarch.rpm

When listing repositories within the Red Hat Update Infrastructure Management Tool, only repositories that contain fewer than 100 packages display their contents. Results with more than 100 packages only display a package count.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press r to select manage repositories.
  3. From the Repository Management screen, press p.
  4. Select the number of the repository you want to view. The Red Hat Update Infrastructure Management Tool asks if you want to filter the results. Leave the line blank to see the results without a filter.

    Enter value (1-1631) or 'b' to abort: 1
    
    Enter the first few characters (case insensitive) of an RPM to filter the results
    (blank line for no filter):
    
    Only filtered results that contain less than 100 packages will have their
    contents displayed. Results with more than 100 packages will display
    a package count only.
    
    Packages:
      bear-4.1-1.noarch.rpm

Verification

  1. One of three types of messages displays:

    Packages:
      bear-4.1-1.noarch.rpm

    Package Count: 8001

    No packages in the repository.

3.14. Limiting the number of repository versions

In Pulp 3, which is used in Red Hat Update Infrastructure 4 and later, repositories are versioned. When a repository is updated in the Red Hat CDN and synchronized in Red Hat Update Infrastructure, Pulp creates a new version.

By default, repositories added using Red Hat Update Infrastructure version 4.6 and earlier were configured to retain all repository versions. This resulted in data accumulating in the database indefinitely, taking up disk space and, in the worst case, making it impossible to delete a repository. With version 4.7 and newer, repositories are added with a version limit of 5, which means only the latest five versions are kept at all times and any older version is automatically deleted. However, you may want to set the version limit for existing repositories that were added earlier and have their older versions deleted. You can do this for all your repositories at once or process one repository at a time.

  • The command to do this is as follows:

    [root@rhua ~]# rhua rhui-manager repo set_retain_versions [--repo_id <ID> or --all] --versions <NUMBER>
  • For example, to limit the number of versions for all repositories to 5, run:

    [root@rhua ~]# rhua rhui-manager repo set_retain_versions --all --versions 5

Depending on the number of repositories and existing repository versions, it can take more than an hour for all the necessary tasks to be scheduled, and up to a few days for the versions older than the limit to be deleted. You can watch the progress in the rhui-manager text user interface, on the synchronization screen, under running tasks.

3.15. Removing orphaned artifacts

RPM packages, repodata files, and other related files are kept on disk even when they are no longer part of a repository; for example, after a repository is deleted and its files do not belong to another repository, or when an update is made available and a new set of repodata files is synchronized.

  • To remove this obsolete content, run the following command:

    [root@rhua ~]# rhua rhui-manager repo orphan_cleanup

Depending on the number of files, it can take up to several days for this task to complete. You can watch the progress in the rhui-manager text user interface, on the synchronization screen, under running tasks.

You can use the rhui-manager command to obtain the status of each repository in a machine-readable format.

Procedure

  1. On the RHUA node, run the following command.

    # rhua rhui-manager --noninteractive status --repo_json <output_file>

    A JSON file is generated containing a list of dictionaries for all custom and Red Hat repositories. To view the content of the file, run the following command.

    # rhua cat <output_file>
  2. If you would like to view the JSON file on the host, you can create the file in /root using the following command:

    # rhua rhui-manager --noninteractive status --repo_json /root/<output_file>

    Now you can access your output file on the host machine as /var/lib/rhui/root/<output_file>.

A machine-readable JSON file is created when you run the command to get the status of each RHUI repository. The JSON file contains a list of dictionaries with one dictionary for each repository.

List of dictionary keys for custom repositories

Table 3.1. List of dictionary keys for custom repositories

base_path

The path of the repository.

description

The name of the repository.

group

The group the repository belongs to. It is always set to the string custom.

id

The repository ID.

name

The name of the repository. It is the same as the repository ID.

List of dictionary keys for Red Hat repositories

Table 3.2. List of dictionary keys for Red Hat repositories

base_path

The path of the repository.

description

The name of the repository.

group

The group the repository belongs to. It is always set to the string redhat.

id

The repository ID.

last_sync_date

The date and time the repository was last synchronized. The value is null if the repository was never synchronized.

last_sync_exception

The exception raised if the repository failed to synchronize. The value is null if the repository was synchronized correctly.

last_sync_result

The result of the synchronization task.

The values are:

  • completed: If the repository synchronized correctly.
  • null: If the repository was never synchronized.
  • failed: If the synchronization failed.
  • running: If a synchronization task is currently running.

last_sync_traceback

The traceback that was logged if the repository failed to synchronize. The value is null if the repository was synchronized correctly or was never synchronized.

metadata_available

A boolean value denoting whether metadata is available for the repository.

name

The name of the repository. It is the same as the repository ID.

next_sync_date

The date and time of the next scheduled synchronization of the repository. If a synchronization task is currently running, the value is running.

repo_published

A boolean value denoting whether this repository has been published in RHUI. Note that, by default, RHUI is configured to automatically publish repositories.
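The keys above can be consumed programmatically. As a sketch, with an illustrative helper name, the following lists the repositories whose last synchronization failed:

```shell
# Hypothetical helper: print the IDs of repositories whose last sync
# failed, reading the JSON status file described above. Uses python3
# for JSON parsing; the function name is illustrative.
list_failed_repos() {
  python3 -c '
import json, sys

with open(sys.argv[1]) as f:
    for repo in json.load(f):
        if repo.get("last_sync_result") == "failed":
            print(repo["id"])
' "$1"
}
```

For example, `list_failed_repos /var/lib/rhui/root/<output_file>` prints one repository ID per line.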

Chapter 4. Managing Containers

4.1. Managing containers

You can automate the deployment of applications inside Linux containers using RHUI. Using containers offers the following advantages:

  • Requires less storage and memory than VMs: Because containers hold only what is needed to run an application, saving and sharing them is more efficient than with VMs, which include entire operating systems.
  • Improved performance: Because you are not running an entirely separate operating system, a container typically runs faster than an application that carries the overhead of a new VM.
  • Secure: Because a container typically has its own network interfaces, file system, and memory, the application running in that container can be isolated and secured from other activities on a host computer.
  • Flexible: With an application’s runtime requirements included with the application in the container, a container can run in multiple environments.

A container is an application sandbox. Each container is based on an image that holds necessary configuration data. When you launch a container from an image, a writable layer is added on top of this image. Every time you commit a container, a new image layer is added to store your changes.

An image is a read-only layer that is never modified. All changes are made in the top-most writable layer, and the changes can be saved only by creating a new image. Each image depends on one or more parent images.

A platform image is an image that has no parent. Platform images define the runtime environment, packages, and utilities necessary for a containerized application to run. The platform image is read-only, so any changes are reflected in the copied images stacked on top of it.

You can use the rhua rhui-manager tool to add containers using the Repository Management section.

Procedure

  1. To enable container support in the RHUI environment, edit the /etc/rhui/rhui-tools.conf file and set container support using the following:

    [container]
    container_support_enabled: True
  2. If you want to save your credentials for the Red Hat container registry in the RHUI configuration, add the following lines to the container section:

    [container]
    registry_username: your_RH_login
    registry_password: your_RH_password
  3. To apply this new configuration to all of your CDS nodes, run the following:

    # rhua rhui-manager --noninteractive cds reinstall --all

    If you normally synchronize from a registry different from registry.redhat.io, also change the values of the registry_url and registry_auth options accordingly.

  4. On the RHUA node, run rhui-manager:

    # rhua rhui-manager
  5. Press r to access the Repository Management screen.

    -= Red Hat Update Infrastructure Management Tool =-
    
    
    -= Repository Management =-
    
      l list repositories currently managed by the RHUI
      i display detailed information on a repository
      a add a new Red Hat content repository
      ac add a new Red Hat container
      c create a new custom repository (RPM content only)
      d delete a repository from the RHUI
      u upload content to a custom repository (RPM content only)
      ur upload content from a remote web site (RPM content only)
      p list packages in a repository (RPM content only)
    
    Connected: rhua.example.com
  6. Press ac to add a new Red Hat container.

    rhui (repo) => ac
    Specify URL of registry [https://registry.redhat.io]:
  7. If the container you want to add exists in a non-default registry, enter the registry URL. Press Enter without entering anything to use the default registry.
  8. Enter the name of the container in the registry:

    jboss-eap-6/eap64-openshift
  9. Enter a unique ID for the container.

    rhui-manager converts the name of the container from the registry to a format usable in Pulp by replacing slashes and dots with underscores. You can accept the converted name by pressing Enter, or enter a name of your choice.

  10. Enter a display name for the container.

    jboss-eap-6_eap64-openshift
  11. Optional: Set your login and password in the RHUI configuration if prompted.
  12. Verify the displayed summary.

    The following container will be added:
      Registry URL: https://registry.redhat.io
      Container Id: jboss-eap-6_eap64-openshift
      Display Name: jboss-eap-6_eap64-openshift
      Upstream Container Name: jboss-eap-6/eap64-openshift
    Proceed? (y/n)
  13. Press y to proceed and add the container.

    y
    Successfully added container jboss-eap-6_eap64-openshift
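As noted in step 9, rhui-manager derives the suggested container ID by replacing slashes and dots in the upstream name with underscores. The conversion can be sketched as follows; the function name is illustrative, not part of rhui-manager:

```python
def container_id_from_name(upstream_name: str) -> str:
    """Replace slashes and dots with underscores, as rhui-manager does
    when suggesting a Pulp-compatible container ID."""
    return upstream_name.replace("/", "_").replace(".", "_")

# The upstream container name used in this procedure:
print(container_id_from_name("jboss-eap-6/eap64-openshift"))
# jboss-eap-6_eap64-openshift
```

This is why the summary in step 12 shows the Container Id jboss-eap-6_eap64-openshift for the upstream name jboss-eap-6/eap64-openshift.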

4.1.3. Synchronizing container repositories

After you add your container to Red Hat Update Infrastructure, you can use the rhui-manager tool to synchronize the container.

Procedure

  1. On the RHUA node, run rhua rhui-manager:

    # rhua rhui-manager
  2. Press s to access the synchronization status and scheduling screen.
  3. Press sr to synchronize an individual repository immediately.
  4. Enter the number of the repository that you wish to synchronize.
  5. Press c to confirm the selection.
  6. Verify the repository and press y to synchronize or n to cancel.

    The following repositories will be scheduled for synchronization: jboss-eap-6_eap64-openshift
    Proceed? (y/n) y
    Scheduling sync for jboss-eap-6_eap64-openshift...
    ... successfully scheduled for the next available timeslot.

4.1.4. Generating container client configurations

RHUI clients can pull containers from RHUI by using a client configuration RPM. The RPM contains the load balancer’s certificate, adds the load balancer to the client’s container registry configuration, and modifies the container configuration.

Procedure

  1. On the RHUA node, run rhui-manager:

    # rhua rhui-manager
  2. Press e to access the entitlement certificates and client configuration RPMs screen.
  3. Press d to create a container client configuration RPM.
  4. Enter the full path of a local directory where you want to save the configuration files.

    /root/
  5. Enter the name of the RPM.

    containertest
  6. Enter the version number of the configuration RPM. The default is 2.0.
  7. Enter the release number of the configuration RPM. The default is 1.
  8. Enter the number of days the certificate should be valid. The default is 365.

    Successfully created client configuration RPM.
    Location: /root/containertest-2.0/build/RPMS/noarch/containertest-2.0-1.noarch.rpm

After generating the container configuration RPM, you can retrieve it from the RHUA node and install it on a client.

Procedure

  1. Retrieve the RPM from the RHUA node to your local machine:

    # scp root@rhua.example.com:/var/lib/rhui/root/containertest-2.0/build/RPMS/noarch/containertest-2.0-1.noarch.rpm .
  2. Transfer the RPM from the local machine to the client.

    # scp containertest-2.0-1.noarch.rpm root@cli01.example.com:.
  3. Switch to the client and install the RPM:

    [root@cli01 ~]# dnf install containertest-2.0-1.noarch.rpm

You can use the podman pull command to verify the content on the container.

Procedure

  1. Run the podman pull command.

    [root@cli01 ~]# podman pull jboss-eap-6_eap64-openshift
    
    Resolving "jboss-eap-6_eap64-openshift" using unqualified-search registries (/etc/containers/registries.conf)
    Trying to pull cds.example.com/jboss-eap-6_eap64-openshift:latest...
    Getting image source signatures
    Copying blob b0e0b761a531 done
    Copying blob aa23ac04e287 done
    Copying blob 0d30ea1353f9 done
    Copying config 3d0728c907 done
    Writing manifest to image destination
    Storing signatures
    3d0728c907d55d9faedc4d19de003f21e2a1ebdf3533b3d670a4e2f77c6b35d2
  2. If the podman pull command fails, check the synchronization status in rhui-manager. The synchronization has probably not completed yet; wait for it to finish and then try again.

    Resolving "jboss-eap-6_eap64-openshift" using unqualified-search registries (/etc/containers/registries.conf)
    Trying to pull cds.example.com/jboss-eap-6_eap64-openshift:latest...
    Error: initializing source docker://cds.example.com/jboss-eap-6_eap64-openshift:latest: reading manifest latest in cds.example.com/jboss-eap-6_eap64-openshift: manifest unknown: Manifest not found.
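As the transcript shows, podman resolves the unqualified name to the CDS (or load balancer) hostname, followed by the container ID and the latest tag. A minimal sketch of how that fully qualified reference is composed; the helper is illustrative and the hostnames are the examples used above:

```python
def qualified_image_ref(registry_host: str, container_id: str, tag: str = "latest") -> str:
    """Compose the fully qualified image reference that podman pulls from RHUI."""
    return f"{registry_host}/{container_id}:{tag}"

print(qualified_image_ref("cds.example.com", "jboss-eap-6_eap64-openshift"))
# cds.example.com/jboss-eap-6_eap64-openshift:latest
```

If pulls keep failing, you can pull this fully qualified reference directly to rule out problems with the unqualified-search registry configuration on the client.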

RHUI uses entitlement certificates to ensure that the client making requests on the repositories is authorized by the cloud provider to access those repositories. The entitlement certificate must be signed by the cloud provider’s Certificate Authority (CA) Certificate. The CA Certificate is installed on the CDS as part of its configuration.

When Red Hat issues the original entitlement certificate, it grants access to the repositories you requested. When you create client entitlement certificates, you decide how to subdivide your clients and create a separate certificate for each one. Each certificate can then be used to create individual RPMs.

Prerequisites

  • The entitlement certificate must be signed by the cloud provider’s CA Certificate.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press e to select create entitlement certificates and client configuration RPMs.
  3. Press e to select generate an entitlement certificate.
  4. Select which repositories to include in the entitlement certificate by typing the number of the repository at the prompt. Typing the number of a repository places an x next to the name of that repository. Continue until all repositories you want to add have been checked.

    Important

    Include only repositories for a single RHEL version in a single entitlement. Adding repositories for multiple RHEL versions leads to an unusable dnf configuration file.

  5. Press c at the prompt to confirm.
  6. Enter a name for the certificate. This name helps identify the certificate within the Red Hat Update Infrastructure Management Tool and is used to generate the names of the certificate and key files.

    Name of the certificate. This will be used as the name of the certificate file
    (name.crt) and its associated private key (name.key). Choose something that will
    help identify the products contained with it.
  7. Enter a path to save the certificate. Leave the field blank to save it to the current working directory.
  8. Enter the number of days the certificate should be valid for. Leave the field blank for 365 days. The details of the repositories to be included in the certificate display.

    Repositories to be included in the entitlement certificate:
    
      Red Hat Repositories
        Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Debug RPMs) from RHUI
        Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI
        Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Source RPMs) from RHUI
    
        Proceed? (y/n)
  9. Press y at the prompt to confirm the information and create the entitlement certificate.

Verification

  1. You will see a similar message if the entitlement certificate was created:

    ..........................+++++
    ....+++++
    Entitlement certificate created at ./rhel10-for-rhui5.crt
    
    ------------------------------------------------------------------------------

When Red Hat issues the original entitlement certificate, it grants access to the repositories you requested. When you create client entitlement certificates, you decide how to subdivide your clients and create a separate certificate for each one. Each certificate can then be used to create individual RPMs.

Prerequisites

  • The entitlement certificate must be signed by the cloud provider’s CA Certificate.

Procedure

  1. Use the following command to create an entitlement certificate from the RHUI CLI:

    # rhua rhui-manager client cert --repo_label rhel-10-for-x86_64-appstream-eus-rhui-source-rpms --name rhuiclientexample --days 365 --dir /root/clientcert
    .............................................+++++
    ...............................................................................+++++
    Entitlement certificate created at /root/clientcert/rhuiclientexample.crt
    Note

    Use Red Hat repository labels, not IDs. To get a list of all labels, run the rhui-manager client labels command. If you include a protected custom repository in the certificate, use the repository’s ID instead.

Verification

  1. A similar message displays if you successfully created an entitlement certificate:

    Entitlement certificate created at /root/clientcert/rhuiclientexample.crt

When Red Hat issues the original entitlement certificate, it grants access to the repositories you requested. When you create client entitlement certificates, you need to decide how to subdivide your clients and create a separate certificate for each one. You can then use each certificate to create individual RPMs for installation on the appropriate guest images.

Use this procedure to create RPMs with the CLI.

Procedure

  1. Use the following command to create an RPM with the RHUI CLI:

    # rhua rhui-manager client rpm --entitlement_cert /root/clientcert/rhuiclientexample.crt --private_key /root/clientcert/rhuiclientexample.key --rpm_name clientrpmtest --dir /root --unprotected_repos unprotected_repo1
    Successfully created client configuration RPM.
    Location: /root/clientrpmtest-2.0/build/RPMS/noarch/clientrpmtest-2.0-1.noarch.rpm
    Note

    When using the CLI, you can also specify the URL of the proxy server to use with RHUI repositories, or you can use _none_ (including the underscores) to override any global dnf settings on a client machine. To specify a proxy, use the --proxy parameter.
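For illustration, a proxy value passed with the --proxy parameter ends up as a per-repository proxy option in the dnf configuration generated for the client. A hypothetical stanza (the repository ID and name are placeholders, assuming the default rhui- prefix) might look like:

```
[rhui-unprotected_repo1]
name=unprotected_repo1
proxy=_none_
```

With proxy=_none_, dnf bypasses any globally configured proxy for this repository.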

Verification

  1. A similar message displays if you successfully created a client configuration RPM:

    Successfully created client configuration RPM.
    Location: /root/clientrpmtest-2.0/build/RPMS/noarch/clientrpmtest-2.0-1.noarch.rpm

When Red Hat issues the original entitlement certificate, it grants access to the repositories you requested. When you create client entitlement certificates, you need to decide how to subdivide your clients and create a separate certificate for each one. You can then use each certificate to create individual RPMs for installation on the appropriate guest images.

Use this procedure to create RPMs with the RHUI Management Tool.

Note

The following procedure creates RPMs in the RHUA container. For best results, use a directory in your container that is also available on the host as a mount point.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press e to select create entitlement certificates and client configuration RPMs.
  3. From the Client Entitlement Management screen, press c to select create a client configuration RPM from an entitlement certificate.
  4. Enter the full path of a local directory to save the configuration files to:

    Full path to local directory in which the client configuration files generated by this tool
    should be stored (if this directory does not exist, it will be created):
  5. Enter the name of the RPM.
  6. Enter the version of the configuration RPM. The default version is 2.0.
  7. Enter the release of the configuration RPM. The default release is 1.
  8. Enter the full path to the entitlement certificate authorizing the client to access specific repositories.
  9. Enter the full path to the private key for the entitlement certificate.
  10. Select any unprotected custom repositories to be included in the client configuration.
  11. Press c to confirm selections or ? for more commands.

Verification

  1. A similar message displays if the RPM was successfully created:

    Successfully created client configuration RPM.
    Location: /root/clientrpmtest-2.0/build/RPMS/noarch/clientrpmtest-2.0-1.noarch.rpm

When creating RPMs, you can either set a custom repository ID prefix or remove it entirely. This is set by editing the main configuration file /etc/rhui/rhui-tools.conf in the RHUA container. By default, the prefix is rhui-.

Procedure

  • On the RHUA node, edit or remove the prefix from the main configuration file:

    • Set a custom prefix:

      [rhui]
      client_repo_prefix: myrhui-
    • If you don’t want to use any prefix, set an empty value:

      [rhui]
      client_repo_prefix:

5.7. Typical client RPM workflow

As a CCSP, you can offer various versions of RHEL and a variety of layered products available on top of it. In addition to the Red Hat repositories that provide this content, you will need custom repositories to provide updates to client configuration RPMs for these RHEL versions and layered products. You must create a custom repository for each RHEL version and each layered product sold separately. For example, you will need separate custom repositories for the base RHEL 10 offering and for SAP on RHEL. These custom repositories will store the corresponding client configuration RPMs. Whenever you update these RPMs—for example, to add a new repository or to update an expiring certificate—you will upload newer versions to the corresponding custom repositories.

It is good practice to sign all RPMs with a GPG key, ensuring that users are installing official packages from you that have not been tampered with. However, signing packages is outside the scope of RHUI, so you need to sign your client configuration RPMs using tools available in your company. To create the custom repository, you only need the public GPG key on the RHUA to configure it for use with the custom repository. Note that rhui-manager will automatically include the key in the client configuration RPM and use it for the custom repository in dnf configuration.

Procedure

  1. In the following example, you will create a custom repository for the client configuration RPM for base RHEL 10 on the x86_64 architecture:

    # rhua rhui-manager repo create_custom --protected --repo_id client-config-rhel-10-x86_64 --display_name "RHUI Client Configuration for RHEL 10 on x86_64" --gpg_public_keys /root/RPM-GPG-KEY-my-cloud

    You can use a different repository ID and display name if desired, and ensure you specify the actual GPG key file.

  2. Add the relevant Red Hat repositories. The following YAML file contains the typical set of repositories for base RHEL 10 on the x86_64 architecture, using unversioned repositories:

    # cat rhel-10-x86_64.yaml
    name: Red Hat Enterprise Linux 10 on x86_64
    repo_ids:
      - codeready-builder-for-rhel-10-x86_64-rhui-debug-rpms
      - codeready-builder-for-rhel-10-x86_64-rhui-rpms
      - codeready-builder-for-rhel-10-x86_64-rhui-source-rpms
      - rhel-10-for-x86_64-appstream-rhui-debug-rpms
      - rhel-10-for-x86_64-appstream-rhui-rpms
      - rhel-10-for-x86_64-appstream-rhui-source-rpms
      - rhel-10-for-x86_64-baseos-rhui-debug-rpms
      - rhel-10-for-x86_64-baseos-rhui-rpms
      - rhel-10-for-x86_64-baseos-rhui-source-rpms
      - rhel-10-for-x86_64-supplementary-rhui-debug-rpms
      - rhel-10-for-x86_64-supplementary-rhui-rpms
      - rhel-10-for-x86_64-supplementary-rhui-source-rpms

    To add and synchronize all these repositories using the YAML file above, run the following command:

    # rhua rhui-manager repo add_by_file --file rhel-10-x86_64.yaml --sync_now
  3. Create an entitlement certificate. You will need a list of repository labels that are to be allowed in the certificate. Repository labels are often identical to repository IDs, except when the repository ID contains a specific RHEL minor version, in which case the label does not contain the minor version but only the major version. In the case of base RHEL repositories, the IDs are identical, so you can extract them from the YAML file above, using the following Python code:

    import yaml
    with open("rhel-10-x86_64.yaml") as repoyaml:
        repodata = yaml.safe_load(repoyaml)
        print(",".join(repodata["repo_ids"]))

    Copy the output to the clipboard and store it in a shell variable; for example, $labels:

    # labels=<paste the contents of the clipboard here>

    In addition to the RHEL repository labels, you also need to add the custom repository to the comma-separated list of labels when creating the entitlement certificate. Run the following command to create the entitlement certificate allowing access to both the RHEL repositories and the custom repository:

    # rhua rhui-manager client cert --name rhel-10-x86_64 --dir /root --days 3650 --repo_label $labels,client-config-rhel-10-x86_64

    If your company’s policy allows certificates to be valid for only one year, two years, etc., change the value of the --days argument accordingly.

    This command creates the files /root/rhel-10-x86_64.crt and /root/rhel-10-x86_64.key. You will need them in the next step.

  4. Create a client configuration RPM:

    # rhua rhui-manager client rpm --dir /tmp --rpm_name rhui-client-rhel-10-x86_64 --rpm_version 1.0 --entitlement_cert /root/rhel-10-x86_64.crt --private_key /root/rhel-10-x86_64.key

    Use an RPM name or version of your choice. With the values above, the command creates the RPM and prints its location:

    /tmp/rhui-client-rhel-10-x86_64-1.0/build/RPMS/noarch/rhui-client-rhel-10-x86_64-1.0-1.noarch.rpm

    Transfer this RPM from the RHUA to your system and sign it with the appropriate GPG key, that is, the private key that corresponds to the public key that you used as the --gpg_public_keys parameter when you created the custom repository. You can then, for example, have the signed RPM preinstalled on RHEL 10 x86_64 images in your cloud environment. You also need to transfer the signed RPM back to the RHUA and upload it to the custom repository for RHEL 10 on x86_64:

    # rhua rhui-manager packages upload --repo_id client-config-rhel-10-x86_64 --packages /root/signed/rhui-client-rhel-10-x86_64-1.0-1.noarch.rpm
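Step 3 notes that a repository label drops the RHEL minor version that may appear at the end of a repository ID. Assuming the minor version only ever appears as a trailing suffix such as -10.0, the conversion can be sketched as follows; the helper is illustrative, not a rhui-manager command:

```python
import re

def label_from_repo_id(repo_id: str) -> str:
    """Drop a trailing minor-version suffix (e.g. -10.0) from a repository ID."""
    return re.sub(r"-\d+\.\d+$", "", repo_id)

# A versioned (e.g. EUS) repository ID carries the minor version; the label does not:
print(label_from_repo_id("rhel-10-for-x86_64-appstream-eus-rhui-rpms-10.0"))
# rhel-10-for-x86_64-appstream-eus-rhui-rpms

# Base RHEL repository IDs are already identical to their labels:
print(label_from_repo_id("rhel-10-for-x86_64-baseos-rhui-rpms"))
# rhel-10-for-x86_64-baseos-rhui-rpms
```

When in doubt, prefer the authoritative list from the rhui-manager client labels command over any derived values.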

Verification

  1. Check the contents of the custom repository:

    # rhua rhui-manager packages list --repo_id client-config-rhel-10-x86_64

    This command should list the RPM file that you uploaded.

  2. Once you have configured your CDS and HAProxy nodes, which is described later in this guide, you can also install the client configuration RPM on a test VM and verify access to all the relevant repositories by running the following command on the test VM:

    # dnf -v repolist

    This command should print the configured RHEL 10 repositories and the custom repository for client configuration RPMs.

Updating the client configuration RPM

When it is necessary to rebuild the client configuration RPM, increase the version number.

  1. If you used 1.0 in the previous invocation, use, for example, 2.0 now, and keep the rest of the parameters:

    # rhua rhui-manager client rpm --dir /tmp --rpm_name rhui-client-rhel-10-x86_64 --rpm_version 2.0 ...
  2. Then, again, sign the newer RPM, transfer it to the RHUA, and upload it to the custom repository:

    # rhua rhui-manager packages upload --repo_id client-config-rhel-10-x86_64 --packages /root/signed/rhui-client-rhel-10-x86_64-2.0-1.noarch.rpm
  3. Client VMs on which the previous version of the RPM is installed will now be able to update to the newer version. Note that it may be necessary to clean the dnf cache on the client VM to make dnf reload the repodata, which was updated when the newer RPM was uploaded.
Note

Do not combine x86_64 and ARM64 repositories in one entitlement certificate. The client configuration RPM created by rhui-manager using such a certificate would provide access to both architectures on the target client VM, which might cause conflicts. You would have to modify the rh-cloud.repo file and rebuild the RPM outside of rhui-manager. Note that, as long as you used --dir /tmp when creating the client configuration RPM, the artifacts are now stored in /tmp/rhui-client-rhel-10-x86_64-1.0/build/. For detailed information about rebuilding RPMs, see Packaging and distributing software in the RHEL documentation.

Note

It is currently impossible to make rhui-manager create the rh-cloud.repo file with certain repositories—for example, -debug and -source repositories—disabled by default. You would have to modify the rh-cloud.repo file and rebuild the RPM outside of rhui-manager. This issue is tracked in BZ#1772156.

Chapter 6. Managing Red Hat Certificates

6.1. Red Hat Update Appliance certificates

The RHUA in RHUI uses the following certificates and keys:

  • Content certificate and private key
  • Entitlement certificate and private key
  • SSL certificate and private key
  • Cloud provider’s CA certificate

The RHUA is configured with the content certificate and the entitlement certificate. The RHUA uses the content certificate to connect to the Red Hat CDN. It also uses the Red Hat CA certificate to verify the connection to the Red Hat CDN. As the RHUA is the only component that connects to the Red Hat CDN, it is the only RHUI component that has this certificate deployed. Note that multiple RHUI installations can use the same content certificate. For instance, the Amazon EC2 cloud runs multiple RHUI installations (one per region), but each RHUI installation uses the same content certificate.

Clients use the entitlement certificate to permit access to packages in RHUI. To perform an environment health check, the RHUA attempts a dnf request against each CDS. To succeed, the dnf request must specify a valid entitlement certificate.

6.2. Content delivery server certificates

Each CDS node in RHUI uses the following certificates and keys:

  • SSL certificate and private key
  • Cloud provider’s CA certificate

The only certificate necessary for the CDS is an SSL certificate, which permits HTTPS communications between the client and the CDS. The SSL certificates are scoped to a specific hostname, so a unique SSL certificate is required for each CDS node. If SSL errors occur when connecting to a CDS, verify that the certificate’s common name is set to the fully qualified domain name (FQDN) of the CDS on which it is installed.

The CA certificate is used to verify that the entitlement certificate sent by the client as part of a dnf request was signed by the cloud provider. This prevents a rogue instance from generating its own entitlement certificate for unauthorized use within RHUI.

6.3. Client certificates

Each client in the RHUI uses an entitlement certificate and private key as well as the cloud provider’s CA certificate.

The entitlement certificate and its private key enable information encryption from the CDS back to the client. Each client uses the entitlement certificate when connecting to the CDS to prove it has permission to download its packages. All clients use a single entitlement certificate.

The cloud provider’s CA certificate is used to verify the CDS’s SSL certificate when connecting to it. This ensures that a rogue instance is not impersonating the CDS and introducing potentially malicious packages into the client.

On the client, the CA certificate verifies the SSL certificate, not the entitlement certificate; the reverse is true on the CDS node. The SSL certificate and private key are used to encrypt data from the client to the CDS. The CA certificate present on the CDS verifies that the CDS node should trust the entitlement certificate sent by the client.

The Entitlements Manager screen is used to list entitled products in the current Red Hat content certificates and to upload new certificates.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press n to select manage Red Hat entitlement certificates.
  3. From the Entitlements Manager screen, press l to list data about the current content certificate:

    rhui (entitlements) => l
    
       Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Debug RPMs) from RHUI
       Expiration: 02-27-2027     Certificate: c885597492374720bb5d398c3f65d1ed.pem
    
       Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI
       Expiration: 02-27-2027     Certificate: c885597492374720bb5d398c3f65d1ed.pem
    
       Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Source RPMs) from RHUI
       Expiration: 02-27-2027     Certificate: c885597492374720bb5d398c3f65d1ed.pem
    
       Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI
       Expiration: 02-27-2027     Certificate: c885597492374720bb5d398c3f65d1ed.pem
    
       Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (RPMs) from RHUI
       Expiration: 02-27-2027     Certificate: c885597492374720bb5d398c3f65d1ed.pem
    
       Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Source RPMs) from RHUI
       Expiration: 02-27-2027     Certificate: c885597492374720bb5d398c3f65d1ed.pem

Verification

You will see a list of the entitled products in the current Red Hat content certificates.
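To keep an eye on the expiration dates shown in the listing, you can compute how many days remain before a certificate expires. A minimal sketch, assuming the MM-DD-YYYY format displayed by rhui-manager; the helper name is illustrative:

```python
from datetime import date, datetime

def days_until_expiration(expiration: str, today: date) -> int:
    """Parse an expiration date in the MM-DD-YYYY format shown by
    rhui-manager and return the number of days remaining."""
    expires = datetime.strptime(expiration, "%m-%d-%Y").date()
    return (expires - today).days

# Expiration date from the listing above, checked against a fixed date:
print(days_until_expiration("02-27-2027", date(2026, 2, 27)))
# 365
```

In practice you would pass date.today() instead of a fixed date, and renew the content certificate well before the remaining days reach zero.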

Chapter 7. Managing Content Delivery Servers

7.1. Managing content delivery servers

CDS nodes provide content to RHUI clients.

You can use the Content Delivery Server (CDS) Management screen to list, add, delete, and reinstall CDS nodes.

7.2. Registering a new CDS

The Red Hat Update Infrastructure Management Tool provides several options for configuring a CDS within the RHUI.

Prerequisites

  • Make sure sshd is running on the CDS node and that port 443 is open.
Note

Answering yes (y) to the question Update instance after registering? (y/n) results in a dnf update being run on the instance after it is registered. This may require a reboot of the instance. Answering no (n) means that the dnf update is not run.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press c to select manage content delivery servers (CDS).
  3. From the Content Delivery Server (CDS) Management screen, press a to add a new CDS instance.
  4. Enter the hostname of the CDS to add:

    Hostname of the CDS instance to register:
    cds1.example.com
  5. Enter the user name that has SSH access to the CDS and sudo privileges.

    Username with SSH access to <cds1.example.com> and sudo privileges:
    <cloud-user>
  6. Enter the absolute path to the SSH private key for logging in to the CDS and press Enter.

    Absolute path to an SSH private key to log into <cds1.example.com> as <cloud-user>:
    /home/<cloud-user>/.ssh/id_rsa_rhua
  7. Update the instance with the latest versions of available packages:

    Update instance after registering? (y/n): y
  8. Optional: If you want to use custom SSL certificates, enter the absolute paths to your SSL key and SSL certificate (crt) files.

    Note

    If you do not provide an SSL certificate, it will be automatically generated.

    Optional absolute path to user supplied SSL key file:
    /home/<cloud-user>/custom_ssl.key
    
    Optional absolute path to user supplied SSL crt file:
    /home/<cloud-user>/custom_ssl.crt
    
    .........................................................................
    The following CDS has been successfully added:
    
      Hostname:             <cds1.example.com>
      SSH Username:         <cloud-user>
      SSH Private Key:      /home/<cloud-user>/.ssh/id_rsa_rhua
    
    The CDS will now be configured:
    ....................................................................
    The CDS was successfully configured.
  9. If adding the content delivery server fails, check that the firewall rules permit access between the RHUA and the CDS.
  10. Run the mount command to see if shared storage is mounted as read-write.

    [root@rhua ~]# mount | grep rhui
    
    nfs.example.com:/export on /var/lib/rhui/remote_share type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.8.41.163,local_lock=none,addr=10.8.41.163)
  11. After successful configuration, repeat these steps for all remaining CDS nodes.

You can use the Content Delivery Server (CDS) Management screen to list all CDS nodes managed by RHUI 5.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press c to select manage content delivery servers (CDS):
  3. From the Content Delivery Server (CDS) Management screen, press l to list all known CDS nodes that RHUI 5 manages:

    Hostname:             <cds1.example.com>
    SSH Username:     <cloud-user>
    SSH Private Key:     /<cloud-user>/.ssh/id_rsa_rhua

You may encounter a situation where you need to reinstall and reapply the configuration for a CDS. The Red Hat Update Infrastructure Management Tool provides an easy way to accomplish this task.

Prerequisites

  • At least one installed CDS
Note

Answering yes (y) to the question Update instance(s) after reinstalling? (y/n) results in a dnf update being run on the instance after it is reinstalled. This may require a reboot of the instance. Answering no (n) means that the dnf update is not run.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press c to select manage content delivery servers (CDS).
  3. From the Content Delivery Server (CDS) Management screen, press r to select reinstall and reapply configuration to an existing CDS instance. The Red Hat Update Infrastructure Management Tool automatically performs all reinstallation and reconfiguration tasks.
  4. Select the CDS to reinstall:

        1 -
        Hostname:             <cds1.example.com>
        SSH Username:     <cloud-user>
        SSH Private Key:     /<cloud-user>/.ssh/id_rsa_rhua
  5. Enter a value or b to abort: 1: 1
  6. Update instance(s) after reinstalling? (y/n): y

    Checking that the RHUA services are reachable from the instance...
    Done.
    
    
    Installing and configuring the CDS...
    
    PLAY [Registering a CDS instance] **********************************************
    
    ...
    
    TASK [Update CDS instance] *****************************************************
    ok: [cds1.example.com]
    
    PLAY RECAP *********************************************************************
    cloud-user@cds1.example.com : ok=24   changed=10   unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
    
    Done.

Verification

Check that you successfully reinstalled and reconfigured the CDS by viewing the code output:

Ensuring that instance ports are reachable ...
Done.

7.5. Configuring a CDS to accept legacy CAs

A CDS node normally accepts only entitlement certificates signed by the Certificate Authority (CA) that is currently configured on RHUI 5. You may want to accept other previously created CAs so that clients can continue to work if you change your main CA or when the CA certificate expires. RHUI 5 supports the concept of legacy CAs, where you can install other CA certificates on CDS nodes and make them usable.

Prerequisites

  • Make sure all your RHUI nodes are running version 5.0 or later. If you originally installed RHUI from an older version, reinstall your CDS nodes in rhui-manager first.

Procedure

  1. Transfer your legacy CA certificate to your CDS nodes and save it in the /etc/pki/rhui/legacy-ca/ directory.
  2. Get the subject hash value from the certificate and keep it in a shell variable:

    # hash=$(openssl x509 -hash -noout -in /etc/pki/rhui/legacy-ca/YOUR_CERT.crt)
  3. Create a symbolic link to the certificate file in the /etc/pki/tls/certs/ directory with the hash and an unused number, starting from 0, as the symbolic link name:

    # ln -s /etc/pki/rhui/legacy-ca/YOUR_CERT.crt /etc/pki/tls/certs/$hash.0

    This action takes effect immediately.

Note

If you decide to stop accepting the certificate, delete the symbolic link and the certificate file; restart the httpd service.
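
The install and removal steps can be collected into one sketch. The commands below run in a scratch directory, with a throwaway self-signed certificate standing in for the legacy CA; on a real CDS node, the certificate lives in /etc/pki/rhui/legacy-ca/, the symbolic link goes into /etc/pki/tls/certs/, and you restart the httpd service after removal.

```shell
# Scratch stand-ins for the real directories on a CDS node.
workdir=$(mktemp -d)
certdir="$workdir/legacy-ca"    # stands in for /etc/pki/rhui/legacy-ca
linkdir="$workdir/tls-certs"    # stands in for /etc/pki/tls/certs
mkdir -p "$certdir" "$linkdir"

# Throwaway self-signed certificate standing in for the legacy CA.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=legacy-ca.example.com" \
    -keyout "$workdir/ca.key" -out "$certdir/legacy.crt" 2>/dev/null

# Install: link the certificate under its subject hash with an unused
# numeric suffix, starting from 0.
hash=$(openssl x509 -hash -noout -in "$certdir/legacy.crt")
ln -s "$certdir/legacy.crt" "$linkdir/$hash.0"

# Stop accepting the CA: remove the link and the certificate
# (followed by `systemctl restart httpd` on a real CDS node).
rm "$linkdir/$hash.0" "$certdir/legacy.crt"
```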

To stop your content delivery server (CDS) nodes from accepting legacy certificate authorities (CAs), remove the respective CA certificates.

Prerequisites

  • Clients are no longer using the CA.

Procedure

  1. On the CDS node, navigate to the /etc/pki/rhui/legacy/ directory:

    # cd /etc/pki/rhui/legacy/
  2. Optional: Back up the existing CA certificates.
  3. Delete the CA certificate that corresponds to the CA you no longer want to accept:

    # rm example-legacy.crt

Verification

  • The CDS node stops accepting legacy CAs as soon as you delete the CA certificate.
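
The effect can be illustrated locally with openssl verify, using throwaway certificates as stand-ins for the legacy CA and for a client entitlement certificate signed by it. All the file names here are illustrative; on a real deployment, the CDS rejects the client over TLS.

```shell
d=$(mktemp -d)
# Throwaway "legacy CA" and a client certificate signed by it.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=legacy-ca" \
    -keyout "$d/ca.key" -out "$d/ca.crt" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -subj "/CN=client" \
    -keyout "$d/client.key" -out "$d/client.csr" 2>/dev/null
openssl x509 -req -in "$d/client.csr" -CA "$d/ca.crt" -CAkey "$d/ca.key" \
    -CAcreateserial -days 1 -out "$d/client.crt" 2>/dev/null

# While the CA certificate is present, the client certificate verifies.
before=$(openssl verify -CAfile "$d/ca.crt" "$d/client.crt")

# After deleting the CA certificate, verification fails immediately.
rm "$d/ca.crt"
after=$(openssl verify -CAfile "$d/ca.crt" "$d/client.crt" 2>&1 || true)
echo "$before"
echo "$after"
```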

7.7. Unregistering a CDS

You can unregister (delete) a CDS instance that you are not going to use.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press c to select manage content delivery servers (CDS).
  3. From the Content Delivery Server (CDS) Management screen, press d to delete a CDS instance.
  4. Enter the hostname of the CDS to delete:

    Hostname of the CDS instance to unregister:
    cds1.example.com

8.1. Managing an HAProxy load-balancer instance

A load-balancing solution must be in place to spread client HTTPS requests across all CDS servers. Red Hat Update Infrastructure 5 uses HAProxy by default, but you can choose a different load-balancing solution (for example, one provided by the cloud provider) during the installation. If you use HAProxy, you must also decide how many nodes to deploy.

8.2. Registering a new HAProxy load-balancer

RHUI 5 uses DNS to reach the CDN. In most cases, your instance should be preconfigured to talk to the proper DNS servers hosted as part of the cloud’s infrastructure. If you run your own DNS servers or update your client DNS configuration, you may see dnf errors similar to: Could not contact any CDS load balancers. In these cases, check that your DNS server forwards requests to the cloud’s DNS servers, or that your DNS client is configured to fall back to the cloud’s DNS server for name resolution.

Using more than one HAProxy node requires a round-robin DNS entry for the hostname used as the value of the --cds-lb-hostname parameter when rhui-installer is run (cds.example.com in this guide). This hostname must resolve to the IP addresses of all HAProxy nodes. How to Configure DNS Round Robin presents one way to configure a round-robin DNS.
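
For illustration, such a round-robin entry in a BIND zone file may look like the following fragment. The hostname and addresses are placeholders; the A records map the --cds-lb-hostname value to every HAProxy node.

```
; cds.example.com answers with all HAProxy node addresses, rotated per query
cds     IN  A   192.0.2.10
cds     IN  A   192.0.2.11
cds     IN  A   192.0.2.12
```

You can check the result with dig +short cds.example.com, which should print all the addresses.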

Note

Answering yes (y) to the question Update instance(s) after reinstalling? (y/n): runs a dnf update on the instance after it is registered. The update may require a reboot of the instance. Answering no (n) skips the dnf update.

Prerequisites

  1. Make sure sshd is running on the HAProxy load-balancer node and that port 443 is open.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press l to select manage HAProxy load-balancer instances.
  3. From the Load-balancer (HAProxy) Management screen, press a to add a new load-balancer instance.
  4. Enter the hostname of the new load-balancer:

    Hostname of the HAProxy Load-balancer instance to register:
    <haproxy1.example.com>
  5. Enter the username that will have SSH access to the load-balancer and have sudo privileges:

    Username with SSH access to cds.example.com and sudo privileges:
    <cloud-user>
  6. Enter the absolute path to the SSH private key for logging in to the load-balancer instance and press Enter:

    Absolute path to an SSH private key to log into cds.example.com as <cloud-user>:
    /<cloud-user>/.ssh/id_rsa_rhua
  7. Update the instance with the latest versions of available packages:

    Update instance after registering? (y/n): y
  8. Optional: Enter an optional absolute path to a user supplied HAProxy configuration file and press Enter.

    If you do not specify the path to a custom configuration file, the default file, /usr/share/rhui-tools/templates/haproxy.cfg, is used instead.

    Optional absolute path to user supplied HAProxy config file:
    
    .........................................................................
    The following load-balancer has been successfully added:
    
    Hostname:         <haproxy1.example.com>
    SSH Username:     <cloud-user>
    SSH Private Key:  /<cloud-user>/.ssh/id_rsa_rhua
    
    The load-balancer will now be configured:
  9. If adding the load-balancer fails, check that the firewall rules permit access between the RHUA and the load-balancer.
  10. After successful configuration, repeat these steps for any remaining load-balancer instances.

Verification

  • The following message displays:

    The HAProxy Load-balancer was successfully configured.

You can use the Load-balancer (HAProxy) Management screen to list all known HAProxy load-balancer instances that RHUI 5 manages.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press l to select manage HAProxy load-balancer instances.
  3. From the Load-balancer (HAProxy) Management screen, press l to list the load-balancer instances that RHUI manages:

    Hostname:             <haproxy1.example.com>
    SSH Username:     <cloud-user>
    SSH Private Key:     /<cloud-user>/.ssh/id_rsa_rhua

You may encounter a situation where you need to reinstall and reapply the configuration for an HAProxy load-balancer. The Red Hat Update Infrastructure Management Tool provides an easy way to accomplish this task.

Prerequisites

  • Make sure sshd is running on the HAProxy load-balancer node and that port 443 is open.
Important

It is crucial that the files included in the restore retain their current attributes.

Note

Answering yes (y) to the question Update instance(s) after reinstalling? (y/n): runs a dnf update on the instance after it is reinstalled. The update may require a reboot of the instance. Answering no (n) skips the dnf update.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press l to select manage HAProxy load-balancer instances.
  3. From the Load-balancer (HAProxy) Management screen, press r to reinstall and reapply the configuration to a load-balancer instance.

    The Red Hat Update Infrastructure Management Tool automatically performs all reinstallation and reconfiguration tasks.

  4. Select the load-balancer to reinstall:

        1 -
        Hostname:             <haproxy1.example.com>
        SSH Username:     <cloud-user>
        SSH Private Key:     /<cloud-user>/.ssh/id_rsa_rhua
  5. Enter a value or b to abort: 1: 1
  6. Update instance(s) after reinstalling? (y/n): y

    Installing and configuring the HAProxy Load-balancer...
    
    PLAY [Registering a load balancer instance] ************************************
    
    ...
    
    TASK [Update load balancer instance] *******************************************
    ok: [haproxy1.example.com]
    
    PLAY RECAP *********************************************************************
    cloud-user@haproxy1.example.com : ok=8    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
    
    Done.

Verification

Check that you successfully reinstalled and reconfigured the load-balancer by viewing the code output:

Ensuring that HAProxy is available...
Done.

8.5. Unregistering an HAProxy load-balancer

You can unregister (delete) an HAProxy load-balancer instance that you are not going to use.

Prerequisites

  • Make sure sshd is running on the HAProxy load-balancer node and that port 443 is open.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press l to select manage HAProxy load-balancer instances.
  3. From the Load-balancer (HAProxy) Management screen, press d to delete a load-balancer instance.
  4. Enter the hostname of the load-balancer to delete:

    Hostname of the load-balancer instance to unregister:
    <haproxy1.example.com>

Chapter 9. Synchronization Status and Scheduling

A repository is a storage location for software packages (RPMs). RHEL uses dnf commands to search a repository and to download, install, and update RPMs. Each RPM declares the dependencies needed to run an application.

The length of the initial synchronization of Red Hat content can vary. If you choose to synchronize repositories as soon as possible, you can synchronize all repositories in Red Hat Update Infrastructure 5 by running rhui-manager repo sync_all in the CLI.

9.2. Displaying repository synchronization summary

You can use the Synchronization Status screen to display information about a particular repository.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press s to select synchronization status and scheduling.
  3. From the Synchronization Status screen, press dr:

    -= Repository Summary Synchronization Status =-
    
    Last Refreshed: 02:01:22
    (updated every 5 seconds, ctrl+c to exit)
    
    Last Sync                    Last Result
    -------------------------------------------------
    Red Hat Enterprise Linux 10 for ARM 64 - BaseOS (Debug RPMs) from RHUI (10)
      Never                        None
    ....
    ....
    Red Hat Enterprise Linux 10 for x86_64 - AppStream from RHUI (Debug RPMs) (10.1)
      2026-07-29 17:45:41          Running
    Associating Content: 11001 (97%)
    Downloading Artifacts: 7376

9.3. Displaying running synchronizations

You can use the Synchronization Status screen to check the status on running synchronization tasks.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press s to select synchronization status and scheduling.
  3. From the Synchronization Status screen, press rr:

    Last Refreshed: 02:06:46
    (updated every 5 seconds, ctrl+c to exit)
    
    Current Sync                 Result
    -------------------------------------------------
    Red Hat Enterprise Linux 10 for x86_64 - AppStream from RHUI (Debug RPMs) (10.0)
      2026-07-29 17:45:41          Running
    Associating Content: 11001 (97%)
    Downloading Artifacts: 7376

You can use the Synchronization Status screen to view the details of the last repository synchronization.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press s to select synchronization status and scheduling.
  3. From the Synchronization Status screen, press vr.
  4. Enter the number for the repository that you want to see details for:

    Enter value (1-66) or 'b' to abort:

Verification

A similar message displays if the selected repository has not been synchronized:

Repo: Red Hat Enterprise Linux 8 for x86_64 - AppStream from RHUI (Debug RPMs) (8.2)
No syncs have been completed for this repository.

The initial synchronization of content can take a while, typically 10 to 20 minutes. If you choose to synchronize repositories as soon as possible, you can synchronize all repositories in RHUI 5 by running rhui-manager repo sync_all in the CLI.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press s to select synchronization status and scheduling.
  3. From the Synchronization Status screen, press sr:

    Select one or more repositories to schedule to be synchronized before its scheduled time.
    The sync will happen as soon as possible depending on other tasks that may be executing
    in the RHUI.  Sync requests for repositories with tasks in running
    or pending state will be ignored.
    
             Last Result  Next Sync              Repository
             -------------------------------------------------
  4. Select the repository by entering the value beside the repository name. Enter one selection at a time before confirming:

    x  714: Error        2026-11-17 20:30:00    Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10.0)
  5. Press c to confirm:

    The following repositories will be scheduled for synchronization:
      Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10.0)
    Proceed? (y/n) y
  6. Press y to proceed:

    Scheduling sync for Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10.0)...
    ... successfully scheduled for the next available timeslot.
    Note

    This message displays if a task for the selected repository is running. Ignoring sync request for Red Hat Enterprise Linux 10 for x86_64 - AppStream from RHUI (Debug RPMs) (10.0) as the repo is currently reserved by a running task.

9.6. Canceling active synchronization tasks

Most environments synchronize repositories on a scheduled basis. You may encounter a situation where you need to cancel active synchronization tasks.

Prerequisites

  • There are existing repositories.
  • There are active synchronization tasks.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press s to select synchronization status and scheduling.
  3. From the Synchronization Status screen, press ca to select cancel active sync tasks.
  4. Enter the value for the task or tasks that you want to cancel:

    Select one or more repositories for which you want to cancel their active tasks.
      -    1: Red Hat Enterprise Linux 10 for x86_64 - AppStream from RHUI (Debug RPMs) (10.0)
    Enter value (1-1) to toggle selection, 'c' to confirm selections, or '?' for more commands:
  5. Press c to confirm your selection.
  6. Press y to cancel the synchronization task or tasks:

    The active tasks will be canceled for the following repositories:
      Red Hat Enterprise Linux 10 for x86_64 - AppStream from RHUI (Debug RPMs) (10.0)
    Proceed? (y/n)

Verification

A similar message displays if you cancel an active synchronization task:

Canceling active task for repo Red Hat Enterprise Linux 10 for x86_64 - AppStream from RHUI (Debug RPMs) (10.0) ...
... done

You can use the Synchronization Status screen to look at and modify a repository’s auto-publish status.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press s to select synchronization status and scheduling.
  3. From the Synchronization Status screen, press ap:

    rhui (sync) => ap
    
    Select one or more repositories to toggle the auto-publish status.
    The operation will be executed as soon as possible depending on other tasks
    that may be executing in the RHUI.
    
                    Status | Repository
               --------------------------------------------------------------------------
    Select one or more repositories:
    
      Custom Repositories
    
      Red Hat Repositories: dnf
    
         -  713:       AUTO Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10)
         -  714:       AUTO Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10.1)
  4. Enter a value (1-1631) to toggle the selection, c to confirm selections, or ? for more commands:

    The following repositories will have their auto-publish status changed:
      Red Hat Repositories
        dnf
           Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10.0)
  5. Press c to confirm your selection.
  6. Press y to proceed.

Verification

A similar message displays when you make and confirm a selection:

Scheduling a task to turn off auto-publish status of repository Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10)

9.8. Viewing and advancing repository workflow

You can use the Synchronization Status screen to look at and change a repository’s workflow.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press s to select synchronization status and scheduling.
  3. From the Synchronization Status screen, press wf.
  4. Enter a value (1-1631) to toggle the selection, c to confirm selections, or ? for more commands:

    The following repositories will be scheduled for workflow push:
      Red Hat Repositories
        dnf
           Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10)
  5. Press y to proceed:

Verification

A similar message displays if the scheduling was successful:

Scheduling a task for generating metadata version 0 for repo Red Hat Enterprise Linux 10 for ARM 64 - AppStream (RPMs) from RHUI (10) ...
  ... task scheduled.

9.9. Exporting a repository to the file system

Note

Repositories are exported automatically after the latest synchronization that updated their contents.

You can use the Synchronization Status screen to forcibly export a repository to a file system at any time.

Procedure

  1. Navigate to the Red Hat Update Infrastructure Management Tool home screen:

    [root@rhua ~]# rhua rhui-manager
  2. Press s to select synchronization status and scheduling.
  3. From the Synchronization Status screen, press ex.
  4. Enter a value to toggle the selection.
  5. Press c to confirm the selection:

    The following repositories will be exported:
      Red Hat Repositories
        dnf
           Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Source RPMs) from RHUI (10)
  6. Press y to proceed.

Verification

A similar message displays if the repository is exported to a file system:

[1/1] Exporting version 1 of the repo Red Hat Enterprise Linux 10 for ARM 64 - AppStream (Source RPMs) from RHUI (10).

9.10. RHUI systemd Timers

Systemd timers are unit files, ending in .timer, that control when the corresponding .service units run. Several repetitive tasks are automatically scheduled and run in Red Hat Update Infrastructure, on the RHUA node in particular.

Expand
Table 9.1. Systemd Timers
Timer filePurposeFrequencyLog file

rhui-export-repos-trigger.timer

export synchronized content

every 5 minutes; randomized

/var/log/rhui/rhui-export-repos.log

rhui-orphans-cleanup.timer

delete orphaned content

weekly, at 4 a.m. every Wednesday

/root/.rhui/rhui.log

rhui-pulp-tmpfiles-cleanup.timer

delete temporary files left behind by Pulp

weekly, at 3 a.m. every Tuesday

/root/.rhui/rhui.log

rhui-purge-upload-dirs.timer

clean up temporary directories from uploads to custom repositories

every 5 minutes; randomized

/var/log/rhui/rhui-purge-upload-dirs.log

rhui-repo-sync.timer

synchronize repositories, if they are due

every 5 minutes; randomized

(none; see the systemd journal instead)

rhui-symlinks-cleanup-deepscan.timer

delete all broken symlinks to deleted artifacts

yearly, at 4 a.m. on the Tuesday in January that falls between the 16th and the 22nd

/root/.rhui/rhui.log

rhui-synchronize-subscriptions.timer

check for changes to the entitlement certificate, and import a new one if needed

hourly; randomized

/var/log/rhui/rhui-subscription-sync.log

rhui-update-mappings.timer

update the information about available minor versions

every six hours; randomized

/var/log/rhui/rhui-update-mappings.log

Notes on the systemd timers:

  • Timer files are stored in /usr/lib/systemd/system in the RHUA container.
  • All the log file paths are in the RHUA container. Note that /var/log/rhui is also available on the RHUA host as /var/lib/rhui/log, and /root/.rhui is available as /var/lib/rhui/root/.rhui.
  • Timers are randomized when the RHUA container starts or is restarted.
  • To view all the RHUI timers, run the following command in the RHUA container: systemctl list-timers --all rhui\*
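
For reference, a timer unit that fires every five minutes with a randomized delay, such as the export trigger above, may be defined along these lines. This is an illustrative sketch, not the exact unit shipped with RHUI.

```
[Unit]
Description=Trigger RHUI repository export

[Timer]
# Every 5 minutes, with a randomized start to spread the load
OnCalendar=*:0/5
RandomizedDelaySec=300

[Install]
WantedBy=timers.target
```

Running systemctl list-timers --all rhui\* in the RHUA container shows when each timer last ran and when it fires next.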

Chapter 10. Backing up RHUI

10.1. Backing up Red Hat Update Infrastructure

After you have installed and configured your RHUI servers, you might want to back them up. Backing up RHUI is useful if you encounter any problems with RHUI. In such cases, you can return to a previous working configuration by restoring RHUI.

To successfully back up RHUI, you must back up your Red Hat Update Appliance (RHUA).

10.2. Backing up Red Hat Update Appliance

To back up Red Hat Update Appliance (RHUA), you must back up all the associated files and storage.

Note

To back up the RHUA, you must stop the associated services. Stopping the services does not prevent client instances from updating or installing packages, because clients connect only to the content delivery servers (CDSs). However, if you have an automated monitoring solution in place, your monitoring may report failures during the backup process.

Procedure

  1. Stop RHUA services:

    # systemctl stop rhui_rhua
  2. Verify whether the services have stopped:

    # systemctl status rhui_rhua
  3. Back up the following files:

    # rsync -av --exclude .local --exclude remote_share /var/lib/rhui/ /BACKUP/DIRECTORY/
    Important

    Ensure that the files retain their current attributes when you back them up.

  4. Back up any generated client entitlement certificates and client configuration RPMs.

    • Optional: If you want to back up the remote share from the RHUA without using a different backup solution for the file server, use the following command:

      # rsync -av /var/lib/rhui/remote_share/ /ANOTHER/BACKUP/DIRECTORY/
  5. Restart RHUI services.

    # systemctl start rhui_rhua

10.3. Backing up content delivery servers

To back up CDSs, you must back up all the associated files and storage.

Note

To avoid complete loss of service, back up a single CDS node at a time. Clients will automatically switch to other running CDS nodes.

Procedure

  1. Stop the nginx service:

    # systemctl stop nginx
  2. Verify that the nginx service has stopped:

    # systemctl status nginx
  3. Back up the following files:

    # cp -a <source_files_path> <destination_files_path>
    Important

    Ensure that the files retain their current attributes when you back them up.

    List of files:

    • /etc/nginx/*
    • /var/log/nginx/*
    • /etc/pki/rhui/*
  4. Restart RHUI services.

    # rhui-services-restart
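
The cp -a form from step 3, demonstrated on a scratch copy; the real sources on a CDS node are /etc/nginx, /var/log/nginx, and /etc/pki/rhui. The archive flag keeps the file attributes that the Important note asks you to retain.

```shell
src=$(mktemp -d); dest=$(mktemp -d)
mkdir -p "$src/nginx"
printf 'server {}\n' > "$src/nginx/nginx.conf"
chmod 600 "$src/nginx/nginx.conf"

# -a (archive) recurses and preserves mode, timestamps, and links.
cp -a "$src/nginx" "$dest/"
```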

11.1. Configuration Files

The following configuration files, RHUI Manager exit codes, and log files are used in RHUI 5.

Expand
Table 11.1. Configuration Files
ComponentFile or DirectoryUsage

Red Hat Update Appliance

/etc/pulp/*

Pulp config files

 

/etc/rhui/rhui-tools.conf

rhui-manager config files

 

/etc/rhui-static/rhui-tools-static.conf

 
 

/etc/pki/rhui/*

Certificates for Red Hat Update Infrastructure

 

/etc/rhui/rhui-subscription-sync.conf

Configuration for the subscription synchronization script

Content Delivery Server

/etc/pki/rhui/certs/

Certificates for CDS

Content Delivery Server

/var/lib/rhui/remote_share/cds-config/ssl

SSL configuration file

HAProxy

/etc/haproxy/haproxy.cfg

HAProxy configuration file

11.2. RHUI Manager Exit Codes

RHUI Manager uses the following exit codes to indicate the result of the rhui-manager status command and of other rhui-manager CLI commands.

Expand
Table 11.2. RHUI Manager Exit Codes
Status CodeDescription

0

Success

1

General error or a repository synchronization error

2

SSL certificate error on a CDS

32

Entitlement CA or SSL certificate expiration warning

64

Entitlement CA or SSL certificate expiration error

128

One or more RHUI services are not running on the RHUA, CDS, or HAProxy nodes

238

No packages to upload to the specified custom repository were found.

239

A repository could not be deleted because it does not exist.

240

There was an issue with a required resource. For example, it was impossible to build a client configuration RPM because no valid repository was found.

241

A synchronization task could not be scheduled because an unknown repository was specified.

To troubleshoot:

  • Check the spelling
  • Add the repository first
  • Check the logs for Pulp issues

242

A custom repository could not be created due to a Pulp issue. Check the message and logs for details.

243

Red Hat repositories could not be added because some of them already exist in RHUI and some of them were not available in the entitlement.

244

A custom repository could not be created because it already exists in RHUI.

245

A Red Hat repository could not be added because it already exists in RHUI.

246

A Red Hat repository could not be added because it is not available in the entitlement. Check the spelling or remove the repository mapping cache using the command rm -f /var/cache/rhui/*, and try again.

247

A Red Hat repository could not be added due to a Pulp issue. Check the message and logs for details.

248

Migration from RHUI 3 to RHUI 4 was stopped because one or more Red Hat repositories are already present in RHUI 4. You must remove the repositories or use the --force flag.

249

The RHUI configuration, /etc/rhui/rhui-tools.conf, is invalid. Check the message for details.

250

The entitlement certificate is not writable.

251

The entitlement certificate has expired.

252

The entitlement certificate is invalid because it does not contain RHUI repositories.

253

The entitlement certificate file is not a valid certificate.

254

Command-line Error: The RHUI CLI could not run due to a network issue.

255

Argument Error: A required argument was not supplied.
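
In a monitoring wrapper, the exit code of rhui-manager status can be translated using the table above. The helper below is an illustrative sketch that covers the status-command codes; rhui-manager itself is not invoked here.

```shell
# Map a rhui-manager status exit code to a human-readable message,
# following Table 11.2.
describe_rhui_status() {
  case "$1" in
    0)   echo "success" ;;
    1)   echo "general error or repository synchronization error" ;;
    2)   echo "SSL certificate error on a CDS" ;;
    32)  echo "entitlement CA or SSL certificate expiration warning" ;;
    64)  echo "entitlement CA or SSL certificate expiration error" ;;
    128) echo "one or more RHUI services not running" ;;
    *)   echo "see Table 11.2 for code $1" ;;
  esac
}

# On a real RHUA node (inside the RHUA container), you would run:
#   rhui-manager status; describe_rhui_status $?
describe_rhui_status 32
```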

11.3. Log Files

Note

The paths in this table are valid in the RHUI containers.

Expand
Table 11.3. Log Files
ComponentFile or DirectoryUsage

Red Hat Update Appliance

/root/.rhui/rhui.log

Red Hat Update Infrastructure Management Tool logs

 

/var/log/rhui/pulp

Pulp logs; for example, repository synchronization

 

/var/log/rhui/nginx/

nginx logs

 

/var/log/rhui/rhua_ansible.log

CDS and HAproxy management log, service status log

 

/var/log/rhui/rhui-subscription-sync.log

Subscription synchronization log

 

/var/log/rhui/rhui-export-repos.log

Repository export log

 

/var/log/rhui/rhui-purge-upload-dirs.log

Temporary directory cleanup log

 

/var/log/rhui/rhui-update-mappings.log

Repository version mapping log

Content Delivery Server

/var/log/nginx/access.log and error.log

nginx logs

 

/var/log/nginx/ssl-access.log*

clients' requests for content

 

/var/log/nginx/gunicorn-auth.log

CDS authorizer plug-in logs; by default, requests without an entitlement certificate

 

/var/log/nginx/gunicorn-content_manager.log

CDS content manager plug-in logs; for example, on-demand package downloads

 

/var/log/nginx/gunicorn-mirror.log

CDS mirror plug-in logs; by default, only logs from starting and stopping the plug-in

Client

/var/log/yum.log for RHEL 7 and earlier versions

yum command logs

Client

/var/log/dnf.log for RHEL 8 and later versions

dnf command logs

 

/var/log/messages

Client syslog

Note

Older logs are saved with a number or a time stamp appended as an extension, and may be compressed with gzip.

Chapter 12. Certified CCSP Certification Workflow

The Certified Cloud Provider Agreement requires that Red Hat certifies the images (templates) from which tenant instances are created to ensure a fully supported configuration for end customers.

There are two methods for certifying the images for Red Hat Enterprise Linux. The preferred method is to use the Certified Cloud and Service Provider (CCSP) image certification workflow.

After Red Hat reviews the certifications, a pass or fail result is assigned, and the certification is posted to the public Red Hat certification website, the Red Hat Ecosystem Catalog.

Chapter 13. Changing Proxy Settings

13.1. Changing proxy settings

RHUI can sync Red Hat content through a proxy server. If no proxy server is specified while installing RHUI, none is used. Otherwise, the proxy server is used with all RHUI repositories that you add. This chapter describes how to change the proxy server configuration.

Follow these steps if you wish to:

  • start using a proxy server in a RHUI environment that was installed with no proxy server configuration
  • edit the current proxy server configuration, for example, if the server hostname has changed
  • stop using the proxy server that a RHUI environment was installed with

Procedure

  1. To configure (or clear) the proxy server settings, create (or edit) the local overrides file, /etc/rhui/rhui-tools.conf, so that it contains the following:

    [proxy]
    proxy_protocol: <PROTOCOL>
    proxy_host: <HOSTNAME>
    proxy_port: <PORT>
    proxy_user: <USERNAME>
    proxy_pass: <PASSWORD>

    The parameters are as follows:

    • PROTOCOL is either http or https if configuring the proxy server; if unconfiguring it, when using the local file, leave the value empty
    • HOSTNAME is the new proxy server hostname; if clearing the configuration when using the local file, leave the value empty
    • PORT is the TCP port where the proxy server is listening, typically 3128; if clearing the configuration when using the local file, leave the value empty
    • USERNAME is an optional parameter. Only use it if the proxy server requires credentials. If it does not, or if you are clearing the configuration when using the local file, leave the value empty or do not use the proxy_user: option at all
    • PASSWORD is an optional parameter. Only use it if the proxy server requires credentials. If it does not, or if you are clearing the configuration when using the local file, leave the value empty or do not use the proxy_pass: option at all
    • All commands must be run in the RHUA container

      Important

      This new configuration affects only Red Hat repositories added after the configuration is updated. To apply the new configuration to existing repositories, you must remove, re-add, and re-synchronize those repositories.

      This will cause an outage that will last from the moment you remove them until you re-sync them. However, already synchronized packages will not have to be re-downloaded from the Red Hat CDN. RHUI will mainly have to parse all the repodata files and determine which package belongs where. This can take up to several hours.

      Although there are technical means outside of rhui-manager to modify the proxy fields of the existing repositories, or rather of the underlying remotes, using such means is unsupported.

  2. Make sure you have a list (or lists) of your repositories so that you can add them again. If you do not have such a list, you can use rhui-manager to generate a file with all your currently added Red Hat repositories.
  3. To generate a list of Red Hat repositories, first create a raw list with one ID per line:

    rhui-manager --noninteractive repo list --redhat_only --ids_only > /root/rawlist
  4. Then create a YAML file with repositories. Start by creating a stub:

    echo -e "name: all Red Hat repositories\nrepo_ids:" > /root/repo_list.yml
  5. Next, append the repositories from the raw list as YAML list items:

    sed "s/^/  - /" /root/rawlist >> /root/repo_list.yml
  6. Delete all Red Hat repositories from your RHUI:

    Use the text user interface, or delete them one by one on the command line. For the latter, you can use the raw list created earlier:

    while read repo; do rhui-manager --noninteractive repo delete --repo_id $repo; done < /root/rawlist
    Note

    Repositories are deleted in asynchronous background tasks, queued and executed by the available Pulp workers. Deleting all the repositories may take tens of minutes, or even hours. Be patient.

  7. When the repositories have been deleted, re-add them. This time, they are added with the new proxy settings (or with no proxy). It is also necessary to re-synchronize the repositories. You can add and re-synchronize them in one step on the command line:

    rhui-manager --noninteractive repo add_by_file --file /root/repo_list.yml --sync_now

    Alternatively, use your own methods to synchronize the repositories, for example, in a specific order. You can also simply wait for the synchronization to start automatically: after six hours, or after the interval defined as repo_sync_frequency in /etc/rhui/rhui-tools.conf.

    Important

    In any case, the repositories remain unavailable until they have been synchronized.
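Steps 3 to 5 of the procedure can be run as one short script. The following sketch substitutes hypothetical repository IDs and temporary paths for the rhui-manager output, so that you can inspect the resulting YAML structure; on a real RHUA, use the rhui-manager command from step 3 instead of the printf placeholder.

```shell
# Stand-in for step 3 (hypothetical repository IDs; on a real RHUA this file
# comes from `rhui-manager --noninteractive repo list --redhat_only --ids_only`):
printf 'rhel-9-for-x86_64-baseos-rhui-rpms\nrhel-9-for-x86_64-appstream-rhui-rpms\n' > /tmp/rawlist

# Steps 4 and 5: create the YAML stub, then append the IDs as list items.
printf 'name: all Red Hat repositories\nrepo_ids:\n' > /tmp/repo_list.yml
sed 's/^/  - /' /tmp/rawlist >> /tmp/repo_list.yml

cat /tmp/repo_list.yml
```

The resulting file has the shape that `repo add_by_file` in step 7 expects: a `name` key and a `repo_ids` list.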

Examples:

  • Start using a proxy server that requires no credentials:

    [proxy]
    proxy_host: squid.example.com
    proxy_protocol: http
    proxy_port: 3128
  • Change the proxy server hostname; everything else remains the same:

    [proxy]
    proxy_host: newsquid.example.com
  • Stop using the proxy server:

    [proxy]
    proxy_protocol:
    proxy_host:
    proxy_port:
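If you prefer scripting the change, the first example can be applied with a heredoc. This is a sketch: it writes to a demonstration path, and the hostname and port are placeholders; on a real deployment, run it inside the RHUA container against /etc/rhui/rhui-tools.conf with your own values.

```shell
# Demonstration path; use /etc/rhui/rhui-tools.conf inside the RHUA container.
CONF=/tmp/rhui-tools.conf

# Append a [proxy] section (placeholder hostname and port; no credentials).
cat >> "$CONF" << 'EOF'
[proxy]
proxy_protocol: http
proxy_host: squid.example.com
proxy_port: 3128
EOF

# Confirm the setting landed in the file.
grep '^proxy_host' "$CONF"
```

If the file already contains a [proxy] section, edit that section in place instead of appending a second one.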

Verification

The rhui-manager tool does not display information about the proxy server that is used with a repository. However, you can use the pulpcore-manager tool as outlined below:

sudo -u pulp env PULP_SETTINGS=/etc/pulp/settings.py /usr/bin/pulpcore-manager shell << EOM
from pulpcore.app.models import Remote
rem = Remote.objects.get(name="rhel-10-for-x86_64-baseos-rhui-rpms-8")
print(rem.proxy_url)
EOM

The output should look like this for a configured proxy server:

http://squid.example.com:3128

For a repository with no configured proxy server, the output is None.

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution-Share Alike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the Linux Foundation, used under license.
All other trademarks are the property of their respective owners.