Chapter 1. About Red Hat Update Infrastructure 4
Red Hat Update Infrastructure 4 (RHUI 4) is a highly scalable, highly redundant framework that enables you to manage repositories and content. It also enables cloud providers to deliver content and updates to Red Hat Enterprise Linux (RHEL) instances. Based on the upstream Pulp project, RHUI allows cloud providers to locally mirror Red Hat-hosted repository content, create custom repositories with their own content, and make those repositories available to a large group of end users through a load-balanced content delivery system.
As a system administrator, you can prepare your infrastructure for participation in the Red Hat Certified Cloud and Service Provider program by installing and configuring the Red Hat Update Appliance (RHUA), content delivery servers (CDS), repositories, shared storage, and load balancing.
Configuring RHUI comprises the following tasks:
- Creating and synchronizing a Red Hat repository
- Creating client entitlement certificates and client configuration RPMs
- Creating client profiles for the RHUI servers
Experienced RHEL system administrators are the target audience. System administrators with limited RHEL skills should consider engaging Red Hat Consulting to provide a Red Hat Certified Cloud Provider Architecture Service.
Learn about configuring, managing, and updating RHUI with the following topics:
- the RHUI components
- content provider types
- the command line interface (CLI) used to manage the components
- utility commands
- certificate management
- content management
1.1. Installation options
The following table presents the various Red Hat Update Infrastructure 4 components.
Component | Acronym | Function | Alternative |
---|---|---|---|
Red Hat Update Appliance | RHUA | Downloads new packages from the Red Hat content delivery network and copies new packages to each CDS node | None |
Content Delivery Server | CDS | Provides the repositories that clients connect to for updated content | None |
HAProxy | None | Provides load balancing across CDS nodes | Existing load balancing solution |
Shared storage | None | Provides shared storage for the RHUA and CDS nodes | Existing storage solution |
The following table describes how to perform installation tasks.
Installation Task | Performed on |
---|---|
Install RHEL 8 | RHUA, CDS, and HAProxy |
Subscribe the system | RHUA, CDS, and HAProxy |
Attach a RHUI subscription | RHUA, CDS, and HAProxy |
Apply updates | RHUA, CDS, and HAProxy |
Install rhui-installer | RHUA |
Run rhui-installer | RHUA |
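The following shell sketch shows how these tasks map to commands on the RHUA. The subscription pool ID is a placeholder, and the exact package and repository names can vary between RHUI releases, so treat this as an outline rather than a verbatim procedure.

```
# Outline of the installation tasks above, run on the RHUA
# (the pool ID is a placeholder; verify package names against the
# product documentation for your RHUI release)
subscription-manager register --type=rhui
subscription-manager attach --pool=<RHUI_pool_ID>
dnf -y update
dnf -y install rhui-installer
rhui-installer --help    # review the available options before the first run
```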
1.1.1. Option 1: Full installation
- A RHUA
- Two or more CDS nodes with shared storage
- One or more HAProxy load-balancers
1.1.2. Option 2: Installation with an existing storage solution
- A RHUA
- Two or more CDS nodes with an existing storage solution
- One or more HAProxy load-balancers
1.1.3. Option 3: Installation with an existing load-balancer solution
- A RHUA
- Two or more CDS nodes with shared storage
- An existing load-balancer
1.1.4. Option 4: Installation with existing storage and load-balancer solutions
- A RHUA
- Two or more CDS nodes with existing shared storage
- An existing load-balancer
The following figure depicts a high-level view of how the various Red Hat Update Infrastructure 4 components interact.
Figure 1.1. Red Hat Update Infrastructure 4 overview
You need to register the RHUA with the `--type rhui` option and have a Red Hat Certified Cloud and Service Provider subscription to install RHUI. You also need an appropriate content certificate.
Install the RHUA and CDS nodes on separate `x86_64` servers (bare metal or virtual machines). Ensure all the servers and networks that connect to RHUI can access the Red Hat Subscription Management service.
1.2. RHUI 4 components
Understanding how each RHUI component interacts with other components will make your job as a system administrator a little easier.
1.2.1. Red Hat Update Appliance
There is one RHUA per RHUI installation. In many cloud environments, however, there is one RHUI installation per region or data center; for example, Amazon’s EC2 cloud comprises several regions, and each region has a separate RHUI with its own RHUA node.
The RHUA allows you to perform the following tasks:
- Download new packages from the Red Hat content delivery network (CDN). The RHUA is the only RHUI component that connects to Red Hat, and you can configure the synchronization schedule for the RHUA.
- Copy new packages to the shared network storage.
- Verify the RHUI installation’s health and write the results to a file located on the RHUA. Monitoring solutions use this file to determine the RHUI installation’s health.
- Provide a human-readable view of the RHUI installation’s health through a CLI tool.
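For example, assuming the status subcommand is available in your version of the tool, you can print that health summary directly on the RHUA:

```
# Print a human-readable health summary on the RHUA
# (assumes the status subcommand is available in your rhui-manager version)
[root@rhua ~]# rhui-manager status
```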
RHUI uses two main configuration files: `/etc/rhui/rhui-tools.conf` and `/etc/rhui/rhui-subscription-sync.conf`.
The `/etc/rhui/rhui-tools.conf` configuration file contains general options used by the RHUA, such as the default file locations for certificates and default configuration parameters for Red Hat CDN synchronization. This file normally does not require editing.
The Red Hat Update Infrastructure Management Tool generates the `/etc/rhui/rhui-subscription-sync.conf` configuration file from user-supplied values. It contains the information that drives the running of a RHUA in a particular region, for example, the destination directory on the RHUA where packages are downloaded.
The RHUA employs several services to synchronize, organize, and distribute content for easy delivery.
RHUA services
- Pulp: The service that oversees management of the supporting services and provides an interface for users to interact with.
- PostgreSQL: A database used to keep track of currently synchronized repositories, packages, and other crucial metadata.
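As a quick check, you can confirm that these supporting services are running on the RHUA. The unit names below assume the standard Pulp 3 and PostgreSQL service names; adjust them if your installation differs.

```
# Check the supporting services on the RHUA
# (unit names assume a standard Pulp 3 deployment)
[root@rhua ~]# systemctl status postgresql pulpcore-api pulpcore-content 'pulpcore-worker@*'
```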
1.2.2. Content delivery server
The CDS nodes provide the repositories that clients connect to for the updated content. There can be as few as one CDS, but because RHUI provides a load-balancer with failover capabilities, we recommend that you use multiple CDS nodes.
The CDS nodes serve content to end-user RHEL systems. While there is no required number of CDS nodes, the CDS nodes work in a round-robin, load-balanced fashion (A, B, C, A, B, C) to deliver content to end-user systems. The CDS serves content to end-user systems over HTTP through NGINX-hosted `yum` repositories.
During configuration, you specify the CDS directory where packages are synchronized. Similar to the RHUA, the only requirement is that you mount the directory on the CDS. It is up to the cloud provider to determine the best course of action when allocating the necessary devices. The Red Hat Update Infrastructure Management Tool configuration RPM links the package directory to the NGINX configuration so that it can be served.
Currently, RHUI supports the following shared storage solutions:
- NFS

  If NFS is used, `rhui-installer` can configure an NFS share on the RHUA to store the content as well as a directory on the CDS nodes to mount the NFS share. The following `rhui-installer` options control these settings:

  - `--remote-fs-mountpoint` is the file system location where the remote file system share should be mounted (default: `/var/lib/rhui/remote_share`)
  - `--remote-fs-server` is the remote mount point for a shared file system to use, for example, `nfs.example.com:/path/to/share` (default: `nfs.example.com:/export`)
- CephFS

  If using CephFS, you must configure CephFS separately and then use it with RHUI as a mount point. The following `rhui-installer` option controls this setting:

  - `--remote-fs-server` is the remote mount point for a shared file system to use, for example, `ceph.example.com:/path/to/share` (default: `ceph.example.com:/export`)
This document does not provide instructions to set up or configure Ceph shared file storage. For any Ceph-related tasks, consult your system administrator or see the Ceph documentation.
If these default values are used, the `/export` directory on the RHUA and the `/var/lib/rhui/remote_share` directory on each CDS are identical.
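As a minimal sketch, an installation that reuses an existing NFS share could pass these options to `rhui-installer` as follows. The hostname and path are placeholders, and any other options your installation requires are omitted; see `rhui-installer --help`.

```
# Point rhui-installer at an existing NFS share
# (hostname and path are placeholders; other required options omitted)
[root@rhua ~]# rhui-installer --remote-fs-server nfs.example.com:/export \
                              --remote-fs-mountpoint /var/lib/rhui/remote_share
```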
The expected usage is one shared network file system, such as NFS, mounted on the RHUA and all CDS nodes. It is also possible for the cloud provider to use some other form of shared storage that the RHUA writes packages to and each CDS reads from.
The storage solution must provide an NFS or CephFS endpoint for mounting on the RHUA and CDS nodes. Even if local storage is implemented, shared storage is still needed for the cluster to work. If you want to provide local storage on the RHUA, configure the RHUA to function as the NFS server with a `rhua.example.com:/path/to/nfs/share` endpoint configured.
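The following is a minimal sketch of exporting such a share from the RHUA itself using the standard RHEL NFS server. The path and export options are placeholders that you should adapt to your security policy.

```
# Export a local directory on the RHUA so the CDS nodes can mount it over NFS
# (path and export options are placeholders)
[root@rhua ~]# mkdir -p /path/to/nfs/share
[root@rhua ~]# echo '/path/to/nfs/share *(rw,no_root_squash)' >> /etc/exports
[root@rhua ~]# systemctl enable --now nfs-server
[root@rhua ~]# exportfs -ra
```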
Do not set up Ceph shared file storage on any of the RHUI nodes. You must configure CephFS on independent dedicated machines.
The only nonstandard logic that takes place on each CDS is the entitlement certificate checking. This checking ensures that the client making requests on the `yum` repositories is authorized by the cloud provider to access those repositories. The check ensures the following conditions:
- The entitlement certificate was signed by the cloud provider’s Certificate Authority (CA) Certificate. The CA Certificate is installed on the CDS as part of its configuration to facilitate this verification.
- The requested URI matches an entitlement found in the client’s entitlement certificate.
If the CA verification fails, the client sees an SSL error. See the CDS node’s NGINX logs under `/var/log/nginx/` for more information.
```
[root@cds01 ~]# ls -1 /var/log/nginx/
access.log
error.log
gunicorn-auth.log
gunicorn-content_manager.log
gunicorn-mirror.log
ssl-access.log
```
The NGINX configuration is handled through the `/etc/nginx/conf.d/ssl.conf` file during the CDS installation.
If multiple clients experience problems updating against a repository, this may indicate a problem with the RHUI. See Yum generates 'Errno 14 HTTP Error 401: Authorization Required' while accessing RHUI CDS for more details.
1.2.3. HAProxy load-balancer
If more than one CDS is used, a load-balancing solution must be in place to spread client HTTPS requests across all servers. RHUI ships with HAProxy, but you choose which load-balancing solution to use during the installation, for example, one provided by the cloud provider. If HAProxy is used, you must also decide how many HAProxy nodes to deploy.
Clients are not configured to go directly to a CDS; their repository files are configured to point to HAProxy, the RHUI load-balancer. HAProxy is a TCP/HTTP reverse proxy particularly suited for high-availability environments. HAProxy performs the following tasks:
- Routes HTTP requests depending on statically assigned cookies
- Spreads the load among several servers while assuring server persistence through the use of HTTP cookies
- Switches to backup servers in the event a main server fails
- Accepts connections to special ports dedicated to service monitoring
- Stops accepting connections without breaking existing ones
- Adds, modifies, and deletes HTTP headers in both directions
- Blocks requests matching particular patterns
- Persists client connections to the correct application server, depending on application cookies
- Reports detailed status as HTML pages to authenticated users from a URI intercepted from the application
If you use an existing load-balancer, ensure that port 443 for the `cds-lb-hostname` is configured in the load-balancer and forwarded to the pool, and that all CDSs in the cluster are in the load-balancer’s pool.
The exact configuration depends on the particular load-balancer software you use. See the following configuration, taken from a typical HAProxy setup, to understand how you should configure your load-balancer:
```
[root@rhui4proxy ~]# cat /etc/haproxy/haproxy.cfg
# This file managed by Puppet
global
  chroot  /var/lib/haproxy
  daemon
  group  haproxy
  log  10.10.153.149 local0
  maxconn  4000
  pidfile  /run/haproxy.pid
  stats  socket /var/lib/haproxy/stats
  user  haproxy

defaults
  log  global
  maxconn  8000
  option  redispatch
  retries  3
  stats  enable
  timeout  http-request 10s
  timeout  queue 1m
  timeout  connect 10s
  timeout  client 1m
  timeout  server 1m
  timeout  check 10s

listen https00
  bind 10.10.153.149:443
  balance roundrobin
  option tcplog
  option tcp-check
  server cds01.example.com cds01.example.com:443 check
  server cds02.example.com cds02.example.com:443 check
```
Keep in mind that when clients fail to connect, it is important to review the NGINX logs on the CDS under `/var/log/nginx/` to verify whether the requests reached the CDS. If requests do not reach the CDS, issues such as DNS or general network connectivity may be at fault.
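For example, assuming you know the client’s IP address (the address below is a placeholder), you can check whether its requests appear in the SSL access log on a given CDS:

```
# Check whether a specific client's requests reached this CDS
# (the IP address is a placeholder)
[root@cds01 ~]# grep '198.51.100.25' /var/log/nginx/ssl-access.log
```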
1.2.4. Repositories and content
A repository is a storage location for software packages (RPMs). RHEL uses `yum` commands to search a repository and to download, install, and configure the RPMs. The RPMs contain all the dependencies needed to run an application. `yum` also downloads updates for the software in your repositories.
RHEL uses core technologies such as control groups (cgroups) for resource management, namespaces for process isolation, and SELinux for security, enabling secure multi-tenancy and reducing the potential for security exploits. These technologies enable rapid application deployment and simpler testing, maintenance, and troubleshooting while improving security.
Content, as it relates to RHUI, is the software (such as RPMs) that you download from the Red Hat CDN for use on the RHUA and the CDS nodes. The RPMs provide the files necessary to run specific applications and tools. Clients are granted access by a set of SSL content certificates and keys provided by an RPM package, which also provides a set of generated `yum` repository files.
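On a client, a quick way to confirm that the generated repository files are in effect is to list the enabled repositories; the repository IDs you see depend on the cloud provider’s client configuration RPM.

```
# List the repositories available to this client
# (repository IDs vary per cloud provider)
[root@client ~]# dnf repolist
```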
1.3. Content provider types
There are three types of cloud computing environments:
- public cloud
- private cloud
- hybrid cloud
This guide focuses on public and private clouds. We assume the audience understands the implications of using public, private, and hybrid clouds.
1.4. Component communications
All RHUI components use the HTTPS communication protocol over port 443.
Source | Destination | Protocol | Purpose |
---|---|---|---|
Red Hat Update Appliance | Red Hat Content Delivery Network | HTTPS | Downloads packages from Red Hat |
Load-Balancer | Content Delivery Server | HTTPS | Forwards the client’s yum requests to a CDS |
Client | Load-Balancer | HTTPS | Used by yum on the client to download packages from a CDS |
Content Delivery Server | Red Hat Update Appliance | HTTPS | Might request information from the Pulp API about content |
RHUI nodes require the following network access to communicate with each other.
Make sure that the network port is open and that network access is restricted to only those nodes that you plan to use.
Node | Port | Access |
---|---|---|
RHUA | 443 | RHUA, CDS01, CDS02, … CDSn |
HAProxy | 443 | Client virtual machines |
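If firewalld is the active firewall on your RHUI nodes, a minimal sketch for allowing inbound HTTPS looks like the following; restrict the permitted source addresses to the nodes listed in the table above using rich rules or your cloud provider’s security groups.

```
# Allow inbound HTTPS (port 443) on a RHUI node, assuming firewalld is in use
[root@rhua ~]# firewall-cmd --permanent --add-service=https
[root@rhua ~]# firewall-cmd --reload
```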
1.5. Changing the admin password
The `rhui-installer` sets the initial RHUI login password. It is also written in the `/etc/rhui/rhui-subscription-sync.conf` file. You can override the initial password with the `--rhui-manager-password` option.
If you want to change the initial password later, you can change it through the `rhui-manager` tool or through `rhui-installer`. Run the `rhui-installer --help` command to see the full list of `rhui-installer` options.
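For example, a minimal sketch of setting a new password through the installer looks like the following; the password value is a placeholder, and any other options your installation requires are omitted (see `rhui-installer --help`).

```
# Set a new admin password through the installer
# (the password value is a placeholder; other required options omitted)
[root@rhua ~]# rhui-installer --rhui-manager-password <new_password>
```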
Procedure
Navigate to the Red Hat Update Infrastructure Management Tool home screen:
```
[root@rhua ~]# rhui-manager
```
Press `u` to select manage RHUI users. From the User Manager screen, press `p` to select change admin's password (followed by logout):
```
-= User Manager =-

   p   change admin's password (followed by logout)

rhui (users) => p

Warning: After password change you will be logged out.
Use ctrl-c to cancel password change.

New Password:
```
Enter your new password; reenter it to confirm the change.
```
New Password:
Re-enter Password:
[localhost] env PULP_SETTINGS=/etc/pulp/settings.py /usr/bin/pulpcore-manager reset-admin-password -p ********
```
Verification
The following message displays after you change the admin password:
```
Password successfully updated. For security reasons you have been logged out.
[root@ip-10-141-150-145 ~]#
```