Chapter 3. Preparing for Red Hat Quay (high availability)
This procedure presents guidance on how to set up a highly available, production-quality deployment of Red Hat Quay.
3.1. Prerequisites
Here are a few things you need to know before you begin the Red Hat Quay high availability deployment:
Either Postgres or MySQL can be used to provide the database service. Postgres was chosen here as the database because it includes the features needed to support Clair security scanning. Other options include:
- Crunchy Data PostgreSQL Operator: Although not supported directly by Red Hat, the Postgres Operator is available from Crunchy Data for use with Red Hat Quay. If you take this route, you should have a support contract with Crunchy Data and work directly with them for usage guidance or issues relating to the operator and their database.
- If your organization already has a high-availability (HA) database, you can use that database with Red Hat Quay. See the Red Hat Quay Support Policy for details on support for third-party databases and other components.
Ceph Object Gateway (also called RADOS Gateway) is one example of a product that can provide the object storage needed by Red Hat Quay. If you want your Red Hat Quay setup to do geo-replication, Ceph Object Gateway or other supported object storage is required. For cloud installations, you can use any of the following cloud object storage:
- Amazon S3 (see S3 IAM Bucket Policy for details on configuring an S3 bucket policy for Quay)
- Azure Blob Storage
- Google Cloud Storage
- Ceph Object Gateway
- OpenStack Swift
- CloudFront + S3
- NooBaa S3 Storage
- The haproxy server is used in this example, although you can use any proxy service that works for your environment.
Number of systems: This procedure uses nine systems (physical or virtual) that are assigned the following tasks:
- A: db01: Load balancer and database: Runs the haproxy load balancer and a Postgres database. Note that these components are not themselves highly available, but are used to indicate how you might set up your own load balancer or production database.
- B: quay01, quay02, quay03: Quay and Redis: Three (or more) systems are assigned to run the Quay and Redis services.
- C: ceph01, ceph02, ceph03, ceph04, ceph05: Ceph: Three (or more) systems provide the Ceph service, for storage. If you are deploying to a cloud, you can use the cloud storage features described earlier. This procedure employs an additional system for Ansible (ceph05) and one for a Ceph Object Gateway (ceph04).
Each system should have the following attributes:
- Red Hat Enterprise Linux (RHEL) 8: Obtain the latest Red Hat Enterprise Linux 8 server media from the Downloads page and follow the installation instructions available in the Product Documentation for Red Hat Enterprise Linux 8.
- Valid Red Hat Subscription: Configure a valid Red Hat Enterprise Linux 8 server subscription.
- CPUs: Two or more virtual CPUs
- RAM: 4GB for each A and B system; 8GB for each C system
- Disk space: About 20GB of disk space for each A and B system (10GB for the operating system and 10GB for docker storage). At least 30GB of disk space for C systems (or more depending on required container storage).
3.2. Using podman
This document uses podman for creating and deploying containers. If you do not have podman available on your system, you should be able to use the equivalent docker commands. For more information on podman and related technologies, see Building, running, and managing Linux containers on Red Hat Enterprise Linux 8.
Podman is strongly recommended for highly available, production quality deployments of Red Hat Quay. Docker has not been tested with Red Hat Quay 3.15, and will be deprecated in a future release.
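For example, the Redis container deployed later in this chapter can be started with either tool; only the binary name changes (the image reference shown is the one used in this procedure):

# sudo podman run -d -p 6379:6379 registry.redhat.io/rhel8/redis-5
# sudo docker run -d -p 6379:6379 registry.redhat.io/rhel8/redis-5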
3.3. Setting up the HAProxy load balancer and the PostgreSQL database
Use the following procedure to set up the HAProxy load balancer and the PostgreSQL database.
Prerequisites
- You have installed the Podman or Docker CLI.
Procedure
On the first system, db01, install the HAProxy load balancer and the PostgreSQL database. This configures HAProxy as the access point and load balancer for the following services running on other systems:
- Red Hat Quay (ports 80 and 443 on B systems)
- Redis (port 6379 on B systems)
- RADOS (port 7480 on C systems)
Open all HAProxy ports in SELinux and selected HAProxy ports in the firewall:
# setsebool -P haproxy_connect_any=on
# firewall-cmd --permanent --zone=public --add-port=6379/tcp --add-port=7480/tcp
success
# firewall-cmd --reload
success
Configure the /etc/haproxy/haproxy.cfg file to point to the systems and ports providing the Red Hat Quay, Redis, and Ceph RADOS services. The following are examples of defaults and added frontend and backend settings:
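The exact haproxy.cfg contents depend on your environment. The following is a minimal sketch that assumes the quay01, quay02, quay03, and ceph04 hostnames used in this procedure resolve from the load balancer, and that TCP traffic is passed straight through to the backends; adjust the names, ports, and health checks for your systems:

defaults
    mode                    tcp
    timeout connect         10s
    timeout client          1m
    timeout server          1m

frontend fe_http
    bind *:80
    default_backend be_http
backend be_http
    balance roundrobin
    server quay01 quay01:80 check
    server quay02 quay02:80 check
    server quay03 quay03:80 check

frontend fe_https
    bind *:443
    default_backend be_https
backend be_https
    balance roundrobin
    server quay01 quay01:443 check
    server quay02 quay02:443 check
    server quay03 quay03:443 check

frontend fe_redis
    bind *:6379
    default_backend be_redis
backend be_redis
    server quay01 quay01:6379 check
    server quay02 quay02:6379 check
    server quay03 quay03:6379 check

frontend fe_rdgw
    bind *:7480
    default_backend be_rdgw
backend be_rdgw
    server ceph04 ceph04:7480 check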
After the new haproxy.cfg file is in place, restart the HAProxy service by entering the following command:

# systemctl restart haproxy

Create a folder for the PostgreSQL database by entering the following command:
$ mkdir -p /var/lib/pgsql/data

Set the following permissions for the /var/lib/pgsql/data folder:

$ chmod 777 /var/lib/pgsql/data

Enter the following command to start the PostgreSQL database:
$ sudo podman run -d --name postgresql_database \
    -v /var/lib/pgsql/data:/var/lib/pgsql/data:Z \
    -e POSTGRESQL_USER=quayuser -e POSTGRESQL_PASSWORD=quaypass \
    -e POSTGRESQL_DATABASE=quaydb -p 5432:5432 \
    registry.redhat.io/rhel8/postgresql-13:1-109

Note: Data from the container will be stored on the host system in the /var/lib/pgsql/data directory.

List the available extensions by entering the following command:
$ sudo podman exec -it postgresql_database /bin/bash -c 'echo "SELECT * FROM pg_available_extensions" | psql'

Example output
   name    | default_version | installed_version |                 comment
-----------+-----------------+-------------------+----------------------------------------
 adminpack | 1.0             |                   | administrative functions for PostgreSQL
...

Create the pg_trgm extension by entering the following command:

$ sudo podman exec -it postgresql_database /bin/bash -c 'echo "CREATE EXTENSION IF NOT EXISTS pg_trgm;" | psql -d quaydb'

Confirm that the pg_trgm extension has been created by entering the following command:
$ sudo podman exec -it postgresql_database /bin/bash -c 'echo "SELECT * FROM pg_extension" | psql'

Example output
 extname | extowner | extnamespace | extrelocatable | extversion | extconfig | extcondition
---------+----------+--------------+----------------+------------+-----------+--------------
 plpgsql |       10 |           11 | f              | 1.0        |           |
 pg_trgm |       10 |         2200 | t              | 1.3        |           |
(2 rows)

Alter the privileges of the Postgres user quayuser and grant them the superuser role to give the user unrestricted access to the database:

$ sudo podman exec -it postgresql_database /bin/bash -c 'echo "ALTER USER quayuser WITH SUPERUSER;" | psql'

Example output
ALTER ROLE

If you have a firewalld service active on your system, run the following commands to make the PostgreSQL port available through the firewall:
# firewall-cmd --permanent --zone=trusted --add-port=5432/tcp
# firewall-cmd --reload

Optional. If you do not have the postgres CLI package installed, install it by entering the following command:

# yum install postgresql -y
Use the psql command to test connectivity to the PostgreSQL database.

Note: To verify that you can access the service remotely, run the following command on a remote system.

# psql -h localhost quaydb quayuser
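A successful connection produces a psql banner and prompt similar to the following; the exact client and server versions shown will vary with your environment:

Password for user quayuser:
psql (13.7)
Type "help" for help.

quaydb=>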
3.4. Set Up Ceph
For this Red Hat Quay configuration, we create a three-node Ceph cluster, with several other supporting nodes, as follows:
- ceph01, ceph02, and ceph03 - Ceph Monitor, Ceph Manager and Ceph OSD nodes
- ceph04 - Ceph RGW node
- ceph05 - Ceph Ansible administration node
For details on installing Ceph nodes, see Installing Red Hat Ceph Storage on Red Hat Enterprise Linux.
Once you have set up the Ceph storage cluster, create a Ceph Object Gateway (also referred to as a RADOS gateway). See Installing the Ceph Object Gateway for details.
3.4.1. Install each Ceph node
On ceph01, ceph02, ceph03, ceph04, and ceph05, do the following:
Review prerequisites for setting up Ceph nodes in Requirements for Installing Red Hat Ceph Storage. In particular:
- Decide if you want to use RAID controllers on OSD nodes.
- Decide if you want a separate cluster network for your Ceph Network Configuration.
- Prepare OSD storage (ceph01, ceph02, and ceph03 only). Set up the OSD storage on the three OSD nodes (ceph01, ceph02, and ceph03). See OSD Ansible Settings in Table 3.2 for details on supported storage types that you will enter into your Ansible configuration later. For this example, a single, unformatted block device (/dev/sdb) that is separate from the operating system is configured on each of the OSD nodes. If you are installing on bare metal, you might want to add an extra hard drive to the machine for this purpose.
- Install Red Hat Enterprise Linux Server edition, as described in the RHEL 7 Installation Guide.
Register and subscribe each Ceph node as described in Registering Red Hat Ceph Storage Nodes. Here is how to subscribe to the necessary repos:
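The exact repositories depend on your Red Hat Ceph Storage and RHEL versions. The following sketch uses Red Hat Ceph Storage 3 repository IDs on RHEL 7 as an example, with <POOL_ID> as a placeholder for your own subscription pool; enable only the repositories appropriate for each node's role:

# subscription-manager register
# subscription-manager refresh
# subscription-manager list --available
# subscription-manager attach --pool=<POOL_ID>
# subscription-manager repos --disable='*'
# subscription-manager repos --enable=rhel-7-server-rpms \
    --enable=rhel-7-server-extras-rpms \
    --enable=rhel-7-server-rhceph-3-mon-rpms \
    --enable=rhel-7-server-rhceph-3-osd-rpms \
    --enable=rhel-7-server-rhceph-3-tools-rpms
# yum update -y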
Create an ansible user with root privilege on each node. Choose any name you like. For example:
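A minimal sketch, assuming the user name ansibleadmin that is used later in this procedure (run these commands as root on each node):

# USER_NAME=ansibleadmin
# useradd $USER_NAME
# passwd $USER_NAME
# cat << EOF > /etc/sudoers.d/$USER_NAME
$USER_NAME ALL = (root) NOPASSWD:ALL
EOF
# chmod 0440 /etc/sudoers.d/$USER_NAME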
3.4.2. Configure the Ceph Ansible node (ceph05)
Log into the Ceph Ansible node (ceph05) and configure it as follows. You will need the ceph01, ceph02, and ceph03 nodes to be running to complete these steps.
In the Ansible user's home directory, create a directory to store temporary values created from the ceph-ansible playbook:

# USER_NAME=ansibleadmin
# sudo su - $USER_NAME
[ansibleadmin@ceph05 ~]$ mkdir ~/ceph-ansible-keys

Enable password-less SSH for the Ansible user. Run ssh-keygen on ceph05 (leave the passphrase empty), then run ssh-copy-id to copy the public key to the Ansible user on the ceph01, ceph02, and ceph03 systems:
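For example, assuming the ansibleadmin user created earlier on each node:

[ansibleadmin@ceph05 ~]$ ssh-keygen
[ansibleadmin@ceph05 ~]$ ssh-copy-id ansibleadmin@ceph01
[ansibleadmin@ceph05 ~]$ ssh-copy-id ansibleadmin@ceph02
[ansibleadmin@ceph05 ~]$ ssh-copy-id ansibleadmin@ceph03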
Install the ceph-ansible package:
# yum install ceph-ansible

Create a symbolic link between these two directories:

# ln -s /usr/share/ceph-ansible/group_vars \
    /etc/ansible/group_vars

Create copies of the Ceph sample yml files to modify:

# cd /usr/share/ceph-ansible
# cp group_vars/all.yml.sample group_vars/all.yml
# cp group_vars/osds.yml.sample group_vars/osds.yml
# cp site.yml.sample site.yml

Edit the copied group_vars/all.yml file. See General Ansible Settings in Table 3.1 for details. For example:
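A minimal sketch; the repository settings, interface name, and network range shown here are assumptions to adapt to your environment:

ceph_origin: repository
ceph_repository: rhcs
ceph_repository_type: cdn
ceph_rhcs_version: 3
monitor_interface: eth0
public_network: 192.168.122.0/24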
Note that your network device and address range may differ.
Edit the copied group_vars/osds.yml file. See the OSD Ansible Settings in Table 3.2 for details. In this example, the second disk device (/dev/sdb) on each OSD node is used for both data and journal storage:

osd_scenario: collocated
devices:
  - /dev/sdb
dmcrypt: true
osd_auto_discovery: false
Edit the /etc/ansible/hosts inventory file to identify the Ceph nodes as Ceph monitor, OSD, and manager nodes. In this example, the storage devices are identified on each node as well:
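A representative sketch, assuming the hostnames used in this procedure and the /dev/sdb device configured earlier (the group names follow ceph-ansible conventions):

[mons]
ceph01
ceph02
ceph03

[osds]
ceph01 devices="['/dev/sdb']"
ceph02 devices="['/dev/sdb']"
ceph03 devices="['/dev/sdb']"

[mgrs]
ceph01
ceph02
ceph03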
Add this line to the /etc/ansible/ansible.cfg file to save the output from each Ansible playbook run into your Ansible user's home directory:

retry_files_save_path = ~/

Check that Ansible can reach all the Ceph nodes you configured as your Ansible user:
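For example, using the Ansible ping module against all hosts in the inventory:

[ansibleadmin@ceph05 ~]$ ansible all -m ping
ceph01 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
...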
Run the ceph-ansible playbook (as your Ansible user):
[ansibleadmin@ceph05 ~]$ cd /usr/share/ceph-ansible/
[ansibleadmin@ceph05 ~]$ ansible-playbook site.yml

At this point, the Ansible playbook will check your Ceph nodes and configure them for the services you requested. If anything fails, make needed corrections and rerun the command.
Log into one of the three Ceph nodes (ceph01, ceph02, or ceph03) and check the health of the Ceph cluster:
# ceph health
HEALTH_OK

On the same node, verify that monitoring is working using rados:
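One way to check this, assuming you can create a small test pool, is to write and read back an object with rados (the pool and object names here are arbitrary):

# ceph osd pool create test 8
# echo 'Hello World!' > /tmp/hello-world.txt
# rados --pool test put hello-world /tmp/hello-world.txt
# rados --pool test get hello-world /tmp/fetch.txt
# cat /tmp/fetch.txt
Hello World!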
3.4.3. Install the Ceph Object Gateway
On the Ansible system (ceph05), configure a Ceph Object Gateway for your Ceph Storage cluster (it will ultimately run on ceph04). See Installing the Ceph Object Gateway for details.
3.5. Set up Redis
With Red Hat Enterprise Linux 8 server installed on each of the three Red Hat Quay systems (quay01, quay02, and quay03), install and start the Redis service as follows:
Install / Deploy Redis: Run Redis as a container on each of the three quay0* systems:
# mkdir -p /var/lib/redis
# chmod 777 /var/lib/redis
# sudo podman run -d -p 6379:6379 \
    -v /var/lib/redis:/var/lib/redis/data:Z \
    registry.redhat.io/rhel8/redis-5

Check Redis connectivity: You can use the telnet command to test connectivity to the Redis service. Type MONITOR (to begin monitoring the service) and QUIT to exit:
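For example, from one of the quay systems (the IP address here is a placeholder for one of your Redis hosts):

# telnet 192.168.122.99 6379
Trying 192.168.122.99...
Connected to 192.168.122.99.
Escape character is '^]'.
MONITOR
+OK
QUIT
+OK
Connection closed by foreign host.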
For more information on using podman and restarting containers, see the section "Using podman" earlier in this document.