Chapter 3. Red Hat Quay infrastructure
Red Hat Quay runs on any physical or virtual infrastructure, either on premises or in a public cloud. Deployments range from simple to massively scaled, such as the following:
- All-in-one setup on a developer notebook
- Highly available on virtual machines or on OpenShift Container Platform
- Geographically dispersed across multiple availability zones and regions
3.1. Running Red Hat Quay on standalone hosts
You can automate the standalone deployment process by using Ansible or another automation suite. All standalone hosts require a valid Red Hat Enterprise Linux (RHEL) subscription.
- Proof of Concept deployment: Red Hat Quay runs on a machine with image storage, containerized database, Redis, and optionally, Clair security scanning.
- Highly available setups
Red Hat Quay and Clair run in containers across multiple hosts. You can use
systemd
units to ensure restart on failure or reboot.High availability setups on standalone hosts require customer-provided load balancers, either low-level TCP load balancers or application load balancers, capable of terminating TLS.
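For illustration, a minimal systemd unit that keeps a containerized Red Hat Quay instance running on a standalone host might look like the following sketch. The unit name, image tag, published ports, and host paths are assumptions for this example and must be adapted to your deployment:

```ini
# /etc/systemd/system/quay-app.service -- illustrative unit name and paths
[Unit]
Description=Red Hat Quay (containerized)
Wants=network-online.target
After=network-online.target

[Service]
# Restart the container whenever it exits unexpectedly
Restart=on-failure
RestartSec=30
# Remove any stale container left over from an unclean shutdown
ExecStartPre=-/usr/bin/podman rm -f quay-app
ExecStart=/usr/bin/podman run --name quay-app \
  -p 443:8443 -p 80:8080 \
  -v /opt/quay/config:/conf/stack:Z \
  -v /opt/quay/storage:/datastorage:Z \
  registry.redhat.io/quay/quay-rhel8:v3.10
ExecStop=/usr/bin/podman stop -t 30 quay-app

[Install]
WantedBy=multi-user.target
```

Enabling the unit with systemctl enable --now quay-app.service ensures that the container is also started again after a host reboot.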
3.2. Running Red Hat Quay on OpenShift
The Red Hat Quay Operator for OpenShift Container Platform provides the following features:
- Automated deployment and management of Red Hat Quay with customization options
- Management of Red Hat Quay and all of its dependencies (see the example custom resource after this list)
- Automated scaling and updates
- Integration with existing OpenShift Container Platform processes such as GitOps, monitoring, alerting, and logging
- Provision of object storage with limited availability, backed by the multi-cloud object gateway (NooBaa), as part of the Red Hat OpenShift Data Foundation (ODF) Operator. This service does not require an additional subscription.
- Scaled-out, high availability object storage provided by the ODF Operator. This service requires an additional subscription.
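As a sketch of how this management is expressed, a QuayRegistry custom resource similar to the following tells the Operator which components it should manage on your behalf. The resource name and namespace are placeholders, and the set of available component kinds can vary between Operator versions:

```yaml
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry        # placeholder name
  namespace: quay-enterprise    # placeholder namespace
spec:
  components:
    - kind: objectstorage       # NooBaa-backed storage provisioned through the ODF Operator
      managed: true
    - kind: clair               # Clair security scanning managed by the Operator
      managed: true
    - kind: postgres            # set managed: false to bring your own database
      managed: true
    - kind: horizontalpodautoscaler
      managed: true
```

Components that you mark as managed: false must be provisioned and configured outside of the Operator.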
Red Hat Quay can run on OpenShift Container Platform infrastructure nodes. As a result, no further subscriptions are required; a sketch of the node labeling involved follows the list below. Running Red Hat Quay on OpenShift Container Platform has the following benefits:
- Zero to Hero: Simplified deployment of Red Hat Quay and associated components means that you can start using the product immediately
- Scalability: Use cluster compute capacity to manage demand through automated scaling, based on actual load
- Simplified Networking: Automated provisioning of load balancers and traffic ingress secured through HTTPS using OpenShift Container Platform TLS certificates and Routes
- Declarative configuration management: Configurations stored in CustomResource objects for GitOps-friendly lifecycle management
- Repeatability: Consistency regardless of the number of replicas of Red Hat Quay and Clair
- OpenShift integration: Additional services to use OpenShift Container Platform Monitoring and Alerting facilities to manage multiple Red Hat Quay deployments on a single cluster
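A common way to pin the registry to infrastructure nodes, as mentioned above, is to label and, optionally, taint those nodes and then set a default node selector on the project that hosts Red Hat Quay. The node names and namespace below are placeholders for this sketch:

```bash
# Mark the machines that should act as infrastructure nodes (names are placeholders)
oc label node node-1.example.com node-role.kubernetes.io/infra=
oc label node node-2.example.com node-role.kubernetes.io/infra=
oc label node node-3.example.com node-role.kubernetes.io/infra=

# Optionally keep regular workloads off the infrastructure nodes
oc adm taint nodes node-1.example.com node-role.kubernetes.io/infra:NoSchedule

# Schedule workloads in the Red Hat Quay project onto the infrastructure nodes by default
oc annotate namespace quay-enterprise \
  openshift.io/node-selector='node-role.kubernetes.io/infra='
```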
3.3. Integrating standalone Red Hat Quay with OpenShift Container Platform
While the Red Hat Quay Operator ensures seamless deployment and management of Red Hat Quay running on OpenShift Container Platform, it is also possible to run Red Hat Quay in standalone mode and then serve content to one or many OpenShift Container Platform clusters, wherever they are running.
Several Operators are available to help integrate standalone and Operator-based deployments of Red Hat Quay with OpenShift Container Platform, such as the following:
- Red Hat Quay Cluster Security Operator: Relays Red Hat Quay vulnerability scanning results into the OpenShift Container Platform console
- Red Hat Quay Bridge Operator: Ensures seamless integration and user experience by using Red Hat Quay with OpenShift Container Platform in conjunction with OpenShift Container Platform Builds and ImageStreams
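For example, the Red Hat Quay Bridge Operator is configured through a QuayIntegration resource that points a cluster at an existing Red Hat Quay deployment. The values below are placeholders, and the exact schema should be verified against the Operator version you install:

```yaml
apiVersion: quay.redhat.com/v1
kind: QuayIntegration
metadata:
  name: example-quayintegration
spec:
  clusterID: openshift                # identifier this cluster uses inside Red Hat Quay
  credentialsSecret:
    namespace: openshift-operators    # secret that holds the Quay OAuth token
    name: quay-integration
  quayHostname: https://quay.example.com
  insecureRegistry: false
```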
3.4. Mirror registry for Red Hat OpenShift
The mirror registry for Red Hat OpenShift is a small-scale version of Red Hat Quay that you can use as a target for mirroring the required container images of OpenShift Container Platform for disconnected installations.
For disconnected deployments of OpenShift Container Platform, a container registry is required to carry out the installation of the clusters. To run a production-grade registry service on such a cluster, a separate registry deployment must already exist so that the first cluster can be installed. The mirror registry for Red Hat OpenShift addresses this need and is included in every OpenShift Container Platform subscription. It is available for download on the OpenShift console Downloads page.
The mirror registry for Red Hat OpenShift allows users to install a small-scale version of Red Hat Quay and its required components by using the mirror-registry command line interface (CLI) tool. The mirror registry for Red Hat OpenShift is deployed automatically with preconfigured local storage and a local database. It also includes auto-generated user credentials and access permissions, with a single set of inputs and no additional configuration choices required to get started.
The mirror registry for Red Hat OpenShift provides a pre-determined network configuration and reports deployed component credentials and access URLs upon success. A limited set of optional configuration inputs, such as fully qualified domain name (FQDN) services, the superuser name and password, and custom TLS certificates, is also provided. This gives users a container registry with which they can easily create an offline mirror of all OpenShift Container Platform release content when running OpenShift Container Platform in restricted network environments.
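For example, a basic installation on the local host can look like the following sketch. All of the flag values are placeholders, and the set of available flags can differ between releases of the tool:

```bash
# Install the mirror registry with a custom hostname, installation directory,
# initial superuser credentials, and TLS certificate and key.
./mirror-registry install \
  --quayHostname mirror-registry.example.com \
  --quayRoot /opt/quay-install \
  --initUser init \
  --initPassword <password> \
  --sslCert /path/to/quay.crt \
  --sslKey /path/to/quay.key
```

On success, the tool prints the generated credentials and the URL of the new registry.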
The mirror registry for Red Hat OpenShift is limited to hosting images that are required to install a disconnected OpenShift Container Platform cluster, such as release images or Operator images. It uses local storage. Content built by customers should not be hosted by the mirror registry for Red Hat OpenShift.
Unlike Red Hat Quay, the mirror registry for Red Hat OpenShift is not a highly available registry, and only local file system storage is supported. Using the mirror registry for Red Hat OpenShift with more than one cluster is discouraged, because multiple clusters can create a single point of failure when updating your cluster fleet. It is advised to use the mirror registry for Red Hat OpenShift to install a cluster that can host a production-grade, highly available registry such as Red Hat Quay, which can serve OpenShift Container Platform content to other clusters.
More information is available at Creating a mirror registry with mirror registry for Red Hat OpenShift.
3.5. Single compared to multiple registries
Many users consider running multiple, distinct registries. The preferred approach with Red Hat Quay is to have a single, shared registry:
- If you want a clear separation between development and production images, or a clear separation by content origin, for example, keeping third-party images distinct from internal ones, you can use organizations and repositories, combined with role-based access control (RBAC), to achieve the desired separation.
- Given that the image registry is a critical component in an enterprise environment, you may be tempted to use distinct deployments to test upgrades of the registry software to newer versions. The Red Hat Quay Operator updates the registry for patch releases as well as minor or major updates. This means that any complicated procedures are automated and, as a result, there is no requirement for you to provision multiple instances of the registry to test the upgrade.
- With Red Hat Quay, there is no need to have a separate registry for each cluster you deploy. Red Hat Quay is proven to work at scale at Quay.io, and can serve content to thousands of clusters.
- Even if you have deployments in multiple data centers, you can still use a single Red Hat Quay instance to serve content to multiple physically close data centers, or use the HA functionality with load balancers to stretch across data centers. Alternatively, you can use the Red Hat Quay geo-replication feature to stretch across physically distant data centers, as sketched in the configuration example after this list. This requires the provisioning of a global load balancer or DNS-based geo-aware load balancing.
- One scenario where it may be appropriate to run multiple distinct registries is when you need a different configuration for each registry.
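As a sketch of the geo-replication case, the relevant settings live in the Red Hat Quay config.yaml file. The site names, endpoints, bucket names, and credentials below are placeholders for two S3-backed locations:

```yaml
FEATURE_STORAGE_REPLICATION: true
DISTRIBUTED_STORAGE_CONFIG:
  us-east:                              # placeholder site name
    - S3Storage
    - host: s3.us-east-1.amazonaws.com
      s3_bucket: quay-us-east
      s3_access_key: <access_key>
      s3_secret_key: <secret_key>
      storage_path: /datastorage/registry
  eu-west:                              # placeholder site name
    - S3Storage
    - host: s3.eu-west-1.amazonaws.com
      s3_bucket: quay-eu-west
      s3_access_key: <access_key>
      s3_secret_key: <secret_key>
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
  - us-east
  - eu-west
DISTRIBUTED_STORAGE_PREFERENCE:
  - us-east
```

In a geo-replicated setup, each site lists its local storage in DISTRIBUTED_STORAGE_PREFERENCE, while pushed content is replicated to the other configured locations.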
In summary, running a shared registry helps you to save storage, infrastructure, and operational costs, but a dedicated registry might be needed in specific circumstances.