Deploying the Red Hat Quay Operator on OpenShift Container Platform
Preface
This document guides you through the process of deploying and configuring Red Hat Quay in your environment using the Red Hat Quay Operator. The Operator simplifies the installation, configuration, and maintenance of your registry, ensuring you have a production-ready container image repository for your enterprise.
Chapter 1. Introduction to the Red Hat Quay Operator
The Red Hat Quay Operator is designed to simplify the installation, deployment, and management of the Red Hat Quay container registry on OpenShift Container Platform. By leveraging the Operator framework, you can treat Quay as a native OpenShift Container Platform application, automating common tasks and managing its full lifecycle.
This chapter provides a conceptual overview of the Red Hat Quay Operator’s architecture and configuration model. It covers the following information:
- A configuration overview of Red Hat Quay when deployed on OpenShift Container Platform.
- How the Operator manages Quay’s components, or managed components.
- When and why to use external, or unmanaged, components for dependencies like the database and object storage.
- The function and structure of the configBundleSecret, which handles Quay's configuration.
- The prerequisites required before installation.
1.1. Red Hat Quay on OpenShift Container Platform configuration overview
When deploying Red Hat Quay on OpenShift Container Platform, the registry configuration is managed declaratively through two primary mechanisms: the QuayRegistry custom resource (CR) and the configBundleSecret resource.
1.1.1. Understanding the QuayRegistry CR
The QuayRegistry custom resource (CR) is the interface for defining the desired state of your Quay deployment. This resource focuses on managing the core components of the registry, such as the database, cache, and storage.
The QuayRegistry CR determines whether each component is managed, meaning automatically handled by the Operator, or unmanaged, meaning provided externally by the user.
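As a sketch of what this looks like in practice, the following hypothetical fragment marks the objectstorage component as unmanaged while other components remain Operator-managed. The heredoc style mirrors the CLI examples later in this guide, and the file name is illustrative:

```shell
# Sketch: a QuayRegistry spec fragment that marks object storage as
# unmanaged (user-provided) while other components stay Operator-managed.
# The file name is illustrative.
cat <<'EOF' > quayregistry-objectstorage-fragment.yaml
spec:
  components:
    - kind: objectstorage
      managed: false
EOF

# Show what was written
cat quayregistry-objectstorage-fragment.yaml
```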
By default, the QuayRegistry CR contains the following key fields:
- configBundleSecret: The name of a Kubernetes Secret containing the config.yaml file, which defines additional configuration parameters.
- name: The name of your Red Hat Quay registry.
- namespace: The namespace, or project, in which the registry was created.
- spec.components: A list of components that the Operator automatically manages. Each spec.components entry contains two fields:
  - kind: The name of the component.
  - managed: A boolean that indicates whether the component lifecycle is handled by the Red Hat Quay Operator. Setting managed: true on a component in the QuayRegistry CR means that the Operator manages the component.
All QuayRegistry components are automatically managed and auto-filled upon reconciliation for visibility unless specified otherwise. The following sections highlight the major QuayRegistry components and provide an example YAML file that shows the default settings.
1.1.1.1. Managed components
By default, the Operator handles all required configuration and installation needed for Red Hat Quay’s managed components.
| Field | Type | Description |
|---|---|---|
| `quay` | Boolean | Holds overrides for deployment of Red Hat Quay on OpenShift Container Platform, such as environment variables and number of replicas. This component cannot be set to unmanaged (`managed: false`). |
| `postgres` | Boolean | Used for storing registry metadata. Currently, PostgreSQL version 13 is used. |
| `clair` | Boolean | Provides image vulnerability scanning. |
| `redis` | Boolean | Stores live builder logs and the locking mechanism that is required for garbage collection. |
| `horizontalpodautoscaler` | Boolean | Adjusts the number of `Quay` pods. |
| `objectstorage` | Boolean | Stores image layer blobs. When set to `managed: true`, storage is provisioned through the `ObjectBucketClaim` API. |
| `route` | Boolean | Provides an external entrypoint to the Red Hat Quay registry from outside of OpenShift Container Platform. |
| `mirror` | Boolean | Configures repository mirror workers to support optional repository mirroring. |
| `monitoring` | Boolean | Features include a Grafana dashboard, access to individual metrics, and notifications for frequently restarting `Quay` pods. |
| `tls` | Boolean | Configures whether SSL/TLS is automatically handled. |
| `clairpostgres` | Boolean | Configures a managed Clair database. This is a separate database from the PostgreSQL database that is used to deploy Red Hat Quay. |
The following example shows you the default configuration for the QuayRegistry custom resource provided by the Red Hat Quay Operator. It is available on the OpenShift Container Platform web console.
Example QuayRegistry custom resource
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: <example_registry>
  namespace: <namespace>
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: quay
      managed: true
    - kind: postgres
      managed: true
    - kind: clair
      managed: true
    - kind: redis
      managed: true
    - kind: horizontalpodautoscaler
      managed: true
    - kind: objectstorage
      managed: true
    - kind: route
      managed: true
    - kind: mirror
      managed: true
    - kind: monitoring
      managed: true
    - kind: tls
      managed: true
    - kind: clairpostgres
      managed: true
1.1.1.2. Using unmanaged components for dependencies
Although the Red Hat Quay Operator provides an opinionated deployment by automatically managing all required dependencies, this approach might not be suitable for every environment. If you need to integrate existing infrastructure or require specific configurations, you can leverage the Operator to use external, or unmanaged, resources instead. An unmanaged component is any core dependency—such as PostgreSQL, Redis, or object storage—that you deploy and maintain outside of the Operator’s control.
If you are using an unmanaged PostgreSQL database, and the version is PostgreSQL 10, it is highly recommended that you upgrade to PostgreSQL 13. PostgreSQL 10 had its final release on November 10, 2022 and is no longer supported. For more information, see the PostgreSQL Versioning Policy.
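When the database is unmanaged, the connection is supplied through the DB_URI field in the config.yaml file, a standard Quay configuration field. The sketch below is illustrative only; the host, port, and credentials are placeholders, and the postgres component must also be set to managed: false in the QuayRegistry CR:

```shell
# Sketch: config.yaml fragment pointing Quay at an external PostgreSQL
# database when the postgres component is set to managed: false.
# All values are placeholders.
cat <<'EOF' > external-db-fragment.yaml
DB_URI: postgresql://<username>:<password>@<database_host>:5432/<database_name>
EOF

cat external-db-fragment.yaml
```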
For more information about unmanaged components, see "Advanced configurations".
1.1.2. Understanding the configBundleSecret
The spec.configBundleSecret field is an optional reference to the name of a Secret in the same namespace as the QuayRegistry resource. This Secret must contain a config.yaml key/value pair, where the value is a Red Hat Quay configuration file.
The configBundleSecret stores the config.yaml file. Red Hat Quay administrators can define the following settings through the config.yaml file:
- Authentication backends (for example, OIDC, LDAP)
- External TLS termination settings
- Repository creation policies
- Feature flags
- Notification settings
Red Hat Quay administrators might update this secret for the following reasons:
- Enable a new authentication method
- Add custom SSL/TLS certificates
- Enable features
- Modify security scanning settings
If this field is omitted, the Red Hat Quay Operator automatically generates a configuration secret based on default values and managed component settings. If the field is provided, the contents of the config.yaml are used as the base configuration and are merged with values from managed components to form the final configuration, which is mounted into the quay application pods.
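For example, a minimal config.yaml can be packaged into the Secret that spec.configBundleSecret references. The field values below are illustrative rather than a complete configuration, and the oc command is shown commented out because it requires cluster access:

```shell
# Write a minimal, illustrative config.yaml (values are placeholders,
# not a complete production configuration).
cat <<'EOF' > sample-config-bundle.yaml
AUTHENTICATION_TYPE: Database
DEFAULT_TAG_EXPIRATION: 2w
FEATURE_USER_CREATION: true
EOF

# Package it as the Secret referenced by spec.configBundleSecret.
# Requires cluster access, so shown commented out:
#   oc create secret generic <quay_config_bundle_name> \
#       --from-file=config.yaml=./sample-config-bundle.yaml -n <namespace>
cat sample-config-bundle.yaml
```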
1.2. Prerequisites for Red Hat Quay on OpenShift Container Platform
Before deploying the Red Hat Quay Operator, ensure that your environment meets the following prerequisites. These requirements cover the minimum cluster version, administrative access, resource capacity, and storage configuration necessary for a successful installation.
1.2.1. OpenShift Container Platform cluster
To deploy and manage the Red Hat Quay Operator, you must meet the following requirements:
- An OpenShift Container Platform cluster running version 4.5 or later.
- An administrative account with sufficient permissions to perform cluster-scoped actions, including the ability to create namespaces.
1.2.2. Resource Requirements
Red Hat Quay requires dedicated compute resources to function effectively. You must ensure that your OpenShift Container Platform cluster has sufficient capacity to accommodate the following requirements for each Red Hat Quay application pod:
| Resource type | Requirement |
|---|---|
| Memory | 8 Gi |
| CPU | 2000 millicores (2 vCPUs) |
The Operator creates at least one main application pod per Red Hat Quay deployment that it manages. Plan your cluster capacity accordingly.
1.2.3. Object Storage
Red Hat Quay requires object storage to store all container image layer blobs. You have two options for providing this storage: managed (automated by the Operator) or unmanaged (using an existing external service).
1.2.3.1. Managed storage overview
By default, the Red Hat Quay Operator handles storage provisioning by consuming the ObjectBucketClaim Kubernetes API. Using the ObjectBucketClaim API is the preferred method because it decouples the Red Hat Quay Operator from vendor-specific storage implementations, allowing it to integrate seamlessly with various providers.
If you are using managed object storage, the Red Hat Quay Operator can provision it for you using this ObjectBucketClaim mechanism. The NooBaa component of Red Hat OpenShift Data Foundation is a common provider that implements the ObjectBucketClaim API.
There are two supported managed options available through Red Hat OpenShift Data Foundation: using the Multicloud Object Gateway, or a production-grade deployment of Red Hat OpenShift Data Foundation. The differences between the two are summarized in the following tables.
| Aspect | Description | Benefit |
|---|---|---|
| Component | A standalone instance of the Multicloud Object Gateway backed by a local Kubernetes `PersistentVolume`. | Allows you to quickly deploy a Red Hat Quay registry without procuring an external service. |
| High availability | The Multicloud Object Gateway is not highly available. If the node fails, storage is temporarily inaccessible. | Depending on your use case, it should not be substituted for high availability needs. |
| Subscription | Included in the Red Hat Quay subscription. | Reduces complexity and avoids purchasing separate products. |
| Aspect | Description | Benefit |
|---|---|---|
| Component | A production deployment of Red Hat OpenShift Data Foundation with scale-out Object Service and Ceph. | Provides reliability and data redundancy. |
| High availability | Highly available, meaning that object storage layer can withstand node failures. | Beneficial for production environments where uptime is essential. |
| Subscription | Requires a separate subscription for Red Hat OpenShift Data Foundation. | Ensures enterprise-level support and stability for your storage layer. |
1.2.3.1.1. About the Multicloud Object Gateway component
As part of a Red Hat Quay subscription, users are entitled to use the Multicloud Object Gateway component of the Red Hat OpenShift Data Foundation Operator (formerly known as OpenShift Container Storage Operator).
The Multicloud Object Gateway component allows you to provide an S3-compatible object storage interface to Red Hat Quay backed by Kubernetes PersistentVolume-based block storage. The usage is limited to a Red Hat Quay deployment managed by the Operator and to the exact specifications of the Multicloud Object Gateway instance as documented below.
Since Red Hat Quay does not support local filesystem storage, users can leverage the gateway in combination with Kubernetes PersistentVolume storage instead, to provide a supported deployment. A PersistentVolume is directly mounted on the gateway instance as a backing store for object storage and any block-based StorageClass is supported.
By the nature of PersistentVolume, this is not a scale-out, highly available solution and does not replace a scale-out storage system like Red Hat OpenShift Data Foundation. Only a single instance of the gateway runs. If the pod running the gateway becomes unavailable due to rescheduling, updates, or unplanned downtime, the connected Red Hat Quay instances are temporarily degraded.
Deploying Red Hat Quay on OpenShift Container Platform using Red Hat OpenShift Data Foundation requires you to install the Local Storage Operator and the Red Hat OpenShift Data Foundation Operator, and then to deploy the Multicloud Object Gateway, by using the OpenShift Container Platform UI. See the Red Hat OpenShift Data Foundation documentation for these steps.
1.2.3.1.2. About Red Hat OpenShift Data Foundation
Red Hat OpenShift Data Foundation provides provider-agnostic persistent storage for OpenShift Container Platform, supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. For more information, see the Red Hat OpenShift Data Foundation documentation.
1.2.3.2. Unmanaged storage overview
When your environment requires a connection to a storage provider that you manage, for example, AWS S3, Google Cloud Storage, or a self-hosted S3-compatible service, you can leverage unmanaged storage. Red Hat Quay supports the following major cloud and on-premises object storage providers:
- Amazon Web Services (AWS) S3
- AWS STS S3 (Security Token Service)
- AWS CloudFront (CloudFront S3Storage)
- Google Cloud Storage
- Microsoft Azure Blob Storage
- Swift Storage
- Nutanix Object Storage
- IBM Cloud Object Storage
- NetApp ONTAP S3 Object Storage
- Hitachi Content Platform (HCP) Object Storage
For a complete list of object storage providers, see the Quay Enterprise 3.x support matrix.
For example configurations of external object storage, see Storage object configuration fields, which provides the required YAML configuration examples, credential formatting, and full field descriptions for all supported external storage providers.
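As an illustration of the shape such a configuration takes, the following sketch uses the AWS S3 driver (S3Storage). The field names follow standard Quay storage configuration, but verify them against "Storage object configuration fields" for your provider, and treat all values as placeholders:

```shell
# Sketch: config.yaml storage section for an unmanaged AWS S3 backend.
# Field names follow standard Quay S3Storage configuration; all values
# are placeholders.
cat <<'EOF' > s3-storage-fragment.yaml
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - S3Storage
    - host: s3.<region>.amazonaws.com
      s3_access_key: <access_key>
      s3_secret_key: <secret_key>
      s3_bucket: <bucket_name>
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
EOF

cat s3-storage-fragment.yaml
```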
1.2.4. StorageClass
The Red Hat Quay Operator automatically deploys dedicated PostgreSQL databases for both the main Quay registry and the Clair vulnerability scanner. Both of these databases require persistent storage to ensure data integrity and availability.
To enable the Operator to provision this storage seamlessly, your cluster must have a default StorageClass configured. The Operator uses this default StorageClass to create the Persistent Volume Claims (PVCs) required by the Quay and Clair databases. These PVCs ensure that your registry metadata and vulnerability data persist across pod restarts, node failures, and upgrades.
Before proceeding with the installation, verify that a default StorageClass is configured in your cluster to ensure that the Quay and Clair components can successfully provision their required persistent volumes.
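A default StorageClass is the one annotated with storageclass.kubernetes.io/is-default-class: "true", which is a standard Kubernetes annotation. On a live cluster you would inspect this with oc get storageclass; the sketch below demonstrates the check against a sample manifest whose name and provisioner are hypothetical:

```shell
# Sample StorageClass manifest; the name and provisioner are hypothetical.
cat <<'EOF' > sample-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-block-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.csi.vendor.com
EOF

# On a live cluster: oc get storageclass (the default is marked "(default)").
# Here we check the annotation in the sample manifest:
if grep -q 'is-default-class: "true"' sample-storageclass.yaml; then
  echo "default StorageClass present"
fi
```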
Chapter 2. Installing the Red Hat Quay Operator from the OperatorHub
To install the Red Hat Quay Operator from the OpenShift Container Platform OperatorHub, configure the installation mode and update approval strategy. You should install the Operator cluster-wide to ensure the monitoring component is available; deploying to a specific namespace renders monitoring unavailable.
Procedure
- On the OpenShift Container Platform web console, click Operators → OperatorHub.
- In the search box, type Red Hat Quay and select the official Red Hat Quay Operator provided by Red Hat.
- Select Install.
- Select the update channel (for example, stable-3.15) and the version.
- For the Installation mode, select one of the following:
  - All namespaces on the cluster. Select this option if you want the Red Hat Quay Operator to be available cluster-wide. Installing the Red Hat Quay Operator cluster-wide is recommended; if you choose a single namespace, the monitoring component is not available.
  - A specific namespace on the cluster. Select this option if you want Red Hat Quay deployed within a single namespace. Note that selecting this option renders the monitoring component unavailable.
- Select an Approval Strategy. Choose to approve either automatic or manual updates. Automatic update strategy is recommended.
- Select Install.
Chapter 3. Deploying the Red Hat Quay registry
To deploy the Red Hat Quay registry after installing the Operator, you must create an instance based on the QuayRegistry custom resource (CR), which can be done using the OpenShift Container Platform web console or the oc command-line interface (CLI). For the registry to deploy successfully, you must have, or configure, an object storage provider.
The following sections provide you with the information necessary to configure managed or unmanaged object storage, and then deploy the Red Hat Quay registry.
The following procedures show you how to create a basic Red Hat Quay registry in all namespaces of the OpenShift Container Platform deployment. Depending on your needs, advanced configuration might be necessary. For example, you might need to configure SSL/TLS for your deployment or disable certain components. Advanced configuration practices are covered in later chapters of this guide.
3.1. Deploying the Red Hat Quay registry by using the OpenShift Container Platform web console
Use the OpenShift Container Platform web console to create and deploy a basic Red Hat Quay registry instance.
Prerequisites
- You have installed the Red Hat Quay Operator.
- You have administrative privileges to the cluster.
Procedure
- On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
- On the Red Hat Quay dashboard, click Create instance.
- On the Create QuayRegistry page, review the default settings of the QuayRegistry custom resource (CR). Here, you decide whether to use managed or unmanaged object storage.

  If you are using the Multicloud Object Gateway or Red Hat OpenShift Data Foundation as your object storage, keep the following settings:

    - kind: objectstorage
      managed: true

  If you are using a different storage provider, such as Google Cloud Platform, AWS S3, or Nutanix, set the objectstorage component as follows:

    - kind: objectstorage
      managed: false
- Click Create. You are redirected to the Quay Registry tab on the Operator page.
- Click the name of the Red Hat Quay registry that you created, then click Events to view the status of creation. If you used managed storage and leveraged the Multicloud Object Gateway, the registry completes creation. If you are using Red Hat OpenShift Data Foundation or an unmanaged storage backend provider, complete the following steps:
- Click the Details page of the Red Hat Quay registry.
- Click the name of the Config Bundle Secret resource, for example, <example_registry_name_config-bundle-secret-12345>.
- Click Actions → Edit Secret, and pass in the following information from your backend storage provider:

    # ...
    DISTRIBUTED_STORAGE_CONFIG:
      <storage_provider>:
        - <storage_provider_name>
        - access_key: <access_key>
          bucket_name: <bucket_name>
          secret_key: <secret_key>
          storage_path: /datastorage/registry
    # ...

  Note: Depending on your storage provider, different information is required. For more information, see Storage object configuration fields.
- Click Save, and then re-navigate to the Events page of the registry to ensure successful deployment.
3.2. Deploying the Red Hat Quay registry by using the CLI
Use the oc command-line interface (CLI) to create and deploy a basic Red Hat Quay registry instance.
The following config.yaml file includes automation configuration options. Collectively, these options streamline using the CLI with your registry, helping reduce dependency on the UI. Adding these fields to your config.yaml file is optional if you plan to use the UI, but recommended if you plan to use the CLI.
For more information, see Automation configuration options.
Prerequisites
- You have logged into OpenShift Container Platform using the CLI.
Procedure
- Create a namespace, for example, quay-enterprise, by entering the following command:

    $ oc new-project quay-enterprise

- Create the QuayRegistry custom resource (CR).

  If the objectstorage component is set to managed: true, create the QuayRegistry CR by entering the following command:

    $ cat <<EOF | oc create -n quay-enterprise -f -
    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: example-registry
      namespace: quay-enterprise
    EOF
  If the objectstorage component is set to managed: false, complete the following steps:

  - Create the config.yaml file for Red Hat Quay by entering the following command. You must include the information required for your backend storage provider. During this step, you can enable additional Red Hat Quay features. The following example is a minimal configuration that includes the configuration options for automating early setup tasks:

      $ cat <<EOF > config.yaml
      ALLOW_PULLS_WITHOUT_STRICT_LOGGING: false
      AUTHENTICATION_TYPE: Database
      DEFAULT_TAG_EXPIRATION: 2w
      FEATURE_USER_INITIALIZE: true 1
      SUPER_USERS: 2
        - <username>
      BROWSER_API_CALLS_XHR_ONLY: false 3
      FEATURE_USER_CREATION: false 4
      DISTRIBUTED_STORAGE_CONFIG:
        <storage_provider>:
          - <storage_provider_name>
          - access_key: <access_key>
            bucket_name: <bucket_name>
            secret_key: <secret_key>
            storage_path: /datastorage/registry
      ENTERPRISE_LOGO_URL: /static/img/RH_Logo_Quay_Black_UX-horizontal.svg
      FEATURE_BUILD_SUPPORT: false
      FEATURE_DIRECT_LOGIN: true
      FEATURE_MAILING: false
      REGISTRY_TITLE: Red Hat Quay
      REGISTRY_TITLE_SHORT: Red Hat Quay
      SETUP_COMPLETE: true
      TAG_EXPIRATION_OPTIONS:
        - 2w
      TEAM_RESYNC_STALE_TIME: 60m
      TESTING: false
      EOF

    1 Set this field to true if you plan to create the first user by using the API.
    2 Include this field and the username that you plan to use as the Red Hat Quay administrator.
    3 When set to false, allows general browser-based access to the API.
    4 When set to false, restricts the creation of new users to superusers.
  - Create a secret for the configuration by entering the following command:

      $ oc create secret generic <quay_config_bundle_name> \
          --from-file=config.yaml=</path/to/config.yaml> \
          -n quay-enterprise \
          --dry-run=client -o yaml | oc apply -f -

  - Create the QuayRegistry CR by entering the following command:

      $ cat <<EOF | oc create -n quay-enterprise -f -
      apiVersion: quay.redhat.com/v1
      kind: QuayRegistry
      metadata:
        name: example-registry
        namespace: quay-enterprise
      spec:
        configBundleSecret: <quay_config_bundle_name>
        components:
          - kind: clair
            managed: true
          - kind: objectstorage
            managed: false 1
          - kind: mirror
            managed: true
          - kind: monitoring
            managed: true
      EOF

    1 Must be set to false when providing your own storage backend.
Verification
- Check the status of your registry by entering the following command:

    $ oc describe quayregistry <registry_name> -n quay-enterprise

  Example output

    # ...
    Events:
      Type    Reason                     Age                   From                     Message
      ----    ------                     ----                  ----                     -------
      Normal  ComponentsCreationSuccess  23s (x2458 over 42h)  quayregistry-controller  All objects created/updated successfully

- Alternatively, you can check pod statuses for your registry deployment by entering the following command:

    $ oc get pods -n quay-enterprise

  Example output

    NAME                                               READY   STATUS      RESTARTS   AGE
    example-registry-clair-app-5ffc9f77d6-jwr9s        1/1     Running     0          3m42s
    example-registry-clair-app-5ffc9f77d6-wgp7d        1/1     Running     0          3m41s
    example-registry-clair-postgres-54956d6d9c-rgs8l   1/1     Running     0          3m5s
    example-registry-quay-app-79c6b86c7b-8qnr2         1/1     Running     4          3m42s
    example-registry-quay-app-79c6b86c7b-xk85f         1/1     Running     4          3m41s
    example-registry-quay-app-upgrade-5kl5r            0/1     Completed   4          3m50s
    example-registry-quay-database-b466fc4d7-tfrnx     1/1     Running     2          3m42s
    example-registry-quay-mirror-6d9bd78756-6lj6p      1/1     Running     0          2m58s
    example-registry-quay-mirror-6d9bd78756-bv6gq      1/1     Running     0          2m58s
    example-registry-quay-postgres-init-dzbmx          0/1     Completed   0          3m43s
    example-registry-quay-redis-8bd67b647-skgqx        1/1     Running     0          3m42s
Additional resources
- For more information about how to track the progress of your Red Hat Quay deployment, see Monitoring and debugging the deployment process.
Chapter 4. Creating the first user
This section guides you through creating the initial administrative user for your Red Hat Quay registry. Completing this step confirms that your deployment is fully operational and grants you the necessary credentials to begin using and managing your registry. This can be completed by using the Red Hat Quay UI or by leveraging the API.
4.1. Creating the first user by using the UI
Creating the first user by using the UI offers a visual workflow and is often preferred after initial setup to ensure that the user interface is functional. For most users, the UI offers a simpler path to creating the first user, as it does not require additional configuration in the config.yaml file.
Prerequisites
- You have deployed the Red Hat Quay registry.
Procedure
- On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
- On the Red Hat Quay Operators page, click Quay Registry, and then the name of your registry.
- On the QuayRegistry details page, click the Registry Endpoint link, for example, example-registry-quay.username-cluster-new.gcp.quaydev.org. You are navigated to the registry’s main page.
- Click Create Account.
- Enter the details for Username, Password, Email, and then click Create Account. After creating the first user, you are automatically logged in to the Red Hat Quay registry.
4.2. Using the API to create the first user
You can use the API to create the first user with administrative privileges for your registry.
Prerequisites
- You have set FEATURE_USER_INITIALIZE: true and established a superuser in your config.yaml file. For example:

    # ...
    FEATURE_USER_INITIALIZE: true
    SUPER_USERS:
      - <username>
    # ...

  If you did not configure these settings upon registry creation, and need to re-configure your registry to enable these settings, see "Enabling features after deployment".
- You have not created a user by using the Red Hat Quay UI.
Procedure
- On the command-line interface, generate a new user with a username, password, email, and access token by entering the following curl command:

    $ curl -X POST -k https://<quay-server.example.com>/api/v1/user/initialize \
        --header 'Content-Type: application/json' \
        --data '{ "username": "<username>", "password": "<password>", "email": "<email>@example.com", "access_token": true }'

  If successful, the command returns an object with the username, email, and encrypted password. For example:

    {"access_token":"123456789", "email":"quayadmin@example.com","encrypted_password":"<password>","username":"quayadmin"}

  If a user already exists in the database, an error is returned. For example:

    {"message":"Cannot initialize user in a non-empty database"}

  If your password is not at least eight characters or contains whitespace, an error is returned. For example:

    {"message":"Failed to initialize user: Invalid password, password must be at least 8 characters and contain no whitespace."}

  You can log in to your registry by navigating to the UI or by using Podman on the CLI.
- Log in to the registry by running the following podman command:

    $ podman login -u <username> -p <password> http://<quay-server.example.com>

  Example output

    Login Succeeded!
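The access_token returned by the initialize call can authenticate subsequent API requests as a bearer token. The sketch below only assembles the header locally, using the example token from the output above; the commented curl line shows how it would be used against the standard /api/v1/user/ endpoint of a live registry:

```shell
# Build the Authorization header from the example token returned above.
TOKEN="123456789"
AUTH_HEADER="Authorization: Bearer ${TOKEN}"
echo "${AUTH_HEADER}"

# Against a live registry (requires a running Quay instance):
#   curl -s -k -H "${AUTH_HEADER}" https://<quay-server.example.com>/api/v1/user/
```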
Chapter 5. Modifying the QuayRegistry CR after deployment
After you have installed the Red Hat Quay Operator and created an initial deployment, you can modify the QuayRegistry custom resource (CR) to customize or reconfigure aspects of the Red Hat Quay environment.
Red Hat Quay administrators might modify the QuayRegistry CR for the following reasons:
- To change component management: Switch components from managed: true to managed: false in order to bring your own infrastructure. For example, you might set kind: objectstorage to unmanaged to integrate external object storage platforms such as Google Cloud Storage or Nutanix.
- To apply custom configuration: Update or replace the configBundleSecret to apply new configuration settings, for example, authentication providers, external SSL/TLS settings, or feature flags.
- To enable or disable features: Toggle features like repository mirroring, Clair scanning, or horizontal pod autoscaling by modifying the spec.components list.
- To scale the deployment: Adjust environment variables or replica counts for the Quay application.
- To integrate with external services: Provide configuration for external PostgreSQL, Redis, or Clair databases, and update endpoints or credentials.
5.1. Modifying the QuayRegistry CR by using the OpenShift Container Platform web console
The QuayRegistry CR can be modified by using the OpenShift Container Platform web console. This allows you to set managed components to unmanaged (managed: false) and use your own infrastructure.
Prerequisites
- You are logged into OpenShift Container Platform as a user with admin privileges.
- You have installed the Red Hat Quay Operator.
Procedure
- On the OpenShift Container Platform web console, click Operators → Installed Operators.
- Click Red Hat Quay.
- Click Quay Registry.
- Click the name of your Red Hat Quay registry, for example, example-registry.
- Click YAML.
- Adjust the managed field of the desired component to either true or false.
- Click Save.

Note: Setting a component to unmanaged (managed: false) might require additional configuration. For more information about setting unmanaged components in the QuayRegistry CR, see Using unmanaged components for dependencies.
5.2. Modifying the QuayRegistry CR by using the CLI
The QuayRegistry CR can be modified by using the CLI. This allows you to set managed components to unmanaged (managed: false) and use your own infrastructure.
Prerequisites
- You are logged in to your OpenShift Container Platform cluster as a user with admin privileges.
Procedure
- Edit the QuayRegistry CR by entering the following command:

    $ oc edit quayregistry <registry_name> -n <namespace>

- Make the desired changes to the QuayRegistry CR.

  Note: Setting a component to unmanaged (managed: false) might require additional configuration. For more information about setting unmanaged components in the QuayRegistry CR, see Using unmanaged components for dependencies.

- Save the changes.
Chapter 6. Enabling features after deployment
After deployment, you can customize the Red Hat Quay registry to enable new features and better suit the needs of your organization. This entails editing the Red Hat Quay configuration bundle secret (spec.configBundleSecret) resource. You can use the OpenShift Container Platform web console or the command-line interface to enable features after deployment. Using the OpenShift Container Platform web console is generally considered the simpler method.
6.1. Enabling features by using the OpenShift Container Platform web console
To enable features in the OpenShift Container Platform web console, you can edit the configBundleSecret resource.
Prerequisites
- You have administrative privileges to the cluster.
Procedure
- On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
- Click Quay Registry and then the name of your registry.
- Under Config Bundle Secret, click the name of your secret, for example, quay-config-bundle.
- On the Secret details page, click Actions → Edit secret.
- In the Value text box, add the new configuration fields for the features that you want to enable. For a list of all configuration fields, see Configure Red Hat Quay.
- Click Save. The Red Hat Quay Operator automatically reconciles the changes by restarting all Quay-related pods. After all pods are restarted, the features are enabled.
6.2. Modifying the configuration file by using the CLI
You can modify the config.yaml file that is stored in the configBundleSecret resource by downloading the existing configuration with the CLI. After making changes, you can re-upload the configBundleSecret resource to apply the changes to the Red Hat Quay registry.

Modifying the config.yaml file stored in the configBundleSecret resource is a multi-step procedure that requires base64 decoding the existing configuration file and then uploading the changes. For most cases, using the OpenShift Container Platform web console to make changes to the config.yaml file is simpler.
Prerequisites
- You are logged in to the OpenShift Container Platform cluster as a user with admin privileges.
Procedure
Describe the QuayRegistry resource by entering the following command:

$ oc describe quayregistry -n <quay_namespace>

Example output

# ...
Config Bundle Secret:  example-registry-config-bundle-v123x
# ...

Obtain the secret data by entering the following command:
$ oc get secret -n <quay_namespace> <example-registry-config-bundle-v123x> -o jsonpath='{.data}'

Example output

{
    "config.yaml": "RkVBVFVSRV9VU0 ... MDAwMAo="
}

Decode the data into a config.yaml file in the current directory by redirecting the output with >> config.yaml. For example:

$ echo 'RkVBVFVSRV9VU0 ... MDAwMAo=' | base64 --decode >> config.yaml

- Make the desired changes to your config.yaml file, and then save the file as config.yaml.

Create a new configBundleSecret YAML file by entering the following command:

$ touch <new_configBundleSecret_name>.yaml

Create the new configBundleSecret resource, passing in the config.yaml file, by entering the following command:

$ oc -n <namespace> create secret generic <secret_name> \
    --from-file=config.yaml=</path/to/config.yaml> \ 1
    --dry-run=client -o yaml > <new_configBundleSecret_name>.yaml

1 Where </path/to/config.yaml> is the path to your base64-decoded config.yaml file.
Create the configBundleSecret resource by entering the following command:

$ oc create -n <namespace> -f <new_configBundleSecret_name>.yaml

Example output

secret/config-bundle created

Update the QuayRegistry CR to reference the new configBundleSecret object by entering the following command:

$ oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{"spec":{"configBundleSecret":"<new_configBundleSecret_name>"}}'

Example output

quayregistry.quay.redhat.com/example-registry patched
Verification
Verify that the QuayRegistry CR has been updated with the new configBundleSecret by entering the following command:

$ oc describe quayregistry -n <quay_namespace>

Example output

# ...
Config Bundle Secret:  <new_configBundleSecret_name>
# ...

After patching the registry, the Red Hat Quay Operator automatically reconciles the changes.
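The decode, edit, and re-encode cycle that this procedure performs with oc and base64 can be sketched in Python. This is an illustrative sketch only: the secret contents below are example values, not output from a real cluster.

```python
import base64

# Hypothetical secret data, shaped like the output of:
#   oc get secret <name> -o jsonpath='{.data}'
secret_data = {
    "config.yaml": base64.b64encode(b"FEATURE_USER_CREATION: true\n").decode()
}

# Step 1: base64-decode the existing config.yaml
# (what `base64 --decode` does in the procedure).
config = base64.b64decode(secret_data["config.yaml"]).decode()

# Step 2: make the desired change -- here, appending an example field.
config += "FEATURE_QUOTA_MANAGEMENT: true\n"

# Step 3: re-encode the updated file for the new configBundleSecret
# (what `oc create secret generic --from-file=...` does internally).
secret_data["config.yaml"] = base64.b64encode(config.encode()).decode()

print(base64.b64decode(secret_data["config.yaml"]).decode())
```

The oc patch step then only swaps which secret name the QuayRegistry CR points at; the secret data itself is what carries the configuration.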
Chapter 7. Deploying Red Hat Quay on infrastructure nodes
By default, all quay-related pods are scheduled on available worker nodes in your OpenShift Container Platform cluster. In some environments, you might want to dedicate certain nodes specifically for infrastructure workloads, such as registry, database, and monitoring pods, to improve performance, isolate critical components, or simplify maintenance.
OpenShift Container Platform supports this approach using infrastructure machine sets, which automatically create and manage nodes reserved for infrastructure.
As an OpenShift Container Platform administrator, you can achieve the same result by labeling and tainting worker nodes. This ensures that only infrastructure workloads, like quay pods, are scheduled on these nodes. After your infrastructure nodes are configured, you can control where quay pods run using node selectors and tolerations.
The following procedure is intended for new deployments that install the Red Hat Quay Operator in a single namespace and provide their own backend storage. The procedure shows you how to prepare nodes and deploy Red Hat Quay on dedicated infrastructure nodes. In this procedure, all quay-related pods are placed on dedicated infrastructure nodes.
7.1. Labeling and tainting nodes for infrastructure use
Use the following procedure to label and taint nodes for infrastructure use.
The following procedure labels three worker nodes with the infra label. Depending on the resources relevant to your environment, you might have to label more than three worker nodes with the infra label.
Procedure

Obtain a list of worker nodes in your deployment by entering the following command:

$ oc get nodes | grep worker

Example output

NAME                                                             STATUS   ROLES    AGE    VERSION
example-cluster-new-c5qqp-worker-b-4zxx5.c.quay-devel.internal   Ready    worker   401d   v1.31.11
example-cluster-new-c5qqp-worker-b-kz6jn.c.quay-devel.internal   Ready    worker   402d   v1.31.11
example-cluster-new-c5qqp-worker-b-wrhw4.c.quay-devel.internal   Ready    worker   401d   v1.31.11

Add the node-role.kubernetes.io/infra= label to the worker nodes by entering the following command. The number of infrastructure nodes required depends on your environment. Production environments should provision enough infra nodes to ensure high availability and sufficient resources for all quay-related components. Monitor CPU, memory, and storage utilization to determine whether additional infra nodes are required.

$ oc label node --overwrite <infra_node_one> <infra_node_two> <infra_node_three> node-role.kubernetes.io/infra=

Confirm that the node-role.kubernetes.io/infra= label has been added to the proper nodes by entering the following command:

$ oc get node | grep infra

Example output

example-cluster-new-c5qqp-worker-b-4zxx5.c.quay-devel.internal   Ready   infra,worker   405d   v1.32.8
example-cluster-new-c5qqp-worker-b-kz6jn.c.quay-devel.internal   Ready   infra,worker   406d   v1.32.8
example-cluster-new-c5qqp-worker-b-wrhw4.c.quay-devel.internal   Ready   infra,worker   405d   v1.32.8

When a worker node is assigned the infra role, there is a chance that user workloads could get inadvertently assigned to an infra node. To avoid this, you can apply a taint to the infra node, and then add tolerations for the pods that you want to control. Taint the worker nodes with the infra label by entering the following command:

$ oc adm taint nodes -l node-role.kubernetes.io/infra \
    node-role.kubernetes.io/infra=reserved:NoSchedule --overwrite

Example output

node/example-cluster-new-c5qqp-worker-b-4zxx5.c.quay-devel.internal modified
node/example-cluster-new-c5qqp-worker-b-kz6jn.c.quay-devel.internal modified
node/example-cluster-new-c5qqp-worker-b-wrhw4.c.quay-devel.internal modified
7.2. Creating a project with node selector and tolerations
Use the following procedure to create a project with the node-selector and tolerations annotations.
Procedure
Add the node-selector annotation to the namespace by entering the following command:

$ oc annotate namespace <namespace> openshift.io/node-selector='node-role.kubernetes.io/infra='

Example output

namespace/<namespace> annotated

Add the tolerations annotation to the namespace by entering the following command:

$ oc annotate namespace <namespace> scheduler.alpha.kubernetes.io/defaultTolerations='[{"operator":"Equal","value":"reserved","effect":"NoSchedule","key":"node-role.kubernetes.io/infra"},{"operator":"Equal","value":"reserved","effect":"NoExecute","key":"node-role.kubernetes.io/infra"}]' --overwrite

Example output

namespace/<namespace> annotated

Important: The tolerations in this example are specific to two taints commonly applied to infra nodes. The taints configured in your environment might differ. You must set the tolerations accordingly to match the taints applied to your infra nodes.
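The relationship between the taint applied in the previous section and the tolerations annotation above can be sketched with a simplified version of the Kubernetes toleration-matching rule. This is illustrative only; the real scheduler covers more operators and edge cases.

```python
import json

# The tolerations annotation value from the command above, verbatim.
annotation = ('[{"operator":"Equal","value":"reserved","effect":"NoSchedule",'
              '"key":"node-role.kubernetes.io/infra"},'
              '{"operator":"Equal","value":"reserved","effect":"NoExecute",'
              '"key":"node-role.kubernetes.io/infra"}]')

# The taint applied in "Labeling and tainting nodes for infrastructure use".
taint = {"key": "node-role.kubernetes.io/infra",
         "value": "reserved", "effect": "NoSchedule"}

def tolerates(toleration: dict, taint: dict) -> bool:
    """Simplified matching rule: an 'Equal' toleration matches a taint when
    the key and value agree and the effect matches (or is empty)."""
    return (toleration.get("key") == taint["key"]
            and toleration.get("operator", "Equal") == "Equal"
            and toleration.get("value") == taint["value"]
            and toleration.get("effect", "") in ("", taint["effect"]))

tolerations = json.loads(annotation)
print(any(tolerates(t, taint) for t in tolerations))  # True
```

Because at least one toleration matches the NoSchedule taint, pods created in the annotated namespace remain schedulable on the tainted infra nodes.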
7.3. Installing the Red Hat Quay Operator on the annotated namespace
After you have added the node-role.kubernetes.io/infra= label to worker nodes and added the node-selector and tolerations annotations to the namespace, you must install the Red Hat Quay Operator in that namespace.

The following procedure shows you how to install the Red Hat Quay Operator on the annotated namespace and how to update the subscription to ensure successful installation.
Procedure
- On the OpenShift Container Platform web console, click Operators → OperatorHub.
- In the search box, type Red Hat Quay.
- Click Red Hat Quay → Install.
- Select the update channel, for example, stable-3.15, and the version.
- Click A specific namespace on the cluster for the installation mode, and then select the namespace that you applied the node-selector and tolerations annotations to.
- Click Install.
Confirm that the Operator is installed by entering the following command:
$ oc get pods -n <annotated_namespace> -o wide | grep quay-operator

Example output

quay-operator.v3.15.1-858b5c5fdc-lf5kj   1/1   Running   0   29m   10.130.6.18   example-cluster-new-c5qqp-worker-f-mhngl.c.quay-devel.internal   <none>   <none>
7.4. Creating the Red Hat Quay registry
After you have installed the Red Hat Quay Operator, you must create the Red Hat Quay registry. The registry’s components, for example, clair, postgres, redis, and so on, must be patched with the toleration annotation so that they can be scheduled onto the infra worker nodes.
The following procedure shows you how to create a Red Hat Quay registry that runs on infrastructure nodes.
Procedure
- On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
- On the Red Hat Quay Operator details page, click Quay Registry → Create QuayRegistry.
On the Create QuayRegistry page, set the monitoring and objectstorage fields to false. The monitoring component cannot be enabled when Red Hat Quay is installed in a single namespace. For example:

# ...
  - kind: monitoring
    managed: false
  - kind: objectstorage
    managed: false
# ...

- Click Create.
Optional: Confirm that the pods are running on infra nodes.
List all Quay-related pods along with the nodes that they are scheduled on by entering the following command:

$ oc get pods -n <annotated_namespace> -o wide | grep example-registry

Example output

NAME                                          READY   STATUS    RESTARTS   AGE   IP            NODE                                                             NOMINATED NODE   READINESS GATES
example-registry-clair-app-5f95d685bd-dgjf6   1/1     Running   0          52m   10.128.4.12   example-cluster-new-c5qqp-worker-b-wrhw4.c.quay-devel.internal   <none>           <none>
...

Confirm that the nodes listed include only nodes labeled infra by running the following command:

$ oc get nodes -l node-role.kubernetes.io/infra -o name

Example output

node/example-cluster-new-c5qqp-worker-b-4zxx5.c.quay-devel.internal
node/example-cluster-new-c5qqp-worker-b-kz6jn.c.quay-devel.internal
node/example-cluster-new-c5qqp-worker-b-wrhw4.c.quay-devel.internal

Note: If any pod appears on a non-infra node, revisit your namespace annotations and deployment patching.
Restart all pods for the Red Hat Quay registry by entering the following command:
$ oc delete pod -n <annotated_namespace> --all

Check the status of the pods by entering the following command:

$ oc get pods -n <annotated_namespace>

Example output

...
NAME                                          READY   STATUS    RESTARTS   AGE
example-registry-clair-app-5f95d685bd-dgjf6   1/1     Running   0          5m4s
...
Chapter 8. Advanced configuration
The following sections cover advanced configuration options for when the default deployment settings do not meet your organization’s needs for performance, security, or existing infrastructure integration.
8.1. Using an external PostgreSQL database
When using your own PostgreSQL database with Red Hat Quay, you must ensure that the required configuration and extensions are in place before deployment.
Do not share the same PostgreSQL database between Red Hat Quay and Clair deployments. Each service must use its own database instance. Sharing databases with other workloads is also not supported, because connection-intensive components such as Red Hat Quay and Clair can quickly exceed PostgreSQL’s connection limits.
Connection poolers such as pgBouncer are not supported with Red Hat Quay or Clair.
When managing your own PostgreSQL database for use with Red Hat Quay, the following best practices are recommended:
- pg_trgm extension: The pg_trgm extension must be enabled on the database for a successful deployment.
- Backups: Perform regular database backups using PostgreSQL-native tools or your existing backup infrastructure. The Red Hat Quay Operator does not manage database backups.
- Restores: When restoring a backup, ensure that all Red Hat Quay pods are stopped before beginning the restore process.
- Storage sizing: When using the Operator-managed PostgreSQL database, the default storage allocation is 50 GiB. For external databases, you must ensure sufficient storage capacity for your environment, as the Operator does not handle volume resizing.
- Monitoring: Monitor disk usage, connection limits, and query performance to prevent outages caused by resource exhaustion.
8.1.1. Integrating an existing PostgreSQL database
Configure Red Hat Quay on OpenShift Container Platform to use an existing PostgreSQL database to leverage your current data storage setup.
The following procedure uses the OpenShift Container Platform web console to configure the Red Hat Quay registry to use an external PostgreSQL database. For most users, using the web console is simpler.
This procedure can also be done by using the oc CLI and following the instructions in "Modifying the QuayRegistry CR by using the CLI" and "Modifying the configuration file by using the CLI".
Procedure
- On the OpenShift Container Platform web console, click Operators → Installed Operators.
- Click Red Hat Quay.
- Click Quay Registry.
- Click the name of your Red Hat Quay registry, for example, example-registry.
- Click YAML.
Set the postgres field of the QuayRegistry CR to managed: false. For example:

- kind: postgres
  managed: false

- Click Save.
- Click Details → the name of your Config Bundle Secret resource.
- On the Secret details page, click Actions → Edit Secret.
Add the DB_URI field to your config.yaml file. For example:

DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database
Optional: Add additional database configuration fields, such as
DB_CONNECTION_ARGSor SSL/TLS connection arguments. For more information, see Database connection arguments. - Click Save.
8.2. Using an external Redis database
Redis is a critical component that supports several Red Hat Quay features, such as build logs and user event tracking. When using an externally managed Redis database with Red Hat Quay, you must ensure that it is properly configured and available before deployment.
Do not share the same Redis instance between Red Hat Quay and Clair deployments. Each service must use its own dedicated Redis instance. Sharing Redis with other workloads is not supported, because connection-intensive components such as Red Hat Quay and Clair can quickly exhaust available Redis connections and degrade performance.
8.2.1. Integrating an external Redis database
You can configure Red Hat Quay on OpenShift Container Platform to use an existing Redis deployment for build logs and user event processing.
The following procedure uses the OpenShift Container Platform web console to configure Red Hat Quay to use an external Redis database. For most users, using the web console is simpler.
You can also complete this procedure by using the oc CLI. For more information, see "Modifying the QuayRegistry CR by using the CLI" and "Modifying the configuration file by using the CLI".
Procedure
- In the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
- Click Red Hat Quay.
- Click QuayRegistry.
- Click the name of your Red Hat Quay registry, for example, example-registry.
- Click YAML.
Set the redis component to unmanaged by adding the following entry under spec.components:

- kind: redis
  managed: false

- Click Save.
- Click Details → the name of your Config Bundle Secret resource.
- On the Secret details page, click Actions → Edit Secret.
In the config.yaml section, add entries for your external Redis instance. For example:

BUILDLOGS_REDIS:
  host: redis.example.com
  port: 6379
  ssl: false
USER_EVENTS_REDIS:
  host: redis.example.com
  port: 6379
  ssl: false

Important: If both the BUILDLOGS_REDIS and USER_EVENTS_REDIS fields reference the same Redis deployment, ensure that your Redis service can handle the combined connection load. For large or high-throughput registries, use separate Redis databases or clusters for these components.
- Optional: Add additional Redis configuration fields, such as SSL/TLS connection arguments. For more information, see Redis configuration fields.
- Click Save.
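The capacity consideration in the Important note above can be sketched as a simple check on the parsed config.yaml contents. The values are the illustrative ones from the example; this check is not part of any Quay tooling.

```python
# Hypothetical parsed config.yaml contents from the step above; in a real
# deployment these values come from your external Redis service.
config = {
    "BUILDLOGS_REDIS":   {"host": "redis.example.com", "port": 6379, "ssl": False},
    "USER_EVENTS_REDIS": {"host": "redis.example.com", "port": 6379, "ssl": False},
}

def same_backend(cfg: dict) -> bool:
    """Flag when build logs and user events share one Redis endpoint,
    which means the instance must be sized for the combined load."""
    a, b = cfg["BUILDLOGS_REDIS"], cfg["USER_EVENTS_REDIS"]
    return (a["host"], a["port"]) == (b["host"], b["port"])

print(same_backend(config))  # True -> size the instance for the combined load
```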
8.3. About Horizontal Pod Autoscaling (HPA)
By default, Red Hat Quay deployments include managed Horizontal Pod Autoscalers (HPAs) for key components to ensure availability and performance during load spikes or maintenance events. HPAs automatically adjust the number of running pods based on observed CPU and memory utilization.
A typical Red Hat Quay deployment includes the following pods:
- Two pods for the Red Hat Quay application (example-registry-quay-app-*)
- One Redis pod for Red Hat Quay logging (example-registry-quay-redis-*)
- One PostgreSQL pod for metadata storage (example-registry-quay-database-*)
- Two Quay mirroring pods (example-registry-quay-mirror-*)
- Two pods for Clair (example-registry-clair-app-*)
- One PostgreSQL pod for Clair (example-registry-clair-postgres-*)
HPAs are managed by default for the Quay, Clair, and Mirror components, each starting with two replicas to prevent downtime during upgrades, reconfigurations, or pod rescheduling events.
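The scaling behavior behind these HPAs follows the standard Kubernetes Horizontal Pod Autoscaler rule from the Kubernetes documentation: desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue). A minimal sketch:

```python
import math

def desired_replicas(current_replicas: int, current_utilization: float,
                     target_utilization: float) -> int:
    """Core HPA scaling rule: scale proportionally to how far the observed
    metric is from its target, rounding up."""
    return math.ceil(current_replicas * current_utilization / target_utilization)

# Two Quay app pods averaging 180% of their CPU request against a 90% target:
print(desired_replicas(2, 180, 90))  # 4
# Utilization exactly at the target leaves the replica count unchanged:
print(desired_replicas(2, 90, 90))   # 2
```

In practice the result is then clamped to the minReplicas and maxReplicas bounds set on the HPA resource, such as the ones in the custom HPA example later in this section.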
8.3.1. Managing Horizontal Pod Autoscaling
Setting the HPA component to unmanaged (managed: false) in the QuayRegistry custom resource allows you to customize scaling thresholds or replica limits.
The following procedure shows you how to disable the horizontalpodautoscaler component and explicitly set replicas: null in the quay, clair, and mirror component definitions.
The following procedure uses the OpenShift Container Platform web console to manage the horizontalpodautoscaler component. For most users, using the web console is simpler.

This procedure can also be done by using the oc CLI and following the instructions in "Modifying the QuayRegistry CR by using the CLI" and "Modifying the configuration file by using the CLI".
Procedure
Edit your QuayRegistry CR:

$ oc edit quayregistry <quay_registry_name> -n <quay_namespace>

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: quay-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: horizontalpodautoscaler
      managed: false
    - kind: quay
      managed: true
      overrides:
        replicas: null 1
    - kind: clair
      managed: true
      overrides:
        replicas: null
    - kind: mirror
      managed: true
      overrides:
        replicas: null
# ...

1 After setting replicas: null, a new replica set might be generated because the Quay deployment manifest changes to replicas: 1.
Create a custom HorizontalPodAutoscaler resource with your desired configuration, for example:

kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2
metadata:
  name: quay-registry-quay-app
  namespace: quay-enterprise
spec:
  scaleTargetRef:
    kind: Deployment
    name: quay-registry-quay-app
    apiVersion: apps/v1
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 90
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 90

Apply the new HPA configuration to your cluster:
$ oc apply -f <custom_hpa>.yaml

Example output
horizontalpodautoscaler.autoscaling/quay-registry-quay-app created
Verification
Verify that your Red Hat Quay application pods are running:
$ oc get pod | grep quay-app

Example output

quay-registry-quay-app-5b8fd49d6b-7wvbk   1/1   Running   0   34m
quay-registry-quay-app-5b8fd49d6b-jslq9   1/1   Running   0   3m42s
quay-registry-quay-app-5b8fd49d6b-pskpz   1/1   Running   0   43m

Verify that your custom HPA is active:

$ oc get hpa

Example output

NAME                     REFERENCE                           TARGETS            MINPODS   MAXPODS   REPLICAS   AGE
quay-registry-quay-app   Deployment/quay-registry-quay-app   67%/90%, 54%/90%   3         20        3          51m
8.4. Configuring custom ingress
You can configure custom ingress for Red Hat Quay by disabling the Operator-managed route component and managing your own routes or ingress controllers. This configuration is useful when your environment requires a custom SSL/TLS setup, specific DNS naming conventions, or when Red Hat Quay is deployed behind a load balancer or proxy that handles TLS termination.
The Red Hat Quay Operator separates route management from SSL/TLS configuration by introducing a distinct tls component. You can therefore manage each independently, depending on whether Red Hat Quay or the cluster should handle TLS termination. For more information about using SSL/TLS certificates with your deployment, see "Securing Red Hat Quay".
If you disable the managed route, you are responsible for creating and managing a Route, Ingress, or Service to expose Red Hat Quay. Ensure that your DNS entry matches the SERVER_HOSTNAME configured in config.yaml.
8.4.1. Disabling the Route component
Use the following procedure to prevent the Red Hat Quay Operator from creating a route.
Procedure
In your quayregistry.yaml file, set the route component as managed: false:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: route
      managed: false

In your config.yaml file, configure Red Hat Quay to handle SSL/TLS. For example:

# ...
EXTERNAL_TLS_TERMINATION: false
SERVER_HOSTNAME: example-registry-quay-quay-enterprise.apps.user1.example.com
PREFERRED_URL_SCHEME: https
# ...

If the configuration is incomplete, the following error might appear:

{
  "reason": "ConfigInvalid",
  "message": "required component `route` marked as unmanaged, but `configBundleSecret` is missing necessary fields"
}
8.4.2. Configuring SSL/TLS and routes
Support for OpenShift Container Platform edge termination routes is provided through the tls component. This separation allows independent control of route management and TLS certificate handling.
EXTERNAL_TLS_TERMINATION: true is the default, opinionated setting, which assumes the cluster manages TLS termination.
- When tls is managed, the cluster’s default wildcard certificate is used.
- When tls is unmanaged, you must supply your own SSL/TLS certificate and key pair.
Multiple valid configurations are possible, as shown in the following table:
| Option | Route | TLS | Certs provided | Result |
|---|---|---|---|---|
| My own load balancer handles TLS | Managed | Managed | No | Edge route using default cluster wildcard certificate |
| Red Hat Quay handles TLS | Managed | Unmanaged | Yes | Passthrough route with certificates mounted in the Red Hat Quay pod |
| Red Hat Quay handles TLS | Unmanaged | Unmanaged | Yes | Certificates set inside the Red Hat Quay pod; user must manually create a route |
8.5. Disabling the monitoring component
When installed in a single namespace, the monitoring component of the Red Hat Quay Operator must be set to managed: false, because it does not have permission to create cluster-wide monitoring resources. You can also explicitly disable monitoring in a multi-namespace installation if you prefer to use your own monitoring stack.
Monitoring cannot be enabled when the Red Hat Quay Operator is installed in a single namespace.
You might also disable monitoring in multi-namespace deployments if you use an external Prometheus or Grafana instance, want to reduce resource overhead, or require custom observability integration.
Example unmanaged monitoring configuration
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: monitoring
      managed: false
8.6. Disabling the mirroring component
Repository mirroring in Red Hat Quay allows you to automatically synchronize container images from remote registries into your local Red Hat Quay instance. The Red Hat Quay Operator deploys a separate mirroring worker component that handles these synchronization tasks.
You can disable the managed mirroring component by setting it to managed: false in the QuayRegistry custom resource.
Disabling managed mirroring means that the Operator does not deploy or reconcile any mirroring pods. You are responsible for creating, scheduling, and maintaining mirroring jobs manually. For most production deployments, leaving mirroring as managed: true is recommended.
Unmanaged mirroring example YAML configuration
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: mirror
      managed: false
8.7. Configuring QuayRegistry CR resources
You can manually adjust the resources on Red Hat Quay on OpenShift Container Platform for the following components that have running pods:
- quay
- clair
- mirroring
- clairpostgres
- postgres
This feature allows users to run smaller test clusters, or to request more resources upfront in order to avoid partially degraded Quay pods. Limitations and requests can be set in accordance with Kubernetes resource units.
The following components should not be set lower than their minimum requirements. This can cause issues with your deployment and, in some cases, result in failure of the pod’s deployment.
- quay: Minimum of 6 GB memory, 2 vCPUs
- clair: Recommended 2 GB memory, 2 vCPUs
- clairpostgres: Minimum of 200 MB memory
You can configure resource requests on the OpenShift Container Platform UI or directly by updating the QuayRegistry CR via the CLI.
The default values set for these components are the suggested values. Setting resource requests too high or too low might lead to inefficient resource utilization, or performance degradation, respectively.
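The resource values used throughout this section mix plain CPU counts, millicpu ("m"), and binary memory suffixes ("Mi", "Gi"). The following sketch converts just these forms to base units; it is illustrative only and does not cover the full Kubernetes quantity grammar.

```python
def parse_quantity(q: str) -> float:
    """Convert the quantity suffixes used in this chapter to base units:
    'm' -> fractional CPU, 'Mi'/'Gi' -> bytes, bare numbers pass through."""
    if q.endswith("Gi"):
        return float(q[:-2]) * 2**30
    if q.endswith("Mi"):
        return float(q[:-2]) * 2**20
    if q.endswith("m"):
        return float(q[:-1]) / 1000.0
    return float(q)

print(parse_quantity("700m"))   # 0.7 CPU
print(parse_quantity("4"))      # 4.0 CPU
print(parse_quantity("512Mi"))  # 536870912.0 bytes
print(parse_quantity("4Gi"))    # 4294967296.0 bytes
```

This makes it easy to sanity-check, for example, that a request of "700m" really is less than a limit of "4" before applying the override.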
8.7.1. Configuring resource requests by using the OpenShift Container Platform web console
Use the following procedure to configure resources by using the OpenShift Container Platform web console.
Procedure
- On the OpenShift Container Platform developer console, click Operators → Installed Operators → Red Hat Quay.
- Click QuayRegistry.
- Click the name of your registry, for example, example-registry.
- Click YAML.
In the spec.components field, you can override the resources of the quay, clair, mirroring, clairpostgres, and postgres components by setting values for the overrides.resources.limits and the overrides.resources.requests fields. For example:

spec:
  components:
    - kind: clair
      managed: true
      overrides:
        resources:
          limits:
            cpu: "5"          # Limiting to 5 CPU (equivalent to 5000m or 5000 millicpu)
            memory: "18Gi"    # Limiting to 18 Gibibytes of memory
          requests:
            cpu: "4"          # Requesting 4 CPU
            memory: "4Gi"     # Requesting 4 Gibibytes of memory
    - kind: postgres
      managed: true
      overrides:
        resources:
          limits: {} 1
          requests:
            cpu: "700m"       # Requesting 700 millicpu or 0.7 CPU
            memory: "4Gi"     # Requesting 4 Gibibytes of memory
    - kind: mirror
      managed: true
      overrides:
        resources:
          limits: 2
          requests:
            cpu: "800m"       # Requesting 800 millicpu or 0.8 CPU
            memory: "1Gi"     # Requesting 1 Gibibyte of memory
    - kind: quay
      managed: true
      overrides:
        resources:
          limits:
            cpu: "4"          # Limiting to 4 CPU
            memory: "10Gi"    # Limiting to 10 Gibibytes of memory
          requests:
            cpu: "4"          # Requesting 4 CPU
            memory: "10Gi"    # Requesting 10 Gibibytes of memory
    - kind: clairpostgres
      managed: true
      overrides:
        resources:
          limits:
            cpu: "800m"       # Limiting to 800 millicpu or 0.8 CPU
            memory: "3Gi"     # Limiting to 3 Gibibytes of memory
          requests: {}
8.7.2. Configuring resource requests by using the CLI
You can re-configure Red Hat Quay to configure resource requests after you have already deployed a registry. This can be done by editing the QuayRegistry YAML file directly and then re-deploying the registry.
Procedure
Edit the QuayRegistry CR by entering the following command:

$ oc edit quayregistry <registry_name> -n <namespace>

Make any desired changes. For example:

- kind: quay
  managed: true
  overrides:
    resources:
      limits: {}
      requests:
        cpu: "0.7"        # Requesting 0.7 CPU (equivalent to 700m or 700 millicpu)
        memory: "512Mi"   # Requesting 512 Mebibytes of memory

- Save the changes.
Chapter 9. Troubleshooting the QuayRegistry CR
You can troubleshoot the QuayRegistry CR to reveal issues during registry deployment by checking the Events page on the OpenShift Container Platform web console, or by using the oc CLI.
9.1. Monitoring and debugging the QuayRegistry CR by using the OpenShift Container Platform web console
Lifecycle observability for a Red Hat Quay registry is reported on the Events page of the registry. If leveraging the OpenShift Container Platform web console, this is the first place to look for any problems related to registry deployment.
Prerequisites
- You have deployed a Red Hat Quay registry.
Procedure
- On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
- On the Red Hat Quay Operator details page, click Quay Registry.
- Click the name of the registry → Events. On this page, events are streamed in real time.
- Optional: To reveal more information about deployment issues, you can click the name of the registry on the Events page to navigate to the QuayRegistry details page. On the QuayRegistry details page, you can view the condition of all QuayRegistry CR components.
9.2. Monitoring and debugging the QuayRegistry CR by using the CLI
The oc CLI tool can be used to troubleshoot problems related to registry deployment. With the oc CLI, you can obtain the following information about the QuayRegistry CR:
- The conditions field, which shows the status of all QuayRegistry components.
- The currentVersion field, which shows the version of Red Hat Quay.
- The registryEndpoint field, which shows the publicly available hostname of the registry.
When troubleshooting deployment issues, you can check the Status field of the QuayRegistry custom resource (CR). This field reveals the health of the components during the deployment and can help you debug various problems with the deployment.
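Programmatically, the same check can be sketched by filtering the conditions list for entries whose status is not "True". The status fragment below is illustrative, and the failing ComponentRouteReady entry is a hypothetical example, not real cluster output.

```python
# Illustrative QuayRegistry status fragment, shaped like the
# `oc get quayregistry -o yaml` output shown later in this chapter.
status = {
    "conditions": [
        {"type": "ComponentHPAReady", "status": "True",
         "reason": "ComponentReady",
         "message": "Horizontal pod autoscaler found"},
        {"type": "ComponentRouteReady", "status": "False",
         "reason": "ComponentNotReady",
         "message": "route not available"},  # hypothetical failure
    ]
}

# Surface only the components that are not healthy -- the same check you
# perform by eye when reading the Status field.
problems = [c for c in status["conditions"] if c["status"] != "True"]
for c in problems:
    print(f'{c["type"]}: {c["reason"]} ({c["message"]})')
```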
Prerequisites
- You have deployed a Red Hat Quay registry by using the web console or the CLI.
Procedure
View the state of deployed components by entering the following command:
$ oc get pods -n quay-enterprise

Example output

NAME                                               READY   STATUS              RESTARTS   AGE
example-registry-clair-app-86554c6b49-ds7bl        0/1     ContainerCreating   0          2s
example-registry-clair-app-86554c6b49-hxp5s        0/1     Running             1          17s
example-registry-clair-postgres-68d8857899-lbc5n   0/1     ContainerCreating   0          17s
example-registry-quay-app-upgrade-h2v7h            0/1     ContainerCreating   0          9s
example-registry-quay-database-66f495c9bc-wqsjf    0/1     ContainerCreating   0          17s
example-registry-quay-mirror-854c88457b-d845g      0/1     Init:0/1            0          2s
example-registry-quay-mirror-854c88457b-fghxv      0/1     Init:0/1            0          17s
example-registry-quay-postgres-init-bktdt          0/1     Terminating         0          17s
example-registry-quay-redis-f9b9d44bf-4htpz        0/1     ContainerCreating   0          17s

Return information about your deployment by entering the following command:
$ oc get quayregistry -n <namespace> -o yaml

Example output

apiVersion: v1
items:
  - apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      annotations:
        ...
    spec:
      components:
        - kind: clair
          managed: true
        - kind: objectstorage
          managed: false
        ...
    status:
      conditions: 1
        - lastTransitionTime: "2025-10-01T18:46:13Z"
          lastUpdateTime: "2025-10-07T13:12:54Z"
          message: Horizontal pod autoscaler found
          reason: ComponentReady
          status: "True"
          type: ComponentHPAReady
        ...
      currentVersion: v3.15.2 2
      lastUpdated: 2025-10-07 13:12:54.48811705 +0000 UTC
      registryEndpoint: https://example-registry-quay-cluster-new.gcp.quaydev.org 3