Configure Red Hat Quay
Customizing Red Hat Quay using configuration options
Chapter 1. Getting started with Red Hat Quay configuration
Red Hat Quay is a secure artifact registry that can be deployed as a self-managed installation or through the Red Hat Quay on OpenShift Container Platform Operator. Each deployment type offers a different approach to configuration and management, but both rely on the same set of configuration parameters to control registry behavior. Common configuration parameters allow administrators to define how their registry interacts with users, storage backends, authentication providers, security policies, and other integrated services.
There are two ways to configure Red Hat Quay, depending on your deployment type:
- On prem Red Hat Quay: With an on prem Red Hat Quay deployment, a registry administrator provides a config.yaml file that includes all required parameters. For this deployment type, the registry cannot start without a valid configuration.
- Red Hat Quay Operator: By default, the Red Hat Quay Operator automatically configures your Red Hat Quay deployment by generating the minimal required values and deploying the necessary components for you. After the initial deployment, you can customize your registry’s behavior by modifying the QuayRegistry custom resource, or by using the OpenShift Container Platform web console.
This guide offers an overview of the following configuration concepts:
- How to retrieve, inspect, and modify your current configuration for both on prem and Operator-based Red Hat Quay deployment types.
- The minimal configuration fields required for startup.
- An overview of all available Red Hat Quay configuration fields and YAML examples for those fields.
Chapter 2. Red Hat Quay configuration disclaimer
In both self-managed and Operator-based deployments of Red Hat Quay, certain features and configuration parameters are not actively used or implemented. As a result, feature flags that enable or disable specific functionality, and configuration parameters that are not explicitly documented or supported by Red Hat Support, should only be modified with caution.
Unused or undocumented features might not be fully tested, supported, or compatible with Red Hat Quay. Modifying these settings could result in unexpected behavior or disruptions to your deployment.
Chapter 3. Understanding the Red Hat Quay configuration file
Whether deployed on premise or by using the Red Hat Quay on OpenShift Container Platform Operator, the registry’s behavior is defined by the config.yaml file. The config.yaml file must include all required configuration fields for the registry to start. Red Hat Quay administrators can also define optional parameters that customize their registry, such as authentication parameters, storage parameters, proxy cache parameters, and so on.
The config.yaml file must be written using valid YAML ("YAML Ain’t Markup Language") syntax; Red Hat Quay cannot start if the file contains formatting errors or is missing required fields. Regardless of deployment type, whether on premise or Red Hat Quay on OpenShift Container Platform configured by the Operator, the YAML principles stay the same, even though the required configuration fields differ slightly.
The following section outlines basic YAML syntax relevant to creating and editing the Red Hat Quay config.yaml file. For a more complete overview of YAML, see What is YAML.
3.1. Key-value pairs
Configuration fields within a config.yaml file are written as key-value pairs in the following form:
# ...
EXAMPLE_FIELD_NAME: <value>
# ...
Each line within a config.yaml file contains a field name, followed by a colon, a space, and then a value that matches the key. The following example shows how the AUTHENTICATION_TYPE configuration field must be formatted in your config.yaml file:
AUTHENTICATION_TYPE: Database
# ...
- AUTHENTICATION_TYPE: The authentication engine to use for credential authentication.
In the previous example, AUTHENTICATION_TYPE is set to Database; however, different deployment types might require a different value. The following example shows how your config.yaml file might look if LDAP (Lightweight Directory Access Protocol) was used for authentication:
AUTHENTICATION_TYPE: LDAP
# ...
3.2. Indentation and nesting
Many Red Hat Quay configuration fields require indentation to indicate nested structures. Indentation must be done using literal space characters; tab characters are not allowed by design. Indentation must be consistent across the file. The following YAML snippet shows how the BUILDLOGS_REDIS field uses indentation for the required host, password, and port fields:
# ...
BUILDLOGS_REDIS:
host: quay-server.example.com
password: example-password
port: 6379
# ...
3.3. Lists
In some cases, a Red Hat Quay configuration field relies on a list to define certain values. List items are formatted by using a hyphen (-) followed by a space. The following example shows how the SUPER_USERS configuration field uses a list to define superusers:
# ...
SUPER_USERS:
- quayadmin
# ...
3.4. Quoted values
Some Red Hat Quay configuration fields require quotation marks ("") to properly define a value; in most cases, quotation marks are not required. The following example shows how the FOOTER_LINKS configuration field uses quotation marks to define the TERMS_OF_SERVICE_URL, PRIVACY_POLICY_URL, SECURITY_URL, and ABOUT_URL fields:
FOOTER_LINKS:
"TERMS_OF_SERVICE_URL": "https://www.index.hr"
"PRIVACY_POLICY_URL": "https://www.jutarnji.hr"
"SECURITY_URL": "https://www.bug.hr"
"ABOUT_URL": "https://www.zagreb.hr"
3.5. Comments
The hash symbol (#) can be placed at the beginning of a line to add a comment or to temporarily disable a configuration field. Comments are ignored by the configuration parser and do not affect the behavior of the registry. For example:
# ...
# FEATURE_UI_V2: true
# ...
In this example, the FEATURE_UI_V2 configuration field is ignored by the parser, meaning that the option to use the v2 UI remains disabled. Commenting out a required configuration field prevents the registry from starting.
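The syntax rules above can be checked before handing a file to the registry. The following Python sketch is illustrative only, not an official Red Hat Quay tool; it flags tab indentation, which YAML forbids, and top-level lines that are not comments, list items, or KEY: value pairs:

```python
import re

def lint_config(text: str) -> list:
    """Return (line_number, problem) tuples for obvious syntax issues."""
    problems = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        indent = line[:len(line) - len(line.lstrip())]
        if "\t" in indent:
            problems.append((lineno, "tab character used for indentation"))
            continue
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # blank lines and comments are ignored by the parser
        if stripped.startswith("- "):
            continue  # list item, for example an entry under SUPER_USERS
        if not re.match(r"[^:\s][^:]*:(\s|$)", stripped):
            problems.append((lineno, "expected 'KEY: value' pair"))
    return problems

sample = "AUTHENTICATION_TYPE: Database\n\tport: 6379\nSUPER_USERS:\n- quayadmin\n"
print(lint_config(sample))  # [(2, 'tab character used for indentation')]
```

A real deployment should still rely on the registry's own startup validation; this sketch only catches the most common hand-editing mistakes.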
Chapter 4. On prem Red Hat Quay configuration overview
For on premise deployments of Red Hat Quay, the config.yaml file that is managed by the administrator is mounted into the container at startup and read by Red Hat Quay during initialization. The config.yaml file is not dynamically reloaded, meaning that any changes made to the file require restarting the registry container to take effect.
This chapter provides an overview of the following concepts:
- The minimal required configuration fields.
- How to edit and manage your configuration after deployment.
This section applies specifically to on premise Red Hat Quay deployment types. For information about configuring Red Hat Quay on OpenShift Container Platform, see "Red Hat Quay on OpenShift Container Platform configuration overview".
4.1. Required configuration fields
The following configuration fields are required for an on premise deployment of Red Hat Quay:
Field | Type | Description
AUTHENTICATION_TYPE | String | The authentication engine to use for credential authentication.
BUILDLOGS_REDIS | Object | Redis connection details for build logs caching.
.host | String | The hostname at which Redis is accessible.
.port | Number | The port at which Redis is accessible.
.password | String | The password to connect to the Redis instance.
DATABASE_SECRET_KEY | String | Key used to encrypt sensitive fields within the database. This value should never be changed after it is set; otherwise, all reliant fields, for example, repository mirror username and password configurations, are invalidated.
DB_URI | String | The URI for accessing the database, including any credentials.
DISTRIBUTED_STORAGE_CONFIG | Object | Configuration for the storage engines to use in Red Hat Quay. Each key represents a unique identifier for a storage engine, and the value is a tuple of (key, value) pairs describing the storage engine parameters.
SECRET_KEY | String | Key used to encrypt the session cookie and the CSRF token needed for correct interpretation of the user session. The value should not be changed after it is set, and it should be persistent across all Red Hat Quay instances. If it is not persistent across all instances, login failures and other errors related to session persistence might occur.
SERVER_HOSTNAME | String | The URL at which Red Hat Quay is accessible, without the scheme.
SETUP_COMPLETE | Boolean | An artifact left over from earlier versions of the software; currently, it must be specified with a value of true.
USER_EVENTS_REDIS | Object | Redis connection details for user event handling.
.host | String | The hostname at which Redis is accessible.
.port | Number | The port at which Redis is accessible.
.password | String | The password to connect to the Redis instance.
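A quick way to catch a missing required field before startup is to compare a parsed configuration against the list above. The following Python snippet is an illustrative sketch, not part of Red Hat Quay:

```python
# Illustrative sketch: report which required top-level fields are
# missing from a parsed on-premise Red Hat Quay configuration.
REQUIRED_FIELDS = {
    "AUTHENTICATION_TYPE", "BUILDLOGS_REDIS", "DATABASE_SECRET_KEY",
    "DB_URI", "DISTRIBUTED_STORAGE_CONFIG", "SECRET_KEY",
    "SERVER_HOSTNAME", "SETUP_COMPLETE", "USER_EVENTS_REDIS",
}

def missing_fields(config: dict) -> list:
    """Return the required fields that are absent, sorted for stable output."""
    return sorted(REQUIRED_FIELDS - config.keys())

config = {"AUTHENTICATION_TYPE": "Database", "SETUP_COMPLETE": True}
print(missing_fields(config))
```

This only checks for the presence of top-level keys; it does not validate nested values such as the Redis host, password, and port fields.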
4.1.1. Minimal configuration file examples
This section provides two examples of a minimal configuration file: one example that uses local storage, and another example that uses cloud-based storage with Google Cloud Platform.
4.1.1.1. Minimal configuration using local storage
The following example shows a sample minimal configuration file that uses local storage for images.
Only use local storage when deploying a registry for proof-of-concept purposes. It is not intended for production use. When using local storage, you must map a local directory to the datastorage path in the container when starting the registry. For more information, see Proof of Concept - Deploying Red Hat Quay.
Local storage minimal configuration
AUTHENTICATION_TYPE: Database
BUILDLOGS_REDIS:
host: <quay-server.example.com>
password: <password>
port: <port>
DATABASE_SECRET_KEY: <example_database_secret_key>
DB_URI: postgresql://<username>:<password>@<registry_url>.com:<port>/quay
DISTRIBUTED_STORAGE_CONFIG:
default:
- LocalStorage
- storage_path: /datastorage/registry
SECRET_KEY: <example_secret_key>
SERVER_HOSTNAME: <server_host_name>
SETUP_COMPLETE: true
USER_EVENTS_REDIS:
host: <redis_events_url>
password: <password>
port: <port>
4.1.1.2. Minimal configuration using cloud-based storage
In most production environments, Red Hat Quay administrators use cloud or enterprise-grade storage backends provided by supported vendors. The following example shows you how to configure Red Hat Quay to use Google Cloud Platform for image storage. For a complete list of supported storage providers, see Image storage.
When using a cloud or enterprise-grade storage backend, additional configuration, such as mapping the registry to a local directory, is not required.
Cloud storage minimal configuration
AUTHENTICATION_TYPE: Database
BUILDLOGS_REDIS:
host: <quay-server.example.com>
password: <password>
port: <port>
DATABASE_SECRET_KEY: <example_database_secret_key>
DB_URI: postgresql://<username>:<password>@<registry_url>.com:<port>/quay
DISTRIBUTED_STORAGE_CONFIG:
default:
- GoogleCloudStorage
- access_key: <access_key>
bucket_name: <bucket_name>
secret_key: <secret_key>
storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- default
SECRET_KEY: <example_secret_key>
SERVER_HOSTNAME: <server_host_name>
SETUP_COMPLETE: true
USER_EVENTS_REDIS:
host: <redis_events_url>
password: <password>
port: <port>
4.2. Modifying your configuration file after deployment
After deploying a Red Hat Quay registry with an initial config.yaml file, Red Hat Quay administrators can update the configuration file to enable or disable features as needed. This flexibility allows administrators to tailor the registry to their specific environment or to meet certain security policies.
Because the config.yaml file is not dynamically reloaded, you must restart the Red Hat Quay container after making changes for them to take effect.
The following procedure shows you how to retrieve the config.yaml file from the quay-registry container, how to enable a new feature by adding that feature’s configuration field to the file, and how to restart the quay-registry container by using Podman.
Prerequisites
- You have deployed Red Hat Quay.
- You are a registry administrator.
Procedure
If you have access to the config.yaml file:

- Navigate to the directory that is storing the config.yaml file. For example:
$ cd /home/<username>/<quay-deployment-directory>/config
- Make changes to the config.yaml file by adding a new feature flag. The following example enables the v2 UI:

# ...
FEATURE_UI_V2: true
# ...
- Save the changes made to the config.yaml file.
- Restart the quay-registry container by entering the following command:
$ podman restart <container_id>
If you do not have access to the config.yaml file and need to create a new file while keeping the same credentials:

- Retrieve the container ID of your quay-registry container by entering the following command:
$ podman ps
Example output
CONTAINER ID  IMAGE                                                                     COMMAND         CREATED       STATUS       PORTS                                                                       NAMES
5f2297ef53ff  registry.redhat.io/rhel8/postgresql-13:1-109                              run-postgresql  20 hours ago  Up 20 hours  0.0.0.0:5432->5432/tcp                                                      postgresql-quay
3b40fb83bead  registry.redhat.io/rhel8/redis-5:1                                        run-redis       20 hours ago  Up 20 hours  0.0.0.0:6379->6379/tcp                                                      redis
0b4b8fbfca6d  registry-proxy.engineering.redhat.com/rh-osbs/quay-quay-rhel8:v3.14.0-14  registry        20 hours ago  Up 20 hours  0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp, 7443/tcp, 9091/tcp, 55443/tcp  quay
- Copy the config.yaml file from the quay-registry container to a local directory by entering the following command:
$ podman cp <container_id>:/quay-registry/conf/stack/config.yaml ./config.yaml
- Make changes to the config.yaml file by adding a new feature flag. The following example sets AUTHENTICATION_TYPE to LDAP:

# ...
AUTHENTICATION_TYPE: LDAP
# ...
- Re-deploy the registry, mounting the config.yaml file into the quay-registry configuration volume, by entering the following command:
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
  --name=quay \
  -v /home/<username>/<quay-deployment-directory>/config:/conf/stack:Z \
  registry.redhat.io/quay/quay-rhel8:v3.14.0
4.3. Troubleshooting the configuration file
Failure to add all of the required configuration fields, or to provide the proper information for some parameters, might result in the quay-registry container failing to deploy. Use the following procedure to view and troubleshoot a failed on premise deployment.
Prerequisites
- You have created a minimal configuration file.
Procedure
- Attempt to deploy the quay-registry container by entering the following command. Note that this command uses the -it option, which shows you debugging information:
$ podman run -it --rm -p 80:8080 -p 443:8443 --name=quay -v /home/<username>/<quay-deployment-directory>/config:/conf/stack:Z -v /home/<username>/<quay-deployment-directory>/storage:/datastorage:Z 33f1c3dc86be
Example output
---
+------------------------+-------+--------+
| LDAP                   |   -   |   X    |
+------------------------+-------+--------+
| LDAP_ADMIN_DN is required      |   X   |
+----------------------------------------+
| LDAP_ADMIN_PSSWD is required   |   X   |
+----------------------------------------+
| . . . Connection refused       |   X   |
+----------------------------------------+
---
In this example, the quay-registry container failed to deploy because improper LDAP credentials were provided.
Chapter 5. Red Hat Quay on OpenShift Container Platform configuration overview
When deploying Red Hat Quay by using the Operator on OpenShift Container Platform, configuration is managed declaratively through the QuayRegistry custom resource (CR). This model allows cluster administrators to define the desired state of the Red Hat Quay deployment, including which components are enabled, storage backends, SSL/TLS configuration, and other core features.
After deploying Red Hat Quay on OpenShift Container Platform with the Operator, administrators can further customize their registry by updating the config.yaml file and referencing it in a Kubernetes Secret. This configuration bundle is linked to the QuayRegistry CR through the configBundleSecret field.
The Operator reconciles the state defined in the QuayRegistry CR and its associated configuration, automatically deploying or updating registry components as needed.
This guide covers the basic concepts behind the QuayRegistry CR and how to modify your config.yaml file on Red Hat Quay on OpenShift Container Platform deployments. More advanced topics, such as using unmanaged components within the QuayRegistry CR, can be found in Deploying Red Hat Quay Operator on OpenShift Container Platform.
5.1. Understanding the QuayRegistry CR
By default, the QuayRegistry CR contains the following key fields:
- configBundleSecret: The name of a Kubernetes Secret containing the config.yaml file, which defines additional configuration parameters.
- name: The name of your Red Hat Quay registry.
- namespace: The namespace, or project, in which the registry was created.
- spec.components: A list of components that the Operator automatically manages. Each spec.components entry contains two fields:
  - kind: The name of the component.
  - managed: A boolean that indicates whether the component lifecycle is handled by the Red Hat Quay Operator. Setting managed: true for a component in the QuayRegistry CR means that the Operator manages the component.
All QuayRegistry components are automatically managed and auto-filled upon reconciliation for visibility unless specified otherwise. The following sections highlight the major QuayRegistry components and provide an example YAML file that shows the default settings.
5.2. Managed components
By default, the Operator handles all required configuration and installation needed for Red Hat Quay’s managed components.
If the opinionated deployment performed by the Red Hat Quay Operator is unsuitable for your environment, you can provide the Red Hat Quay Operator with unmanaged resources, or overrides, as described in Using unmanaged components.
Field | Type | Description
quay | Boolean | Holds overrides for the deployment of Red Hat Quay on OpenShift Container Platform, such as environment variables and number of replicas. This component cannot be set to unmanaged (managed: false).
postgres | Boolean | Used for storing registry metadata. Currently, PostgreSQL version 13 is used.
clair | Boolean | Provides image vulnerability scanning.
redis | Boolean | Stores live builder logs and the locking mechanism that is required for garbage collection.
horizontalpodautoscaler | Boolean | Adjusts the number of Quay pods depending on load.
objectstorage | Boolean | Stores image layer blobs.
route | Boolean | Provides an external entrypoint to the Red Hat Quay registry from outside of OpenShift Container Platform.
mirror | Boolean | Configures repository mirror workers to support optional repository mirroring.
monitoring | Boolean | Features include a Grafana dashboard, access to individual metrics, and notifications for frequently restarting pods.
tls | Boolean | Configures whether SSL/TLS is automatically handled.
clairpostgres | Boolean | Configures a managed Clair database. This is a separate database from the PostgreSQL database that is used to deploy Red Hat Quay.
The following example shows the default configuration for the QuayRegistry custom resource provided by the Red Hat Quay Operator. It is available on the OpenShift Container Platform web console.
Example QuayRegistry custom resource
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
name: <example_registry>
namespace: <namespace>
spec:
configBundleSecret: config-bundle-secret
components:
- kind: quay
managed: true
- kind: postgres
managed: true
- kind: clair
managed: true
- kind: redis
managed: true
- kind: horizontalpodautoscaler
managed: true
- kind: objectstorage
managed: true
- kind: route
managed: true
- kind: mirror
managed: true
- kind: monitoring
managed: true
- kind: tls
managed: true
- kind: clairpostgres
managed: true
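To see at a glance which components the Operator handles, you can inspect the spec.components list. The following Python sketch is illustrative only; components are represented as plain dictionaries rather than live cluster objects:

```python
# Illustrative sketch: split a QuayRegistry spec.components list into
# the kinds managed by the Operator and those left to the administrator.
def split_components(components: list) -> tuple:
    managed = [c["kind"] for c in components if c["managed"]]
    unmanaged = [c["kind"] for c in components if not c["managed"]]
    return managed, unmanaged

components = [
    {"kind": "quay", "managed": True},
    {"kind": "objectstorage", "managed": False},
    {"kind": "tls", "managed": True},
]
print(split_components(components))  # (['quay', 'tls'], ['objectstorage'])
```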
5.3. Modifying the QuayRegistry CR after deployment
After you have installed the Red Hat Quay Operator and created an initial deployment, you can modify the QuayRegistry custom resource (CR) to customize or reconfigure aspects of the Red Hat Quay environment.
Red Hat Quay administrators might modify the QuayRegistry CR for the following reasons:
- To change component management: Switch components from managed: true to managed: false in order to bring your own infrastructure. For example, you might set kind: objectstorage to unmanaged to integrate external object storage platforms such as Google Cloud Storage or Nutanix.
- To apply custom configuration: Update or replace the configBundleSecret to apply new configuration settings, for example, authentication providers, external SSL/TLS settings, or feature flags.
- To enable or disable features: Toggle features such as repository mirroring, Clair scanning, or horizontal pod autoscaling by modifying the spec.components list.
- To scale the deployment: Adjust environment variables or replica counts for the Quay application.
- To integrate with external services: Provide configuration for external PostgreSQL, Redis, or Clair databases, and update endpoints or credentials.
5.3.1. Modifying the QuayRegistry CR by using the OpenShift Container Platform web console
The QuayRegistry CR can be modified by using the OpenShift Container Platform web console. This allows you to set managed components to unmanaged (managed: false) and use your own infrastructure.
Prerequisites
- You are logged into OpenShift Container Platform as a user with admin privileges.
- You have installed the Red Hat Quay Operator.
Procedure
- On the OpenShift Container Platform web console, click Operators → Installed Operators.
- Click Red Hat Quay.
- Click Quay Registry.
- Click the name of your Red Hat Quay registry, for example, example-registry.
- Click YAML.
- Adjust the managed field of the desired component to either true or false.
- Click Save.

Note: Setting a component to unmanaged (managed: false) might require additional configuration. For more information about setting unmanaged components in the QuayRegistry CR, see Using unmanaged components for dependencies.
5.3.2. Modifying the QuayRegistry CR by using the CLI
The QuayRegistry CR can be modified by using the CLI. This allows you to set managed components to unmanaged (managed: false) and use your own infrastructure.
Prerequisites
- You are logged in to your OpenShift Container Platform cluster as a user with admin privileges.
Procedure
- Edit the QuayRegistry CR by entering the following command:
$ oc edit quayregistry <registry_name> -n <namespace>
- Make the desired changes to the QuayRegistry CR.
- Save the changes.

Note: Setting a component to unmanaged (managed: false) might require additional configuration. For more information about setting unmanaged components in the QuayRegistry CR, see Using unmanaged components for dependencies.
5.3.3. Understanding the configBundleSecret
The spec.configBundleSecret field is an optional reference to the name of a Secret in the same namespace as the QuayRegistry resource. This Secret must contain a config.yaml key/value pair, where the value is a Red Hat Quay configuration file.
The configBundleSecret stores the config.yaml file. Red Hat Quay administrators can define the following settings through the config.yaml file:
- Authentication backends (for example, OIDC, LDAP)
- External TLS termination settings
- Repository creation policies
- Feature flags
- Notification settings
Red Hat Quay administrators might update this secret for the following reasons:
- Enable a new authentication method
- Add custom SSL/TLS certificates
- Enable features
- Modify security scanning settings
If this field is omitted, the Red Hat Quay Operator automatically generates a configuration secret based on default values and managed component settings. If the field is provided, the contents of the config.yaml file are used as the base configuration and are merged with values from managed components to form the final configuration, which is mounted into the quay application pods.
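The merge described above can be pictured as a simple layering of dictionaries. The following Python sketch is an illustration, not Operator code; in particular, it assumes for illustration that managed-component values take precedence over the user-supplied base:

```python
# Illustrative sketch of the configuration merge: the user-provided
# config.yaml acts as the base, and values derived from managed
# components are layered on top (precedence assumed for illustration).
def merge_config(base: dict, managed: dict) -> dict:
    final = dict(base)     # copy the user-provided bundle
    final.update(managed)  # apply managed-component values
    return final

base = {"AUTHENTICATION_TYPE": "LDAP", "REGISTRY_TITLE": "My Quay"}
managed = {"SERVER_HOSTNAME": "example-registry.apps.example.com"}
print(merge_config(base, managed))
```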
How the QuayRegistry CR is configured determines which fields must be included in the configBundleSecret’s config.yaml file for Red Hat Quay on OpenShift Container Platform. The following example shows a default config.yaml file when all components are managed by the Operator. Note that this example looks different depending on whether components are managed or unmanaged (managed: false).
Example YAML with all components managed by the Operator
ALLOW_PULLS_WITHOUT_STRICT_LOGGING: false
AUTHENTICATION_TYPE: Database
DEFAULT_TAG_EXPIRATION: 2w
ENTERPRISE_LOGO_URL: /static/img/RH_Logo_Quay_Black_UX-horizontal.svg
FEATURE_BUILD_SUPPORT: false
FEATURE_DIRECT_LOGIN: true
FEATURE_MAILING: false
REGISTRY_TITLE: Red Hat Quay
REGISTRY_TITLE_SHORT: Red Hat Quay
SETUP_COMPLETE: true
TAG_EXPIRATION_OPTIONS:
- 2w
TEAM_RESYNC_STALE_TIME: 60m
TESTING: false
In some cases, you might opt to manage certain components yourself, for example, object storage. In that scenario, you would modify the QuayRegistry CR as follows:
Unmanaged objectstorage component
# ...
- kind: objectstorage
managed: false
# ...
If you are managing your own components, your deployment must be configured to include the necessary information or resources for those components. For example, if the objectstorage component is set to managed: false, you must include the relevant information for your storage provider in the config.yaml file. The following example shows a distributed storage configuration that uses Google Cloud Storage:
Required information when objectstorage is unmanaged
# ...
DISTRIBUTED_STORAGE_CONFIG:
default:
- GoogleCloudStorage
- access_key: <access_key>
bucket_name: <bucket_name>
secret_key: <secret_key>
storage_path: /datastorage/registry
# ...
Similarly, if you are managing the horizontalpodautoscaler component, you must create an accompanying HorizontalPodAutoscaler custom resource.
5.3.3.1. Modifying the configuration file by using the OpenShift Container Platform web console
Use the following procedure to modify the config.yaml file that is stored by the configBundleSecret by using the OpenShift Container Platform web console.
Prerequisites
- You are logged in to the OpenShift Container Platform cluster as a user with admin privileges.
Procedure
- On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
- Click Quay Registry.
- Click the name of your Red Hat Quay registry, for example, example-registry.
- On the QuayRegistry details page, click the name of your Config Bundle Secret, for example, example-registry-config-bundle.
- Click Actions → Edit Secret.
- In the Value box, add the desired key/value pair. For example, to add a superuser to your Red Hat Quay on OpenShift Container Platform deployment, add the following reference:

SUPER_USERS:
- quayadmin
- Click Save.
Verification
Verify that the changes have been accepted:
- On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
- Click Quay Registry.
- Click the name of your Red Hat Quay registry, for example, example-registry.
- Click Events. If successful, the following message is displayed:
All objects created/updated successfully
You must base64 encode any updated config.yaml file before placing it in the Secret. Ensure that the Secret name matches the value specified in spec.configBundleSecret. After the Secret is updated, the Operator detects the change and automatically rolls out updates to the Red Hat Quay pods.
For detailed steps, see "Updating configuration secrets through the Red Hat Quay UI."
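The base64 round trip mentioned in the note above can be checked locally before updating the Secret. This is a sketch that assumes GNU coreutils base64 (on macOS, omit -w 0):

```shell
# Write a sample config fragment, encode it as the Secret expects
# (single-line base64), then decode and compare to confirm a clean round trip.
printf 'SUPER_USERS:\n  - quayadmin\n' > config.yaml
base64 -w 0 config.yaml > config.b64
base64 --decode config.b64 > roundtrip.yaml
diff config.yaml roundtrip.yaml && echo "round-trip OK"
```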
5.3.3.2. Modifying the configuration file by using the CLI
You can modify the config.yaml
file that is stored by the configBundleSecret
by downloading the existing configuration using the CLI. After making changes, you can re-upload the configBundleSecret
resource to make changes to the Red Hat Quay registry.
Modifying the config.yaml
file that is stored by the configBundleSecret
resource is a multi-step procedure that requires base64 decoding the existing configuration file and then uploading the changes. For most cases, using the OpenShift Container Platform web console to make changes to the config.yaml
file is simpler.
Prerequisites
- You are logged in to the OpenShift Container Platform cluster as a user with admin privileges.
Procedure
Describe the QuayRegistry resource by entering the following command:

$ oc describe quayregistry -n <quay_namespace>
Example output

# ...
Config Bundle Secret: example-registry-config-bundle-v123x
# ...
Obtain the secret data by entering the following command:
$ oc get secret -n <quay_namespace> <example-registry-config-bundle-v123x> -o jsonpath='{.data}'
Example output
{ "config.yaml": "RkVBVFVSRV9VU0 ... MDAwMAo=" }
Decode the data into a YAML file in the current directory by redirecting the output to config.yaml. For example:

$ echo 'RkVBVFVSRV9VU0 ... MDAwMAo=' | base64 --decode >> config.yaml
Make the desired changes to your config.yaml file, and then save it.
Create a new configBundleSecret YAML file by entering the following command:

$ touch <new_configBundleSecret_name>.yaml
Create the new configBundleSecret resource, passing in the config.yaml file, by entering the following command:

$ oc -n <namespace> create secret generic <secret_name> \
    --from-file=config.yaml=</path/to/config.yaml> \
    --dry-run=client -o yaml > <new_configBundleSecret_name>.yaml

Where </path/to/config.yaml> is your base64 decoded config.yaml file.
Create the configBundleSecret resource by entering the following command:

$ oc create -n <namespace> -f <new_configBundleSecret_name>.yaml
Example output
secret/config-bundle created
Update the QuayRegistry resource to reference the new configBundleSecret object by entering the following command:

$ oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{"spec":{"configBundleSecret":"<new_configBundleSecret_name>"}}'
Example output
quayregistry.quay.redhat.com/example-registry patched
Verification
Verify that the QuayRegistry CR has been updated with the new configBundleSecret by entering the following command:

$ oc describe quayregistry -n <quay_namespace>
Example output
# ...
Config Bundle Secret: <new_configBundleSecret_name>
# ...
After patching the registry, the Red Hat Quay Operator automatically reconciles the changes.
Chapter 6. New configuration fields with Red Hat Quay 3.14
The following sections detail new configuration fields added in Red Hat Quay 3.14.
6.1. Model card rendering configuration fields
The following configuration fields have been added to support model card rendering on the v2 UI.
Field | Type | Description |
---|---|---|
FEATURE_UI_MODELCARD | Boolean | Enables the Model card image tab in the UI. |
UI_MODELCARD_ARTIFACT_TYPE | String | Defines the model card artifact type. |
UI_MODELCARD_ANNOTATION | Object | This optional field defines the annotation of the model card stored in an OCI image. |
UI_MODELCARD_LAYER_ANNOTATION | Object | This optional field defines the layer annotation of the model card stored in an OCI image. |
Example model card YAML
FEATURE_UI_MODELCARD: true
UI_MODELCARD_ARTIFACT_TYPE: application/x-mlmodel
UI_MODELCARD_ANNOTATION:
org.opencontainers.image.description: "Model card metadata"
UI_MODELCARD_LAYER_ANNOTATION:
org.opencontainers.image.title: README.md
1. Enables the Model Card image tab in the UI.
2. Defines the model card artifact type. In this example, the artifact type is application/x-mlmodel.
3. Optional. If an image does not have an artifactType defined, this field is checked at the manifest level. If a matching annotation is found, the system then searches for a layer with an annotation matching UI_MODELCARD_LAYER_ANNOTATION.
4. Optional. If an image has an artifactType defined and multiple layers, this field is used to locate the specific layer containing the model card.
Chapter 7. Required configuration fields
Red Hat Quay requires a minimal set of configuration fields to operate correctly. These fields define essential aspects of your deployment, such as how the registry is accessed, where image content is stored, how metadata is persisted, and how background services such as logs are managed.
The required configuration fields fall into four main categories:
- General required configuration fields. Core fields such as the authentication type, URL scheme, server hostname, database secret key, and secret key are covered in this section.
- Database configuration fields. Red Hat Quay requires a PostgreSQL relational database to store metadata about repositories, users, teams, and tags.
- Object storage configuration fields. Object storage fields define the backend where container image blobs and manifests are stored. Your storage backend must be supported by Red Hat Quay, such as Ceph/RadosGW, AWS S3 storage, Google Cloud Storage, Nutanix, and so on.
- Redis configuration fields. Redis is used as a backend for data such as push logs, user notifications, and other operations.
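Taken together, a minimal config.yaml skeleton covering these categories might look like the following sketch. All values are placeholders; the hostnames, credentials, and storage engine must match your environment:

```yaml
# Illustrative minimal config.yaml; every value below is a placeholder.
AUTHENTICATION_TYPE: Database
PREFERRED_URL_SCHEME: https
SERVER_HOSTNAME: quay-server.example.com
SECRET_KEY: <secret_key_value>
DATABASE_SECRET_KEY: <database_secret_key_value>
SETUP_COMPLETE: true
DB_URI: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay
BUILDLOGS_REDIS:
  host: quay-server.example.com
  port: 6379
USER_EVENTS_REDIS:
  host: quay-server.example.com
  port: 6379
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - LocalStorage
    - storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
```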
7.1. General required configuration fields
The following table describes the required configuration fields for a Red Hat Quay deployment:
Field | Type | Description |
---|---|---|
AUTHENTICATION_TYPE | String | The authentication engine to use for credential authentication. |
PREFERRED_URL_SCHEME | String | The URL scheme to use when accessing Red Hat Quay. |
SERVER_HOSTNAME | String | The URL at which Red Hat Quay is accessible, without the scheme. |
DATABASE_SECRET_KEY | String | Key used to encrypt sensitive fields within the database. This value should never be changed once set, otherwise all reliant fields, for example, repository mirror username and password configurations, are invalidated. |
SECRET_KEY | String | Key used to encrypt the session cookie and the CSRF token needed for correct interpretation of the user session. The value should not be changed when set. Should be persistent across all Red Hat Quay instances. If not persistent across all instances, login failures and other errors related to session persistence might occur. |
SETUP_COMPLETE | Boolean | This is an artifact left over from earlier versions of the software and currently it must be specified with a value of true. |
General required fields example
AUTHENTICATION_TYPE: Database
PREFERRED_URL_SCHEME: https
SERVER_HOSTNAME: <quay-server.example.com>
SECRET_KEY: <secret_key_value>
DATABASE_SECRET_KEY: <database_secret_key_value>
SETUP_COMPLETE: true
# ...
7.2. Database configuration fields
This section describes the database configuration fields available for Red Hat Quay deployments.
7.2.1. Database URI
With Red Hat Quay, connection to the database is configured by using the required DB_URI
field.
The following table describes the DB_URI
configuration field:
Field | Type | Description |
---|---|---|
DB_URI | String | The URI for accessing the database, including any credentials. Example: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay |
Database URI example
# ...
DB_URI: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay
# ...
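Because DB_URI is parsed as a URI, credentials that contain reserved characters generally need to be percent-encoded. The following is an illustrative sketch, not a field from the examples above: a raw password of quay@pass! is written with @ as %40 and ! as %21:

```yaml
# Hypothetical example: the raw password quay@pass! is percent-encoded
# so that the URI parses correctly (@ -> %40, ! -> %21).
DB_URI: postgresql://quayuser:quay%40pass%21@quay-server.example.com:5432/quay
```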
7.2.2. Database connection arguments
Optional connection arguments are configured by the DB_CONNECTION_ARGS
parameter. Some of the key-value pairs defined under DB_CONNECTION_ARGS
are generic, while others are database specific.
Field | Type | Description |
---|---|---|
DB_CONNECTION_ARGS | Object | Optional connection arguments for the database, such as timeouts and SSL/TLS. |
.autorollback | Boolean | Whether to use auto-rollback connections. |
.threadlocals | Boolean | Whether to use thread-local connections. |
Database connection arguments example
# ...
DB_URI: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay
DB_CONNECTION_ARGS:
  autorollback: true
  threadlocals: true
# ...
7.2.2.1. SSL/TLS connection arguments
With SSL/TLS, configuration depends on the database you are deploying.
The sslmode option determines whether, or with what priority, a secure SSL/TLS TCP/IP connection is negotiated with the server. There are six modes:
Mode | Description |
---|---|
disable | Your configuration only tries non-SSL/TLS connections. |
allow | Your configuration first tries a non-SSL/TLS connection. Upon failure, tries an SSL/TLS connection. |
prefer | Your configuration first tries an SSL/TLS connection. Upon failure, tries a non-SSL/TLS connection. |
require | Your configuration only tries an SSL/TLS connection. If a root CA file is present, it verifies the certificate in the same way as if verify-ca was specified. |
verify-ca | Your configuration only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted certificate authority (CA). |
verify-full | Your configuration only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted CA and that the requested server hostname matches that in the certificate. |
For more information on the valid arguments for PostgreSQL, see Database Connection Control Functions.
PostgreSQL SSL/TLS configuration
# ...
DB_CONNECTION_ARGS:
  sslmode: <value>
  sslrootcert: path/to/.postgresql/root.crt
# ...
7.3. Storage object configuration fields
Storage fields define the backend where container image blobs and manifests are stored. The following storage providers are supported by Red Hat Quay:
- Amazon Web Services (AWS) S3
- AWS STS S3 (Security Token Service)
- AWS CloudFront (CloudFront S3Storage)
- Google Cloud Storage
- Microsoft Azure Blob Storage
- Swift Storage
- Nutanix Object Storage
- IBM Cloud Object Storage
- NetApp ONTAP S3 Object Storage
- Hitachi Content Platform (HCP) Object Storage
Many of the supported storage providers use the RadosGWStorage
driver due to their S3-compatible APIs.
7.3.1. Storage configuration fields
The following table describes the storage configuration fields for Red Hat Quay. These fields are required when configuring backend storage.
Field | Type | Description |
---|---|---|
DISTRIBUTED_STORAGE_CONFIG | Object | Configuration for storage engine(s) to use in Red Hat Quay. Each key represents a unique identifier for a storage engine. The value consists of a tuple of (key, value) forming an object describing the storage engine parameters. |
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS | Array of string | The list of storage engine(s) (by ID in DISTRIBUTED_STORAGE_CONFIG) whose images should be fully replicated, by default, to all other storage engines. |
DISTRIBUTED_STORAGE_PREFERENCE | Array of string | The preferred storage engine(s) (by ID in DISTRIBUTED_STORAGE_CONFIG) to use. A preferred engine means it is first checked for pulling and images are pushed to it if possible. |
MAXIMUM_LAYER_SIZE | String | Maximum allowed size of an image layer. |
Storage configuration example
DISTRIBUTED_STORAGE_CONFIG:
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- default
MAXIMUM_LAYER_SIZE: 100G
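DISTRIBUTED_STORAGE_CONFIG can hold more than one engine, with DISTRIBUTED_STORAGE_PREFERENCE controlling the order in which the engines are used. The following is a sketch with two hypothetical engine identifiers (usstorage and eustorage) and placeholder values, not a configuration from this guide:

```yaml
# Illustrative two-engine configuration; identifiers and values are placeholders.
DISTRIBUTED_STORAGE_CONFIG:
  usstorage:
    - RadosGWStorage
    - access_key: <access_key>
      secret_key: <secret_key>
      bucket_name: <us_bucket_name>
      hostname: <us_hostname>
      is_secure: true
      storage_path: /datastorage/registry
  eustorage:
    - RadosGWStorage
    - access_key: <access_key>
      secret_key: <secret_key>
      bucket_name: <eu_bucket_name>
      hostname: <eu_hostname>
      is_secure: true
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
  - usstorage
DISTRIBUTED_STORAGE_PREFERENCE:
  - usstorage
```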
7.3.2. Local storage
The following YAML shows an example configuration using local storage.
Only use local storage when deploying a registry for proof-of-concept purposes. It is not intended for production use. When using local storage, you must map a local directory on the host to the datastorage path in the container when starting the registry. For more information, see Proof of Concept - Deploying Red Hat Quay.
Local storage example
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - LocalStorage
    - storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
7.3.3. Red Hat OpenShift Data Foundation
The following YAML shows a sample configuration using Red Hat OpenShift Data Foundation:
DISTRIBUTED_STORAGE_CONFIG:
  rhocsStorage:
    - RHOCSStorage
    - access_key: <access_key_here>
      secret_key: <secret_key_here>
      bucket_name: <bucket_name>
      hostname: <hostname>
      is_secure: 'true'
      port: '443'
      storage_path: /datastorage/registry
      maximum_chunk_size_mb: 100
      server_side_assembly: true
7.3.4. Ceph Object Gateway (RadosGW) storage example
Red Hat Quay supports using Ceph Object Gateway (RadosGW) as an object storage backend. RadosGW is a component of Red Hat Ceph Storage, a storage platform engineered for private cloud architectures. Red Hat Ceph Storage provides an S3-compatible REST API for interacting with Ceph.
RadosGW is an on-premise S3-compatible storage solution. It implements the S3 API and requires the same authentication fields, such as access_key
, secret_key
, and bucket_name
. For more information about Ceph Object Gateway and the S3 API, see Ceph Object Gateway.
The following YAML shows an example configuration using RadosGW.
RadosGW with general S3 access example
DISTRIBUTED_STORAGE_CONFIG:
  radosGWStorage:
    - RadosGWStorage
    - access_key: <access_key_here>
      bucket_name: <bucket_name_here>
      hostname: <hostname_here>
      is_secure: true
      port: '443'
      secret_key: <secret_key_here>
      storage_path: /datastorage/registry
      maximum_chunk_size_mb: 100
      server_side_assembly: true
1. Used for general S3 access. Note that general S3 access is not strictly limited to Amazon Web Services (AWS) S3, and can be used with RadosGW or other storage services. For an example of general S3 access using the AWS S3 driver, see "AWS S3 storage".
2. Optional. Defines the maximum chunk size, in MB, for the final copy. Has no effect if server_side_assembly is set to false.
3. Optional. Whether Red Hat Quay should try to use server-side assembly and the final chunked copy instead of client assembly. Defaults to true.
7.3.5. Supported AWS storage backends
Red Hat Quay supports multiple Amazon Web Services (AWS) storage backends:
- S3 storage: Standard support for AWS S3 buckets that uses AWS’s native object storage service.
- STS S3 storage: Support for AWS Security Token Service (STS) to assume IAM roles, allowing for more secure S3 operations.
- CloudFront S3 storage: Integrates with AWS CloudFront to enable high-availability distribution of content while still using AWS S3 as the origin.
The following sections provide example YAMLs and additional information about each AWS storage backend.
7.3.5.1. Amazon Web Services S3 storage
Red Hat Quay supports using AWS S3 as an object storage backend. AWS S3 is an object storage service designed for data availability, scalability, security, and performance. The following YAML shows an example configuration using AWS S3.
AWS S3 example
# ...
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - S3Storage
    - host: s3.us-east-2.amazonaws.com
      s3_access_key: ABCDEFGHIJKLMN
      s3_secret_key: OL3ABCDEFGHIJKLMN
      s3_bucket: quay_bucket
      s3_region: <region>
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
# ...
1. The S3Storage storage driver should only be used for AWS S3 buckets. Note that this differs from general S3 access, where the RadosGW driver or other storage services can be used. For an example, see "Example B: Using RadosGW with general S3 access".
2. Optional. The Amazon Web Services region. Defaults to us-east-1.
7.3.5.2. Amazon Web Services STS S3 storage
AWS Security Token Service (STS) provides temporary, limited-privilege credentials for accessing AWS resources, improving security by avoiding the need to store long-term access keys. This is useful in environments such as OpenShift Container Platform where credentials can be rotated or managed through IAM roles.
The following YAML shows an example configuration for using AWS STS with Red Hat Quay on OpenShift Container Platform configurations.
AWS STS S3 storage example
# ...
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - STSS3Storage
    - sts_role_arn: <role_arn>
      s3_bucket: <s3_bucket_name>
      storage_path: <storage_path>
      sts_user_access_key: <s3_user_access_key>
      sts_user_secret_key: <s3_user_secret_key>
      s3_region: <region>
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
# ...
7.3.5.3. AWS CloudFront storage
AWS CloudFront is a content delivery network (CDN) service that caches and distributes content closer to users for improved performance and lower latency. Red Hat Quay supports CloudFront through the CloudFrontedS3Storage
driver, which enables secure, signed access to S3 buckets via CloudFront distributions.
Use the following example when configuring AWS CloudFront for your Red Hat Quay deployment.
When configuring AWS CloudFront storage, the following conditions must be met for proper use with Red Hat Quay:

- You must set an Origin path that is consistent with Red Hat Quay’s storage path as defined in your config.yaml file. Failure to meet this requirement results in a 403 error when pulling an image. For more information, see Origin path.
- You must configure a Bucket policy and a Cross-origin resource sharing (CORS) policy.
CloudFront S3 example YAML
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - CloudFrontedS3Storage
    - cloudfront_distribution_domain: <CLOUDFRONT_DISTRIBUTION_DOMAIN>
      cloudfront_key_id: <CLOUDFRONT_KEY_ID>
      cloudfront_privatekey_filename: <CLOUDFRONT_PRIVATE_KEY_FILENAME>
      host: <S3_HOST>
      s3_access_key: <S3_ACCESS_KEY>
      s3_bucket: <S3_BUCKET_NAME>
      s3_secret_key: <S3_SECRET_KEY>
      storage_path: <STORAGE_PATH>
      s3_region: <S3_REGION>
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
  - default
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
Bucket policy example
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:user/CloudFront Origin Access Identity <CLOUDFRONT_OAI_ID>"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::<S3_BUCKET_NAME>/*"
},
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:user/CloudFront Origin Access Identity <CLOUDFRONT_OAI_ID>"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::<S3_BUCKET_NAME>"
}
]
}
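The CloudFront conditions above also call for a CORS policy on the S3 bucket, which is not shown in the bucket policy example. The following is a minimal sketch of an S3 CORS configuration; the allowed origin is a placeholder that should be replaced with your registry hostname:

```json
[
    {
        "AllowedOrigins": ["https://quay-server.example.com"],
        "AllowedMethods": ["GET", "HEAD", "PUT", "POST", "DELETE"],
        "AllowedHeaders": ["*"],
        "ExposeHeaders": ["ETag"],
        "MaxAgeSeconds": 3000
    }
]
```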
7.3.6. Google Cloud Storage
Red Hat Quay supports using Google Cloud Storage (GCS) as an object storage backend. When used with Red Hat Quay, it provides a cloud-native solution for storing container images and artifacts.
The following YAML shows a sample configuration using Google Cloud Storage.
Google Cloud Storage example
DISTRIBUTED_STORAGE_CONFIG:
  googleCloudStorage:
    - GoogleCloudStorage
    - access_key: <access_key>
      bucket_name: <bucket_name>
      secret_key: <secret_key>
      storage_path: /datastorage/registry
      boto_timeout: 120
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - googleCloudStorage
1. Optional. The time, in seconds, until a timeout exception is thrown when attempting to read from a connection. The default is 60 seconds. Also encompasses the time, in seconds, until a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.
7.3.7. Microsoft Azure Blob Storage
Red Hat Quay supports using Microsoft Azure Blob Storage as an object storage backend. Azure Blob Storage can be used to persist container images, metadata, and other artifacts in a secure and cloud-native manner.
The following YAML shows a sample configuration using Azure Storage.
Microsoft Azure Blob Storage example
DISTRIBUTED_STORAGE_CONFIG:
  azureStorage:
    - AzureStorage
    - azure_account_name: <azure_account_name>
      azure_container: <azure_container_name>
      storage_path: /datastorage/registry
      azure_account_key: <azure_account_key>
      sas_token: some/path/
      endpoint_url: https://[account-name].blob.core.usgovcloudapi.net
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - azureStorage
1. The endpoint_url parameter for Azure storage is optional and can be used with Microsoft Azure Government (MAG) endpoints. If left blank, the endpoint_url connects to the normal Azure region.

As of Red Hat Quay 3.7, you must use the Primary endpoint of your MAG Blob service. Using the Secondary endpoint of your MAG Blob service results in the following error: AuthenticationErrorDetail:Cannot find the claimed account when trying to GetProperties for the account whusc8-secondary.
7.3.8. Swift object storage
Red Hat Quay supports using Red Hat OpenStack Platform (RHOSP) Object Storage service, or Swift, as an object storage backend. Swift offers S3-like functionality with its own API and authentication mechanisms.
The following YAML shows a sample configuration using Swift storage.
Swift object storage example
DISTRIBUTED_STORAGE_CONFIG:
  swiftStorage:
    - SwiftStorage
    - swift_user: <swift_username>
      swift_password: <swift_password>
      swift_container: <swift_container>
      auth_url: https://example.org/swift/v1/quay
      auth_version: 3
      os_options:
        tenant_id: <osp_tenant_id>
        user_domain_name: <osp_domain_name>
      ca_cert_path: /conf/stack/swift.cert
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - swiftStorage
7.3.9. Nutanix Objects Storage
Red Hat Quay supports Nutanix Objects Storage as an object storage backend. Nutanix Object Storage is suitable for organizations running private cloud infrastructure using Nutanix.
The following YAML shows a sample configuration using Nutanix Object Storage.
Nutanix Objects Storage example
DISTRIBUTED_STORAGE_CONFIG:
  nutanixStorage: # storage config name
    - RadosGWStorage # actual driver
    - access_key: <access_key>
      secret_key: <secret_key>
      bucket_name: <bucket_name>
      hostname: <hostname>
      is_secure: 'true'
      port: '443'
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE: # must contain name of the storage config
  - nutanixStorage
7.3.10. IBM Cloud Object Storage
Red Hat Quay supports IBM Cloud Object Storage as an object storage backend. IBM Cloud Object Storage is suitable for cloud-native applications requiring scalable and secure storage on IBM Cloud.
The following YAML shows a sample configuration using IBM Cloud Object Storage.
IBM Cloud Object Storage
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - IBMCloudStorage # actual driver
    - access_key: <access_key> # parameters
      secret_key: <secret_key>
      bucket_name: <bucket_name>
      hostname: <hostname>
      is_secure: 'true'
      port: '443'
      storage_path: /datastorage/registry
      maximum_chunk_size_mb: 100mb
      minimum_chunk_size_mb: 5mb
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
  - default
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
1. Optional. Recommended to be set to 100mb.
2. Optional. Defaults to 5mb. Do not adjust this field without consulting Red Hat Support, because it can have unintended consequences.
7.3.11. NetApp ONTAP S3 object storage
Red Hat Quay supports using NetApp ONTAP S3 as an object storage backend.
The following YAML shows a sample configuration using NetApp ONTAP S3.
NetApp ONTAP S3 example
DISTRIBUTED_STORAGE_CONFIG:
  local_us:
    - RadosGWStorage
    - access_key: <access_key>
      bucket_name: <bucket_name>
      hostname: <host_url_address>
      is_secure: true
      port: <port>
      secret_key: <secret_key>
      storage_path: /datastorage/registry
      signature_version: v4
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
  - local_us
DISTRIBUTED_STORAGE_PREFERENCE:
  - local_us
7.3.12. Hitachi Content Platform object storage
Red Hat Quay supports using Hitachi Content Platform (HCP) as an object storage backend.
The following YAML shows a sample configuration using HCP for object storage.
HCP storage configuration example
DISTRIBUTED_STORAGE_CONFIG:
  hcp_us:
    - RadosGWStorage
    - access_key: <access_key>
      bucket_name: <bucket_name>
      hostname: <hitachi_hostname_example>
      is_secure: true
      secret_key: <secret_key>
      storage_path: /datastorage/registry
      signature_version: v4
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
  - hcp_us
DISTRIBUTED_STORAGE_PREFERENCE:
  - hcp_us
7.4. Redis configuration fields
Redis is used by Red Hat Quay to support backend tasks and services, such as build triggers and notifications. There are two configuration types related to Redis: build logs and user events. The following sections detail the configuration fields available for each type.
7.4.1. Build logs
Build logs are generated during the image build process and provide insights for debugging and auditing. Red Hat Quay uses Redis to temporarily store these logs before they are accessed through the user interface or API.
The following build logs configuration fields are available for Redis deployments.
Field | Type | Description |
---|---|---|
BUILDLOGS_REDIS | Object | Redis connection details for build logs caching. |
.host | String | The hostname at which Redis is accessible. |
.port | Number | The port at which Redis is accessible. |
.password | String | The password to connect to the Redis instance. |
.ssl | Boolean | Whether to enable TLS communication between Redis and Quay. Defaults to false. |
Build logs configuration example
# ...
BUILDLOGS_REDIS:
  host: <quay-server.example.com>
  password: <example_password>
  port: 6379
  ssl: true
# ...
7.4.2. User events
User events track activity across Red Hat Quay, such as repository pushes, tag creations, deletions, and permission changes. These events are recorded in Redis as part of the activity stream and can be accessed through the API or web interface.
The following user event fields are available for Redis deployments.
Field | Type | Description |
---|---|---|
USER_EVENTS_REDIS | Object | Redis connection details for user event handling. |
.host | String | The hostname at which Redis is accessible. |
.port | Number | The port at which Redis is accessible. |
.password | String | The password to connect to the Redis instance. |
.ssl | Boolean | Whether to enable TLS communication between Redis and Quay. Defaults to false. |
.ssl_keyfile | String | The name of the key database file, which houses the client certificate to be used. |
.ssl_certfile | String | Used for specifying the file path of the SSL certificate. |
.ssl_cert_reqs | String | Used to specify the level of certificate validation to be performed during the SSL/TLS handshake. |
.ssl_ca_certs | String | Used to specify the path to a file containing a list of trusted Certificate Authority (CA) certificates. |
.ssl_ca_data | String | Used to specify a string containing the trusted CA certificates in PEM format. |
.ssl_check_hostname | Boolean | Used when setting up an SSL/TLS connection to a server. It specifies whether the client should check that the hostname in the server’s SSL/TLS certificate matches the hostname of the server it is connecting to. |
Redis user events example
# ...
USER_EVENTS_REDIS:
  host: <quay-redis.example.com>
  port: 6379
  password: <example_password>
  ssl: true
  ssl_keyfile: /etc/ssl/private/redis-client.key
  ssl_certfile: /etc/ssl/certs/redis-client.crt
  ssl_cert_reqs: required
  ssl_ca_certs: /etc/ssl/certs/ca-bundle.crt
  ssl_check_hostname: true
# ...
Chapter 8. Automation configuration options
Red Hat Quay supports various mechanisms for automating deployment and configuration, which allows the integration of Red Hat Quay into GitOps and CI/CD pipelines. By defining these options and leveraging the API, Red Hat Quay can be initialized and managed without using the UI.
Because the Red Hat Quay Operator manages the config.yaml file through the configBundleSecret custom resource (CR), pre-configuring Red Hat Quay on OpenShift Container Platform requires an administrator to manually create a valid config.yaml file with the desired configuration. This file must then be bundled into a new Kubernetes Secret and used to replace the default configBundleSecret referenced by the QuayRegistry CR. This allows Red Hat Quay on OpenShift Container Platform to be deployed in a fully automated manner, bypassing the web-based configuration UI. For more information, see Modifying the QuayRegistry CR after deployment.
For on-premise Red Hat Quay deployments, pre-configuration is done by manually creating a valid config.yaml file and then deploying the registry.
Automation options are ideal for environments that require declarative Red Hat Quay deployments, such as disconnected or air-gapped clusters.
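As a sketch of the Secret replacement step described above, the QuayRegistry CR references the Kubernetes Secret that holds the administrator-provided config.yaml through its spec.configBundleSecret field. The names example-registry, quay-enterprise, and config-bundle-secret below are illustrative placeholders:

```yaml
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  # References a Secret whose "config.yaml" key contains the desired configuration
  configBundleSecret: config-bundle-secret
```

With this CR applied, the Operator reads the bundled config.yaml instead of generating a default configuration, which is what makes a fully declarative deployment possible.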
8.1. Pre-configuration options for automation
Red Hat Quay provides configuration options that enable registry administrators to automate early setup tasks and API accessibility. These options are useful for new deployments and controlling how API calls can be made. The following options support automation and administrative control.
Field | Type | Description |
---|---|---|
FEATURE_USER_INITIALIZE | Boolean |
Enables initial user bootstrapping in a newly deployed Red Hat Quay registry. When this field is set to `true`, the first user account can be created through the `/api/v1/user/initialize` API endpoint. Note
Unlike all other registry API calls that require an OAuth 2 access token generated by an OAuth application in an existing organization, the `/api/v1/user/initialize` endpoint can be called without an existing access token. |
BROWSER_API_CALLS_XHR_ONLY | Boolean |
Controls whether the registry API only accepts calls from browsers. To allow general browser-based access to the API, administrators must set this field to `false`. |
SUPER_USERS | String |
Defines a list of administrative users, or superusers, who have full privileges and unrestricted access to the registry. Red Hat Quay administrators should configure this list carefully, because superusers bypass normal permission checks. |
FEATURE_USER_CREATION | Boolean |
Relegates the creation of new users to only superusers when this field is set to `false`. |
The following YAML shows you the suggested configuration for automation:
Suggested configuration for automation
# ...
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
- quayadmin
FEATURE_USER_CREATION: false
# ...
Chapter 9. Component and feature configuration fields
The Component and Feature Configuration section describes the configurable fields available for fine-tuning Red Hat Quay across its various subsystems. These fields allow administrators to customize registry behavior, enable or disable specific features, and integrate with external services and infrastructure. While not required for a basic deployment, these options support advanced use cases related to security, automation, scalability, compliance, and performance.
9.1. Core configuration overview
Use these core fields to configure the registry’s basic behavior, including hostname, protocol, authentication settings, and more.
9.1.1. Registry branding and identity fields
The following configuration fields allow you to modify the branding, identity, and contact information displayed in your Red Hat Quay deployment. With these fields, you can customize how the registry appears to users by specifying titles, headers, footers, and organizational contact links shown throughout the UI.
Some of the following fields are not available on the Red Hat Quay v2 UI.
Field | Type | Description |
---|---|---|
REGISTRY_TITLE | String |
If specified, the long-form title for the registry. Displayed in the frontend of your Red Hat Quay deployment, for example, at the sign in page of your organization. Should not exceed 35 characters. |
REGISTRY_TITLE_SHORT | String |
If specified, the short-form title for the registry. Title is displayed on various pages of your organization, for example, as the title of the tutorial on your organization’s Tutorial page. |
CONTACT_INFO | Array of String | If specified, contact information to display on the contact page. If only a single piece of contact information is specified, the contact footer will link directly. |
[0] | String |
Adds a link to send an e-mail. |
[1] | String |
Adds a link to visit an IRC chat room. |
[2] | String |
Adds a link to call a phone number. |
[3] | String |
Adds a link to a defined URL. |
Field | Type | Description |
---|---|---|
BRANDING | Object | Custom branding for logos and URLs in the Red Hat Quay UI. |
.logo | String |
Main logo image URL.
The header logo defaults to 205x30 PX. The form logo on the Red Hat Quay sign in screen of the web UI defaults to 356.5x39.7 PX. |
.footer_img | String |
Logo for UI footer. Defaults to 144x34 PX. |
.footer_url | String |
Link for footer image. |
Field | Type | Description |
---|---|---|
FOOTER_LINKS | Object | Enable customization of footer links in Red Hat Quay’s UI for on-prem installations. |
.TERMS_OF_SERVICE_URL | String |
Custom terms of service for on-prem installations. |
.PRIVACY_POLICY_URL | String |
Custom privacy policy for on-prem installations. |
.SECURITY_URL | String |
Custom security page for on-prem installations. |
.ABOUT_URL | String |
Custom about page for on-prem installations. |
Registry branding and identity example YAML
# ...
REGISTRY_TITLE: "Example Container Registry"
REGISTRY_TITLE_SHORT: "Example Quay"
CONTACT_INFO:
  - mailto:support@example.io
  - irc://chat.freenode.net:6665/examplequay
  - tel:+1-800-555-1234
  - https://support.example.io
BRANDING:
  logo: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg
  footer_img: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg
  footer_url: https://opensourceworld.org/
FOOTER_LINKS:
  "TERMS_OF_SERVICE_URL": "https://www.index.hr"
  "PRIVACY_POLICY_URL": "https://www.example.hr"
  "SECURITY_URL": "https://www.example.hr"
  "ABOUT_URL": "https://www.example.hr"
# ...
9.1.2. SSL/TLS configuration fields
This section describes the available configuration fields for enabling and managing SSL/TLS encryption in your Red Hat Quay deployment.
Field | Type | Description |
---|---|---|
PREFERRED_URL_SCHEME | String |
One of `http` or `https`. Default: `http` |
SERVER_HOSTNAME | String |
The URL at which Red Hat Quay is accessible, without the scheme. |
SSL_CIPHERS | Array of String |
If specified, the nginx-defined list of SSL ciphers to enable and disable. |
SSL_PROTOCOLS | Array of String |
If specified, nginx is configured to enable the SSL protocols defined in the list. Removing an SSL protocol from the list disables the protocol during Red Hat Quay startup. |
SESSION_COOKIE_SECURE | Boolean |
Whether the `secure` property should be set on session cookies. Defaults to `false`. Recommended to be `true` for all installations except those running over plain HTTP. |
EXTERNAL_TLS_TERMINATION | Boolean |
Set to `true` if TLS is supported, but terminated at a layer before Quay. |
SSL configuration example YAML
# ...
PREFERRED_URL_SCHEME: https
SERVER_HOSTNAME: quay-server.example.com
SSL_CIPHERS:
- ECDHE-RSA-AES128-GCM-SHA256
SSL_PROTOCOLS:
- TLSv1.3
SESSION_COOKIE_SECURE: true
EXTERNAL_TLS_TERMINATION: true
# ...
9.1.3. IPv6 configuration field
You can use the FEATURE_LISTEN_IP_VERSION configuration field to specify which IP protocol family Red Hat Quay should listen on: IPv4, IPv6, or both (dual-stack). This field is critical in environments where the registry must operate on IPv6-only or dual-stack networks.
Field | Type | Description |
---|---|---|
FEATURE_LISTEN_IP_VERSION | String |
Enables IPv4, IPv6, or dual-stack protocol family. This configuration field must be properly set, otherwise Red Hat Quay fails to start. Default: `IPv4`. Additional configurations: `IPv6`, `dual-stack`. |
IPv6 example YAML
# ...
FEATURE_LISTEN_IP_VERSION: dual-stack
# ...
9.1.4. Logging and debugging variables
The following variables control how Red Hat Quay logs events, exposes debugging information, and interacts with system health checks. These settings are useful for troubleshooting and monitoring your registry.
Variable | Type | Description |
---|---|---|
DEBUGLOG | Boolean | Whether to enable or disable debug logs. |
USERS_DEBUG |
Integer. Either `0` or `1`. |
Used to debug LDAP operations in clear text, including passwords. Must be used with `DEBUGLOG=TRUE`. Important
Setting `USERS_DEBUG=1` exposes credentials in clear text. This variable should be removed from the Red Hat Quay deployment after debugging is complete. |
ALLOW_PULLS_WITHOUT_STRICT_LOGGING | String |
If true, pulls will still succeed even if the pull audit log entry cannot be written. This is useful if the database is in a read-only state and it is desired for pulls to continue during that time. |
ENABLE_HEALTH_DEBUG_SECRET | String | If specified, a secret that can be given to health endpoints to see full debug info when not authenticated as a superuser |
HEALTH_CHECKER | String |
The configured health check |
FEATURE_AGGREGATED_LOG_COUNT_RETRIEVAL | Boolean |
Whether to allow retrieval of aggregated log counts |
Logging and debugging example YAML
# ...
DEBUGLOG: true
USERS_DEBUG: 1
ALLOW_PULLS_WITHOUT_STRICT_LOGGING: "true"
ENABLE_HEALTH_DEBUG_SECRET: "<secret_value>"
HEALTH_CHECKER: "('RDSAwareHealthCheck', {'access_key': 'foo', 'secret_key': 'bar'})"
FEATURE_AGGREGATED_LOG_COUNT_RETRIEVAL: true
# ...
9.1.5. Registry state and system behavior configuration fields
The following configuration fields control the operational state of the Red Hat Quay registry and how it interacts with external systems. These settings allow administrators to place the registry into a restricted read-only mode for maintenance purposes, and to enforce additional security by blocking specific hostnames from being targeted by webhooks.
Field | Type | Description |
---|---|---|
REGISTRY_STATE | String |
The state of the registry, either `normal` or `readonly`. |
WEBHOOK_HOSTNAME_BLACKLIST | Array of String | The set of hostnames to disallow from webhooks when validating, beyond localhost |
Registry state and system behavior example YAML
# ...
REGISTRY_STATE: normal
WEBHOOK_HOSTNAME_BLACKLIST:
- "169.254.169.254"
- "internal.example.com"
- "127.0.0.2"
# ...
9.2. User Experience and Interface
These fields configure how users interact with the UI, including branding, pagination, browser behavior, and accessibility options like recaptcha. This also covers user-facing performance and display settings.
9.2.1. Web UI and user experience configuration fields
These configuration fields control the behavior and appearance of the Red Hat Quay web interface and overall user experience. Options in this section allow administrators to customize login behavior, avatar display, user autocomplete, session handling, and catalog visibility.
Field | Type | Description |
---|---|---|
AVATAR_KIND | String |
The types of avatars to display, either generated inline (local) or Gravatar (gravatar) |
FRESH_LOGIN_TIMEOUT | String |
The time after which a fresh login requires users to re-enter their password |
FEATURE_UI_V2 | Boolean | When set, allows users to try the v2 beta UI environment.
Default: |
FEATURE_UI_V2_REPO_SETTINGS | Boolean |
When set to `true`, enables repository settings in the Red Hat Quay v2 UI. Default: `false` |
FEATURE_DIRECT_LOGIN | Boolean |
Whether users can directly login to the UI |
FEATURE_PARTIAL_USER_AUTOCOMPLETE | Boolean |
If set to true, autocompletion will apply to partial usernames. |
FEATURE_LIBRARY_SUPPORT | Boolean |
Whether to allow for "namespace-less" repositories when pulling and pushing from Docker |
FEATURE_PERMANENT_SESSIONS | Boolean |
Whether sessions are permanent |
FEATURE_PUBLIC_CATALOG | Boolean |
If set to true, the `_catalog` endpoint returns public repositories. Otherwise, only private repositories can be returned. |
Example YAML
# ...
AVATAR_KIND: local
FRESH_LOGIN_TIMEOUT: 5m
FEATURE_UI_V2: true
FEATURE_UI_V2_REPO_SETTINGS: false
FEATURE_DIRECT_LOGIN: true
FEATURE_PARTIAL_USER_AUTOCOMPLETE: true
FEATURE_LIBRARY_SUPPORT: true
FEATURE_PERMANENT_SESSIONS: true
FEATURE_PUBLIC_CATALOG: false
# ...
9.2.1.1. v2 user interface configuration
With FEATURE_UI_V2 enabled, you can toggle between the current version of the user interface and the new version of the user interface.
- This UI is currently in beta and subject to change. In its current state, users can only create, view, and delete organizations, repositories, and image tags.
- When running Red Hat Quay in the old UI, timed-out sessions would require that the user input their password again in the pop-up window. With the new UI, users are returned to the main page and required to input their username and password credentials. This is a known issue and will be fixed in a future version of the new UI.
- There is a discrepancy in how image manifest sizes are reported between the legacy UI and the new UI. In the legacy UI, image manifests were reported in mebibytes. In the new UI, Red Hat Quay uses the standard definition of megabyte (MB) to report image manifest sizes.
9.2.2. Session timeout configuration field
The following configuration field relies on the Flask API configuration field of the same name.
Altering session lifetime is not recommended. Administrators should be aware of the allotted time when setting a session timeout. If you set the timeout too short, it might interrupt your workflow.
Field | Type | Description |
---|---|---|
PERMANENT_SESSION_LIFETIME | Integer |
A `timedelta` which is used to set the expiry date of a permanent session. Default: `2678400` (31 days). |
Session timeout example YAML
# ...
PERMANENT_SESSION_LIFETIME: 3000
# ...
9.3. User and Access Management
Use these fields to configure how users are created, authenticated, and managed. This includes settings for superusers, account recovery, app-specific tokens, login behavior, and external identity providers like LDAP, OAuth, and OIDC.
9.3.1. User configuration fields
The user configuration fields define how user accounts behave in your Red Hat Quay deployment. These fields enable control over user creation, access levels, metadata tracking, recovery options, and namespace management. You can also enforce restrictions, such as invite-only creation or superuser privileges, to match your organization’s governance and security policies.
Field | Type | Description |
---|---|---|
FEATURE_SUPER_USERS | Boolean |
Whether superusers are supported |
FEATURE_USER_CREATION | Boolean |
Whether users can be created (by non-superusers) |
FEATURE_USER_LAST_ACCESSED | Boolean |
Whether to record the last time a user was accessed |
FEATURE_USER_LOG_ACCESS | Boolean |
If set to true, users will have access to audit logs for their namespace |
FEATURE_USER_METADATA | Boolean |
Whether to collect and support user metadata |
FEATURE_USERNAME_CONFIRMATION | Boolean |
If set to true, users can confirm and modify their initial usernames when logging in via OpenID Connect (OIDC) or a non-database internal authentication provider like LDAP. |
FEATURE_USER_RENAME | Boolean |
If set to true, users can rename their own namespace |
FEATURE_INVITE_ONLY_USER_CREATION | Boolean |
Whether users being created must be invited by another user |
FRESH_LOGIN_TIMEOUT | String |
The time after which a fresh login requires users to re-enter their password |
USERFILES_LOCATION | String |
ID of the storage engine in which to place user-uploaded files |
USERFILES_PATH | String |
Path under storage in which to place user-uploaded files |
USER_RECOVERY_TOKEN_LIFETIME | String |
The length of time a token for recovering a user account is valid |
FEATURE_SUPERUSERS_FULL_ACCESS | Boolean | Grants superusers the ability to read, write, and delete content from other repositories in namespaces that they do not own or have explicit permissions for. Default: `false` |
FEATURE_SUPERUSERS_ORG_CREATION_ONLY | Boolean | Whether to only allow superusers to create organizations. Default: `false` |
FEATURE_RESTRICTED_USERS | Boolean |
When set to `true`, all normal users and superusers are restricted from creating organizations or content in their own namespace unless they are allowlisted through `RESTRICTED_USERS_WHITELIST`. Default: `false` |
RESTRICTED_USERS_WHITELIST | String |
When set with `FEATURE_RESTRICTED_USERS: true`, excludes the listed users from the `FEATURE_RESTRICTED_USERS` setting. |
GLOBAL_READONLY_SUPER_USERS | String |
When set, grants users of this list read access to all repositories, regardless of whether they are public repositories. Only works for those superusers defined with the `SUPER_USERS` configuration field. |
User example YAML
# ...
FEATURE_SUPER_USERS: true
FEATURE_USER_CREATION: true
FEATURE_INVITE_ONLY_USER_CREATION: false
FEATURE_USER_RENAME: true
FEATURE_SUPERUSERS_FULL_ACCESS: true
FEATURE_SUPERUSERS_ORG_CREATION_ONLY: false
FEATURE_RESTRICTED_USERS: true
RESTRICTED_USERS_WHITELIST:
- user1
GLOBAL_READONLY_SUPER_USERS:
- quayadmin
FRESH_LOGIN_TIMEOUT: "5m"
USER_RECOVERY_TOKEN_LIFETIME: "30m"
USERFILES_LOCATION: "s3_us_east"
USERFILES_PATH: "userfiles"
# ...
When the `RESTRICTED_USERS_WHITELIST` field is set, whitelisted users can create organizations, or read or write content from the repository even if `FEATURE_RESTRICTED_USERS` is set to `true`. Other users, for example, `user2`, `user3`, and `user4`, are restricted from creating organizations, reading, or writing content.
9.3.2. Robot account configuration fields
The following configuration field allows for globally disallowing robot account creation and interaction.
Field | Type | Description |
---|---|---|
ROBOTS_DISALLOW | Boolean |
When set to `true`, robot accounts are prevented from all interactions, as well as from being created. |
Robot account disallow example YAML
# ...
ROBOTS_DISALLOW: true
# ...
9.3.3. LDAP configuration fields
The following configuration fields allow administrators to integrate Red Hat Quay with an LDAP-based authentication system. When AUTHENTICATION_TYPE is set to LDAP, Red Hat Quay can authenticate users against an LDAP directory and support additional, optional features such as team synchronization, superuser access control, restricted user roles, and secure connection parameters.
This section provides YAML examples for the following LDAP scenarios:
- Basic LDAP configuration
- LDAP restricted user configuration
- LDAP superuser configuration
Field | Type | Description |
---|---|---|
AUTHENTICATION_TYPE | String |
Must be set to `LDAP`. |
FEATURE_TEAM_SYNCING | Boolean |
Whether to allow for team membership to be synced from a backing group in the authentication engine (OIDC, LDAP, or Keystone). |
FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP | Boolean |
If enabled, non-superusers can set up team synchronization. |
LDAP_ADMIN_DN | String | The admin DN for LDAP authentication. |
LDAP_ADMIN_PASSWD | String | The admin password for LDAP authentication. |
LDAP_ALLOW_INSECURE_FALLBACK | Boolean | Whether or not to allow SSL insecure fallback for LDAP authentication. |
LDAP_BASE_DN | Array of String | The base DN for LDAP authentication. |
LDAP_EMAIL_ATTR | String | The email attribute for LDAP authentication. |
LDAP_UID_ATTR | String | The uid attribute for LDAP authentication. |
LDAP_URI | String | The LDAP URI. |
LDAP_USER_FILTER | String | The user filter for LDAP authentication. |
LDAP_USER_RDN | Array of String | The user RDN for LDAP authentication. |
LDAP_SECONDARY_USER_RDNS | Array of String | Provide Secondary User Relative DNs if there are multiple Organizational Units where user objects are located. |
TEAM_RESYNC_STALE_TIME | String |
If team syncing is enabled for a team, how often to check its membership and resync if necessary. |
LDAP_SUPERUSER_FILTER | String |
Subset of the `LDAP_USER_FILTER` configuration field. When configured, allows Red Hat Quay administrators to configure LDAP users as superusers. With this field, administrators can add or remove superusers without having to update the Red Hat Quay configuration file and restart their deployment.
This field requires that your `AUTHENTICATION_TYPE` is set to `LDAP`. |
LDAP_GLOBAL_READONLY_SUPERUSER_FILTER | String |
When set, grants users of this list read access to all repositories, regardless of whether they are public repositories. Only works for those superusers defined with the `LDAP_SUPERUSER_FILTER` configuration field. |
LDAP_RESTRICTED_USER_FILTER | String |
Subset of the
This field requires that your |
FEATURE_RESTRICTED_USERS | Boolean |
When set to `true`, all normal users and superusers are restricted from creating organizations or content in their own namespace unless they are allowlisted. Default: `false` |
LDAP_TIMEOUT | Integer |
Specifies the time limit, in seconds, for LDAP operations. This limits the amount of time an LDAP search, bind, or other operation can take. Similar to the `-l` time limit option of the `ldapsearch` CLI tool. Default: `10`. |
LDAP_NETWORK_TIMEOUT | Integer |
Specifies the time limit, in seconds, for establishing a connection to the LDAP server. This is the maximum time Red Hat Quay waits for a response during network operations, similar to the `-o nettimeout` option of the `ldapsearch` CLI tool. Default: `10`. |
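The two timeout fields can be combined with LDAP authentication as follows; this is a minimal sketch, and the URI and 10-second values are illustrative placeholders:

```yaml
# ...
AUTHENTICATION_TYPE: LDAP
LDAP_URI: ldap://<example_url>.com
# Abort any single LDAP search, bind, or other operation after 10 seconds
LDAP_TIMEOUT: 10
# Give up establishing a connection to the LDAP server after 10 seconds
LDAP_NETWORK_TIMEOUT: 10
# ...
```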
Basic LDAP configuration example YAML
# ...
AUTHENTICATION_TYPE: LDAP
# ...
LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com
LDAP_ADMIN_PASSWD: ABC123
LDAP_ALLOW_INSECURE_FALLBACK: false
LDAP_BASE_DN:
- dc=example
- dc=com
LDAP_EMAIL_ATTR: mail
LDAP_UID_ATTR: uid
LDAP_URI: ldap://<example_url>.com
LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,dc=<domain_name>,dc=com)
LDAP_USER_RDN:
- ou=people
LDAP_SECONDARY_USER_RDNS:
- ou=<example_organization_unit_one>
- ou=<example_organization_unit_two>
- ou=<example_organization_unit_three>
- ou=<example_organization_unit_four>
1. Required. Must be set to LDAP.
2. Required. The admin DN for LDAP authentication.
3. Required. The admin password for LDAP authentication.
4. Required. Whether to allow SSL/TLS insecure fallback for LDAP authentication.
5. Required. The base DN for LDAP authentication.
6. Required. The email attribute for LDAP authentication.
7. Required. The UID attribute for LDAP authentication.
8. Required. The LDAP URI.
9. Required. The user filter for LDAP authentication.
10. Required. The user RDN for LDAP authentication.
11. Optional. Secondary User Relative DNs if there are multiple Organizational Units where user objects are located.
LDAP restricted user configuration example YAML
# ...
AUTHENTICATION_TYPE: LDAP
# ...
FEATURE_RESTRICTED_USERS: true
# ...
LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com
LDAP_ADMIN_PASSWD: ABC123
LDAP_ALLOW_INSECURE_FALLBACK: false
LDAP_BASE_DN:
- o=<organization_id>
- dc=<example_domain_component>
- dc=com
LDAP_EMAIL_ATTR: mail
LDAP_UID_ATTR: uid
LDAP_URI: ldap://<example_url>.com
LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com)
LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>)
LDAP_USER_RDN:
- ou=<example_organization_unit>
- o=<organization_id>
- dc=<example_domain_component>
- dc=com
# ...
LDAP superuser configuration reference example YAML
# ...
AUTHENTICATION_TYPE: LDAP
# ...
LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com
LDAP_ADMIN_PASSWD: ABC123
LDAP_ALLOW_INSECURE_FALLBACK: false
LDAP_BASE_DN:
- o=<organization_id>
- dc=<example_domain_component>
- dc=com
LDAP_EMAIL_ATTR: mail
LDAP_UID_ATTR: uid
LDAP_URI: ldap://<example_url>.com
LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com)
LDAP_SUPERUSER_FILTER: (<filterField>=<value>)
LDAP_USER_RDN:
- ou=<example_organization_unit>
- o=<organization_id>
- dc=<example_domain_component>
- dc=com
# ...
1. Configures specified users as superusers.
9.3.4. OAuth configuration fields
The following fields define the behavior of Red Hat Quay when handling authentication through external identity providers using OAuth. You can configure global OAuth options such as token assignment and whitelisted client IDs, as well as provider-specific settings for GitHub and Google.
Field | Type | Description |
---|---|---|
DIRECT_OAUTH_CLIENTID_WHITELIST | Array of String | A list of client IDs for Quay-managed applications that are allowed to perform direct OAuth approval without user approval. |
FEATURE_ASSIGN_OAUTH_TOKEN | Boolean | Allows organization administrators to assign OAuth tokens to other users. |
Global OAuth example YAML
# ...
DIRECT_OAUTH_CLIENTID_WHITELIST:
- <quay_robot_client>
- <quay_app_token_issuer>
FEATURE_ASSIGN_OAUTH_TOKEN: true
# ...
Field | Type | Description |
---|---|---|
FEATURE_GITHUB_LOGIN | Boolean |
Whether GitHub login is supported |
GITHUB_LOGIN_CONFIG | Object | Configuration for using GitHub (Enterprise) as an external login provider. |
.ALLOWED_ORGANIZATIONS | Array of String | The names of the GitHub (Enterprise) organizations whitelisted to work with the ORG_RESTRICT option. |
.API_ENDPOINT | String |
The endpoint of the GitHub (Enterprise) API to use. Must be overridden for github.com |
.CLIENT_ID | String |
The registered client ID for this Red Hat Quay instance; cannot be shared with `GITHUB_TRIGGER_CONFIG`. |
.CLIENT_SECRET | String |
The registered client secret for this Red Hat Quay instance. |
.GITHUB_ENDPOINT | String |
The endpoint for GitHub (Enterprise). |
.ORG_RESTRICT | Boolean | If true, only users within the organization whitelist can login using this provider. |
GitHub OAuth example YAML
# ...
FEATURE_GITHUB_LOGIN: true
GITHUB_LOGIN_CONFIG:
  ALLOWED_ORGANIZATIONS:
    - <myorg>
    - <dev-team>
  API_ENDPOINT: <https://api.github.com/>
  CLIENT_ID: <client_id>
  CLIENT_SECRET: <client_secret>
  GITHUB_ENDPOINT: <https://github.com/>
  ORG_RESTRICT: true
# ...
Field | Type | Description |
---|---|---|
FEATURE_GOOGLE_LOGIN | Boolean |
Whether Google login is supported. |
GOOGLE_LOGIN_CONFIG | Object | Configuration for using Google for external authentication. |
.CLIENT_ID | String |
The registered client ID for this Red Hat Quay instance. |
.CLIENT_SECRET | String |
The registered client secret for this Red Hat Quay instance. |
Google OAuth example YAML
# ...
FEATURE_GOOGLE_LOGIN: true
GOOGLE_LOGIN_CONFIG:
  CLIENT_ID: <client_id>
  CLIENT_SECRET: <client_secret>
# ...
9.3.5. OIDC configuration fields
You can configure Red Hat Quay to authenticate users through any OpenID Connect (OIDC)-compatible identity provider, including Azure Entra ID (formerly Azure AD), Okta, Keycloak, and others. These fields define the necessary client credentials, endpoints, and token behavior used during the OIDC login flow.
Field | Type | Description |
---|---|---|
<string>_LOGIN_CONFIG | String |
The parent key that holds the OIDC configuration settings. Typically the name of the OIDC provider, for example, `AZURE_LOGIN_CONFIG`, however any arbitrary string is accepted. |
.CLIENT_ID | String |
The registered client ID for this Red Hat Quay instance. |
.CLIENT_SECRET | String |
The registered client secret for this Red Hat Quay instance. |
.DEBUGLOG | Boolean | Whether to enable debugging. |
.LOGIN_BINDING_FIELD | String | Used when the internal authorization is set to LDAP. Red Hat Quay reads this parameter and tries to search through the LDAP tree for the user with this username. If it exists, it automatically creates a link to that LDAP account. |
.LOGIN_SCOPES | Object | Adds additional scopes that Red Hat Quay uses to communicate with the OIDC provider. |
.OIDC_ENDPOINT_CUSTOM_PARAMS | String |
Support for custom query parameters on OIDC endpoints. The following endpoints are supported: `authorization_endpoint`, `token_endpoint`, and `user_endpoint`. |
.OIDC_ISSUER | String |
Allows the user to define the issuer to verify. For example, JWT tokens contain a parameter known as `iss` which defines which service issued the token. |
.OIDC_SERVER | String |
The address of the OIDC server that is being used for authentication. |
.PREFERRED_USERNAME_CLAIM_NAME | String | Sets the preferred username to a parameter from the token. |
.SERVICE_ICON | String | Changes the icon on the login screen. |
.SERVICE_NAME | String |
The name of the service that is being authenticated. |
.VERIFIED_EMAIL_CLAIM_NAME | String | The name of the claim that is used to verify the email address of the user. |
.PREFERRED_GROUP_CLAIM_NAME | String | The key name within the OIDC token payload that holds information about the user’s group memberships. |
.OIDC_DISABLE_USER_ENDPOINT | Boolean |
Whether to allow or disable the `/userinfo` endpoint of the OIDC provider. |
OIDC example YAML
AUTHENTICATION_TYPE: OIDC
# ...
<oidc_provider>_LOGIN_CONFIG:
  CLIENT_ID: <client_id>
  CLIENT_SECRET: <client_secret>
  DEBUGLOG: true
  LOGIN_BINDING_FIELD: <login_binding_field>
  LOGIN_SCOPES:
    - openid
    - email
    - profile
  OIDC_ENDPOINT_CUSTOM_PARAMS:
    authorization_endpoint:
      some: "param"
    token_endpoint:
      some: "param"
    user_endpoint:
      some: "param"
  OIDC_ISSUER: <oidc_issuer_url>
  OIDC_SERVER: <oidc_server_address>
  PREFERRED_USERNAME_CLAIM_NAME: <preferred_username_claim>
  SERVICE_ICON: <service_icon_url>
  SERVICE_NAME: <service_name>
  VERIFIED_EMAIL_CLAIM_NAME: <verified_email_claim>
  PREFERRED_GROUP_CLAIM_NAME: <preferred_group_claim>
  OIDC_DISABLE_USER_ENDPOINT: true
# ...
9.3.6. Recaptcha configuration fields
You can enable Recaptcha support in your Red Hat Quay instance to help protect user login and account recovery forms from abuse by automated systems.
Field | Type | Description |
---|---|---|
FEATURE_RECAPTCHA | Boolean |
Whether Recaptcha is necessary for user login and recovery |
RECAPTCHA_SECRET_KEY | String | If recaptcha is enabled, the secret key for the Recaptcha service |
RECAPTCHA_SITE_KEY | String | If recaptcha is enabled, the site key for the Recaptcha service |
Recaptcha example YAML
# ...
FEATURE_RECAPTCHA: true
RECAPTCHA_SITE_KEY: "<site_key>"
RECAPTCHA_SECRET_KEY: "<secret_key>"
# ...
9.3.7. JWT configuration fields
Red Hat Quay can be configured to support external authentication using JSON Web Tokens (JWT). This integration allows third-party identity providers or token issuers to authenticate and authorize users by calling specific endpoints that handle token verification, user lookup, and permission queries.
Field | Type | Description |
---|---|---|
JWT_AUTH_ISSUER | String |
The issuer of the JWT tokens |
JWT_GETUSER_ENDPOINT | String |
The endpoint for JWT users |
JWT_QUERY_ENDPOINT | String |
The endpoint for JWT queries |
JWT_VERIFY_ENDPOINT | String |
The endpoint for JWT verification |
JWT example YAML
# ...
JWT_AUTH_ISSUER: "http://192.168.99.101:6060"
JWT_GETUSER_ENDPOINT: "http://192.168.99.101:6060/getuser"
JWT_QUERY_ENDPOINT: "http://192.168.99.101:6060/query"
JWT_VERIFY_ENDPOINT: "http://192.168.99.101:6060/verify"
# ...
9.3.8. App tokens configuration fields
App-specific tokens allow users to authenticate with Red Hat Quay using token-based credentials. These tokens are useful for CLI tools such as the Docker CLI.
Field | Type | Description |
---|---|---|
FEATURE_APP_SPECIFIC_TOKENS | Boolean |
If enabled, users can create tokens for use by the Docker CLI |
APP_SPECIFIC_TOKEN_EXPIRATION | String |
The expiration for external app tokens. |
EXPIRED_APP_SPECIFIC_TOKEN_GC | String |
Duration of time expired external app tokens will remain before being garbage collected |
App tokens example YAML
# ...
FEATURE_APP_SPECIFIC_TOKENS: true
APP_SPECIFIC_TOKEN_EXPIRATION: "30d"
EXPIRED_APP_SPECIFIC_TOKEN_GC: "1d"
# ...
9.4. Security and Permissions
This section describes configuration fields that govern core security behaviors and access policies within Red Hat Quay.
9.4.1. Namespace and repository management configuration fields
The following configuration fields govern how Red Hat Quay manages namespaces and repositories, including behavior during automated image pushes, visibility defaults, and rate limiting exceptions.
Field | Type | Description |
---|---|---|
DEFAULT_NAMESPACE_MAXIMUM_BUILD_COUNT | Number |
The default maximum number of builds that can be queued in a namespace. |
CREATE_PRIVATE_REPO_ON_PUSH | Boolean |
Whether new repositories created by push are set to private visibility |
CREATE_NAMESPACE_ON_PUSH | Boolean |
Whether a new push to a nonexistent organization creates it |
PUBLIC_NAMESPACES | Array of String | If a namespace is defined in the public namespace list, then it will appear on all users' repository list pages, regardless of whether the user is a member of the namespace. Typically, this is used by an enterprise customer in configuring a set of "well-known" namespaces. |
NON_RATE_LIMITED_NAMESPACES | Array of String |
If rate limiting has been enabled using FEATURE_RATE_LIMITS, namespaces in this list are excluded from rate limiting |
DISABLE_PUSHES | Boolean |
Disables pushes of new content to the registry while retaining all other functionality. Differs from read-only mode because the database is not set as read-only |
Namespace and repository management example YAML
# ...
DEFAULT_NAMESPACE_MAXIMUM_BUILD_COUNT: 10
CREATE_PRIVATE_REPO_ON_PUSH: true
CREATE_NAMESPACE_ON_PUSH: false
PUBLIC_NAMESPACES:
- redhat
- opensource
- infra-tools
NON_RATE_LIMITED_NAMESPACES:
- ci-pipeline
- trusted-partners
DISABLE_PUSHES: false
# ...
9.4.2. Nested repositories configuration fields
Support for nested repository path names has been added by the FEATURE_EXTENDED_REPOSITORY_NAMES property. This optional configuration is added to the config.yaml by default. Enablement allows the use of /
Field | Type | Description |
---|---|---|
FEATURE_EXTENDED_REPOSITORY_NAMES | Boolean |
Enable support for nested repositories |
Nested repositories example YAML
# ...
FEATURE_EXTENDED_REPOSITORY_NAMES: true
# ...
9.5. Additional security configuration fields
The following configuration fields provide additional security controls for your Red Hat Quay deployment. These options allow administrators to enforce authentication practices, control anonymous access to content, require team invitations, and enable FIPS-compliant cryptographic functions for environments with enhanced security requirements.
Feature | Type | Description |
---|---|---|
FEATURE_REQUIRE_TEAM_INVITE | Boolean |
Whether to require invitations when adding a user to a team |
FEATURE_REQUIRE_ENCRYPTED_BASIC_AUTH | Boolean |
Whether non-encrypted passwords (as opposed to encrypted tokens) can be used for basic auth |
FEATURE_ANONYMOUS_ACCESS | Boolean |
Whether to allow anonymous users to browse and pull public repositories |
FEATURE_FIPS | Boolean |
If set to true, Red Hat Quay will run using FIPS-compliant hash functions |
Additional security example YAML
# ...
FEATURE_REQUIRE_TEAM_INVITE: true
FEATURE_REQUIRE_ENCRYPTED_BASIC_AUTH: false
FEATURE_ANONYMOUS_ACCESS: true
FEATURE_FIPS: false
# ...
9.6. Rate limiting and performance configuration fields
The following fields control rate limiting and performance-related behavior for your Red Hat Quay deployment.
Field | Type | Description |
---|---|---|
FEATURE_RATE_LIMITS | Boolean |
Whether to enable rate limits on API and registry endpoints. Setting FEATURE_RATE_LIMITS to true causes nginx to limit certain API calls to 30 per second. If that feature is not set, API calls are limited to 300 per second (effectively unlimited) |
PROMETHEUS_NAMESPACE | String |
The prefix applied to all exposed Prometheus metrics |
Rate limiting and performance example YAML
# ...
FEATURE_RATE_LIMITS: false
PROMETHEUS_NAMESPACE: quay
# ...
9.7. Search configuration fields
The following configuration fields define how search results are paginated in the Red Hat Quay user interface.
Field | Type | Description |
---|---|---|
SEARCH_MAX_RESULT_PAGE_COUNT | Number |
Maximum number of pages the user can paginate in search before they are limited |
SEARCH_RESULTS_PER_PAGE | Number |
Number of results returned per page by search page |
Search example YAML
# ...
SEARCH_MAX_RESULT_PAGE_COUNT: 10
SEARCH_RESULTS_PER_PAGE: 10
# ...
9.8. Storage and Data Management
This section describes the configuration fields that govern how Red Hat Quay stores, manages, and audits data.
9.8.1. Image storage features
Red Hat Quay supports image storage features that enhance scalability, resilience, and flexibility in managing container image data. These features allow Red Hat Quay to mirror repositories, proxy storage access through NGINX, and replicate data across multiple storage engines.
Field | Type | Description |
---|---|---|
FEATURE_REPO_MIRROR | Boolean |
If set to true, enables repository mirroring. |
FEATURE_PROXY_STORAGE | Boolean |
Whether to proxy all direct download URLs in storage through NGINX. |
FEATURE_STORAGE_REPLICATION | Boolean |
Whether to automatically replicate between storage engines. |
Image storage example YAML
# ...
FEATURE_REPO_MIRROR: true
FEATURE_PROXY_STORAGE: false
FEATURE_STORAGE_REPLICATION: true
# ...
9.8.2. Action log storage configuration fields
Red Hat Quay maintains a detailed action log to track user and system activity, including repository events, authentication actions, and image operations. By default, this log data is stored in the database, but administrators can configure their deployment to export or forward logs to external systems like Elasticsearch or Splunk for advanced analysis, auditing, or compliance.
Field | Type | Description |
---|---|---|
FEATURE_LOG_EXPORT | Boolean |
Whether to allow exporting of action logs. |
LOGS_MODEL | String |
Specifies the preferred method for handling log data. Valid values are database, transition_reads_both_writes_es, elasticsearch, and splunk. |
LOGS_MODEL_CONFIG | Object | Logs model config for action logs. |
ALLOW_WITHOUT_STRICT_LOGGING | Boolean |
When set to true, if the external log system, such as Splunk or Elasticsearch, is intermittently unavailable, users can still push images normally; events are logged to stdout instead |
Action log storage example YAML
# ...
FEATURE_LOG_EXPORT: true
LOGS_MODEL: elasticsearch
LOGS_MODEL_CONFIG:
elasticsearch:
endpoint: http://elasticsearch.example.com:9200
index_prefix: quay-logs
username: elastic
password: changeme
ALLOW_WITHOUT_STRICT_LOGGING: true
# ...
9.8.2.1. Action log rotation and archiving configuration
This section describes configuration fields related to action log rotation and archiving in Red Hat Quay. When enabled, older logs can be automatically rotated and archived to designated storage locations, helping to manage log retention and storage utilization efficiently.
Field | Type | Description |
---|---|---|
FEATURE_ACTION_LOG_ROTATION | Boolean |
Enabling log rotation and archival will move all logs older than 30 days to storage. |
ACTION_LOG_ARCHIVE_LOCATION | String |
If action log archiving is enabled, the storage engine in which to place the archived data. |
ACTION_LOG_ARCHIVE_PATH | String |
If action log archiving is enabled, the path in storage in which to place the archived data. |
ACTION_LOG_ROTATION_THRESHOLD | String |
The time interval after which to rotate logs. |
Action log rotation and archiving example YAML
# ...
FEATURE_ACTION_LOG_ROTATION: true
ACTION_LOG_ARCHIVE_LOCATION: s3_us_east
ACTION_LOG_ARCHIVE_PATH: archives/actionlogs
ACTION_LOG_ROTATION_THRESHOLD: 30d
# ...
9.8.2.2. Action log audit configuration
This section covers the configuration fields for audit logging within Red Hat Quay. When enabled, audit logging tracks detailed user activity such as UI logins, logouts, and Docker logins for regular users, robot accounts, and token-based accounts.
Field | Type | Description |
---|---|---|
ACTION_LOG_AUDIT_LOGINS | Boolean |
When set to true, tracks advanced events, such as logging into and out of the UI and logging in using Docker, for regular users, robot accounts, and application-specific token accounts |
Audit logs configuration example YAML
# ...
ACTION_LOG_AUDIT_LOGINS: true
# ...
9.8.3. Elasticsearch configuration fields
Use the following configuration fields to integrate Red Hat Quay with an external Elasticsearch service. This enables storing and querying structured data such as action logs, repository events, and other operational records outside of the internal database.
Field | Type | Description |
---|---|---|
LOGS_MODEL_CONFIG.elasticsearch_config.access_key | String |
Elasticsearch user (or IAM key for AWS ES). |
.elasticsearch_config.host | String |
Elasticsearch cluster endpoint. |
.elasticsearch_config.index_prefix | String |
Prefix for Elasticsearch indexes. |
.elasticsearch_config.index_settings | Object | Index settings for Elasticsearch. |
LOGS_MODEL_CONFIG.elasticsearch_config.use_ssl | Boolean |
Whether to use SSL for Elasticsearch. |
.elasticsearch_config.secret_key | String |
Elasticsearch password (or IAM secret for AWS ES). |
.elasticsearch_config.aws_region | String |
AWS region. |
.elasticsearch_config.port | Number |
Port of the Elasticsearch cluster. |
.kinesis_stream_config.aws_secret_key | String |
AWS secret key. |
.kinesis_stream_config.stream_name | String |
AWS Kinesis stream to send action logs to. |
.kinesis_stream_config.aws_access_key | String |
AWS access key. |
.kinesis_stream_config.retries | Number |
Max number of retry attempts for a single request. |
.kinesis_stream_config.read_timeout | Number |
Read timeout in seconds. |
.kinesis_stream_config.max_pool_connections | Number |
Max number of connections in the pool. |
.kinesis_stream_config.aws_region | String |
AWS region. |
.kinesis_stream_config.connect_timeout | Number |
Connection timeout in seconds. |
.producer | String |
Logs producer type. |
.kafka_config.topic | String |
Kafka topic used to publish log entries. |
.kafka_config.bootstrap_servers | Array | List of Kafka brokers used to bootstrap the client. |
.kafka_config.max_block_seconds | Number |
Maximum number of seconds to block during a send() call, for example, when the buffer is full or metadata is unavailable |
Elasticsearch example YAML
# ...
FEATURE_LOG_EXPORT: true
LOGS_MODEL: elasticsearch
LOGS_MODEL_CONFIG:
producer: elasticsearch
elasticsearch_config:
access_key: elastic_user
secret_key: elastic_password
host: es.example.com
port: 9200
use_ssl: true
aws_region: us-east-1
index_prefix: logentry_
index_settings:
number_of_shards: 3
number_of_replicas: 1
ALLOW_WITHOUT_STRICT_LOGGING: true
# ...
9.8.3.1. Splunk configuration fields
Use the following fields to configure Red Hat Quay to export action logs to a Splunk endpoint. This configuration allows audit and event logs to be sent to an external Splunk server for centralized analysis, search, and long-term storage.
Field | Type | Description |
---|---|---|
producer | String |
Must be set to splunk |
splunk_config | Object | Logs model configuration for Splunk action logs or Splunk cluster configuration. |
.host | String | The Splunk cluster endpoint. |
.port | Integer | The port number for the Splunk management cluster endpoint. |
.bearer_token | String | The bearer token used for authentication with Splunk. |
.verify_ssl | Boolean |
Enable (true) or disable (false) TLS/SSL verification for HTTPS connections |
.index_prefix | String | The index prefix used by Splunk. |
.ssl_ca_path | String |
The relative container path to a single .pem file containing a certificate authority (CA) for TLS/SSL verification |
Splunk configuration example YAML
# ...
LOGS_MODEL: splunk
LOGS_MODEL_CONFIG:
producer: splunk
splunk_config:
host: http://<user_name>.remote.csb
port: 8089
bearer_token: <bearer_token>
url_scheme: <http/https>
verify_ssl: False
index_prefix: <splunk_log_index_name>
ssl_ca_path: <location_to_ssl-ca-cert.pem>
# ...
9.8.3.1.1. Splunk HEC configuration fields
The following fields are available when configuring Splunk HTTP Event Collector (HEC) for Red Hat Quay.
Field | Type | Description |
---|---|---|
producer | String |
Must be set to splunk_hec |
splunk_hec_config | Object | Logs model configuration for Splunk HTTP Event Collector action logs. |
.host | String | Splunk cluster endpoint. |
.port | Integer | Splunk management cluster endpoint port. |
.hec_token | String | HEC token used for authenticating with Splunk. |
.url_scheme | String |
URL scheme to access the Splunk service. Use https if Splunk is served over TLS/SSL |
.verify_ssl | Boolean |
Enable (true) or disable (false) TLS/SSL verification for HTTPS connections |
.index | String | The Splunk index to use for log storage. |
.splunk_host | String | The hostname to assign to the logged event. |
.splunk_sourcetype | String |
The Splunk sourcetype to assign to logged events |
Splunk HEC example YAML
# ...
LOGS_MODEL: splunk
LOGS_MODEL_CONFIG:
producer: splunk_hec
splunk_hec_config:
host: prd-p-aaaaaq.splunkcloud.com
port: 8088
hec_token: 12345678-1234-1234-1234-1234567890ab
url_scheme: https
verify_ssl: False
index: quay
splunk_host: quay-dev
splunk_sourcetype: quay_logs
# ...
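Splunk's HTTP Event Collector accepts events as JSON envelopes POSTed to the /services/collector/event path, authenticated with a Splunk <token> header; the host, index, and sourcetype keys of the envelope map onto the splunk_host, index, and splunk_sourcetype fields above. The sketch below builds that request from a splunk_hec_config-shaped dictionary (the function name and the sending step are illustrative; the wire format follows Splunk's HEC documentation):

```python
import json

def build_hec_request(cfg: dict, event: dict) -> tuple[str, dict, bytes]:
    """Build the URL, headers, and body for one Splunk HEC event.

    `cfg` mirrors the splunk_hec_config keys shown in the example YAML.
    """
    url = "{url_scheme}://{host}:{port}/services/collector/event".format(**cfg)
    headers = {
        # HEC uses its own "Splunk <token>" authorization scheme.
        "Authorization": "Splunk {hec_token}".format(**cfg),
        "Content-Type": "application/json",
    }
    envelope = {
        "event": event,                      # the action log entry itself
        "index": cfg["index"],               # target Splunk index
        "host": cfg["splunk_host"],          # hostname assigned to the event
        "sourcetype": cfg["splunk_sourcetype"],
    }
    return url, headers, json.dumps(envelope).encode("utf-8")
```

The returned pieces could then be sent with any HTTP client; TLS verification on that connection corresponds to the verify_ssl field.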
9.9. Builds and Automation
This section outlines the configuration options available for managing automated builds within Red Hat Quay. These settings control how Dockerfile builds are triggered, processed, and stored, and how build logs are managed and accessed.
You can use these fields to:
- Enable or disable automated builds from source repositories.
- Configure the behavior and resource management of the build manager.
- Control access to and retention of build logs for auditing or debugging purposes.
These options help you streamline your CI/CD pipeline, enforce build policies, and retain visibility into your build history across the registry.
9.9.1. Dockerfile build triggers fields
This section describes the configuration fields used to enable and manage automated builds in Red Hat Quay from Dockerfiles and source code repositories. These fields allow you to define build behavior, enable or disable support for GitHub, GitLab, and Bitbucket triggers, and provide OAuth credentials and endpoints for each SCM provider.
Field | Type | Description |
---|---|---|
FEATURE_BUILD_SUPPORT | Boolean |
Whether to support Dockerfile build. |
SUCCESSIVE_TRIGGER_FAILURE_DISABLE_THRESHOLD | Number |
If not set to None, the number of successive build failures that can occur before a build trigger is automatically disabled |
SUCCESSIVE_TRIGGER_INTERNAL_ERROR_DISABLE_THRESHOLD | Number |
If not set to None, the number of successive internal errors that can occur before a build trigger is automatically disabled |
Dockerfile build support example YAML
# ...
FEATURE_BUILD_SUPPORT: true
SUCCESSIVE_TRIGGER_FAILURE_DISABLE_THRESHOLD: 100
SUCCESSIVE_TRIGGER_INTERNAL_ERROR_DISABLE_THRESHOLD: 5
# ...
Field | Type | Description |
---|---|---|
FEATURE_GITHUB_BUILD | Boolean |
Whether to support GitHub build triggers. |
GITHUB_TRIGGER_CONFIG | Object | Configuration for using GitHub Enterprise for build triggers. |
.GITHUB_ENDPOINT | String |
The endpoint for GitHub Enterprise. |
.API_ENDPOINT | String |
The endpoint of the GitHub Enterprise API to use. Must be overridden for github.com |
.CLIENT_ID | String |
The registered client ID for this Red Hat Quay instance; this cannot be shared with GITHUB_LOGIN_CONFIG |
.CLIENT_SECRET | String | The registered client secret for this Red Hat Quay instance. |
Github build triggers example YAML
# ...
FEATURE_GITHUB_BUILD: true
GITHUB_TRIGGER_CONFIG:
GITHUB_ENDPOINT: https://github.com/
API_ENDPOINT: https://api.github.com/
CLIENT_ID: your-client-id
CLIENT_SECRET: your-client-secret
# ...
Field | Type | Description |
---|---|---|
FEATURE_BITBUCKET_BUILD | Boolean |
Whether to support Bitbucket build triggers. |
BITBUCKET_TRIGGER_CONFIG | Object | Configuration for using BitBucket for build triggers. |
.CONSUMER_KEY | String | The registered consumer key (client ID) for this Red Hat Quay instance. |
.CONSUMER_SECRET | String | The registered consumer secret (client secret) for this Red Hat Quay instance. |
Bitbucket build triggers example YAML
# ...
FEATURE_BITBUCKET_BUILD: true
BITBUCKET_TRIGGER_CONFIG:
CONSUMER_KEY: <your_consumer_key>
CONSUMER_SECRET: <your_consumer_secret>
# ...
Field | Type | Description |
---|---|---|
FEATURE_GITLAB_BUILD | Boolean |
Whether to support GitLab build triggers. |
GITLAB_TRIGGER_CONFIG | Object | Configuration for using Gitlab for build triggers. |
.GITLAB_ENDPOINT | String | The endpoint at which Gitlab Enterprise is running. |
.CLIENT_ID | String | The registered client ID for this Red Hat Quay instance. |
.CLIENT_SECRET | String | The registered client secret for this Red Hat Quay instance. |
GitLab build triggers example YAML
# ...
FEATURE_GITLAB_BUILD: true
GITLAB_TRIGGER_CONFIG:
GITLAB_ENDPOINT: https://gitlab.example.com/
CLIENT_ID: <your_gitlab_client_id>
CLIENT_SECRET: <your_gitlab_client_secret>
# ...
9.9.2. Build manager configuration fields
The following configuration fields control how the build manager component of Red Hat Quay orchestrates and manages container image builds. This includes settings for Redis coordination, executor backends such as Kubernetes or EC2, builder image configuration, and advanced scheduling and retry policies.
These fields must be configured to align with your infrastructure environment and workload requirements.
Field | Type | Description |
---|---|---|
ALLOWED_WORKER_COUNT | String |
Defines how many Build Workers are instantiated per Red Hat Quay pod. Typically set to 1 |
ORCHESTRATOR_PREFIX | String | Defines a unique prefix to be added to all Redis keys. This is useful to isolate Orchestrator values from other Redis keys. |
REDIS_HOST | Object | The hostname for your Redis service. |
REDIS_PASSWORD | String | The password to authenticate into your Redis service. |
REDIS_SSL | Boolean | Defines whether or not your Redis connection uses SSL/TLS. |
REDIS_SKIP_KEYSPACE_EVENT_SETUP | Boolean |
By default, Red Hat Quay does not set up the keyspace events required for key events at runtime. To do so, set REDIS_SKIP_KEYSPACE_EVENT_SETUP to false |
EXECUTOR | String |
Starts a definition of an Executor of this type. Valid values are kubernetes and ec2 |
BUILDER_NAMESPACE | String | Kubernetes namespace where Red Hat Quay Builds will take place. |
K8S_API_SERVER | Object | Hostname for API Server of the OpenShift Container Platform cluster where Builds will take place. |
K8S_API_TLS_CA | Object |
The filepath in the Quay container of the Build cluster's CA certificate for the Build Manager to use when making API calls |
KUBERNETES_DISTRIBUTION | String |
Indicates which type of Kubernetes is being used. Valid values are openshift and k8s |
CONTAINER_* | Object |
Define the resource requests and limits for each build pod |
NODE_SELECTOR_* | Object |
Defines the node selector label name-value pair where build pods should be scheduled |
CONTAINER_RUNTIME | Object |
Specifies whether the Builder should run docker or podman |
SERVICE_ACCOUNT_NAME/SERVICE_ACCOUNT_TOKEN | Object |
Defines the Service Account name or token that will be used by build pods |
QUAY_USERNAME/QUAY_PASSWORD | Object |
Defines the registry credentials needed to pull the Red Hat Quay build worker image that is specified in the WORKER_IMAGE field |
WORKER_IMAGE | Object | Image reference for the Red Hat Quay Builder image, for example, registry.redhat.io/quay/quay-builder |
WORKER_TAG | Object | The desired tag for the Builder image. The latest version is 3.14. |
BUILDER_VM_CONTAINER_IMAGE | Object |
The full reference to the container image holding the internal VM needed to run each Red Hat Quay Build (for example, registry.redhat.io/quay/quay-builder-qemu-rhcos) |
SETUP_TIME | String |
Specifies the number of seconds at which a Build times out if it has not yet registered itself with the Build Manager. Defaults at 500 seconds |
MINIMUM_RETRY_THRESHOLD | String |
This setting is used with multiple Executors. It indicates how many retries are attempted to start a Build before a different Executor is chosen. Setting to 0 means there are no restrictions on how many tries the build job needs |
SSH_AUTHORIZED_KEYS | Object |
List of SSH keys to bootstrap in the ignition config, allowing developers to SSH into the EC2 instance or QEMU virtual machine (VM) for debugging |
Build manager example YAML
# ...
ALLOWED_WORKER_COUNT: "1"
ORCHESTRATOR_PREFIX: "quaybuild:"
REDIS_HOST: redis.example.com
REDIS_PASSWORD: examplepassword
REDIS_SSL: true
REDIS_SKIP_KEYSPACE_EVENT_SETUP: false
EXECUTOR: kubernetes
BUILDER_NAMESPACE: quay-builder
K8S_API_SERVER: https://api.openshift.example.com:6443
K8S_API_TLS_CA: /etc/ssl/certs/ca.crt
KUBERNETES_DISTRIBUTION: openshift
CONTAINER_RUNTIME: podman
CONTAINER_MEMORY_LIMITS: 2Gi
NODE_SELECTOR_ROLE: quay-build-node
SERVICE_ACCOUNT_NAME: quay-builder-sa
QUAY_USERNAME: quayuser
QUAY_PASSWORD: quaypassword
WORKER_IMAGE: quay.io/quay/quay-builder
WORKER_TAG: latest
BUILDER_VM_CONTAINER_IMAGE: quay.io/quay/vm-builder:latest
SETUP_TIME: "500"
MINIMUM_RETRY_THRESHOLD: "1"
SSH_AUTHORIZED_KEYS:
- ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAsomekey user@example.com
- ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAnotherkey user2@example.com
# ...
9.9.3. Build logs configuration fields
This section describes the available configuration fields for managing build logs in Red Hat Quay. These settings determine where build logs are archived, who can access them, and how they are stored.
Field | Type | Description |
---|---|---|
FEATURE_READER_BUILD_LOGS | Boolean |
If set to true, build logs can be read by those with read access to the repository, rather than only write access or admin access |
LOG_ARCHIVE_LOCATION | String |
The storage location, defined in DISTRIBUTED_STORAGE_CONFIG, in which to place the archived build logs |
LOG_ARCHIVE_PATH | String |
The path under the configured storage engine in which to place the archived build logs in .JSON format |
Build logs example YAML
# ...
FEATURE_READER_BUILD_LOGS: true
LOG_ARCHIVE_LOCATION: s3_us_east
LOG_ARCHIVE_PATH: archives/buildlogs
# ...
9.10. Tag and image management
This section describes the configuration fields that control how tags and images are managed within Red Hat Quay. These settings help automate image cleanup, manage repository mirrors, and enhance performance through caching.
You can use these fields to:
- Define expiration policies for untagged or outdated images.
- Enable and schedule mirroring of external repositories into your registry.
- Leverage model caching to optimize performance for tag and repository operations.
These options help maintain an up-to-date image registry environment.
9.10.1. Tag expiration configuration fields
The following configuration options are available to automate tag expiration and garbage collection. These features help manage storage usage by enabling cleanup of unused or expired tags based on defined policies.
Field | Type | Description |
---|---|---|
FEATURE_GARBAGE_COLLECTION | Boolean |
Whether garbage collection of repositories is enabled. |
TAG_EXPIRATION_OPTIONS | Array of string |
If enabled, the options that users can select for expiration of tags in their namespace. |
DEFAULT_TAG_EXPIRATION | String |
The default, configurable tag expiration time for time machine. |
FEATURE_CHANGE_TAG_EXPIRATION | Boolean |
Whether users and organizations are allowed to change the tag expiration for tags in their namespace. |
FEATURE_AUTO_PRUNE | Boolean |
When set to true, enables functionality related to the automatic pruning of tags |
NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES | Integer |
The interval, in minutes, that defines the frequency to re-run notifications for expiring images. |
DEFAULT_NAMESPACE_AUTOPRUNE_POLICY | Object | The default organization-wide auto-prune policy. |
.method: number_of_tags | Object | The option specifying the number of tags to keep. |
.value: <integer> | Integer |
When used with method: number_of_tags, denotes the number of tags to keep.
For example, to keep two tags, specify 2 |
.creation_date | Object | The option specifying the duration of which to keep tags. |
.value: <integer> | Integer |
When used with creation_date, denotes how long to keep tags.
Can be set to seconds (s), days (d), weeks (w), months (m), or years (y). Must include a valid time unit |
AUTO_PRUNING_DEFAULT_POLICY_POLL_PERIOD | Integer | The period in which the auto-pruner worker runs at the registry level. By default, it is set to run one time per day (one time per 24 hours). Value must be in seconds. |
Tag expiration example YAML
# ...
FEATURE_GARBAGE_COLLECTION: true
TAG_EXPIRATION_OPTIONS:
- 1w
- 2w
- 1m
- 90d
DEFAULT_TAG_EXPIRATION: 2w
FEATURE_CHANGE_TAG_EXPIRATION: true
FEATURE_AUTO_PRUNE: true
NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES: 300
DEFAULT_NAMESPACE_AUTOPRUNE_POLICY:
method: number_of_tags
value: 10
AUTO_PRUNING_DEFAULT_POLICY_POLL_PERIOD: 86400
# ...
In this example, the registry-wide auto-prune policy keeps ten tags per namespace.
Registry auto-prune policy by creation date example YAML
# ...
DEFAULT_NAMESPACE_AUTOPRUNE_POLICY:
method: creation_date
value: 1y
# ...
In this example, tags are pruned one year after their creation date.
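The duration strings used throughout these fields (for example 2w, 90d, 1y) follow a <number><unit> pattern. As a rough illustration only — assuming the s, d, w, m, and y units listed above, with months and years approximated as 30 and 365 days, a simplification rather than Red Hat Quay's exact arithmetic — they can be converted to seconds like this:

```python
# Seconds per unit; months and years are approximated as 30 and 365 days,
# an assumption for illustration, not Red Hat Quay's internal behavior.
UNIT_SECONDS = {"s": 1, "d": 86400, "w": 7 * 86400, "m": 30 * 86400, "y": 365 * 86400}

def duration_to_seconds(value: str) -> int:
    """Convert a tag-expiration style duration such as '2w' or '90d' to seconds."""
    number, unit = int(value[:-1]), value[-1]
    return number * UNIT_SECONDS[unit]
```

For example, the 2w default tag expiration works out to 1,209,600 seconds under these assumptions.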
9.10.2. Mirroring configuration fields
Mirroring in Red Hat Quay enables automatic synchronization of repositories with upstream sources. This feature is useful for maintaining local mirrors of remote container images, ensuring availability in disconnected environments or improving performance through caching.
Field | Type | Description |
---|---|---|
FEATURE_REPO_MIRROR | Boolean |
Enable or disable repository mirroring |
REPO_MIRROR_INTERVAL | Number |
The number of seconds between checking for repository mirror candidates |
REPO_MIRROR_SERVER_HOSTNAME | String |
Replaces the SERVER_HOSTNAME as the destination for mirroring |
REPO_MIRROR_TLS_VERIFY | Boolean |
Require HTTPS and verify certificates of Quay registry during mirror. |
REPO_MIRROR_ROLLBACK | Boolean |
When set to true, the repository rolls back after a failed mirror attempt. Default: false |
Mirroring configuration example YAML
# ...
FEATURE_REPO_MIRROR: true
REPO_MIRROR_INTERVAL: 30
REPO_MIRROR_SERVER_HOSTNAME: "openshift-quay-service"
REPO_MIRROR_TLS_VERIFY: true
REPO_MIRROR_ROLLBACK: false
# ...
9.10.3. ModelCache configuration fields
ModelCache is a caching mechanism used by Red Hat Quay to store accessed data and reduce database load. Quay supports multiple backends for caching, including the default Memcache, as well as Redis and Redis Cluster.
- Memcache (default): requires no additional configuration.
- Redis: can be configured as a single instance or with a read-only replica.
- Redis Cluster: provides high availability and sharding for larger deployments.
Field | Type | Description |
---|---|---|
DATA_MODEL_CACHE_CONFIG.engine | String |
The cache backend engine. Valid values are memcache (the default), redis, and rediscluster |
.redis_config.primary.host | String |
The hostname of the primary Redis instance when using the redis engine |
.redis_config.primary.port | Number | The port used by the primary Redis instance. |
.redis_config.primary.password | String |
The password for authenticating with the primary Redis instance. Only required if authentication is enabled on that instance |
.redis_config.primary.ssl | Boolean | Whether to use SSL/TLS for the primary Redis connection. |
.redis_config.startup_nodes | Array of Map |
For the rediscluster engine, the list of startup nodes, each defined as a map with host and port keys |
.redis_config.password | String |
Password used for authentication with the Redis cluster. Required if authentication is enabled on the cluster |
.redis_config.read_from_replicas | Boolean | Whether to allow read operations from Redis cluster replicas. |
.redis_config.skip_full_coverage_check | Boolean | If set to true, skips the Redis cluster full coverage check. |
.redis_config.ssl | Boolean | Whether to use SSL/TLS for Redis cluster communication. |
.replica.host | String | The hostname of the Redis replica instance. Optional. |
.replica.port | Number | The port used by the Redis replica instance. |
.replica.password | String |
The password for the Redis replica. Required if authentication is enabled on the replica instance |
.replica.ssl | Boolean | Whether to use SSL/TLS for the Redis replica connection. |
Single Redis with optional replica example YAML
# ...
DATA_MODEL_CACHE_CONFIG:
engine: redis
redis_config:
primary:
host: <redis-primary.example.com>
port: 6379
password: <redis_password>
ssl: true
replica:
host: <redis-replica.example.com>
port: 6379
password: <redis_password>
ssl: true
# ...
Clustered Redis example YAML
# ...
DATA_MODEL_CACHE_CONFIG:
engine: rediscluster
redis_config:
startup_nodes:
- host: <redis-node-1.example.com>
port: 6379
- host: <redis-node-2.example.com>
port: 6379
password: <cluster_password>
read_from_replicas: true
skip_full_coverage_check: true
ssl: true
# ...
9.11. Scanner and Metadata
This section describes configuration fields related to security scanning, metadata presentation, and artifact relationships within Red Hat Quay.
These settings enable enhanced visibility and security by allowing Red Hat Quay to:
- Integrate with a vulnerability scanner to assess container images for known CVEs.
- Render AI/ML model metadata through model cards stored in the registry.
- Expose relationships between container artifacts using the Referrers API, aligning with the OCI artifact specification.
Together, these features help improve software supply chain transparency, enforce security policies, and support emerging metadata-driven workflows.
9.11.1. Clair security scanner configuration fields
Red Hat Quay can leverage Clair security scanner to detect vulnerabilities in container images. These configuration fields control how the scanner is enabled, how frequently it indexes new content, which endpoints are used, and how notifications are handled.
Field | Type | Description |
---|---|---|
FEATURE_SECURITY_SCANNER | Boolean |
Enable or disable the security scanner |
FEATURE_SECURITY_NOTIFICATIONS | Boolean |
If the security scanner is enabled, turn on or turn off security notifications |
SECURITY_SCANNER_V4_REINDEX_THRESHOLD | String |
This parameter is used to determine the minimum time, in seconds, to wait before re-indexing a manifest that has either previously failed or has changed states since the last indexing. The data is calculated from the last_indexed datetime in the manifestsecuritystatus table |
SECURITY_SCANNER_V4_ENDPOINT | String |
The endpoint for the V4 security scanner |
SECURITY_SCANNER_V4_PSK | String | The generated pre-shared key (PSK) for Clair |
SECURITY_SCANNER_ENDPOINT | String |
The endpoint for the V2 security scanner |
SECURITY_SCANNER_INDEXING_INTERVAL | Integer |
This parameter is used to determine the number of seconds between indexing intervals in the security scanner. When indexing is triggered, Red Hat Quay will query its database for manifests that must be indexed by Clair. These include manifests that have not yet been indexed and manifests that previously failed indexing. |
FEATURE_SECURITY_SCANNER_NOTIFY_ON_NEW_INDEX | Boolean |
Whether to allow sending notifications about vulnerabilities for new pushes. |
SECURITY_SCANNER_V4_MANIFEST_CLEANUP | Boolean |
Whether the Red Hat Quay garbage collector removes manifests that are not referenced by other tags or manifests. |
NOTIFICATION_MIN_SEVERITY_ON_NEW_INDEX | String |
Set minimal security level for new notifications on detected vulnerabilities. Avoids creation of large number of notifications after first index. If not defined, defaults to High. Available options include Critical, High, Medium, Low, Negligible, and Unknown |
SECURITY_SCANNER_V4_INDEX_MAX_LAYER_SIZE | String |
The maximum layer size allowed for indexing. If the layer size exceeds the configured size, the Red Hat Quay UI returns a message stating that the manifest has layers too large to be indexed by the Quay Security Scanner |
Security scanner YAML configuration
# ...
FEATURE_SECURITY_NOTIFICATIONS: true
FEATURE_SECURITY_SCANNER: true
FEATURE_SECURITY_SCANNER_NOTIFY_ON_NEW_INDEX: true
# ...
SECURITY_SCANNER_INDEXING_INTERVAL: 30
SECURITY_SCANNER_V4_MANIFEST_CLEANUP: true
SECURITY_SCANNER_V4_ENDPOINT: http://quay-server.example.com:8081
SECURITY_SCANNER_V4_PSK: MTU5YzA4Y2ZkNzJoMQ==
SERVER_HOSTNAME: quay-server.example.com
SECURITY_SCANNER_V4_INDEX_MAX_LAYER_SIZE: 8G
# ...
The recommended maximum for SECURITY_SCANNER_V4_INDEX_MAX_LAYER_SIZE is 10G.
9.11.1.1. Re-indexing with Clair v4
When Clair v4 indexes a manifest, the result should be deterministic. For example, the same manifest should produce the same index report. This is true until the scanners are changed, as using different scanners will produce different information relating to a specific manifest to be returned in the report. Because of this, Clair v4 exposes a state representation of the indexing engine (/indexer/api/v1/index_state) to determine whether the scanner configuration has been changed.
Red Hat Quay leverages this index state by saving it to the index report when parsing to Quay’s database. If this state has changed since the manifest was previously scanned, Red Hat Quay will attempt to re-index that manifest during the periodic indexing process.
By default this parameter is set to 30 seconds. Users might decrease the time if they want the indexing process to run more frequently, for example, if they did not want to wait 30 seconds to see security scan results in the UI after pushing a new tag. Users can also change the parameter if they want more control over the request pattern to Clair and the pattern of database operations being performed on the Red Hat Quay database.
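The decision described above can be sketched as follows. This is an illustration of the documented behavior, not Red Hat Quay's actual implementation; the function name and the 300-second threshold are placeholders standing in for SECURITY_SCANNER_V4_REINDEX_THRESHOLD.

```python
from datetime import datetime, timedelta

# Stand-in for SECURITY_SCANNER_V4_REINDEX_THRESHOLD (an illustrative value).
REINDEX_THRESHOLD = timedelta(seconds=300)

def should_reindex(index_state: str, stored_state: str,
                   last_indexed: datetime, now: datetime,
                   failed_previously: bool) -> bool:
    """Decide whether a manifest is a re-index candidate.

    A manifest is retried when Clair's reported index state differs from
    the state stored with its report, or when an earlier indexing attempt
    failed, but only once the threshold has elapsed since the last attempt.
    """
    if index_state == stored_state and not failed_previously:
        return False  # report is still valid; nothing to do
    return (now - last_indexed) >= REINDEX_THRESHOLD
```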
9.11.2. Model card rendering configuration fields
Red Hat Quay supports the rendering of Model Cards—a form of metadata documentation commonly used in machine learning workflows—to improve the visibility and management of model-related content within OCI-compliant images.
Field | Type | Description |
---|---|---|
FEATURE_UI_MODELCARD | Boolean | Enables the Model Card image tab in the UI. Defaults to |
UI_MODELCARD_ARTIFACT_TYPE | String | Defines the model card artifact type. |
UI_MODELCARD_ANNOTATION | Object | This optional field defines the manifest-level annotation of the model card stored in an OCI image. |
UI_MODELCARD_LAYER_ANNOTATION | Object | This optional field defines the layer annotation of the model card stored in an OCI image. |
Model card example YAML
FEATURE_UI_MODELCARD: true
UI_MODELCARD_ARTIFACT_TYPE: application/x-mlmodel
UI_MODELCARD_ANNOTATION:
org.opencontainers.image.description: "Model card metadata"
UI_MODELCARD_LAYER_ANNOTATION:
org.opencontainers.image.title: README.md
- 1
- Enables the Model Card image tab in the UI.
- 2
- Defines the model card artifact type. In this example, the artifact type is application/x-mlmodel.
- 3
- Optional. If an image does not have an artifactType defined, this field is checked at the manifest level. If a matching annotation is found, the system then searches for a layer with an annotation matching UI_MODELCARD_LAYER_ANNOTATION.
- 4
- Optional. If an image has an artifactType defined and multiple layers, this field is used to locate the specific layer containing the model card.
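The lookup order described above can be pictured with the following sketch. This is an illustrative model of the behavior, not Quay's actual implementation; the function and its parameter names are hypothetical:

```python
# Illustrative sketch (not Quay's implementation) of the model card
# lookup described above, operating on a parsed OCI manifest dict.

def find_modelcard_layer(manifest, artifact_type,
                         manifest_annotation, layer_annotation):
    """Return the layer holding the model card, or None."""
    ann_key, ann_val = manifest_annotation      # UI_MODELCARD_ANNOTATION
    layer_key, layer_val = layer_annotation     # UI_MODELCARD_LAYER_ANNOTATION

    # Either the image declares the configured artifactType, or, failing
    # that, the manifest-level annotation must match.
    has_artifact_type = manifest.get("artifactType") == artifact_type
    manifest_annotated = (
        manifest.get("annotations", {}).get(ann_key) == ann_val
    )
    if not (has_artifact_type or manifest_annotated):
        return None

    # In both cases the layer annotation locates the model card layer.
    for layer in manifest.get("layers", []):
        if layer.get("annotations", {}).get(layer_key) == layer_val:
            return layer
    return None
```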
9.11.3. Open Container Initiative referrers API configuration field
The Open Container Initiative (OCI) referrers API aids in the retrieval and management of referrers, which helps improve container image management.
Field | Type | Description |
---|---|---|
FEATURE_REFERRERS_API | Boolean | Enables OCI 1.1’s referrers API. |
OCI referrers enablement example YAML
# ...
FEATURE_REFERRERS_API: true
# ...
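With the feature enabled, clients can query referrers through the endpoint defined by the OCI Distribution Specification v1.1, `GET /v2/<name>/referrers/<digest>`. The helper below is hypothetical (not part of Quay) and only illustrates the endpoint shape:

```python
# Hypothetical helper that builds the OCI 1.1 referrers API URL.
# Endpoint shape per the OCI Distribution Specification, not Quay-specific.

def referrers_url(registry, repository, digest, artifact_type=None):
    url = f"https://{registry}/v2/{repository}/referrers/{digest}"
    if artifact_type is not None:
        # Registries may support filtering the response by artifactType.
        url += f"?artifactType={artifact_type}"
    return url
```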
9.12. Quota management and proxy cache features
This section outlines configuration fields related to enforcing storage limits and improving image availability through proxy caching.
These features help registry administrators:
- Control how much storage organizations and users consume with configurable quotas.
- Improve access to upstream images by caching remote content locally via proxy cache.
- Monitor and manage resource consumption and availability across distributed environments.
Collectively, these capabilities ensure better performance, governance, and resiliency in managing container image workflows.
9.12.1. Quota management configuration fields
The following configuration fields enable and customize quota management functionality in Red Hat Quay. Quota management helps administrators enforce storage usage policies at the organization level by allowing them to set usage limits, calculate blob sizes, and control tag deletion behavior.
Field | Type | Description |
---|---|---|
FEATURE_QUOTA_MANAGEMENT | Boolean | Enables configuration, caching, and validation for the quota management feature.
Default: False |
DEFAULT_SYSTEM_REJECT_QUOTA_BYTES | String | Enables system default quota reject byte allowance for all organizations. By default, no limit is set. |
QUOTA_BACKFILL | Boolean | Enables the quota backfill worker to calculate the size of pre-existing blobs.
Default: |
QUOTA_TOTAL_DELAY_SECONDS | String | The time delay for starting the quota backfill. Rolling deployments can cause incorrect totals. This field must be set to a time longer than it takes for the rolling deployment to complete.
Default: |
PERMANENTLY_DELETE_TAGS | Boolean | Enables functionality related to the removal of tags from the time machine window.
Default: |
RESET_CHILD_MANIFEST_EXPIRATION | Boolean |
Resets the expirations of temporary tags targeting the child manifests. With this feature set to
Default: |
Quota management example YAML
# ...
FEATURE_QUOTA_MANAGEMENT: true
DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: "100gb"
QUOTA_BACKFILL: true
QUOTA_TOTAL_DELAY_SECONDS: "3600"
PERMANENTLY_DELETE_TAGS: true
RESET_CHILD_MANIFEST_EXPIRATION: true
# ...
9.12.2. Proxy cache configuration fields
The proxy cache configuration in Red Hat Quay enables Red Hat Quay to act as a pull-through cache for upstream container registries. When FEATURE_PROXY_CACHE is enabled, Red Hat Quay can cache images that are pulled from external registries, reducing bandwidth consumption and improving image retrieval speed on subsequent requests.
Field | Type | Description |
---|---|---|
FEATURE_PROXY_CACHE | Boolean | Enables Red Hat Quay to act as a pull through cache for upstream registries.
Default: |
Proxy cache example YAML
# ...
FEATURE_PROXY_CACHE: true
# ...
9.13. QuayIntegration configuration fields
The QuayIntegration custom resource enables integration between your OpenShift Container Platform cluster and a Red Hat Quay registry instance.
Name | Description | Schema |
---|---|---|
allowlistNamespaces | A list of namespaces to include. | Array |
clusterID | The ID associated with this cluster. | String |
credentialsSecret.key | The secret containing credentials to communicate with the Quay registry. | Object |
denylistNamespaces | A list of namespaces to exclude. | Array |
insecureRegistry | Whether to skip TLS verification to the Quay registry. | Boolean |
quayHostname | The hostname of the Quay registry. | String |
scheduledImageStreamImport | Whether to enable image stream importing. | Boolean |
QuayIntegration example CR
apiVersion: quay.redhat.com/v1
kind: QuayIntegration
metadata:
name: example-quayintegration
spec:
clusterID: 1df512fc-bf70-11ee-bb31-001a4a160100
quayHostname: quay.example.com
credentialsSecret:
name: quay-creds-secret
key: token
allowlistNamespaces:
- dev-team
- prod-team
denylistNamespaces:
- test
insecureRegistry: false
scheduledImageStreamImport: true
9.14. Mail configuration fields
The following fields enable email notifications from your Red Hat Quay instance, such as account confirmation, password reset, and security alert messages. These settings allow Red Hat Quay to connect to your SMTP server and send outbound messages on behalf of your registry.
Field | Type | Description |
---|---|---|
FEATURE_MAILING | Boolean | Whether emails are enabled. |
MAIL_DEFAULT_SENDER | String | If specified, the email address used as the |
MAIL_PASSWORD | String | The SMTP password to use when sending emails. |
MAIL_PORT | Number | The SMTP port to use. If not specified, defaults to 587. |
MAIL_SERVER | String | The SMTP server to use for sending emails. Only required if FEATURE_MAILING is set to true. |
MAIL_USERNAME | String | The SMTP username to use when sending emails. |
MAIL_USE_TLS | Boolean | If specified, whether to use TLS for sending emails. |
Mail example YAML
# ...
FEATURE_MAILING: true
MAIL_DEFAULT_SENDER: "support@example.com"
MAIL_SERVER: "smtp.example.com"
MAIL_PORT: 587
MAIL_USERNAME: "smtp-user@example.com"
MAIL_PASSWORD: "your-smtp-password"
MAIL_USE_TLS: true
# ...
Chapter 10. Environment variable configuration
Red Hat Quay supports a limited set of environment variables that control runtime behavior and performance tuning. These values provide flexibility in specific scenarios where per-process behavior, connection counts, or regional configuration must be adjusted dynamically.
Use environment variables cautiously. These options typically override or augment existing configuration mechanisms.
This section documents environment variables related to the following components:
- Geo-replication preferences
- Database connection pooling
- HTTP connection concurrency
- Worker process scaling
10.1. Geo-replication
Red Hat Quay supports multi-region deployments where multiple instances operate across geographically distributed sites. In these scenarios, each site shares the same configuration and metadata, but storage backends might vary between regions.
To accommodate this, Red Hat Quay allows specifying a preferred storage engine for each deployment using an environment variable. This ensures that while metadata remains synchronized across all regions, each region can use its own optimized storage backend without requiring separate configuration files.
Use the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable to explicitly set the preferred storage engine by its ID, as defined in DISTRIBUTED_STORAGE_CONFIG.
Variable | Type | Description |
---|---|---|
QUAY_DISTRIBUTED_STORAGE_PREFERENCE | String | The preferred storage engine (by ID in DISTRIBUTED_STORAGE_CONFIG) to use. |
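For example, on Red Hat Quay on OpenShift Container Platform, the variable can be set through the env overrides of the QuayRegistry custom resource. The storage engine ID usstorage below is hypothetical and must match an ID defined in your DISTRIBUTED_STORAGE_CONFIG:

```yaml
# QuayRegistry excerpt; "usstorage" is a hypothetical storage engine ID
spec:
  components:
    - kind: quay
      managed: true
      overrides:
        env:
          - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE
            value: usstorage
```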
10.2. Database connection pooling
Red Hat Quay is composed of many different processes which all run within the same container. Many of these processes interact with the database.
Database connection pooling is enabled by default, and each process that interacts with the database contains a connection pool. These per-process connection pools are configured to maintain a maximum of 20 connections. Under heavy load, it is possible to fill the connection pool for every process within a Red Hat Quay container. Under certain deployments and loads, this might require analysis to ensure that Red Hat Quay does not exceed the configured database’s maximum connection count.
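As a back-of-the-envelope check, the worst case is every database-facing process filling its 20-connection pool. The process count of 30 below is an assumed, deployment-specific number, not a Quay constant:

```python
# Rough worst-case estimate: every database-facing process in the
# container fills its per-process pool (maximum of 20 connections).
# The process count is an assumed, deployment-specific value.

def worst_case_db_connections(process_count, pool_size_per_process=20):
    """Upper bound on connections a Red Hat Quay container could open."""
    return process_count * pool_size_per_process

# Compare this against the database's configured limit (for example,
# PostgreSQL's max_connections), leaving headroom for other clients.
print(worst_case_db_connections(30))  # -> 600
```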
Over time, the connection pools release idle connections. To release all connections immediately, Red Hat Quay requires a restart.
Variable | Type | Description |
---|---|---|
DB_CONNECTION_POOLING | String | Whether to enable or disable database connection pooling. Defaults to true. Accepted values are |
If database connection pooling is enabled, it is possible to change the maximum size of the connection pool. This can be done through the following config.yaml option:
Database connection pooling example YAML
... ...
# ...
DB_CONNECTION_ARGS:
max_connections: 10
# ...
10.2.1. Disabling database pooling in standalone deployments
For standalone Red Hat Quay deployments, database connection pooling can be toggled off when starting your deployment. For example:
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
--name=quay \
-v $QUAY/config:/conf/stack:Z \
-v $QUAY/storage:/datastorage:Z \
-e DB_CONNECTION_POOLING=false \
registry.redhat.io/quay/quay-rhel8:v3.12.1
10.2.2. Disabling database pooling for Red Hat Quay on OpenShift Container Platform
For Red Hat Quay on OpenShift Container Platform, database connection pooling can be configured by modifying the QuayRegistry
custom resource definition (CRD). For example:
Example QuayRegistry CRD
spec:
components:
- kind: quay
managed: true
overrides:
env:
- name: DB_CONNECTION_POOLING
value: "false"
10.3. HTTP connection counts
You can control the number of simultaneous HTTP connections handled by Red Hat Quay using environment variables. These limits can apply globally or be scoped to individual components (registry, web UI, or security scanning). By default, each worker process allows up to 50 parallel connections.
This setting is distinct from the number of worker processes.
These connection-related environment variables can be configured differently depending on your deployment type:
- In standalone deployments, configure connection counts in the config.yaml file.
- In Red Hat Quay on OpenShift Container Platform deployments, define the values in the env block of the QuayRegistry CR.
Variable | Type | Description |
---|---|---|
WORKER_CONNECTION_COUNT | Number | Global default for the maximum number of HTTP connections per worker process. |
WORKER_CONNECTION_COUNT_REGISTRY | Number | HTTP connections per registry worker. |
WORKER_CONNECTION_COUNT_WEB | Number | HTTP connections per web UI worker. |
WORKER_CONNECTION_COUNT_SECSCAN | Number | HTTP connections per Clair security scanner worker. |
HTTP connection configuration for standalone Red Hat Quay deployments
WORKER_CONNECTION_COUNT: 10
WORKER_CONNECTION_COUNT_REGISTRY: 10
WORKER_CONNECTION_COUNT_WEB: 10
WORKER_CONNECTION_COUNT_SECSCAN: 10
HTTP connection configuration for Red Hat Quay on OpenShift Container Platform
env:
- name: WORKER_CONNECTION_COUNT
value: "10"
- name: WORKER_CONNECTION_COUNT_REGISTRY
value: "10"
- name: WORKER_CONNECTION_COUNT_WEB
value: "10"
- name: WORKER_CONNECTION_COUNT_SECSCAN
value: "10"
10.4. Worker process counts
You can control the number of worker processes that handle incoming requests in Red Hat Quay using environment variables. These values define how many parallel processes are started to handle tasks for different components of the system, such as the registry, the web UI, and security scanning.
If not explicitly set, Red Hat Quay calculates the number of worker processes automatically based on the number of available CPU cores. While this dynamic scaling can optimize performance on larger machines, it might also lead to unnecessary resource usage in smaller environments.
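The override-or-derive behavior can be pictured with the following sketch. The 2 * cores + 1 heuristic is common for Python web servers and is an assumption here, not necessarily Quay's exact formula:

```python
import os

# Illustrative sketch of the override-or-derive behavior described above.
# The 2 * cores + 1 fallback is a common Python web server heuristic and
# is an assumption, not necessarily Quay's exact formula.

def effective_worker_count(env_var="WORKER_COUNT_REGISTRY"):
    explicit = os.environ.get(env_var)
    if explicit is not None:
        # An explicitly set environment variable always wins.
        return int(explicit)
    # Otherwise derive a count from the available CPU cores.
    return 2 * os.cpu_count() + 1
```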
In Red Hat Quay on OpenShift Container Platform deployments, the Operator sets the following default values:
- WORKER_COUNT_REGISTRY: 8
- WORKER_COUNT_WEB: 4
- WORKER_COUNT_SECSCAN: 2
Variable | Type | Description |
---|---|---|
WORKER_COUNT | Number | Generic override for the number of worker processes. |
WORKER_COUNT_REGISTRY | Number | Specifies the number of processes that handle registry requests within the container. |
WORKER_COUNT_WEB | Number | Specifies the number of processes that handle UI/web requests within the container. |
WORKER_COUNT_SECSCAN | Number | Specifies the number of processes that handle security scanning (for example, Clair) integration within the container. |
Worker count configuration for standalone Red Hat Quay deployments
WORKER_COUNT: 10 WORKER_COUNT_REGISTRY: 16 WORKER_COUNT_WEB: 8 WORKER_COUNT_SECSCAN: 4
WORKER_COUNT: 10
WORKER_COUNT_REGISTRY: 16
WORKER_COUNT_WEB: 8
WORKER_COUNT_SECSCAN: 4
Worker count configuration for Red Hat Quay on OpenShift Container Platform
env: - name: WORKER_COUNT value: "10" - name: WORKER_COUNT_REGISTRY value: "16" - name: WORKER_COUNT_WEB value: "8" - name: WORKER_COUNT_SECSCAN value: "4"
env:
- name: WORKER_COUNT
value: "10"
- name: WORKER_COUNT_REGISTRY
value: "16"
- name: WORKER_COUNT_WEB
value: "8"
- name: WORKER_COUNT_SECSCAN
value: "4"
Chapter 11. Clair security scanner
Configuration fields for Clair have been moved to Clair configuration overview. This chapter will be removed in a future version of Red Hat Quay.