Configure Red Hat Quay
Customizing Red Hat Quay using configuration options
Abstract
Chapter 1. Getting started with Red Hat Quay configuration
Red Hat Quay is a secure artifact registry that can be deployed as a self-managed installation, or through the Red Hat Quay on OpenShift Container Platform Operator. Each deployment type offers a different approach to configuration and management, but both rely on the same set of configuration parameters to control registry behavior. Common configuration parameters allow administrators to define how their registry interacts with users, storage backends, authentication providers, security policies, and other integrated services.
There are two ways to configure Red Hat Quay, depending on your deployment type:
- On prem Red Hat Quay: With an on prem Red Hat Quay deployment, a registry administrator provides a config.yaml file that includes all required parameters. For this deployment type, the registry cannot start without a valid configuration.
- Red Hat Quay Operator: By default, the Red Hat Quay Operator automatically configures your Red Hat Quay deployment by generating the minimal required values and deploying the necessary components for you. After the initial deployment, you can customize your registry’s behavior by modifying the QuayRegistry custom resource, or by using the OpenShift Container Platform web console.
This guide offers an overview of the following configuration concepts:
- How to retrieve, inspect, and modify your current configuration for both on prem and Operator-based Red Hat Quay deployment types.
- The minimal configuration fields required for startup.
- An overview of all available Red Hat Quay configuration fields and YAML examples for those fields.
Chapter 2. Red Hat Quay configuration disclaimer
In both self-managed and Operator-based deployments of Red Hat Quay, certain features and configuration parameters are not actively used or implemented. As a result, feature flags that enable or disable specific functionality, and configuration parameters that are not explicitly documented or supported by Red Hat Support, should only be modified with caution.
Unused or undocumented features might not be fully tested, supported, or compatible with Red Hat Quay. Modifying these settings could result in unexpected behavior or disruptions to your deployment.
Chapter 3. Understanding the Red Hat Quay configuration file
Whether deployed on premise or by the Red Hat Quay on OpenShift Container Platform Operator, the registry’s behavior is defined by the config.yaml file. The config.yaml file must include all required configuration fields for the registry to start. Red Hat Quay administrators can also define optional parameters that customize their registry, such as authentication parameters, storage parameters, proxy cache parameters, and so on.
The config.yaml file must be written using valid YAML ("YAML Ain’t Markup Language") syntax; Red Hat Quay cannot start if the file contains formatting errors or is missing required fields. Regardless of deployment type, whether that is on premise or Red Hat Quay on OpenShift Container Platform configured by the Operator, the YAML principles stay the same, even if the required configuration fields are slightly different.
The following section outlines basic YAML syntax relevant to creating and editing the Red Hat Quay config.yaml file. For a more complete overview of YAML, see What is YAML.
3.1. Key-value pairs
Configuration fields within a config.yaml file are written as key-value pairs in the following form:
# ...
EXAMPLE_FIELD_NAME: <value>
# ...
Each line within a config.yaml file contains a field name, followed by a colon, a space, and then an appropriate value that matches the key. The following example shows you how the AUTHENTICATION_TYPE configuration field must be formatted in your config.yaml file.
AUTHENTICATION_TYPE: Database
# ...
- 1
- The authentication engine to use for credential authentication.
In the previous example, AUTHENTICATION_TYPE is set to Database; however, different deployment types require different values. The following example shows you how your config.yaml file might look if LDAP, or Lightweight Directory Access Protocol, was used for authentication:
AUTHENTICATION_TYPE: LDAP
# ...
3.2. Indentation and nesting
Many Red Hat Quay configuration fields require indentation to indicate nested structures. Indentation must be done by using white spaces, or literal space characters; tab characters are not allowed by design. Indentation must be consistent across the file. The following YAML snippet shows you how the BUILDLOGS_REDIS field uses indentation for the required host, password, and port fields:
# ...
BUILDLOGS_REDIS:
host: quay-server.example.com
password: example-password
port: 6379
# ...
3.3. Lists
In some cases, a Red Hat Quay configuration field relies on a list to define certain values. List items are formatted by using a hyphen (-) followed by a space. The following example shows you how the SUPER_USERS configuration field uses a list to define superusers:
# ...
SUPER_USERS:
- quayadmin
# ...
3.4. Quoted values
Some Red Hat Quay configuration fields require the use of quotation marks ("") to properly define a value, although this is generally not required. The following example shows you how the FOOTER_LINKS configuration field uses quotation marks to define the TERMS_OF_SERVICE_URL, PRIVACY_POLICY_URL, SECURITY_URL, and ABOUT_URL fields:
FOOTER_LINKS:
"TERMS_OF_SERVICE_URL": "https://www.index.hr"
"PRIVACY_POLICY_URL": "https://www.jutarnji.hr"
"SECURITY_URL": "https://www.bug.hr"
"ABOUT_URL": "https://www.zagreb.hr"
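Quoting matters most when a bare value would otherwise be interpreted by YAML as something other than a string. The following generic YAML illustration uses hypothetical field names, not real Red Hat Quay fields, to show the difference:

```yaml
# Hypothetical fields for illustration only.
UNQUOTED_ENABLED: yes        # YAML 1.1 parsers read this as the boolean true
UNQUOTED_VERSION: 1.10       # read as the float 1.1
QUOTED_ENABLED: "yes"        # remains the string "yes"
QUOTED_VERSION: "1.10"       # remains the string "1.10"
```

When in doubt, quoting a value is always safe, because every Red Hat Quay string field accepts a quoted scalar.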
3.5. Comments
The hash symbol (#) can be placed at the beginning of a line to add a comment or to temporarily disable a configuration field. Commented lines are ignored by the configuration parser and do not affect the behavior of the registry. For example:
# ...
# FEATURE_UI_V2: true
# ...
In this example, the FEATURE_UI_V2 configuration field is ignored by the parser, meaning that the option to use the v2 UI is disabled. Commenting out a required configuration field prevents the registry from starting.
Chapter 4. On prem Red Hat Quay configuration overview
For on premise deployments of Red Hat Quay, the config.yaml file that is managed by the administrator is mounted into the container at startup and read by Red Hat Quay during initialization. The config.yaml file is not dynamically reloaded, meaning that any changes made to the file require restarting the registry container to take effect.
This chapter provides an overview of the following concepts:
- The minimal required configuration fields.
- How to edit and manage your configuration after deployment.
This section applies specifically to on premise Red Hat Quay deployment types. For information about configuring Red Hat Quay on OpenShift Container Platform, see Chapter 5, Red Hat Quay Operator configuration overview.
4.1. Required configuration fields
The following configuration fields are required for an on premise deployment of Red Hat Quay:
Field | Type | Description
AUTHENTICATION_TYPE | String | The authentication engine to use for credential authentication.
BUILDLOGS_REDIS | Object | Redis connection details for build logs caching.
.host | String | The hostname at which Redis is accessible.
.port | Number | The port at which Redis is accessible.
.password | String | The password to connect to the Redis instance.
DATABASE_SECRET_KEY | String | Key used to encrypt sensitive fields within the database. This value should never be changed once set; otherwise, all reliant fields, for example, repository mirror username and password configurations, are invalidated.
DB_URI | String | The URI for accessing the database, including any credentials.
DISTRIBUTED_STORAGE_CONFIG | Object | Configuration for the storage engine(s) to use in Red Hat Quay. Each key represents a unique identifier for a storage engine. The value consists of a tuple of (key, value) forming an object describing the storage engine parameters.
SECRET_KEY | String | Key used to encrypt the session cookie and the CSRF token needed for correct interpretation of the user session. The value should not be changed when set, and should be persistent across all Red Hat Quay instances. If not persistent across all instances, login failures and other errors related to session persistence might occur.
SERVER_HOSTNAME | String | The URL at which Red Hat Quay is accessible, without the scheme.
SETUP_COMPLETE | Boolean | This is an artifact left over from earlier versions of the software, and currently it must be specified with a value of true.
USER_EVENTS_REDIS | Object | Redis connection details for user event handling.
.host | String | The hostname at which Redis is accessible.
.port | Number | The port at which Redis is accessible.
.password | String | The password to connect to the Redis instance.
4.1.1. Minimal configuration file examples
This section provides two examples of a minimal configuration file: one example that uses local storage, and another example that uses cloud-based storage with Google Cloud Platform.
4.1.1.1. Minimal configuration using local storage
The following example shows a sample minimal configuration file that uses local storage for images.
Only use local storage when deploying a registry for proof of concept purposes. It is not intended for production use. When using local storage, you must map a local directory to the datastorage path in the container when starting the registry. For more information, see Proof of Concept - Deploying Red Hat Quay.
Local storage minimal configuration
AUTHENTICATION_TYPE: Database
BUILDLOGS_REDIS:
host: <quay-server.example.com>
password: <password>
port: <port>
DATABASE_SECRET_KEY: <example_database_secret_key>
DB_URI: postgresql://<username>:<password>@<registry_url>.com:<port>/quay
DISTRIBUTED_STORAGE_CONFIG:
default:
- LocalStorage
- storage_path: /datastorage/registry
SECRET_KEY: <example_secret_key>
SERVER_HOSTNAME: <server_host_name>
SETUP_COMPLETE: true
USER_EVENTS_REDIS:
host: <redis_events_url>
password: <password>
port: <port>
4.1.1.2. Minimal configuration using cloud-based storage
In most production environments, Red Hat Quay administrators use cloud or enterprise-grade storage backends provided by supported vendors. The following example shows you how to configure Red Hat Quay to use Google Cloud Platform for image storage. For a complete list of supported storage providers, see Image storage.
When using a cloud or enterprise-grade storage backend, additional configuration, such as mapping the registry to a local directory, is not required.
Cloud storage minimal configuration
AUTHENTICATION_TYPE: Database
BUILDLOGS_REDIS:
host: <quay-server.example.com>
password: <password>
port: <port>
DATABASE_SECRET_KEY: <example_database_secret_key>
DB_URI: postgresql://<username>:<password>@<registry_url>.com:<port>/quay
DISTRIBUTED_STORAGE_CONFIG:
default:
- GoogleCloudStorage
- access_key: <access_key>
bucket_name: <bucket_name>
secret_key: <secret_key>
storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- default
SECRET_KEY: <example_secret_key>
SERVER_HOSTNAME: <server_host_name>
SETUP_COMPLETE: true
USER_EVENTS_REDIS:
host: <redis_events_url>
password: <password>
port: <port>
4.2. Modifying your configuration file after deployment
After deploying a Red Hat Quay registry with an initial config.yaml file, Red Hat Quay administrators can update the configuration file to enable or disable features as needed. This flexibility allows administrators to tailor the registry to fit their specific environment needs, or to meet certain security policies.
Because the config.yaml file is not dynamically reloaded, you must restart the Red Hat Quay container after making changes for them to take effect.
The following procedure shows you how to retrieve the config.yaml file from the quay-registry container, how to enable a new feature by adding that feature’s configuration field to the file, and how to restart the quay-registry container using Podman.
Prerequisites
- You have deployed Red Hat Quay.
- You are a registry administrator.
Procedure
If you have access to the config.yaml file:

1. Navigate to the directory that is storing the config.yaml file. For example:

   $ cd /home/<username>/<quay-deployment-directory>/config

2. Make changes to the config.yaml file by adding a new feature flag. The following example enables the v2 UI:

   # ...
   FEATURE_UI_V2: true
   # ...

3. Save the changes made to the config.yaml file.
4. Restart the quay-registry container by entering the following command:

   $ podman restart <container_id>

If you do not have access to the config.yaml file and need to create a new file while keeping the same credentials:

1. Retrieve the container ID of your quay-registry container by entering the following command:

   $ podman ps

   Example output

   CONTAINER ID  IMAGE                                                                     COMMAND         CREATED       STATUS       PORTS                                                                       NAMES
   5f2297ef53ff  registry.redhat.io/rhel8/postgresql-13:1-109                              run-postgresql  20 hours ago  Up 20 hours  0.0.0.0:5432->5432/tcp                                                      postgresql-quay
   3b40fb83bead  registry.redhat.io/rhel8/redis-5:1                                        run-redis       20 hours ago  Up 20 hours  0.0.0.0:6379->6379/tcp                                                      redis
   0b4b8fbfca6d  registry-proxy.engineering.redhat.com/rh-osbs/quay-quay-rhel8:v3.14.0-14  registry        20 hours ago  Up 20 hours  0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp, 7443/tcp, 9091/tcp, 55443/tcp  quay

2. Copy the config.yaml file from the quay-registry container to a directory by entering the following command:

   $ podman cp <container_id>:/quay-registry/conf/stack/config.yaml ./config.yaml

3. Make changes to the config.yaml file by adding a new feature flag. The following example sets the AUTHENTICATION_TYPE to LDAP:

   # ...
   AUTHENTICATION_TYPE: LDAP
   # ...

4. Re-deploy the registry, mounting the config.yaml file into the quay-registry configuration volume, by entering the following command:

   $ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
        --name=quay \
        -v /home/<username>/<quay-deployment-directory>/config:/conf/stack:Z \
        registry.redhat.io/quay/quay-rhel8:v3.14.0
4.3. Troubleshooting the configuration file
Failure to add all of the required configuration fields, or to provide the proper information for some parameters, might result in the quay-registry container failing to deploy. Use the following procedure to view and troubleshoot a failed on premise deployment.
Prerequisites
- You have created a minimal configuration file.
Procedure
Attempt to deploy the quay-registry container by entering the following command. Note that this command uses the -it flags, which show you debugging information:

$ podman run -it --rm -p 80:8080 -p 443:8443 --name=quay -v /home/<username>/<quay-deployment-directory>/config:/conf/stack:Z -v /home/<username>/<quay-deployment-directory>/storage:/datastorage:Z 33f1c3dc86be

Example output

---
+------------------------+-------+--------+
| LDAP                   |   -   |   X    |
+------------------------+-------+--------+
| LDAP_ADMIN_DN is required       |   X   |
+-----------------------------------------+
| LDAP_ADMIN_PSSWD is required    |   X   |
+-----------------------------------------+
| . . . Connection refused        |   X   |
+-----------------------------------------+
---

In this example, the quay-registry container failed to deploy because improper LDAP credentials were provided.
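Errors like the LDAP failure shown above typically mean that required companion fields for the chosen authentication engine are missing or incorrect. As an illustrative sketch only, with placeholder hostnames, DNs, and passwords, and assuming the standard Red Hat Quay LDAP field names (consult the LDAP authentication documentation for the authoritative list), an LDAP configuration includes fields such as:

```yaml
# Illustrative LDAP fragment; all values are placeholders.
AUTHENTICATION_TYPE: LDAP
LDAP_URI: ldap://ldap.example.com      # URI of the LDAP server
LDAP_ADMIN_DN: uid=<admin_user>,ou=people,dc=example,dc=com
LDAP_ADMIN_PASSWD: <password>
LDAP_BASE_DN:                          # base DN, one component per list item
- dc=example
- dc=com
LDAP_USER_RDN:                         # relative DN under which users are found
- ou=people
```

Supplying valid values for these fields, and confirming that the LDAP server is reachable from the container, resolves the "is required" and "Connection refused" validation errors in the example output.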
Chapter 5. Red Hat Quay Operator configuration overview
When deploying Red Hat Quay using the Operator on OpenShift Container Platform, configuration is managed declaratively through the QuayRegistry custom resource (CR). This model allows cluster administrators to define the desired state of the Red Hat Quay deployment, including which components are enabled, storage backends, SSL/TLS configuration, and other core features.
After deploying Red Hat Quay on OpenShift Container Platform with the Operator, administrators can further customize their registry by updating the config.yaml file and referencing it in a Kubernetes secret. This configuration bundle is linked to the QuayRegistry CR through the configBundleSecret field.
The Operator reconciles the state defined in the QuayRegistry CR and its associated configuration, automatically deploying or updating registry components as needed.
This guide covers the basic concepts behind the QuayRegistry CR and modifying your config.yaml file on Red Hat Quay on OpenShift Container Platform deployments. More advanced topics, such as using unmanaged components within the QuayRegistry CR, can be found in Deploying Red Hat Quay Operator on OpenShift Container Platform.
5.1. Understanding the QuayRegistry CR
By default, the QuayRegistry CR contains the following key fields:
- configBundleSecret: The name of a Kubernetes Secret containing the config.yaml file, which defines additional configuration parameters.
- name: The name of your Red Hat Quay registry.
- namespace: The namespace, or project, in which the registry was created.
- spec.components: A list of components that the Operator automatically manages. Each spec.components entry contains two fields:
  - kind: The name of the component.
  - managed: A boolean that indicates whether the component lifecycle is handled by the Red Hat Quay Operator. Setting managed: true for a component in the QuayRegistry CR means that the Operator manages the component.
All QuayRegistry components are automatically managed and auto-filled upon reconciliation for visibility unless specified otherwise. The following sections highlight the major QuayRegistry components and provide an example YAML file that shows the default settings.
5.2. Managed components
By default, the Operator handles all required configuration and installation needed for Red Hat Quay’s managed components.
If the opinionated deployment performed by the Red Hat Quay Operator is unsuitable for your environment, you can provide the Red Hat Quay Operator with unmanaged resources, or overrides, as described in Using unmanaged components.
Field | Type | Description
---|---|---
quay | Boolean | Holds overrides for deployment of Red Hat Quay on OpenShift Container Platform, such as environment variables and number of replicas. This component cannot be set to unmanaged (managed: false).
postgres | Boolean | Used for storing registry metadata. Currently, PostgreSQL version 13 is used.
clair | Boolean | Provides image vulnerability scanning.
redis | Boolean | Stores live builder logs and the locking mechanism that is required for garbage collection.
horizontalpodautoscaler | Boolean | Adjusts the number of Quay pods depending on memory and CPU consumption.
objectstorage | Boolean | Stores image layer blobs. When set to managed: true, object storage is provided by the cluster, for example through the Multicloud Object Gateway.
route | Boolean | Provides an external entrypoint to the Red Hat Quay registry from outside of OpenShift Container Platform.
mirror | Boolean | Configures repository mirror workers to support optional repository mirroring.
monitoring | Boolean | Features include a Grafana dashboard, access to individual metrics, and notifications for frequently restarting Quay pods.
tls | Boolean | Configures whether SSL/TLS is automatically handled.
clairpostgres | Boolean | Configures a managed Clair database. This is a separate database from the PostgreSQL database that is used to deploy Red Hat Quay.
The following example shows you the default configuration for the QuayRegistry custom resource provided by the Red Hat Quay Operator. It is available on the OpenShift Container Platform web console.
Example QuayRegistry custom resource
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
name: <example_registry>
namespace: <namespace>
spec:
configBundleSecret: config-bundle-secret
components:
- kind: quay
managed: true
- kind: postgres
managed: true
- kind: clair
managed: true
- kind: redis
managed: true
- kind: horizontalpodautoscaler
managed: true
- kind: objectstorage
managed: true
- kind: route
managed: true
- kind: mirror
managed: true
- kind: monitoring
managed: true
- kind: tls
managed: true
- kind: clairpostgres
managed: true
5.3. Modifying the QuayRegistry CR after deployment
After you have installed the Red Hat Quay Operator and created an initial deployment, you can modify the QuayRegistry custom resource (CR) to customize or reconfigure aspects of the Red Hat Quay environment.
Red Hat Quay administrators might modify the QuayRegistry CR for the following reasons:
- To change component management: Switch components from managed: true to managed: false in order to bring your own infrastructure. For example, you might set kind: objectstorage to unmanaged to integrate external object storage platforms such as Google Cloud Storage or Nutanix.
- To apply custom configuration: Update or replace the configBundleSecret to apply new configuration settings, for example, authentication providers, external SSL/TLS settings, or feature flags.
- To enable or disable features: Toggle features like repository mirroring, Clair scanning, or horizontal pod autoscaling by modifying the spec.components list.
- To scale the deployment: Adjust environment variables or replica counts for the Quay application.
- To integrate with external services: Provide configuration for external PostgreSQL, Redis, or Clair databases, and update endpoints or credentials.
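The first scenario, bringing your own object storage, can be sketched as follows. This is an illustrative fragment rather than a complete deployment: the registry name, namespace, and secret name are placeholders, and the storage details themselves would then be supplied in the config.yaml file of the configuration bundle:

```yaml
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry          # placeholder registry name
  namespace: quay-enterprise      # placeholder namespace
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: objectstorage
      managed: false              # the Operator no longer provisions storage
```

Components omitted from the list remain managed by default, so only the components you intend to override need to appear.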
5.3.1. Modifying QuayRegistry CR components by using the OpenShift Container Platform web console
The QuayRegistry CR can be modified by using the OpenShift Container Platform web console.
Prerequisites
- You are logged into OpenShift Container Platform as a user with admin privileges.
- You have installed the Red Hat Quay Operator.
Procedure
- On the OpenShift Container Platform web console, click Operators → Installed Operators.
- Click Red Hat Quay.
- Click Quay Registry.
- Click the name of your Red Hat Quay registry, for example, example-registry.
- Click YAML.
- Adjust the managed field of the desired component to either true or false.
- Click Save.
Note: Setting a component to unmanaged (managed: false) might require additional configuration. For more information about setting unmanaged components in the QuayRegistry CR, see Using unmanaged components for dependencies.
5.3.2. Modifying the QuayRegistry CR by using the CLI
The QuayRegistry CR can be modified by using the CLI.
Prerequisites
- You are logged in to your OpenShift Container Platform cluster as a user with admin privileges.
Procedure
1. Edit the QuayRegistry CR by entering the following command:

   $ oc edit quayregistry <registry_name> -n <namespace>

2. Make the desired changes to the QuayRegistry CR.

   Note: Setting a component to unmanaged (managed: false) might require additional configuration. For more information about setting unmanaged components in the QuayRegistry CR, see Using unmanaged components for dependencies.

3. Save the changes.
5.3.3. Understanding the configBundleSecret
The spec.configBundleSecret field is an optional reference to the name of a Secret in the same namespace as the QuayRegistry resource. This Secret must contain a config.yaml key/value pair, where the value is a Red Hat Quay configuration file.
The configBundleSecret stores the config.yaml file, which defines configuration settings for Red Hat Quay, such as:
- Authentication backends (for example, OIDC, LDAP)
- External TLS termination settings
- Repository creation policies
- Feature flags
- Notification settings
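As a sketch of what such a config.yaml might contain, the following fragment combines a feature flag, a superuser list, and a footer link, all drawn from fields shown elsewhere in this guide; the URL is a placeholder:

```yaml
# Illustrative config.yaml fragment for a configuration bundle.
FEATURE_UI_V2: true
SUPER_USERS:
- quayadmin
FOOTER_LINKS:
  "ABOUT_URL": "https://example.com/about"
```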
Red Hat Quay administrators might update this secret to:
- Enable a new authentication method
- Add custom SSL/TLS certificates
- Modify security scanning settings
If this field is omitted, the Red Hat Quay Operator automatically generates a configuration secret based on default values and managed component settings. If the field is provided, the contents of the config.yaml file are used as the base configuration and are merged with values from managed components to form the final configuration, which is mounted into the quay application pods.
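Concretely, the referenced Secret is an ordinary Kubernetes Secret whose config.yaml key holds the configuration file. The following minimal sketch uses placeholder names and values, and uses the stringData field so the configuration can be written unencoded; Kubernetes base64 encodes it on your behalf:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: config-bundle-secret     # must match spec.configBundleSecret
  namespace: quay-enterprise     # same namespace as the QuayRegistry resource
stringData:
  config.yaml: |
    FEATURE_UI_V2: true
    SUPER_USERS:
    - quayadmin
```

If you edit the Secret through the raw data field instead of stringData, the config.yaml value must be base64 encoded by hand.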
5.3.3.1. Modifying the configuration file by using the OpenShift Container Platform web console
Use the following procedure to modify the config.yaml file that is stored by the configBundleSecret resource by using the OpenShift Container Platform web console.
Prerequisites
- You are logged in to the OpenShift Container Platform cluster as a user with admin privileges.
Procedure
- On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
- Click Quay Registry.
- Click the name of your Red Hat Quay registry, for example, example-registry.
- On the QuayRegistry details page, click the name of your Config Bundle Secret, for example, example-registry-config-bundle.
- Click Actions → Edit Secret.
In the Value box, add the desired key/value pair. For example, to add a superuser to your Red Hat Quay on OpenShift Container Platform deployment, add the following reference:

SUPER_USERS:
- quayadmin
- Click Save.
Verification
Verify that the changes have been accepted:
- On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
- Click Quay Registry.
- Click the name of your Red Hat Quay registry, for example, example-registry.
Click Events. If successful, the following message is displayed:
All objects created/updated successfully
You must base64 encode any updated config.yaml before placing it in the Secret. Ensure the Secret name matches the value specified in spec.configBundleSecret. Once the Secret is updated, the Operator detects the change and automatically rolls out updates to the Red Hat Quay pods.
For detailed steps, see "Updating configuration secrets through the Red Hat Quay UI."
5.3.3.2. Modifying the configuration file by using the CLI
You can modify the config.yaml file that is stored by the configBundleSecret resource by downloading the existing configuration by using the CLI. After making changes, you can re-upload the configBundleSecret resource to apply the changes to the Red Hat Quay registry.
Modifying the config.yaml file that is stored by the configBundleSecret resource is a multi-step procedure that requires base64 decoding the existing configuration file and then uploading the changes. For most cases, using the OpenShift Container Platform web console to make changes to the config.yaml file is simpler.
Prerequisites
- You are logged in to the OpenShift Container Platform cluster as a user with admin privileges.
Procedure
Describe the
QuayRegistry
resource by entering the following command:Copy to Clipboard Copied! Toggle word wrap Toggle overflow oc describe quayregistry -n <quay_namespace>
$ oc describe quayregistry -n <quay_namespace>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow ... ...
# ... Config Bundle Secret: example-registry-config-bundle-v123x # ...
Obtain the secret data by entering the following command:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow oc get secret -n <quay_namespace> <example-registry-config-bundle-v123x> -o jsonpath='{.data}'
$ oc get secret -n <quay_namespace> <example-registry-config-bundle-v123x> -o jsonpath='{.data}'
Example output
Copy to Clipboard Copied! Toggle word wrap Toggle overflow { "config.yaml": "RkVBVFVSRV9VU0 ... MDAwMAo=" }
{ "config.yaml": "RkVBVFVSRV9VU0 ... MDAwMAo=" }
Decode the data into a YAML file into the current directory by passing in the
>> config.yaml
flag. For example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow echo 'RkVBVFVSRV9VU0 ... MDAwMAo=' | base64 --decode >> config.yaml
$ echo 'RkVBVFVSRV9VU0 ... MDAwMAo=' | base64 --decode >> config.yaml
-
Make the desired changes to your
config.yaml
file, and then save the file asconfig.yaml
. Create a new
configBundleSecret
YAML by entering the following command.Copy to Clipboard Copied! Toggle word wrap Toggle overflow touch <new_configBundleSecret_name>.yaml
$ touch <new_configBundleSecret_name>.yaml
Create the new configBundleSecret resource, passing in the config.yaml file, by entering the following command:
$ oc -n <namespace> create secret generic <secret_name> \
    --from-file=config.yaml=</path/to/config.yaml> \
    --dry-run=client -o yaml > <new_configBundleSecret_name>.yaml
Where </path/to/config.yaml> is the path to your base64-decoded config.yaml file.
Create the configBundleSecret resource by entering the following command:
$ oc create -n <namespace> -f <new_configBundleSecret_name>.yaml
Example output
secret/config-bundle created
Update the QuayRegistry CR to reference the new configBundleSecret object by entering the following command:
$ oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{"spec":{"configBundleSecret":"<new_configBundleSecret_name>"}}'
Example output
quayregistry.quay.redhat.com/example-registry patched
Verification
Verify that the QuayRegistry CR has been updated with the new configBundleSecret:
$ oc describe quayregistry -n <quay_namespace>
Example output
# ...
Config Bundle Secret:  <new_configBundleSecret_name>
# ...
After patching the registry, the Red Hat Quay Operator automatically reconciles the changes.
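The base64 handling used throughout this procedure can be exercised locally before touching the cluster. The following sketch, which assumes standard coreutils and uses an illustrative one-line config.yaml, encodes a file the way a Kubernetes secret stores it and verifies that decoding restores the original:

```shell
# Create an illustrative config.yaml, encode it the way Kubernetes stores
# secret data, then decode it back and confirm the round trip is lossless.
printf 'SERVER_HOSTNAME: quay-server.example.com\n' > config.yaml
encoded=$(base64 < config.yaml | tr -d '\n')
printf '%s' "$encoded" | base64 --decode > decoded.yaml
diff config.yaml decoded.yaml && echo "round trip OK"
```

The same decode step applies to the jsonpath output of oc get secret, because Kubernetes stores the data field of a secret as base64-encoded values.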
5.4. Configuration updates for Red Hat Quay 3.14
The following sections detail new configuration fields added in Red Hat Quay 3.14.
5.4.1. Model card rendering configuration fields
The following configuration fields have been added to support model card rendering on the v2 UI.
Field | Type | Description |
---|---|---|
FEATURE_UI_MODELCARD | Boolean | Enables the model card image tab in the UI. |
UI_MODELCARD_ARTIFACT_TYPE | String | Defines the model card artifact type. |
UI_MODELCARD_ANNOTATION | Object | This optional field defines the manifest-level annotation of the model card stored in an OCI image. |
UI_MODELCARD_LAYER_ANNOTATION | Object | This optional field defines the layer annotation of the model card stored in an OCI image. |
Example model card YAML
FEATURE_UI_MODELCARD: true
UI_MODELCARD_ARTIFACT_TYPE: application/x-mlmodel
UI_MODELCARD_ANNOTATION:
org.opencontainers.image.description: "Model card metadata"
UI_MODELCARD_LAYER_ANNOTATION:
org.opencontainers.image.title: README.md
- 1: Enables the Model Card image tab in the UI.
- 2: Defines the model card artifact type. In this example, the artifact type is application/x-mlmodel.
- 3: Optional. If an image does not have an artifactType defined, this field is checked at the manifest level. If a matching annotation is found, the system then searches for a layer with an annotation matching UI_MODELCARD_LAYER_ANNOTATION.
- 4: Optional. If an image has an artifactType defined and multiple layers, this field is used to locate the specific layer containing the model card.
Chapter 6. Configuration fields
This section describes both the required and optional configuration fields when deploying Red Hat Quay.
6.1. Required configuration fields
The fields required to configure Red Hat Quay are covered in the following sections:
6.2. Automation options
The following sections describe the available automation options for Red Hat Quay deployments:
6.3. Optional configuration fields
Optional fields for Red Hat Quay can be found in the following sections:
- Basic configuration
- SSL
- LDAP
- Repository mirroring
- Quota management
- Security scanner
- Helm
- Action log
- Build logs
- Dockerfile build
- OAuth
- Configuring nested repositories
- Adding other OCI media types to Quay
- User
- Recaptcha
- ACI
- JWT
- App tokens
- Miscellaneous
- User interface v2
- IPv6 configuration field
- Legacy options
6.4. General required fields
The following table describes the required configuration fields for a Red Hat Quay deployment:
Field | Type | Description |
---|---|---|
AUTHENTICATION_TYPE | String | The authentication engine to use for credential authentication. |
PREFERRED_URL_SCHEME | String | The URL scheme to use when accessing Red Hat Quay. |
SERVER_HOSTNAME | String | The URL at which Red Hat Quay is accessible, without the scheme. |
DATABASE_SECRET_KEY | String | Key used to encrypt sensitive fields within the database. This value should never be changed after it is set; otherwise, all reliant fields, for example, repository mirror username and password configurations, are invalidated. |
SECRET_KEY | String | Key used to encrypt the session cookie and the CSRF token needed for correct interpretation of the user session. This value should not be changed after it is set, and it should be persistent across all Red Hat Quay instances. If it is not persistent across all instances, login failures and other errors related to session persistence might occur. |
SETUP_COMPLETE | Boolean | This is an artifact left over from earlier versions of the software, and it currently must be specified with a value of true. |
6.5. Database configuration
This section describes the database configuration fields available for Red Hat Quay deployments.
6.5.1. Database URI
With Red Hat Quay, connection to the database is configured by using the required DB_URI field. The following table describes the DB_URI configuration field:
Field | Type | Description |
---|---|---|
DB_URI | String | The URI for accessing the database, including any credentials. Example: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay |
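The URI follows standard URL syntax, so its components can be pulled apart with ordinary shell parameter expansion. The following sketch uses the sample value from the table; the credentials and hostname are the documentation's placeholders, not real ones:

```shell
# Split the sample DB_URI into its components using POSIX parameter expansion.
DB_URI="postgresql://quayuser:quaypass@quay-server.example.com:5432/quay"
scheme=${DB_URI%%://*}                        # postgresql
rest=${DB_URI#*://}                           # user:pass@host:port/db
creds=${rest%%@*}                             # quayuser:quaypass
user=${creds%%:*}                             # quayuser
hostport=${rest#*@}; hostport=${hostport%%/*} # quay-server.example.com:5432
host=${hostport%%:*}                          # quay-server.example.com
port=${hostport##*:}                          # 5432
db=${rest##*/}                                # quay (database name)
echo "$scheme $user $host $port $db"
```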
6.5.2. Database connection arguments
Optional connection arguments are configured by the DB_CONNECTION_ARGS parameter. Some of the key-value pairs defined under DB_CONNECTION_ARGS are generic, while others are database specific.
The following table describes database connection arguments:
Field | Type | Description |
---|---|---|
DB_CONNECTION_ARGS | Object | Optional connection arguments for the database, such as timeouts and SSL/TLS. |
.autorollback | Boolean | Whether to use auto-rollback connections. |
.threadlocals | Boolean | Whether to use thread-local connections. |
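As a sketch, the two generic arguments can be combined in a config.yaml fragment; the values shown here are illustrative, not recommendations:

```yaml
DB_CONNECTION_ARGS:
  autorollback: true
  threadlocals: true
```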
6.5.2.1. PostgreSQL SSL/TLS connection arguments
With SSL/TLS, configuration depends on the database you are deploying. The following example shows a PostgreSQL SSL/TLS configuration:
DB_CONNECTION_ARGS:
sslmode: verify-ca
sslrootcert: /path/to/cacert
The sslmode option determines whether, or with what priority, a secure SSL/TLS TCP/IP connection is negotiated with the server. There are six modes:
Mode | Description |
---|---|
disable | Your configuration only tries non-SSL/TLS connections. |
allow | Your configuration first tries a non-SSL/TLS connection. Upon failure, it tries an SSL/TLS connection. |
prefer | Your configuration first tries an SSL/TLS connection. Upon failure, it tries a non-SSL/TLS connection. |
require | Your configuration only tries an SSL/TLS connection. If a root CA file is present, it verifies the certificate in the same way as if verify-ca were specified. |
verify-ca | Your configuration only tries an SSL/TLS connection, and it verifies that the server certificate is issued by a trusted certificate authority (CA). |
verify-full | Your configuration only tries an SSL/TLS connection, and it verifies that the server certificate is issued by a trusted CA and that the requested server hostname matches the hostname in the certificate. |
For more information on the valid arguments for PostgreSQL, see Database Connection Control Functions.
6.5.2.2. MySQL SSL/TLS connection arguments
The following example shows a sample MySQL SSL/TLS configuration:
DB_CONNECTION_ARGS:
ssl:
ca: /path/to/cacert
Information on the valid connection arguments for MySQL is available at Connecting to the Server Using URI-Like Strings or Key-Value Pairs.
6.6. Image storage
This section details the image storage features and configuration fields that are available with Red Hat Quay.
6.6.1. Image storage features
The following table describes the image storage features for Red Hat Quay:
Field | Type | Description |
---|---|---|
FEATURE_REPO_MIRROR | Boolean |
If set to true, enables repository mirroring. |
FEATURE_PROXY_STORAGE | Boolean |
Whether to proxy all direct download URLs in storage through NGINX. |
FEATURE_STORAGE_REPLICATION | Boolean |
Whether to automatically replicate between storage engines. |
6.6.2. Image storage configuration fields
The following table describes the image storage configuration fields for Red Hat Quay:
Field | Type | Description |
---|---|---|
DISTRIBUTED_STORAGE_CONFIG | Object |
Configuration for storage engine(s) to use in Red Hat Quay. Each key represents a unique identifier for a storage engine. The value consists of a tuple of (key, value) forming an object describing the storage engine parameters. |
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS | Array of string |
The list of storage engine(s) (by ID in |
DISTRIBUTED_STORAGE_PREFERENCE | Array of string |
The preferred storage engine(s) (by ID in |
MAXIMUM_LAYER_SIZE | String |
Maximum allowed size of an image layer. |
6.6.3. Local storage
The following YAML shows a sample configuration using local storage:
DISTRIBUTED_STORAGE_CONFIG:
default:
- LocalStorage
- storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- default
6.6.4. OpenShift Container Storage/NooBaa
The following YAML shows a sample configuration using an OpenShift Container Storage/NooBaa instance:
DISTRIBUTED_STORAGE_CONFIG:
rhocsStorage:
- RHOCSStorage
- access_key: access_key_here
secret_key: secret_key_here
bucket_name: quay-datastore-9b2108a3-29f5-43f2-a9d5-2872174f9a56
hostname: s3.openshift-storage.svc.cluster.local
is_secure: 'true'
port: '443'
storage_path: /datastorage/registry
maximum_chunk_size_mb: 100
server_side_assembly: true
6.6.5. Ceph Object Gateway/RadosGW storage
The following YAML shows a sample configuration using Ceph/RadosGW.
RadosGW is an on-premises S3-compatible storage solution. Note that this differs from the S3Storage driver, which is designed specifically for use with Amazon Web Services S3. RadosGW implements the S3 API and requires credentials such as access_key, secret_key, and bucket_name. For more information about Ceph Object Gateway and the S3 API, see Ceph Object Gateway and the S3 API.
RadosGW with general S3 access
DISTRIBUTED_STORAGE_CONFIG:
radosGWStorage:
- RadosGWStorage
- access_key: <access_key_here>
bucket_name: <bucket_name_here>
hostname: <hostname_here>
is_secure: true
port: '443'
secret_key: <secret_key_here>
storage_path: /datastorage/registry
maximum_chunk_size_mb: 100
server_side_assembly: true
- 1: Used for general S3 access. Note that general S3 access is not strictly limited to Amazon Web Services (AWS) S3, and can be used with RadosGW or other storage services. For an example of general S3 access using the AWS S3 driver, see "AWS S3 storage".
- 2: Optional. Defines the maximum chunk size, in MB, for the final copy. Has no effect if server_side_assembly is set to false.
- 3: Optional. Whether Red Hat Quay should try to use server-side assembly and the final chunked copy instead of client-side assembly. Defaults to true.
6.6.6. AWS S3 storage
The following YAML shows a sample configuration using AWS S3 storage.
# ...
DISTRIBUTED_STORAGE_CONFIG:
default:
- S3Storage
- host: s3.us-east-2.amazonaws.com
s3_access_key: ABCDEFGHIJKLMN
s3_secret_key: OL3ABCDEFGHIJKLMN
s3_bucket: quay_bucket
s3_region: <region>
storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- default
# ...
- 1: The S3Storage storage driver should only be used for AWS S3 buckets. Note that this differs from general S3 access, where the RadosGW driver or other storage services can be used. For an example, see "Example B: Using RadosGW with general S3 access".
- 2: Optional. The Amazon Web Services region. Defaults to us-east-1.
6.6.6.1. AWS STS S3 storage
The following YAML shows an example configuration for using Amazon Web Services (AWS) Security Token Service (STS) with Red Hat Quay on OpenShift Container Platform configurations.
# ...
DISTRIBUTED_STORAGE_CONFIG:
default:
- STSS3Storage
- sts_role_arn: <role_arn>
s3_bucket: <s3_bucket_name>
storage_path: <storage_path>
sts_user_access_key: <s3_user_access_key>
sts_user_secret_key: <s3_user_secret_key>
s3_region: <region>
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- default
# ...
6.6.6.2. AWS Cloudfront storage
Use the following example when configuring AWS Cloudfront for your Red Hat Quay deployment.
When configuring AWS Cloudfront storage, the following conditions must be met for proper use with Red Hat Quay:
- You must set an Origin path that is consistent with Red Hat Quay's storage path as defined in your config.yaml file. Failure to meet this requirement results in a 403 error when pulling an image. For more information, see Origin path.
- You must configure a Bucket policy and a Cross-origin resource sharing (CORS) policy.
Cloudfront S3 example YAML
DISTRIBUTED_STORAGE_CONFIG:
default:
- CloudFrontedS3Storage
- cloudfront_distribution_domain: <CLOUDFRONT_DISTRIBUTION_DOMAIN>
cloudfront_key_id: <CLOUDFRONT_KEY_ID>
cloudfront_privatekey_filename: <CLOUDFRONT_PRIVATE_KEY_FILENAME>
host: <S3_HOST>
s3_access_key: <S3_ACCESS_KEY>
s3_bucket: <S3_BUCKET_NAME>
s3_secret_key: <S3_SECRET_KEY>
storage_path: <STORAGE_PATH>
s3_region: <S3_REGION>
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
- default
DISTRIBUTED_STORAGE_PREFERENCE:
- default
Bucket policy example
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:user/CloudFront Origin Access Identity <CLOUDFRONT_OAI_ID>"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::<S3_BUCKET_NAME>/*"
},
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:user/CloudFront Origin Access Identity <CLOUDFRONT_OAI_ID>"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::<S3_BUCKET_NAME>"
}
]
}
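The CORS policy required by the conditions above can take the following shape. This is an illustrative S3 CORS configuration, not a prescription; the allowed origin should match your Red Hat Quay hostname, and the methods and headers should be adapted to your deployment:

```json
[
    {
        "AllowedOrigins": ["https://quay-server.example.com"],
        "AllowedMethods": ["GET", "HEAD"],
        "AllowedHeaders": ["*"],
        "MaxAgeSeconds": 3000
    }
]
```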
6.6.7. Google Cloud Storage
The following YAML shows a sample configuration using Google Cloud Storage:
DISTRIBUTED_STORAGE_CONFIG:
googleCloudStorage:
- GoogleCloudStorage
- access_key: GOOGQIMFB3ABCDEFGHIJKLMN
bucket_name: quay-bucket
secret_key: FhDAYe2HeuAKfvZCAGyOioNaaRABCDEFGHIJKLMN
storage_path: /datastorage/registry
boto_timeout: 120
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- googleCloudStorage
- 1: Optional. The time, in seconds, until a timeout exception is thrown when attempting to read from a connection. The default is 60 seconds. This also encompasses the time, in seconds, until a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.
6.6.8. Azure Storage
The following YAML shows a sample configuration using Azure Storage:
DISTRIBUTED_STORAGE_CONFIG:
azureStorage:
- AzureStorage
- azure_account_name: azure_account_name_here
azure_container: azure_container_here
storage_path: /datastorage/registry
azure_account_key: azure_account_key_here
sas_token: some/path/
endpoint_url: https://[account-name].blob.core.usgovcloudapi.net
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- azureStorage
- 1: The endpoint_url parameter for Azure storage is optional and can be used with Microsoft Azure Government (MAG) endpoints. If left blank, the endpoint_url connects to the normal Azure region. As of Red Hat Quay 3.7, you must use the Primary endpoint of your MAG Blob service. Using the Secondary endpoint of your MAG Blob service results in the following error: AuthenticationErrorDetail:Cannot find the claimed account when trying to GetProperties for the account whusc8-secondary.
6.6.9. Swift storage
The following YAML shows a sample configuration using Swift storage:
DISTRIBUTED_STORAGE_CONFIG:
swiftStorage:
- SwiftStorage
- swift_user: swift_user_here
swift_password: swift_password_here
swift_container: swift_container_here
auth_url: https://example.org/swift/v1/quay
auth_version: 3
os_options:
tenant_id: <osp_tenant_id_here>
user_domain_name: <osp_domain_name_here>
ca_cert_path: /conf/stack/swift.cert
storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- swiftStorage
6.6.10. Nutanix object storage
The following YAML shows a sample configuration using Nutanix object storage.
DISTRIBUTED_STORAGE_CONFIG:
nutanixStorage: #storage config name
- RadosGWStorage #actual driver
- access_key: access_key_here #parameters
secret_key: secret_key_here
bucket_name: bucket_name_here
hostname: hostname_here
is_secure: 'true'
port: '443'
storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE: #must contain name of the storage config
- nutanixStorage
6.6.11. IBM Cloud object storage
The following YAML shows a sample configuration using IBM Cloud object storage.
DISTRIBUTED_STORAGE_CONFIG:
default:
- IBMCloudStorage #actual driver
- access_key: <access_key_here> #parameters
secret_key: <secret_key_here>
bucket_name: <bucket_name_here>
hostname: <hostname_here>
is_secure: 'true'
port: '443'
storage_path: /datastorage/registry
maximum_chunk_size_mb: 100mb
minimum_chunk_size_mb: 5mb
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
- default
DISTRIBUTED_STORAGE_PREFERENCE:
- default
- 1: Optional. Recommended to be set to 100mb.
- 2: Optional. Defaults to 5mb. Do not adjust this field without consulting Red Hat Support, because doing so can have unintended consequences.
6.6.12. NetApp ONTAP S3 object storage
The following YAML shows a sample configuration using NetApp ONTAP S3.
DISTRIBUTED_STORAGE_CONFIG:
local_us:
- RadosGWStorage
- access_key: <access_key>
bucket_name: <bucket_name>
hostname: <host_url_address>
is_secure: true
port: <port>
secret_key: <secret_key>
storage_path: /datastorage/registry
signature_version: v4
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
- local_us
DISTRIBUTED_STORAGE_PREFERENCE:
- local_us
6.6.13. Hitachi Content Platform object storage
The following YAML shows a sample configuration using HCP for object storage.
Example HCP storage configuration
DISTRIBUTED_STORAGE_CONFIG:
hcp_us:
- RadosGWStorage
- access_key: <access_key>
bucket_name: <bucket_name>
hostname: <hitachi_hostname_example>
is_secure: true
secret_key: <secret_key>
storage_path: /datastorage/registry
signature_version: v4
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
- hcp_us
DISTRIBUTED_STORAGE_PREFERENCE:
- hcp_us
6.7. Redis configuration fields
This section details the configuration fields available for Redis deployments.
6.7.1. Build logs
The following build logs configuration fields are available for Redis deployments:
Field | Type | Description |
---|---|---|
BUILDLOGS_REDIS | Object | Redis connection details for build logs caching. |
.host | String |
The hostname at which Redis is accessible. |
.port | Number |
The port at which Redis is accessible. |
.password | String |
The password to connect to the Redis instance. |
.ssl | Boolean | Whether to enable TLS communication between Redis and Quay. Defaults to false. |
6.7.2. User events
The following user event fields are available for Redis deployments:
Field | Type | Description |
---|---|---|
USER_EVENTS_REDIS | Object | Redis connection details for user event handling. |
.host | String |
The hostname at which Redis is accessible. |
.port | Number |
The port at which Redis is accessible. |
.password | String |
The password to connect to the Redis instance. |
.ssl | Boolean | Whether to enable TLS communication between Redis and Quay. Defaults to false. |
.ssl_keyfile | String |
The name of the key database file, which houses the client certificate to be used. |
.ssl_certfile | String |
Used for specifying the file path of the SSL certificate. |
.ssl_cert_reqs | String |
Used to specify the level of certificate validation to be performed during the SSL/TLS handshake. |
.ssl_ca_certs | String |
Used to specify the path to a file containing a list of trusted Certificate Authority (CA) certificates. |
.ssl_ca_data | String |
Used to specify a string containing the trusted CA certificates in PEM format. |
.ssl_check_hostname | Boolean |
Used when setting up an SSL/TLS connection to a server. It specifies whether the client should check that the hostname in the server’s SSL/TLS certificate matches the hostname of the server it is connecting to. |
6.7.3. Example Redis configuration
The following YAML shows a sample configuration using Redis with optional SSL/TLS fields:
BUILDLOGS_REDIS:
host: quay-server.example.com
password: strongpassword
port: 6379
ssl: true
USER_EVENTS_REDIS:
host: quay-server.example.com
password: strongpassword
port: 6379
ssl: true
ssl_*: <path_location_or_certificate>
If your deployment uses Azure Cache for Redis and ssl is set to true, the port defaults to 6380.
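For example, an illustrative Azure Cache for Redis fragment (the hostname is a placeholder) sets the port explicitly:

```yaml
BUILDLOGS_REDIS:
  host: <cache_name>.redis.cache.windows.net
  password: strongpassword
  port: 6380
  ssl: true
```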
6.8. ModelCache configuration options
The following options are available on Red Hat Quay for configuring ModelCache.
6.8.1. Memcache configuration option
Memcache is the default ModelCache configuration option. With Memcache, no additional configuration is necessary.
6.8.2. Single Redis configuration option
The following configuration is for a single Redis instance with optional read-only replicas:
DATA_MODEL_CACHE_CONFIG:
engine: redis
redis_config:
primary:
host: <host>
port: <port>
password: <password if ssl is true>
ssl: <true | false >
replica:
host: <host>
port: <port>
password: <password if ssl is true>
ssl: <true | false >
6.8.3. Clustered Redis configuration option
Use the following configuration for a clustered Redis instance:
DATA_MODEL_CACHE_CONFIG:
engine: rediscluster
redis_config:
startup_nodes:
- host: <cluster-host>
port: <port>
password: <password if ssl: true>
read_from_replicas: <true|false>
skip_full_coverage_check: <true | false>
ssl: <true | false >
6.9. Tag expiration configuration fields
The following tag expiration configuration fields are available with Red Hat Quay:
Field | Type | Description |
---|---|---|
FEATURE_GARBAGE_COLLECTION | Boolean |
Whether garbage collection of repositories is enabled. |
TAG_EXPIRATION_OPTIONS | Array of string |
If enabled, the options that users can select for expiration of tags in their namespace. |
DEFAULT_TAG_EXPIRATION | String |
The default, configurable tag expiration time for time machine. |
FEATURE_CHANGE_TAG_EXPIRATION | Boolean |
Whether users and organizations are allowed to change the tag expiration for tags in their namespace. |
FEATURE_AUTO_PRUNE | Boolean |
When set to |
NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES | Integer |
The interval, in minutes, that defines the frequency to re-run notifications for expiring images. |
DEFAULT_NAMESPACE_AUTOPRUNE_POLICY | Object | The default organization-wide auto-prune policy. |
.method: number_of_tags | Object | The option specifying the number of tags to keep. |
.value: <integer> | Integer |
When used with method: number_of_tags, denotes the number of tags to keep.
For example, to keep two tags, specify |
.creation_date | Object | The option specifying the duration of which to keep tags. |
.value: <integer> | Integer |
When used with creation_date, denotes how long to keep tags.
Can be set to seconds ( |
AUTO_PRUNING_DEFAULT_POLICY_POLL_PERIOD | Integer | The period in which the auto-pruner worker runs at the registry level. By default, it is set to run one time per day (one time per 24 hours). Value must be in seconds. |
6.9.1. Example tag expiration configuration
The following YAML example shows you a sample tag expiration configuration.
# ...
DEFAULT_TAG_EXPIRATION: 2w
TAG_EXPIRATION_OPTIONS:
- 0s
- 1d
- 1w
- 2w
- 4w
- 3y
# ...
6.9.2. Registry-wide auto-prune policies examples
The following YAML examples show you registry-wide auto-pruning examples by both number of tags and creation date.
Example registry auto-prune policy by number of tags
# ...
DEFAULT_NAMESPACE_AUTOPRUNE_POLICY:
method: number_of_tags
value: 10
# ...
- 1
- In this scenario, ten tags remain.
Example registry auto-prune policy by creation date
# ...
DEFAULT_NAMESPACE_AUTOPRUNE_POLICY:
method: creation_date
value: 1y
# ...
6.10. Quota management configuration fields
Field | Type | Description |
---|---|---|
FEATURE_QUOTA_MANAGEMENT | Boolean | Enables configuration, caching, and validation for the quota management feature. Default: False |
DEFAULT_SYSTEM_REJECT_QUOTA_BYTES | String | Enables system default quota reject byte allowance for all organizations. By default, no limit is set. |
QUOTA_BACKFILL | Boolean | Enables the quota backfill worker to calculate the size of pre-existing blobs.
Default: |
QUOTA_TOTAL_DELAY_SECONDS | String | The time delay for starting the quota backfill. Rolling deployments can cause incorrect totals. This field must be set to a time longer than it takes for the rolling deployment to complete.
Default: |
PERMANENTLY_DELETE_TAGS | Boolean | Enables functionality related to the removal of tags from the time machine window.
Default: |
RESET_CHILD_MANIFEST_EXPIRATION | Boolean |
Resets the expirations of temporary tags targeting the child manifests. With this feature set to
Default: |
6.10.1. Example quota management configuration
The following YAML is the suggested configuration when enabling quota management.
Quota management YAML configuration
FEATURE_QUOTA_MANAGEMENT: true
FEATURE_GARBAGE_COLLECTION: true
PERMANENTLY_DELETE_TAGS: true
QUOTA_TOTAL_DELAY_SECONDS: 1800
RESET_CHILD_MANIFEST_EXPIRATION: true
6.11. Proxy cache configuration fields
Field | Type | Description |
---|---|---|
FEATURE_PROXY_CACHE | Boolean | Enables Red Hat Quay to act as a pull through cache for upstream registries.
Default: |
6.12. Robot account configuration fields
Field | Type | Description |
---|---|---|
ROBOTS_DISALLOW | Boolean |
When set to |
6.13. Pre-configuring Red Hat Quay for automation
Red Hat Quay supports several configuration options that enable automation. Users can configure these options before deployment to reduce the need for interaction with the user interface.
6.13.1. Allowing the API to create the first user
To create the first user, users need to set the `FEATURE_USER_INITIALIZE` parameter to `true` and call the `/api/v1/user/initialize` API. Unlike all other registry API calls, which require an OAuth token generated by an OAuth application in an existing organization, this endpoint does not require authentication.
Users can use the API to create a user such as quayadmin
after deploying Red Hat Quay, provided no other users have been created. For more information, see Using the API to create the first user.
6.13.2. Enabling general API access
Users should set the `BROWSER_API_CALLS_XHR_ONLY` configuration option to `false` to allow general access to the Red Hat Quay registry API.
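For example, the following `config.yaml` fragment enables general API access:

```yaml
# Allows non-XHR browser clients to call the registry API
BROWSER_API_CALLS_XHR_ONLY: false
```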
6.13.3. Adding a superuser
After deploying Red Hat Quay, users can create a user and give the first user administrator privileges with full permissions. Users can configure full permissions in advance by using the `SUPER_USERS` configuration object. For example:
# ...
SERVER_HOSTNAME: quay-server.example.com
SETUP_COMPLETE: true
SUPER_USERS:
- quayadmin
# ...
6.13.4. Restricting user creation
After you have configured a superuser, you can restrict the ability to create new users to the superuser group by setting `FEATURE_USER_CREATION` to `false`. For example:
# ...
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
- quayadmin
FEATURE_USER_CREATION: false
# ...
6.13.5. Enabling new functionality in Red Hat Quay 3.14
To use new Red Hat Quay 3.14 functions, enable some or all of the following features:
# ...
FEATURE_UI_V2: true
FEATURE_UI_V2_REPO_SETTINGS: true
FEATURE_AUTO_PRUNE: true
ROBOTS_DISALLOW: false
# ...
6.13.6. Suggested configuration for automation
The following config.yaml
parameters are suggested for automation:
# ...
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
- quayadmin
FEATURE_USER_CREATION: false
# ...
6.13.7. Deploying the Red Hat Quay Operator using the initial configuration
Use the following procedure to deploy Red Hat Quay on OpenShift Container Platform using the initial configuration.
Prerequisites
- You have installed the `oc` CLI.
Procedure
Create a secret using the configuration file:
$ oc create secret generic -n quay-enterprise --from-file config.yaml=./config.yaml init-config-bundle-secret
Create a `quayregistry.yaml` file. Identify the unmanaged components and reference the created secret, for example:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: init-config-bundle-secret
Deploy the Red Hat Quay registry:
$ oc create -n quay-enterprise -f quayregistry.yaml
6.13.8. Using the API to create the first user
Use the following procedure to create the first user in your Red Hat Quay organization.
Prerequisites
- The config option `FEATURE_USER_INITIALIZE` must be set to `true`.
- No users can already exist in the database.
This procedure requests an OAuth token by specifying "access_token": true
.
Open your Red Hat Quay configuration file and update the following configuration fields:
FEATURE_USER_INITIALIZE: true
SUPER_USERS:
- quayadmin
Stop the Red Hat Quay service by entering the following command:
$ sudo podman stop quay
Start the Red Hat Quay service by entering the following command:
$ sudo podman run -d -p 80:8080 -p 443:8443 --name=quay -v $QUAY/config:/conf/stack:Z -v $QUAY/storage:/datastorage:Z {productrepo}/{quayimage}:{productminv}
Run the following `curl` command to generate a new user with a username, password, email, and access token:
$ curl -X POST -k http://quay-server.example.com/api/v1/user/initialize --header 'Content-Type: application/json' --data '{ "username": "quayadmin", "password":"quaypass12345", "email": "quayadmin@example.com", "access_token": true}'
If successful, the command returns an object with the username, email, and encrypted password. For example:
{
  "access_token": "6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED",
  "email": "quayadmin@example.com",
  "encrypted_password": "1nZMLH57RIE5UGdL/yYpDOHLqiNCgimb6W9kfF8MjZ1xrfDpRyRs9NUnUuNuAitW",
  "username": "quayadmin"
}
If a user already exists in the database, an error is returned:
{"message":"Cannot initialize user in a non-empty database"}
If your password is not at least eight characters or contains whitespace, an error is returned:
{"message":"Failed to initialize user: Invalid password, password must be at least 8 characters and contain no whitespace."}
Log in to your Red Hat Quay deployment by entering the following command:
$ sudo podman login -u quayadmin -p quaypass12345 http://quay-server.example.com --tls-verify=false
Example output
Login Succeeded!
6.13.8.1. Using the OAuth token
After invoking the API, you can access the rest of the Red Hat Quay API by specifying the returned OAuth token.
Prerequisites
- You have invoked the `/api/v1/user/initialize` API, and passed in the username, password, and email address.
Procedure
Obtain the list of current users by entering the following command:
$ curl -X GET -k -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/superuser/users/
Example output:
{
  "users": [
    {
      "kind": "user",
      "name": "quayadmin",
      "username": "quayadmin",
      "email": "quayadmin@example.com",
      "verified": true,
      "avatar": {
        "name": "quayadmin",
        "hash": "3e82e9cbf62d25dec0ed1b4c66ca7c5d47ab9f1f271958298dea856fb26adc4c",
        "color": "#e7ba52",
        "kind": "user"
      },
      "super_user": true,
      "enabled": true
    }
  ]
}
In this instance, the details for the `quayadmin` user are returned because it is the only user that has been created so far.
6.13.8.2. Using the API to create an organization
The following procedure details how to use the API to create a Red Hat Quay organization.
Prerequisites
- You have invoked the `/api/v1/user/initialize` API, and passed in the username, password, and email address.
- You have the OAuth access token that was returned by the call.
Procedure
To create an organization, send a `POST` request to the `api/v1/organization/` endpoint:
$ curl -X POST -k --header 'Content-Type: application/json' -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/ --data '{"name": "testorg", "email": "testorg@example.com"}'
Example output:
"Created"
You can retrieve the details of the organization you created by entering the following command:
$ curl -X GET -k --header 'Content-Type: application/json' -H "Authorization: Bearer 6B4QTRSTSD1HMIG915VPX7BMEZBVB9GPNY2FC2ED" https://min-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/testorg
Example output:
{
  "name": "testorg",
  "email": "testorg@example.com",
  "avatar": {
    "name": "testorg",
    "hash": "5f113632ad532fc78215c9258a4fb60606d1fa386c91b141116a1317bf9c53c8",
    "color": "#a55194",
    "kind": "user"
  },
  "is_admin": true,
  "is_member": true,
  "teams": {
    "owners": {
      "name": "owners",
      "description": "",
      "role": "admin",
      "avatar": {
        "name": "owners",
        "hash": "6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90",
        "color": "#c7c7c7",
        "kind": "team"
      },
      "can_view": true,
      "repo_count": 0,
      "member_count": 1,
      "is_synced": false
    }
  },
  "ordered_teams": [
    "owners"
  ],
  "invoice_email": false,
  "invoice_email_address": null,
  "tag_expiration_s": 1209600,
  "is_free_account": true
}
6.14. Basic configuration fields
Field | Type | Description |
---|---|---|
REGISTRY_TITLE | String | If specified, the long-form title for the registry. Displayed in the frontend of your Red Hat Quay deployment, for example, at the sign in page of your organization. Should not exceed 35 characters. |
REGISTRY_TITLE_SHORT | String | If specified, the short-form title for the registry. The title is displayed on various pages of your organization, for example, as the title of the tutorial on your organization's Tutorial page. |
CONTACT_INFO | Array of String | If specified, contact information to display on the contact page. If only a single piece of contact information is specified, the contact footer links to it directly. |
[0] | String | Adds a link to send an e-mail. |
[1] | String | Adds a link to visit an IRC chat room. |
[2] | String | Adds a link to call a phone number. |
[3] | String | Adds a link to a defined URL. |
6.15. SSL configuration fields
Field | Type | Description |
---|---|---|
PREFERRED_URL_SCHEME | String | One of `http` or `https`. **Default:** `http` |
SERVER_HOSTNAME | String | The URL at which Red Hat Quay is accessible, without the scheme. |
SSL_CIPHERS | Array of String | If specified, the nginx-defined list of SSL ciphers to enable and disable. |
SSL_PROTOCOLS | Array of String | If specified, nginx is configured to enable the list of SSL protocols defined in the list. Removing an SSL protocol from the list disables the protocol during Red Hat Quay startup. |
SESSION_COOKIE_SECURE | Boolean | Whether the `secure` property should be set on session cookies. Recommended to be `True` for all installations using SSL. **Default:** `False` |
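The fields above can be combined as in the following illustrative `config.yaml` fragment; the hostname and protocol list are example values:

```yaml
SERVER_HOSTNAME: quay-server.example.com
PREFERRED_URL_SCHEME: https
# Enable only modern TLS versions (example values)
SSL_PROTOCOLS:
  - TLSv1.2
  - TLSv1.3
SESSION_COOKIE_SECURE: true
```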
6.15.1. Configuring SSL
Copy the certificate file and primary key file to your configuration directory, ensuring they are named `ssl.cert` and `ssl.key` respectively:
$ cp ~/ssl.cert $QUAY/config $ cp ~/ssl.key $QUAY/config $ cd $QUAY/config
Edit the `config.yaml` file and specify that you want Quay to handle TLS:

config.yaml

# ...
SERVER_HOSTNAME: quay-server.example.com
# ...
PREFERRED_URL_SCHEME: https
# ...
- Stop the `Quay` container and restart the registry.
6.16. Adding additional Certificate Authorities to the Red Hat Quay container
The extra_ca_certs
directory is the directory where additional Certificate Authorities (CAs) can be stored to extend the set of trusted certificates. These certificates are used by Red Hat Quay to verify SSL/TLS connections with external services. When deploying Red Hat Quay, you can place the necessary CAs in this directory to ensure that connections to services like LDAP, OIDC, and storage systems are properly secured and validated.
For standalone Red Hat Quay deployments, you must create this directory and copy the additional CA certificates into that directory.
Prerequisites
- You have a CA for the desired service.
Procedure
View the certificate to be added to the container by entering the following command:
$ cat storage.crt
Example output
-----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV... -----END CERTIFICATE-----
Create the `extra_ca_certs` directory in the `/config` folder of your Red Hat Quay directory by entering the following command:
$ mkdir -p /path/to/quay_config_folder/extra_ca_certs
Copy the CA file to the `extra_ca_certs` folder. For example:
$ cp storage.crt /path/to/quay_config_folder/extra_ca_certs/
Ensure that the `storage.crt` file exists within the `extra_ca_certs` folder by entering the following command:
$ tree /path/to/quay_config_folder/extra_ca_certs
Example output
/path/to/quay_config_folder/extra_ca_certs
├── storage.crt
Obtain the `CONTAINER ID` of your `Quay` container by entering the following command:
$ podman ps
Example output
CONTAINER ID   IMAGE                                  COMMAND           CREATED        STATUS        PORTS                                               NAMES
5a3e82c4a75f   <registry>/<repo>/quay:{productminv}   "/sbin/my_init"   24 hours ago   Up 18 hours   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 443/tcp   grave_keller
Restart the container by entering the following command:
$ podman restart 5a3e82c4a75f
Confirm that the certificate was copied into the container namespace by running the following command:
$ podman exec -it 5a3e82c4a75f cat /etc/ssl/certs/storage.pem
Example output
-----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV... -----END CERTIFICATE-----
6.17. LDAP configuration fields
Field | Type | Description |
---|---|---|
AUTHENTICATION_TYPE | String | Must be set to `LDAP`. |
FEATURE_TEAM_SYNCING | Boolean | Whether to allow for team membership to be synced from a backing group in the authentication engine (OIDC, LDAP, or Keystone). |
FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP | Boolean | If enabled, non-superusers can set up team synchronization. |
LDAP_ADMIN_DN | String | The admin DN for LDAP authentication. |
LDAP_ADMIN_PASSWD | String | The admin password for LDAP authentication. |
LDAP_ALLOW_INSECURE_FALLBACK | Boolean | Whether or not to allow SSL insecure fallback for LDAP authentication. |
LDAP_BASE_DN | Array of String | The base DN for LDAP authentication. |
LDAP_EMAIL_ATTR | String | The email attribute for LDAP authentication. |
LDAP_UID_ATTR | String | The uid attribute for LDAP authentication. |
LDAP_URI | String | The LDAP URI. |
LDAP_USER_FILTER | String | The user filter for LDAP authentication. |
LDAP_USER_RDN | Array of String | The user RDN for LDAP authentication. |
LDAP_SECONDARY_USER_RDNS | Array of String | Provide secondary user relative DNs if there are multiple Organizational Units where user objects are located. |
TEAM_RESYNC_STALE_TIME | String | If team syncing is enabled for a team, how often to check its membership and resync if necessary. |
LDAP_SUPERUSER_FILTER | String | Subset of the `LDAP_USER_FILTER` configuration field. When configured, allows Red Hat Quay administrators to configure LDAP users as superusers when Red Hat Quay uses LDAP as its authentication provider. With this field, administrators can add or remove superusers without having to update the Red Hat Quay configuration file and restart their deployment. This field requires that your `AUTHENTICATION_TYPE` is set to `LDAP`. |
LDAP_GLOBAL_READONLY_SUPERUSER_FILTER | String | When set, grants users of this list read access to all repositories, regardless of whether they are public repositories. Only works for those superusers defined with the `LDAP_SUPERUSER_FILTER` configuration field. |
LDAP_RESTRICTED_USER_FILTER | String | Subset of the `LDAP_USER_FILTER` configuration field. When configured, allows Red Hat Quay administrators to configure LDAP users as restricted users. This field requires that your `AUTHENTICATION_TYPE` is set to `LDAP`. |
FEATURE_RESTRICTED_USERS | Boolean | When set to `true` with `LDAP_RESTRICTED_USER_FILTER` active, only the listed users are restricted. **Default:** `False` |
LDAP_TIMEOUT | Integer | Specifies the time limit, in seconds, for LDAP operations. This limits the amount of time an LDAP search, bind, or other operation can take. Similar to the `-l` option of the `ldapsearch` CLI tool, it sets a client-side operation timeout. |
LDAP_NETWORK_TIMEOUT | Integer | Specifies the time limit, in seconds, for establishing a connection to the LDAP server. This is the maximum time Red Hat Quay waits for a response during network operations, similar to the `-o nettimeout` option of the `ldapsearch` CLI tool. |
6.17.1. LDAP configuration references
Use the following references to update your config.yaml
file with the desired LDAP settings.
6.17.1.1. Basic LDAP configuration
Use the following reference for a basic LDAP configuration.
---
AUTHENTICATION_TYPE: LDAP
---
LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com
LDAP_ADMIN_PASSWD: ABC123
LDAP_ALLOW_INSECURE_FALLBACK: false
LDAP_BASE_DN:
- dc=example
- dc=com
LDAP_EMAIL_ATTR: mail
LDAP_UID_ATTR: uid
LDAP_URI: ldap://<example_url>.com
LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,dc=<domain_name>,dc=com)
LDAP_USER_RDN:
- ou=people
LDAP_SECONDARY_USER_RDNS:
- ou=<example_organization_unit_one>
- ou=<example_organization_unit_two>
- ou=<example_organization_unit_three>
- ou=<example_organization_unit_four>
1. Required. Must be set to `LDAP`.
2. Required. The admin DN for LDAP authentication.
3. Required. The admin password for LDAP authentication.
4. Required. Whether to allow SSL/TLS insecure fallback for LDAP authentication.
5. Required. The base DN for LDAP authentication.
6. Required. The email attribute for LDAP authentication.
7. Required. The UID attribute for LDAP authentication.
8. Required. The LDAP URI.
9. Required. The user filter for LDAP authentication.
10. Required. The user RDN for LDAP authentication.
11. Optional. Secondary user relative DNs if there are multiple Organizational Units where user objects are located.
6.17.1.2. LDAP restricted user configuration
Use the following reference for an LDAP restricted user configuration.
# ...
AUTHENTICATION_TYPE: LDAP
# ...
FEATURE_RESTRICTED_USERS: true
# ...
LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com
LDAP_ADMIN_PASSWD: ABC123
LDAP_ALLOW_INSECURE_FALLBACK: false
LDAP_BASE_DN:
- o=<organization_id>
- dc=<example_domain_component>
- dc=com
LDAP_EMAIL_ATTR: mail
LDAP_UID_ATTR: uid
LDAP_URI: ldap://<example_url>.com
LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com)
LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>)
LDAP_USER_RDN:
- ou=<example_organization_unit>
- o=<organization_id>
- dc=<example_domain_component>
- dc=com
# ...
6.17.1.3. LDAP superuser configuration reference
Use the following reference for an LDAP superuser configuration.
# ...
AUTHENTICATION_TYPE: LDAP
# ...
LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com
LDAP_ADMIN_PASSWD: ABC123
LDAP_ALLOW_INSECURE_FALLBACK: false
LDAP_BASE_DN:
- o=<organization_id>
- dc=<example_domain_component>
- dc=com
LDAP_EMAIL_ATTR: mail
LDAP_UID_ATTR: uid
LDAP_URI: ldap://<example_url>.com
LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com)
LDAP_SUPERUSER_FILTER: (<filterField>=<value>)
LDAP_USER_RDN:
- ou=<example_organization_unit>
- o=<organization_id>
- dc=<example_domain_component>
- dc=com
# ...
1. Configures specified users as superusers.
6.18. Mirroring configuration fields
Field | Type | Description |
---|---|---|
FEATURE_REPO_MIRROR | Boolean | Enable or disable repository mirroring. **Default:** `false` |
REPO_MIRROR_INTERVAL | Number | The number of seconds between checking for repository mirror candidates. **Default:** `30` |
REPO_MIRROR_SERVER_HOSTNAME | String | Replaces the `SERVER_HOSTNAME` as the destination for mirroring. **Default:** None |
REPO_MIRROR_TLS_VERIFY | Boolean | Require HTTPS and verify certificates of the Quay registry during mirror. |
REPO_MIRROR_ROLLBACK | Boolean | When set to `true`, the repository rolls back after a failed mirror attempt. **Default:** `false` |
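The following illustrative `config.yaml` fragment enables repository mirroring with the fields above; the interval shown is an example value:

```yaml
FEATURE_REPO_MIRROR: true
# Check for mirror candidates every 30 seconds (example value)
REPO_MIRROR_INTERVAL: 30
REPO_MIRROR_TLS_VERIFY: true
REPO_MIRROR_ROLLBACK: false
```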
6.19. Security scanner configuration fields
Field | Type | Description |
---|---|---|
FEATURE_SECURITY_SCANNER | Boolean | Enable or disable the security scanner. **Default:** `False` |
FEATURE_SECURITY_NOTIFICATIONS | Boolean | If the security scanner is enabled, turn on or turn off security notifications. **Default:** `False` |
SECURITY_SCANNER_V4_REINDEX_THRESHOLD | String | This parameter is used to determine the minimum time, in seconds, to wait before re-indexing a manifest that has either previously failed or has changed states since the last indexing. The data is calculated from the `last_indexed` datetime in the `manifestsecuritystatus` table. This parameter is used to avoid trying to re-index every failed manifest on each indexing run. |
SECURITY_SCANNER_V4_ENDPOINT | String | The endpoint for the V4 security scanner. |
SECURITY_SCANNER_V4_PSK | String | The generated pre-shared key (PSK) for Clair. |
SECURITY_SCANNER_ENDPOINT | String | The endpoint for the V2 security scanner. |
SECURITY_SCANNER_INDEXING_INTERVAL | Integer | This parameter is used to determine the number of seconds between indexing intervals in the security scanner. When indexing is triggered, Red Hat Quay will query its database for manifests that must be indexed by Clair. These include manifests that have not yet been indexed and manifests that previously failed indexing. **Default:** `30` |
FEATURE_SECURITY_SCANNER_NOTIFY_ON_NEW_INDEX | Boolean | Whether to allow sending notifications about vulnerabilities for new pushes. **Default:** `True` |
SECURITY_SCANNER_V4_MANIFEST_CLEANUP | Boolean | Whether the Red Hat Quay garbage collector removes manifests that are not referenced by other tags or manifests. **Default:** `True` |
NOTIFICATION_MIN_SEVERITY_ON_NEW_INDEX | String | Sets the minimal security level for new notifications on detected vulnerabilities. Avoids creation of a large number of notifications after the first index. If not defined, defaults to `High`. Available options include `Critical`, `High`, `Medium`, `Low`, `Negligible`, and `Unknown`. |
SECURITY_SCANNER_V4_INDEX_MAX_LAYER_SIZE | String | The maximum layer size allowed for indexing. If a layer exceeds the configured size, the Red Hat Quay UI returns a message stating that the manifest could not be indexed because one of its layers exceeded the configured maximum size. |
6.19.1. Re-indexing with Clair v4
When Clair v4 indexes a manifest, the result should be deterministic. For example, the same manifest should produce the same index report. This is true until the scanners are changed, as using different scanners will produce different information relating to a specific manifest to be returned in the report. Because of this, Clair v4 exposes a state representation of the indexing engine (/indexer/api/v1/index_state
) to determine whether the scanner configuration has been changed.
Red Hat Quay leverages this index state by saving it to the index report when parsing to Quay’s database. If this state has changed since the manifest was previously scanned, Red Hat Quay will attempt to re-index that manifest during the periodic indexing process.
By default, the SECURITY_SCANNER_INDEXING_INTERVAL parameter is set to 30 seconds. Users might decrease the time if they want the indexing process to run more frequently, for example, if they did not want to wait 30 seconds to see security scan results in the UI after pushing a new tag. Users can also change the parameter if they want more control over the request pattern to Clair and the pattern of database operations being performed on the Red Hat Quay database.
6.19.2. Example security scanner configuration
The following YAML is the suggested configuration when enabling the security scanner feature.
Security scanner YAML configuration
FEATURE_SECURITY_NOTIFICATIONS: true
FEATURE_SECURITY_SCANNER: true
FEATURE_SECURITY_SCANNER_NOTIFY_ON_NEW_INDEX: true
...
SECURITY_SCANNER_INDEXING_INTERVAL: 30
SECURITY_SCANNER_V4_MANIFEST_CLEANUP: true
SECURITY_SCANNER_V4_ENDPOINT: http://quay-server.example.com:8081
SECURITY_SCANNER_V4_PSK: MTU5YzA4Y2ZkNzJoMQ==
SERVER_HOSTNAME: quay-server.example.com
SECURITY_SCANNER_V4_INDEX_MAX_LAYER_SIZE: 8G
...
1. Recommended maximum is `10G`.
6.20. Helm configuration fields
Field | Type | Description |
---|---|---|
FEATURE_GENERAL_OCI_SUPPORT | Boolean | Enable support for OCI artifacts. **Default:** `True` |
The following Open Container Initiative (OCI) artifact types are built into Red Hat Quay by default and are enabled through the FEATURE_GENERAL_OCI_SUPPORT configuration field:
- Helm
- Cosign
- SPDX
- Syft
- CycloneDX
- In-toto
- Unknown
6.20.1. Configuring Helm
The following YAML is the example configuration when enabling Helm.
Helm YAML configuration
FEATURE_GENERAL_OCI_SUPPORT: true
6.21. Open Container Initiative configuration fields
Field | Type | Description |
---|---|---|
FEATURE_REFERRERS_API | Boolean | Enables OCI 1.1’s referrers API. |
Example OCI referrers enablement YAML
# ...
FEATURE_REFERRERS_API: True
# ...
6.21.1. Model card rendering
The following configuration fields have been added to support model card rendering on the v2 UI.
Field | Type | Description |
---|---|---|
FEATURE_UI_MODELCARD | Boolean | Enables the Model Card image tab in the UI. Defaults to `true`. |
UI_MODELCARD_ARTIFACT_TYPE | String | Defines the model card artifact type. |
UI_MODELCARD_ANNOTATION | Object | This optional field defines the manifest-level annotation of the model card stored in an OCI image. |
UI_MODELCARD_LAYER_ANNOTATION | Object | This optional field defines the layer annotation of the model card stored in an OCI image. |
Example model card YAML
FEATURE_UI_MODELCARD: true
UI_MODELCARD_ARTIFACT_TYPE: application/x-mlmodel
UI_MODELCARD_ANNOTATION:
org.opencontainers.image.description: "Model card metadata"
UI_MODELCARD_LAYER_ANNOTATION:
org.opencontainers.image.title: README.md
1. Enables the Model Card image tab in the UI.
2. Defines the model card artifact type. In this example, the artifact type is `application/x-mlmodel`.
3. Optional. If an image does not have an `artifactType` defined, this field is checked at the manifest level. If a matching annotation is found, the system then searches for a layer with an annotation matching `UI_MODELCARD_LAYER_ANNOTATION`.
4. Optional. If an image has an `artifactType` defined and multiple layers, this field is used to locate the specific layer containing the model card.
6.22. Action log configuration fields
6.22.1. Action log storage configuration
Field | Type | Description |
---|---|---|
FEATURE_LOG_EXPORT | Boolean | Whether to allow exporting of action logs. **Default:** `True` |
LOGS_MODEL | String | Specifies the preferred method for handling log data. One of `database`, `transition_reads_both_writes_es`, `elasticsearch`, `splunk`. **Default:** `database` |
LOGS_MODEL_CONFIG | Object | Logs model config for action logs. |
ALLOW_WITHOUT_STRICT_LOGGING | Boolean | When set to `True`, if the external log system, for example, Splunk or Elasticsearch, is intermittently unavailable, users can still push images normally. Events are logged to stdout instead. |
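For example, a minimal `config.yaml` fragment that keeps action logs in the Red Hat Quay database:

```yaml
# Store action logs in the Quay database and allow log export
LOGS_MODEL: database
FEATURE_LOG_EXPORT: true
```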
6.22.1.1. Elasticsearch configuration fields
The following fields are available when configuring Elasticsearch for Red Hat Quay.
LOGS_MODEL_CONFIG [object]: Logs model config for action logs.
- elasticsearch_config [object]: Elasticsearch cluster configuration.
  - access_key [string]: Elasticsearch user (or IAM key for AWS ES). Example: some_string
  - host [string]: Elasticsearch cluster endpoint. Example: host.elasticsearch.example
  - index_prefix [string]: Elasticsearch's index prefix. Example: logentry_
  - index_settings [object]: Elasticsearch's index settings.
  - use_ssl [boolean]: Use SSL for Elasticsearch. Defaults to True. Example: True
  - secret_key [string]: Elasticsearch password (or IAM secret for AWS ES). Example: some_secret_string
  - aws_region [string]: Amazon Web Services region. Example: us-east-1
  - port [number]: Elasticsearch cluster endpoint port. Example: 1234
- kinesis_stream_config [object]: AWS Kinesis Stream configuration.
  - aws_secret_key [string]: AWS secret key. Example: some_secret_key
  - stream_name [string]: Kinesis stream to send action logs to. Example: logentry-kinesis-stream
  - aws_access_key [string]: AWS access key. Example: some_access_key
  - retries [number]: Max number of attempts made on a single request. Example: 5
  - read_timeout [number]: Number of seconds before timeout when reading from a connection. Example: 5
  - max_pool_connections [number]: The maximum number of connections to keep in a connection pool. Example: 10
  - aws_region [string]: AWS region. Example: us-east-1
  - connect_timeout [number]: Number of seconds before timeout when attempting to make a connection. Example: 5
- producer [string]: Logs producer if logging to Elasticsearch.
  - enum: kafka, elasticsearch, kinesis_stream
  - Example: kafka
- kafka_config [object]: Kafka cluster configuration.
  - topic [string]: Kafka topic to publish log entries to. Example: logentry
  - bootstrap_servers [array]: List of Kafka brokers to bootstrap the client from.
  - max_block_seconds [number]: Max number of seconds to block during a send(), either because the buffer is full or metadata is unavailable. Example: 10
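Putting these fields together, the following is a minimal sketch of an Elasticsearch logs configuration. The host, port, index prefix, and credential placeholders reuse the illustrative example values from the field list above and must be replaced with values for your cluster:

```yaml
# ...
LOGS_MODEL: elasticsearch
LOGS_MODEL_CONFIG:
  producer: elasticsearch
  elasticsearch_config:
    host: host.elasticsearch.example
    port: 1234
    access_key: <access_key>
    secret_key: <secret_key>
    use_ssl: True
    index_prefix: logentry_
# ...
```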
6.22.1.2. Splunk configuration fields
The following fields are available when configuring Splunk for Red Hat Quay.
- producer [string]: splunk. Use when configuring Splunk.
- splunk_config [object]: Logs model configuration for Splunk action logs or the Splunk cluster configuration.
  - host [string]: Splunk cluster endpoint.
  - port [integer]: Splunk management cluster endpoint port.
  - bearer_token [string]: The bearer token for Splunk.
  - verify_ssl [boolean]: Enable (True) or disable (False) TLS/SSL verification for HTTPS connections.
  - index_prefix [string]: Splunk's index prefix.
  - ssl_ca_path [string]: The relative container path to a single .pem file containing a certificate authority (CA) for SSL validation.
Example Splunk configuration
# ...
LOGS_MODEL: splunk
LOGS_MODEL_CONFIG:
producer: splunk
splunk_config:
host: http://<user_name>.remote.csb
port: 8089
bearer_token: <bearer_token>
url_scheme: <http/https>
verify_ssl: False
index_prefix: <splunk_log_index_name>
ssl_ca_path: <location_to_ssl-ca-cert.pem>
# ...
6.22.1.3. Splunk HEC configuration fields
The following fields are available when configuring Splunk HTTP Event Collector (HEC) for Red Hat Quay.
- producer [string]: splunk_hec. Use when configuring Splunk HEC.
- splunk_hec_config [object]: Logs model configuration for Splunk HTTP Event Collector action logs.
  - host [string]: Splunk cluster endpoint.
  - port [integer]: Splunk management cluster endpoint port.
  - hec_token [string]: HEC token for Splunk.
  - url_scheme [string]: The URL scheme for accessing the Splunk service. If Splunk is behind SSL/TLS, must be https.
  - verify_ssl [boolean]: Enable (true) or disable (false) SSL/TLS verification for HTTPS connections.
  - index [string]: The Splunk index to use.
  - splunk_host [string]: The host name to log this event.
  - splunk_sourcetype [string]: The name of the Splunk sourcetype to use.
Example Splunk HEC configuration
# ...
LOGS_MODEL: splunk
LOGS_MODEL_CONFIG:
producer: splunk_hec
splunk_hec_config:
host: prd-p-aaaaaq.splunkcloud.com
port: 8088
hec_token: 12345678-1234-1234-1234-1234567890ab
url_scheme: https
verify_ssl: False
index: quay
splunk_host: quay-dev
splunk_sourcetype: quay_logs
# ...
6.22.2. Action log rotation and archiving configuration
Field | Type | Description |
---|---|---|
FEATURE_ACTION_LOG_ROTATION | Boolean |
Enabling log rotation and archival will move all logs older than 30 days to storage. |
ACTION_LOG_ARCHIVE_LOCATION | String |
If action log archiving is enabled, the storage engine in which to place the archived data. |
ACTION_LOG_ARCHIVE_PATH | String |
If action log archiving is enabled, the path in storage in which to place the archived data. |
ACTION_LOG_ROTATION_THRESHOLD | String |
The time interval after which to rotate logs. |
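For example, a sketch of an archiving configuration is shown below. The storage engine ID (local_us), archive path, and rotation threshold are illustrative; the ACTION_LOG_ARCHIVE_LOCATION value must match an engine defined in your DISTRIBUTED_STORAGE_CONFIG:

```yaml
# ...
FEATURE_ACTION_LOG_ROTATION: true
ACTION_LOG_ARCHIVE_LOCATION: local_us
ACTION_LOG_ARCHIVE_PATH: archives/actionlogs
ACTION_LOG_ROTATION_THRESHOLD: 30d
# ...
```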
6.22.3. Action log audit configuration
Field | Type | Description |
---|---|---|
ACTION_LOG_AUDIT_LOGINS | Boolean |
When set to |
6.23. Build logs configuration fields
Field | Type | Description |
---|---|---|
FEATURE_READER_BUILD_LOGS | Boolean |
If set to true, build logs can be read by those with |
LOG_ARCHIVE_LOCATION | String |
The storage location, defined in |
LOG_ARCHIVE_PATH | String |
The path under the configured storage engine in which to place the archived build logs in |
6.24. Dockerfile build triggers fields
Field | Type | Description |
---|---|---|
FEATURE_BUILD_SUPPORT | Boolean |
Whether to support Dockerfile build. |
SUCCESSIVE_TRIGGER_FAILURE_DISABLE_THRESHOLD | Number |
If not set to |
SUCCESSIVE_TRIGGER_INTERNAL_ERROR_DISABLE_THRESHOLD | Number |
If not set to |
6.24.1. GitHub build triggers
Field | Type | Description |
---|---|---|
FEATURE_GITHUB_BUILD | Boolean |
Whether to support GitHub build triggers. |
GITHUB_TRIGGER_CONFIG | Object | Configuration for using GitHub Enterprise for build triggers. |
.GITHUB_ENDPOINT | String |
The endpoint for GitHub Enterprise. |
.API_ENDPOINT | String |
The endpoint of the GitHub Enterprise API to use. Must be overridden for |
.CLIENT_ID | String |
The registered client ID for this Red Hat Quay instance; this cannot be shared with |
.CLIENT_SECRET | String | The registered client secret for this Red Hat Quay instance. |
6.24.2. BitBucket build triggers
Field | Type | Description |
---|---|---|
FEATURE_BITBUCKET_BUILD | Boolean |
Whether to support Bitbucket build triggers. |
BITBUCKET_TRIGGER_CONFIG | Object | Configuration for using BitBucket for build triggers. |
.CONSUMER_KEY | String | The registered consumer key (client ID) for this Red Hat Quay instance. |
.CONSUMER_SECRET | String | The registered consumer secret (client secret) for this Red Hat Quay instance. |
6.24.3. GitLab build triggers
Field | Type | Description |
---|---|---|
FEATURE_GITLAB_BUILD | Boolean |
Whether to support GitLab build triggers. |
GITLAB_TRIGGER_CONFIG | Object | Configuration for using Gitlab for build triggers. |
.GITLAB_ENDPOINT | String | The endpoint at which Gitlab Enterprise is running. |
.CLIENT_ID | String | The registered client ID for this Red Hat Quay instance. |
.CLIENT_SECRET | String | The registered client secret for this Red Hat Quay instance. |
6.25. Build manager configuration fields
Field | Type | Description |
---|---|---|
ALLOWED_WORKER_COUNT | String |
Defines how many Build Workers are instantiated per Red Hat Quay pod. Typically set to |
ORCHESTRATOR_PREFIX | String | Defines a unique prefix to be added to all Redis keys. This is useful to isolate Orchestrator values from other Redis keys. |
REDIS_HOST | Object | The hostname for your Redis service. |
REDIS_PASSWORD | String | The password to authenticate into your Redis service. |
REDIS_SSL | Boolean | Defines whether or not your Redis connection uses SSL/TLS. |
REDIS_SKIP_KEYSPACE_EVENT_SETUP | Boolean |
By default, Red Hat Quay does not set up the keyspace events required for key events at runtime. To do so, set |
EXECUTOR | String |
Starts a definition of an Executor of this type. Valid values are |
BUILDER_NAMESPACE | String | Kubernetes namespace where Red Hat Quay Builds will take place. |
K8S_API_SERVER | Object | Hostname for API Server of the OpenShift Container Platform cluster where Builds will take place. |
K8S_API_TLS_CA | Object |
The filepath in the |
KUBERNETES_DISTRIBUTION | String |
Indicates which type of Kubernetes is being used. Valid values are |
CONTAINER_* | Object |
Define the resource requests and limits for each |
NODE_SELECTOR_* | Object |
Defines the node selector label name-value pair where |
CONTAINER_RUNTIME | Object |
Specifies whether the Builder should run |
SERVICE_ACCOUNT_NAME/SERVICE_ACCOUNT_TOKEN | Object |
Defines the Service Account name or token that will be used by |
QUAY_USERNAME/QUAY_PASSWORD | Object |
Defines the registry credentials needed to pull the Red Hat Quay build worker image that is specified in the |
WORKER_IMAGE | Object | Image reference for the Red Hat Quay Builder image. registry.redhat.io/quay/quay-builder |
WORKER_TAG | Object | Tag for the Builder image desired. The latest version is 3.14. |
BUILDER_VM_CONTAINER_IMAGE | Object |
The full reference to the container image holding the internal VM needed to run each Red Hat Quay Build. ( |
SETUP_TIME | String |
Specifies the number of seconds at which a Build times out if it has not yet registered itself with the Build Manager. Defaults at |
MINIMUM_RETRY_THRESHOLD | String |
This setting is used with multiple Executors. It indicates how many retries are attempted to start a Build before a different Executor is chosen. Setting to |
SSH_AUTHORIZED_KEYS | Object |
List of SSH keys to bootstrap in the |
6.26. OAuth configuration fields
Field | Type | Description |
---|---|---|
DIRECT_OAUTH_CLIENTID_WHITELIST | Array of String | A list of client IDs for Quay-managed applications that are allowed to perform direct OAuth approval without user approval. |
FEATURE_ASSIGN_OAUTH_TOKEN | Boolean | Allows organization administrators to assign OAuth tokens to other users. |
6.26.1. GitHub OAuth configuration fields
Field | Type | Description |
---|---|---|
FEATURE_GITHUB_LOGIN | Boolean |
Whether GitHub login is supported |
GITHUB_LOGIN_CONFIG | Object | Configuration for using GitHub (Enterprise) as an external login provider. |
.ALLOWED_ORGANIZATIONS | Array of String | The names of the GitHub (Enterprise) organizations whitelisted to work with the ORG_RESTRICT option. |
.API_ENDPOINT | String |
The endpoint of the GitHub (Enterprise) API to use. Must be overridden for github.com |
.CLIENT_ID | String |
The registered client ID for this Red Hat Quay instance; cannot be shared with |
.CLIENT_SECRET | String |
The registered client secret for this Red Hat Quay instance. |
.GITHUB_ENDPOINT | String |
The endpoint for GitHub (Enterprise). |
.ORG_RESTRICT | Boolean | If true, only users within the organization whitelist can login using this provider. |
6.26.2. Google OAuth configuration fields
Field | Type | Description |
---|---|---|
FEATURE_GOOGLE_LOGIN | Boolean |
Whether Google login is supported. |
GOOGLE_LOGIN_CONFIG | Object | Configuration for using Google for external authentication. |
.CLIENT_ID | String |
The registered client ID for this Red Hat Quay instance. |
.CLIENT_SECRET | String |
The registered client secret for this Red Hat Quay instance. |
6.27. OIDC configuration fields
Field | Type | Description |
---|---|---|
<string>_LOGIN_CONFIG | String |
The parent key that holds the OIDC configuration settings. Typically the name of the OIDC provider, for example, |
.CLIENT_ID | String |
The registered client ID for this Red Hat Quay instance. |
.CLIENT_SECRET | String |
The registered client secret for this Red Hat Quay instance. |
.DEBUGLOG | Boolean | Whether to enable debugging. |
.LOGIN_BINDING_FIELD | String | Used when the internal authorization is set to LDAP. Red Hat Quay reads this parameter and tries to search through the LDAP tree for the user with this username. If it exists, it automatically creates a link to that LDAP account. |
.LOGIN_SCOPES | Object | Adds additional scopes that Red Hat Quay uses to communicate with the OIDC provider. |
.OIDC_ENDPOINT_CUSTOM_PARAMS | String |
Support for custom query parameters on OIDC endpoints. The following endpoints are supported: |
.OIDC_ISSUER | String |
Allows the user to define the issuer to verify. For example, JWT tokens contain a parameter known as |
.OIDC_SERVER | String |
The address of the OIDC server that is being used for authentication. |
.PREFERRED_USERNAME_CLAIM_NAME | String | Sets the preferred username to a parameter from the token. |
.SERVICE_ICON | String | Changes the icon on the login screen. |
.SERVICE_NAME | String |
The name of the service that is being authenticated. |
.VERIFIED_EMAIL_CLAIM_NAME | String | The name of the claim that is used to verify the email address of the user. |
.PREFERRED_GROUP_CLAIM_NAME | String | The key name within the OIDC token payload that holds information about the user’s group memberships. |
.OIDC_DISABLE_USER_ENDPOINT | Boolean |
Whether to allow or disable the |
6.27.1. OIDC configuration
The following example shows a sample OIDC configuration.
Example OIDC configuration
AUTHENTICATION_TYPE: OIDC
# ...
AZURE_LOGIN_CONFIG:
  CLIENT_ID: <client_id>
  CLIENT_SECRET: <client_secret>
  OIDC_SERVER: <oidc_server_address>
  DEBUGLOG: true
  SERVICE_NAME: Microsoft Entra ID
  VERIFIED_EMAIL_CLAIM_NAME: <verified_email>
  OIDC_DISABLE_USER_ENDPOINT: true
  OIDC_ENDPOINT_CUSTOM_PARAMS:
    authorization_endpoint:
      some: param
# ...
6.28. Nested repositories configuration fields
Support for nested repository path names has been added under the FEATURE_EXTENDED_REPOSITORY_NAMES
property. This optional configuration is added to the config.yaml by default. Enablement allows the use of /
in repository names.
Field | Type | Description |
---|---|---|
FEATURE_EXTENDED_REPOSITORY_NAMES | Boolean |
Enable support for nested repositories |
OCI and nested repositories configuration example
FEATURE_EXTENDED_REPOSITORY_NAMES: true
6.29. QuayIntegration configuration fields
The following configuration fields are available for the QuayIntegration custom resource:
Name | Description | Schema |
---|---|---|
allowlistNamespaces | A list of namespaces to include. | Array |
clusterID | The ID associated with this cluster. | String |
credentialsSecret.key | The secret containing credentials to communicate with the Quay registry. | Object |
denylistNamespaces | A list of namespaces to exclude. | Array |
insecureRegistry | Whether to skip TLS verification to the Quay registry | Boolean |
quayHostname | The hostname of the Quay registry. | String |
scheduledImageStreamImport | Whether to enable image stream importing. | Boolean |
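The following sketch shows how these fields fit together in a QuayIntegration resource. The apiVersion, metadata names, secret reference, and hostname are illustrative placeholders, not values from this table:

```yaml
apiVersion: quay.redhat.com/v1
kind: QuayIntegration
metadata:
  name: example-quayintegration
spec:
  clusterID: openshift
  credentialsSecret:
    namespace: openshift-operators
    name: quay-integration
  quayHostname: https://<quay_registry_hostname>
  insecureRegistry: false
```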
6.30. Mail configuration fields
Field | Type | Description |
---|---|---|
FEATURE_MAILING | Boolean |
Whether emails are enabled |
MAIL_DEFAULT_SENDER | String |
If specified, the e-mail address used as the |
MAIL_PASSWORD | String | The SMTP password to use when sending e-mails |
MAIL_PORT | Number | The SMTP port to use. If not specified, defaults to 587. |
MAIL_SERVER | String |
The SMTP server to use for sending e-mails. Only required if FEATURE_MAILING is set to true. |
MAIL_USERNAME | String | The SMTP username to use when sending e-mails |
MAIL_USE_TLS | Boolean |
If specified, whether to use TLS for sending e-mails |
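As an illustration, a minimal mail configuration sketch follows. The server hostname, credentials, and sender address are placeholder values:

```yaml
# ...
FEATURE_MAILING: true
MAIL_SERVER: smtp.example.com
MAIL_PORT: 587
MAIL_USE_TLS: true
MAIL_USERNAME: <smtp_username>
MAIL_PASSWORD: <smtp_password>
MAIL_DEFAULT_SENDER: quay@example.com
# ...
```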
6.31. User configuration fields
Field | Type | Description |
---|---|---|
FEATURE_SUPER_USERS | Boolean |
Whether superusers are supported |
FEATURE_USER_CREATION | Boolean |
Whether users can be created (by non-superusers) |
FEATURE_USER_LAST_ACCESSED | Boolean |
Whether to record the last time a user was accessed |
FEATURE_USER_LOG_ACCESS | Boolean |
If set to true, users will have access to audit logs for their namespace |
FEATURE_USER_METADATA | Boolean |
Whether to collect and support user metadata |
FEATURE_USERNAME_CONFIRMATION | Boolean |
If set to true, users can confirm and modify their initial usernames when logging in via OpenID Connect (OIDC) or a non-database internal authentication provider like LDAP. |
FEATURE_USER_RENAME | Boolean |
If set to true, users can rename their own namespace |
FEATURE_INVITE_ONLY_USER_CREATION | Boolean |
Whether users being created must be invited by another user |
FRESH_LOGIN_TIMEOUT | String |
The time after which a fresh login requires users to re-enter their password |
USERFILES_LOCATION | String |
ID of the storage engine in which to place user-uploaded files |
USERFILES_PATH | String |
Path under storage in which to place user-uploaded files |
USER_RECOVERY_TOKEN_LIFETIME | String |
The length of time a token for recovering a user account is valid |
FEATURE_SUPERUSERS_FULL_ACCESS | Boolean | Grants superusers the ability to read, write, and delete content from other repositories in namespaces that they do not own or have explicit permissions for.
Default: |
FEATURE_SUPERUSERS_ORG_CREATION_ONLY | Boolean | Whether to only allow superusers to create organizations.
Default: |
FEATURE_RESTRICTED_USERS | Boolean |
When set to
Default: |
RESTRICTED_USERS_WHITELIST | String |
When set with |
GLOBAL_READONLY_SUPER_USERS | String |
When set, grants users of this list read access to all repositories, regardless of whether they are public repositories. Only works for those superusers defined with the |
6.31.1. User configuration fields references
Use the following references to update your config.yaml
file with the desired configuration field.
6.31.1.1. FEATURE_SUPERUSERS_FULL_ACCESS configuration reference
---
SUPER_USERS:
- quayadmin
FEATURE_SUPERUSERS_FULL_ACCESS: True
---
6.31.1.2. GLOBAL_READONLY_SUPER_USERS configuration reference
---
GLOBAL_READONLY_SUPER_USERS:
- user1
---
6.31.1.3. FEATURE_RESTRICTED_USERS configuration reference
---
AUTHENTICATION_TYPE: Database
---
---
FEATURE_RESTRICTED_USERS: true
---
6.31.1.4. RESTRICTED_USERS_WHITELIST configuration reference
Prerequisites
-
FEATURE_RESTRICTED_USERS
is set totrue
in yourconfig.yaml
file.
---
AUTHENTICATION_TYPE: Database
---
---
FEATURE_RESTRICTED_USERS: true
RESTRICTED_USERS_WHITELIST:
- user1
---
When this field is set, whitelisted users can create organizations, or read or write content from the repository even if FEATURE_RESTRICTED_USERS is set to true. Other users, for example, user2, user3, and user4, are restricted from creating organizations, reading, or writing content.
6.32. Recaptcha configuration fields
Field | Type | Description |
---|---|---|
FEATURE_RECAPTCHA | Boolean |
Whether Recaptcha is necessary for user login and recovery |
RECAPTCHA_SECRET_KEY | String | If recaptcha is enabled, the secret key for the Recaptcha service |
RECAPTCHA_SITE_KEY | String | If recaptcha is enabled, the site key for the Recaptcha service |
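For example, a sketch of a Recaptcha configuration; the key values are placeholders obtained from the Recaptcha service:

```yaml
# ...
FEATURE_RECAPTCHA: true
RECAPTCHA_SITE_KEY: <site_key>
RECAPTCHA_SECRET_KEY: <secret_key>
# ...
```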
6.33. ACI configuration fields
Field | Type | Description |
---|---|---|
FEATURE_ACI_CONVERSION | Boolean |
Whether to enable conversion to ACIs |
GPG2_PRIVATE_KEY_FILENAME | String | The filename of the private key used to decrypt ACIs |
GPG2_PRIVATE_KEY_NAME | String | The name of the private key used to sign ACIs |
GPG2_PUBLIC_KEY_FILENAME | String | The filename of the public key used to encrypt ACIs |
6.34. JWT configuration fields
Field | Type | Description |
---|---|---|
JWT_AUTH_ISSUER | String |
The endpoint for JWT users |
JWT_GETUSER_ENDPOINT | String |
The endpoint for JWT users |
JWT_QUERY_ENDPOINT | String |
The endpoint for JWT queries |
JWT_VERIFY_ENDPOINT | String |
The endpoint for JWT verification |
6.35. App tokens configuration fields
Field | Type | Description |
---|---|---|
FEATURE_APP_SPECIFIC_TOKENS | Boolean |
If enabled, users can create tokens for use by the Docker CLI |
APP_SPECIFIC_TOKEN_EXPIRATION | String |
The expiration for external app tokens. |
EXPIRED_APP_SPECIFIC_TOKEN_GC | String |
Duration of time expired external app tokens will remain before being garbage collected |
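As an illustration, a sketch enabling app-specific tokens; both duration values are placeholders and should be set to intervals appropriate for your deployment:

```yaml
# ...
FEATURE_APP_SPECIFIC_TOKENS: true
APP_SPECIFIC_TOKEN_EXPIRATION: <duration>
EXPIRED_APP_SPECIFIC_TOKEN_GC: <duration>
# ...
```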
6.36. Miscellaneous configuration fields
Field | Type | Description |
---|---|---|
ALLOW_PULLS_WITHOUT_STRICT_LOGGING | String |
If true, pulls will still succeed even if the pull audit log entry cannot be written. This is useful if the database is in a read-only state and it is desired for pulls to continue during that time. |
AVATAR_KIND | String |
The types of avatars to display, either generated inline (local) or Gravatar (gravatar) |
BROWSER_API_CALLS_XHR_ONLY | Boolean |
If enabled, only API calls marked as being made by an XHR will be allowed from browsers |
DEFAULT_NAMESPACE_MAXIMUM_BUILD_COUNT | Number |
The default maximum number of builds that can be queued in a namespace. |
ENABLE_HEALTH_DEBUG_SECRET | String | If specified, a secret that can be given to health endpoints to see full debug info when not authenticated as a superuser |
EXTERNAL_TLS_TERMINATION | Boolean |
Set to |
FRESH_LOGIN_TIMEOUT | String |
The time after which a fresh login requires users to re-enter their password |
HEALTH_CHECKER | String |
The configured health check |
PROMETHEUS_NAMESPACE | String |
The prefix applied to all exposed Prometheus metrics |
PUBLIC_NAMESPACES | Array of String | If a namespace is defined in the public namespace list, then it will appear on all users' repository list pages, regardless of whether the user is a member of the namespace. Typically, this is used by an enterprise customer in configuring a set of "well-known" namespaces. |
REGISTRY_STATE | String |
The state of the registry |
SEARCH_MAX_RESULT_PAGE_COUNT | Number |
Maximum number of pages the user can paginate in search before they are limited |
SEARCH_RESULTS_PER_PAGE | Number |
Number of results returned per page by search page |
V2_PAGINATION_SIZE | Number |
The number of results returned per page in V2 registry APIs |
WEBHOOK_HOSTNAME_BLACKLIST | Array of String | The set of hostnames to disallow from webhooks when validating, beyond localhost |
CREATE_PRIVATE_REPO_ON_PUSH | Boolean |
Whether new repositories created by push are set to private visibility |
CREATE_NAMESPACE_ON_PUSH | Boolean |
Whether new push to a non-existent organization creates it |
NON_RATE_LIMITED_NAMESPACES | Array of String |
If rate limiting has been enabled using |
FEATURE_UI_V2 | Boolean | When set, allows users to try the beta UI environment.
Default: | |
FEATURE_REQUIRE_TEAM_INVITE | Boolean |
Whether to require invitations when adding a user to a team |
FEATURE_REQUIRE_ENCRYPTED_BASIC_AUTH | Boolean |
Whether non-encrypted passwords (as opposed to encrypted tokens) can be used for basic auth |
FEATURE_RATE_LIMITS | Boolean |
Whether to enable rate limits on API and registry endpoints. Setting FEATURE_RATE_LIMITS to |
FEATURE_FIPS | Boolean |
If set to true, Red Hat Quay will run using FIPS-compliant hash functions |
FEATURE_AGGREGATED_LOG_COUNT_RETRIEVAL | Boolean |
Whether to allow retrieval of aggregated log counts |
FEATURE_ANONYMOUS_ACCESS | Boolean |
Whether to allow anonymous users to browse and pull public repositories |
FEATURE_DIRECT_LOGIN | Boolean |
Whether users can directly login to the UI |
FEATURE_LIBRARY_SUPPORT | Boolean |
Whether to allow for "namespace-less" repositories when pulling and pushing from Docker |
FEATURE_PARTIAL_USER_AUTOCOMPLETE | Boolean |
If set to true, autocompletion will apply to partial usernames+ |
FEATURE_PERMANENT_SESSIONS | Boolean |
Whether sessions are permanent |
FEATURE_PUBLIC_CATALOG | Boolean |
If set to true, the |
DISABLE_PUSHES | Boolean |
Disables pushes of new content to the registry while retaining all other functionality. Differs from |
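As an illustration, a config.yaml fragment combining a few of the fields above; the values shown are examples only:

```yaml
# ...
CREATE_PRIVATE_REPO_ON_PUSH: true
CREATE_NAMESPACE_ON_PUSH: false
SEARCH_RESULTS_PER_PAGE: 25
# ...
```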
6.37. Legacy configuration fields
The following fields are deprecated or obsolete.
Field | Type | Description |
---|---|---|
FEATURE_BLACKLISTED_EMAILS | Boolean | If set to true, no new User accounts may be created if their email domain is blacklisted |
BLACKLISTED_EMAIL_DOMAINS | Array of String |
The list of email-address domains that is used if FEATURE_BLACKLISTED_EMAILS is set to true |
BLACKLIST_V2_SPEC | String |
The Docker CLI versions to which Red Hat Quay will respond that V2 is unsupported |
DOCUMENTATION_ROOT | String | Root URL for documentation links. This field is useful when Red Hat Quay is configured for disconnected environments to set an alternative, or allowlisted, documentation link. |
SECURITY_SCANNER_V4_NAMESPACE_WHITELIST | String | The namespaces for which the security scanner should be enabled |
FEATURE_RESTRICTED_V1_PUSH | Boolean |
If set to true, only namespaces listed in V1_PUSH_WHITELIST support V1 push |
V1_PUSH_WHITELIST | Array of String | The array of namespace names that support V1 push if FEATURE_RESTRICTED_V1_PUSH is set to true |
FEATURE_HELM_OCI_SUPPORT | Boolean |
Enable support for Helm artifacts. |
ALLOWED_OCI_ARTIFACT_TYPES | Object | The set of allowed OCI artifact MIME types and the associated layer types. |
6.38. User interface v2 configuration fields
Field | Type | Description |
---|---|---|
FEATURE_UI_V2 | Boolean | When set, allows users to try the beta UI environment.
+ Default: |
FEATURE_UI_V2_REPO_SETTINGS | Boolean |
When set to
+ Default: |
6.38.1. v2 user interface configuration
With FEATURE_UI_V2
enabled, you can toggle between the current version of the user interface and the new version of the user interface.
- This UI is currently in beta and subject to change. In its current state, users can only create, view, and delete organizations, repositories, and image tags.
- When running Red Hat Quay in the old UI, timed-out sessions would require that the user input their password again in the pop-up window. With the new UI, users are returned to the main page and required to input their username and password credentials. This is a known issue and will be fixed in a future version of the new UI.
- There is a discrepancy in how image manifest sizes are reported between the legacy UI and the new UI. In the legacy UI, image manifests were reported in mebibytes. In the new UI, Red Hat Quay uses the standard definition of megabyte (MB) to report image manifest sizes.
Procedure
In your deployment’s config.yaml file, add the FEATURE_UI_V2 parameter and set it to true, for example:
---
FEATURE_TEAM_SYNCING: false
FEATURE_UI_V2: true
FEATURE_USER_CREATION: true
---
- Log in to your Red Hat Quay deployment.
In the navigation pane of your Red Hat Quay deployment, you are given the option to toggle between Current UI and New UI. Click the toggle button to set it to new UI, and then click Use Beta Environment, for example:
6.39. IPv6 configuration field
Field | Type | Description |
---|---|---|
FEATURE_LISTEN_IP_VERSION | String | Enables IPv4, IPv6, or dual-stack protocol family. This configuration field must be properly set, otherwise Red Hat Quay fails to start.
Default:
Additional configurations: |
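For example, a sketch enabling both protocol families. The value spelling follows the dual-stack option named in the table; verify the accepted values against your release before use:

```yaml
# ...
FEATURE_LISTEN_IP_VERSION: dual-stack
# ...
```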
6.40. Branding configuration fields
Field | Type | Description |
---|---|---|
BRANDING | Object | Custom branding for logos and URLs in the Red Hat Quay UI. |
.logo | String |
Main logo image URL.
The header logo defaults to 205x30 PX. The form logo on the Red Hat Quay sign in screen of the web UI defaults to 356.5x39.7 PX. |
.footer_img | String |
Logo for UI footer. Defaults to 144x34 PX. |
.footer_url | String |
Link for footer image. |
6.40.1. Example configuration for Red Hat Quay branding
Branding config.yaml example
BRANDING:
logo: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg
footer_img: https://www.mend.io/wp-content/media/2020/03/5-tips_small.jpg
footer_url: https://opensourceworld.org/
6.42. Session timeout configuration field
The following configuration field relies on the Flask API configuration field of the same name.
Field | Type | Description |
---|---|---|
PERMANENT_SESSION_LIFETIME | Integer |
A
Default: |
6.42.1. Example session timeout configuration
The following YAML is the suggested configuration when enabling session lifetime.
Altering session lifetime is not recommended. Administrators should be aware of the allotted time when setting a session timeout. If you set the time too early, it might interrupt your workflow.
Session timeout YAML configuration
PERMANENT_SESSION_LIFETIME: 3000
PERMANENT_SESSION_LIFETIME: 3000
Chapter 7. Environment variables
Red Hat Quay supports a limited number of environment variables for dynamic configuration.
7.1. Geo-replication
The same configuration should be used across all regions, with exception of the storage backend, which can be configured explicitly using the QUAY_DISTRIBUTED_STORAGE_PREFERENCE
environment variable.
Variable | Type | Description |
---|---|---|
QUAY_DISTRIBUTED_STORAGE_PREFERENCE | String | The preferred storage engine (by ID in DISTRIBUTED_STORAGE_CONFIG) to use. |
7.2. Database connection pooling
Red Hat Quay is composed of many different processes which all run within the same container. Many of these processes interact with the database.
Database connection pooling is enabled by default, and each process that interacts with the database contains a connection pool. These per-process connection pools are configured to maintain a maximum of 20 connections. Under heavy load, it is possible to fill the connection pool for every process within a Red Hat Quay container. Under certain deployments and loads, this might require analysis to ensure that Red Hat Quay does not exceed the configured database’s maximum connection count.
Over time, the connection pools release idle connections. To release all connections immediately, Red Hat Quay requires a restart.
For standalone Red Hat Quay deployments, database connection pooling can be toggled off when starting your deployment. For example:
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
--name=quay \
-v $QUAY/config:/conf/stack:Z \
-v $QUAY/storage:/datastorage:Z \
-e DB_CONNECTION_POOLING=false \
registry.redhat.io/quay/quay-rhel8:v3.12.1
For Red Hat Quay on OpenShift Container Platform, database connection pooling can be configured by modifying the QuayRegistry
custom resource definition (CRD). For example:
Example QuayRegistry CRD
spec:
components:
- kind: quay
managed: true
overrides:
env:
- name: DB_CONNECTION_POOLING
value: "false"
Variable | Type | Description |
---|---|---|
DB_CONNECTION_POOLING | String |
Whether to enable or disable database connection pooling. Defaults to true. Accepted values are |
If database connection pooling is enabled, it is possible to change the maximum size of the connection pool. This can be done through the following config.yaml
option:
config.yaml
...
DB_CONNECTION_ARGS:
max_connections: 10
...
7.3. HTTP connection counts
It is possible to specify the quantity of simultaneous HTTP connections using environment variables. These can be specified as a whole, or for a specific component. The default for each is 50
parallel connections per process.
Variable | Type | Description |
---|---|---|
WORKER_CONNECTION_COUNT | Number |
Simultaneous HTTP connections |
WORKER_CONNECTION_COUNT_REGISTRY | Number |
Simultaneous HTTP connections for registry |
WORKER_CONNECTION_COUNT_WEB | Number |
Simultaneous HTTP connections for web UI |
WORKER_CONNECTION_COUNT_SECSCAN | Number |
Simultaneous HTTP connections for Clair |
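On Red Hat Quay on OpenShift Container Platform, these variables can be set the same way as other environment overrides in the QuayRegistry resource. The following sketch mirrors the DB_CONNECTION_POOLING override pattern shown earlier in this guide; the value 10 is illustrative:

```yaml
spec:
  components:
    - kind: quay
      managed: true
      overrides:
        env:
          - name: WORKER_CONNECTION_COUNT_REGISTRY
            value: "10"
```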
7.4. Worker count variables
Variable | Type | Description |
---|---|---|
WORKER_COUNT | Number | Generic override for number of processes |
WORKER_COUNT_REGISTRY | Number |
Specifies the number of processes to handle Registry requests within the |
WORKER_COUNT_WEB | Number |
Specifies the number of processes to handle UI/Web requests within the container |
WORKER_COUNT_SECSCAN | Number |
Specifies the number of processes to handle Security Scanning (e.g. Clair) integration within the container |
7.5. Debug variables
The following debug variables are available on Red Hat Quay.
Variable | Type | Description |
---|---|---|
DEBUGLOG | Boolean | Whether to enable or disable debug logs. |
USERS_DEBUG | Integer. Either `0` or `1`. | Used to debug LDAP operations in clear text, including passwords. Must be used with `DEBUGLOG=TRUE`. Important: Setting `USERS_DEBUG=1` exposes credentials in clear text. This variable should be removed from the deployment after debugging, and any log file generated with it should be scrutinized, with passwords removed, before being shared. Use with caution. |
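On Red Hat Quay on OpenShift Container Platform, debug logging can be enabled through the same QuayRegistry environment override mechanism shown earlier in this guide. The following is a sketch only:

```yaml
# Hypothetical override enabling debug logs for the Quay component.
spec:
  components:
    - kind: quay
      managed: true
      overrides:
        env:
          - name: DEBUGLOG
            value: "true"
```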
Chapter 8. Clair security scanner
8.1. Clair configuration overview
Clair is configured by a structured YAML file. Each Clair node needs to specify what mode it will run in and a path to a configuration file through CLI flags or environment variables. For example:
$ clair -conf ./path/to/config.yaml -mode indexer
or
$ clair -conf ./path/to/config.yaml -mode matcher
Each of these commands starts a Clair node using the same configuration file: one runs the indexing facilities, while the other runs the matching facilities.
If you are running Clair in combo
mode, you must supply the indexer, matcher, and notifier configuration blocks in the configuration.
8.1.1. Information about using Clair in a proxy environment
Environment variables respected by the Go standard library can be specified if needed, for example:
HTTP_PROXY
$ export HTTP_PROXY=http://<user_name>:<password>@<proxy_host>:<proxy_port>
HTTPS_PROXY
$ export HTTPS_PROXY=https://<user_name>:<password>@<proxy_host>:<proxy_port>
SSL_CERT_DIR
$ export SSL_CERT_DIR=/<path>/<to>/<ssl>/<certificates>
NO_PROXY
$ export NO_PROXY=<comma_separated_list_of_hosts_and_domains>
If you are using a proxy server in your environment with Clair’s updater URLs, you must identify which URL needs to be added to the proxy allowlist to ensure that Clair can access them unimpeded. For example, the osv
updater requires access to https://osv-vulnerabilities.storage.googleapis.com
to fetch ecosystem data dumps. In this scenario, the URL must be added to the proxy allowlist. For a full list of updater URLs, see "Clair updater URLs".
You must also ensure that the standard Clair URLs are added to the proxy allowlist:
-
https://search.maven.org/solrsearch/select
-
https://catalog.redhat.com/api/containers/
-
https://access.redhat.com/security/data/metrics/repository-to-cpe.json
-
https://access.redhat.com/security/data/metrics/container-name-repos-map.json
When configuring the proxy server, take into account any authentication requirements or specific proxy settings needed to enable seamless communication between Clair and these URLs. By thoroughly documenting and addressing these considerations, you can ensure that Clair functions effectively while routing its updater traffic through the proxy.
8.1.2. Clair configuration reference
The following YAML shows an example Clair configuration:
http_listen_addr: ""
introspection_addr: ""
log_level: ""
tls: {}
indexer:
connstring: ""
scanlock_retry: 0
layer_scan_concurrency: 5
migrations: false
scanner: {}
airgap: false
matcher:
connstring: ""
indexer_addr: ""
migrations: false
period: ""
disable_updaters: false
update_retention: 2
matchers:
names: nil
config: nil
updaters:
sets: nil
config: nil
notifier:
connstring: ""
migrations: false
indexer_addr: ""
matcher_addr: ""
poll_interval: ""
delivery_interval: ""
disable_summary: false
webhook: null
amqp: null
stomp: null
auth:
psk: nil
trace:
name: ""
probability: null
jaeger:
agent:
endpoint: ""
collector:
endpoint: ""
username: null
password: null
service_name: ""
tags: nil
buffer_max: 0
metrics:
name: ""
prometheus:
endpoint: null
dogstatsd:
url: ""
The above YAML file lists every key for completeness. Using this configuration file as-is will result in some options not having their defaults set normally.
8.1.3. Clair general fields
The following table describes the general configuration fields available for a Clair deployment.
Field | Type | Description |
---|---|---|
http_listen_addr | String | Configures where the HTTP API is exposed. Default: |
introspection_addr | String | Configures where Clair’s metrics and health endpoints are exposed. |
log_level | String | Sets the logging level. Requires one of the following strings: debug-color, debug, info, warn, error, fatal, panic |
tls | String | A map containing the configuration for serving the HTTP API over TLS/SSL and HTTP/2. |
.cert | String | The TLS certificate to be used. Must be a full-chain certificate. |
Example configuration for general Clair fields
The following example shows a Clair configuration.
# ...
http_listen_addr: 0.0.0.0:6060
introspection_addr: 0.0.0.0:8089
log_level: info
# ...
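The tls map from the table above can be used to serve the HTTP API over TLS. The following sketch assumes a key field alongside the documented cert field, and the certificate paths are hypothetical:

```yaml
# Hypothetical paths; cert must be a full-chain certificate.
http_listen_addr: 0.0.0.0:6060
tls:
  cert: /etc/clair/ssl/fullchain.pem
  key: /etc/clair/ssl/key.pem
```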
8.1.4. Clair indexer configuration fields
The following table describes the configuration fields for Clair’s indexer
component.
Field | Type | Description |
---|---|---|
indexer | Object | Provides Clair indexer node configuration. |
.airgap | Boolean | Disables HTTP access to the internet for indexers and fetchers. Private IPv4 and IPv6 addresses are allowed. Database connections are unaffected. |
.connstring | String | A Postgres connection string. Accepts format as a URL or libpq connection string. |
.index_report_request_concurrency | Integer | Rate limits the number of index report creation requests. Setting this to `0` attenuates all requests. Any negative value means unlimited. The API returns a `429` status code if concurrency is exceeded. |
.scanlock_retry | Integer | A positive integer representing seconds. Concurrent indexers lock on manifest scans to avoid clobbering. This value tunes how often a waiting indexer polls for the lock. |
.layer_scan_concurrency | Integer | A positive integer limiting the number of concurrent layer scans. Indexers will match a manifest’s layers concurrently. This value tunes the number of layers an indexer scans in parallel. |
.migrations | Boolean | Whether indexer nodes handle migrations to their database. |
.scanner | String | Indexer configuration. Scanner allows for passing configuration options to layer scanners. The scanner will have this configuration passed to it on construction if designed to do so. |
.scanner.dist | String | A map with the name of a particular scanner and arbitrary YAML as a value. |
.scanner.package | String | A map with the name of a particular scanner and arbitrary YAML as a value. |
.scanner.repo | String | A map with the name of a particular scanner and arbitrary YAML as a value. |
Example indexer configuration
The following example shows a hypothetical indexer configuration for Clair.
# ...
indexer:
connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable
scanlock_retry: 10
layer_scan_concurrency: 5
migrations: true
# ...
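For a restricted-network deployment, the airgap and index_report_request_concurrency fields described above could be combined as in the following sketch. The connection string and values are hypothetical:

```yaml
# Hypothetical air-gapped indexer: no internet access for fetchers,
# and index report creation limited to four concurrent requests.
indexer:
  connstring: host=clair-db.example.com port=5432 dbname=clair user=clair sslmode=disable
  airgap: true
  index_report_request_concurrency: 4
  migrations: true
```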
8.1.5. Clair matcher configuration fields
The following table describes the configuration fields for Clair’s matcher
component.
Note: This differs from the matchers configuration fields.
Field | Type | Description |
---|---|---|
matcher | Object | Provides Clair matcher node configuration. |
.cache_age | String | Controls how long users should be hinted to cache responses for. |
.connstring | String | A Postgres connection string. Accepts format as a URL or libpq connection string. |
.max_conn_pool | Integer | Limits the database connection pool size. Clair allows for a custom connection pool size. This number directly sets how many active database connections are allowed concurrently. This parameter will be ignored in a future version. Users should configure this through the connection string. |
.indexer_addr | String | A matcher contacts an indexer to create a vulnerability report. The location of this indexer is required. |
.migrations | Boolean | Whether matcher nodes handle migrations to their databases. |
.period | String | Determines how often updates for new security advisories take place. Defaults to `30m`. |
.disable_updaters | Boolean | Whether to run background updates or not. Default: `false` |
.update_retention | Integer | Sets the number of update operations to retain between garbage collection cycles. This should be set to a safe MAX value based on database size constraints. Defaults to `10`. If a value of less than `2` is provided, garbage collection is disabled. |
Example matcher configuration
# ...
matcher:
connstring: >-
host=<DB_HOST> port=5432 dbname=<matcher> user=<DB_USER> password=<DB_PASS>
sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem
sslrootcert=/etc/clair/ssl/ca.pem
indexer_addr: http://clair-v4/
disable_updaters: false
migrations: true
period: 6h
update_retention: 2
# ...
8.1.6. Clair matchers configuration fields
The following table describes the configuration fields for Clair’s matchers
component.
Note: This differs from the matcher configuration fields.
Field | Type | Description |
---|---|---|
matchers | Array of strings | Provides configuration for the in-tree matchers. |
.names | String | A list of string values informing the matcher factory about enabled matchers. If the value is set to `null`, the default list of matchers runs. |
.config | String | Provides configuration to a specific matcher. A map keyed by the name of the matcher containing a sub-object which will be provided to the matcher’s factory constructor. |
Example matchers configuration
The following example shows a hypothetical Clair deployment that requires only the alpine, aws, debian, and oracle matchers.
# ...
matchers:
names:
- "alpine-matcher"
- "aws"
- "debian"
- "oracle"
# ...
8.1.7. Clair updaters configuration fields
The following table describes the configuration fields for Clair’s updaters
component.
Field | Type | Description |
---|---|---|
updaters | Object | Provides configuration for the matcher’s update manager. |
.sets | String | A list of values informing the update manager which updaters to run. If the value is set to `null`, the default set of updaters runs. If left blank, zero updaters run. |
.config | String | Provides configuration to specific updater sets. A map keyed by the name of the updater set containing a sub-object which will be provided to the updater set’s constructor. For a list of the sub-objects for each updater, see "Advanced updater configuration". |
Example updaters configuration
In the following configuration, only the rhel
set is configured. The ignore_unpatched
variable, which is specific to the rhel
updater, is also defined.
# ...
updaters:
sets:
- rhel
config:
rhel:
ignore_unpatched: false
# ...
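Similarly, a proxied deployment that only needs the OSV ecosystem data discussed earlier in this chapter could, as a sketch, restrict the update manager to the osv set:

```yaml
# Hypothetical: run only the osv updater set.
updaters:
  sets:
    - osv
```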
8.1.8. Clair notifier configuration fields
The general notifier configuration fields for Clair are listed below.
Field | Type | Description |
---|---|---|
notifier | Object | Provides Clair notifier node configuration. |
.connstring | String | Postgres connection string. Accepts format as URL, or libpq connection string. |
.migrations | Boolean | Whether notifier nodes handle migrations to their database. |
.indexer_addr | String | A notifier contacts an indexer to create or obtain manifests affected by vulnerabilities. The location of this indexer is required. |
.matcher_addr | String | A notifier contacts a matcher to list update operations and acquire diffs. The location of this matcher is required. |
.poll_interval | String | The frequency at which the notifier will query a matcher for update operations. |
.delivery_interval | String | The frequency at which the notifier attempts delivery of created, or previously failed, notifications. |
.disable_summary | Boolean | Controls whether notifications should be summarized to one per manifest. |
Example notifier configuration
The following notifier
snippet is for a minimal configuration.
# ...
notifier:
connstring: >-
host=DB_HOST port=5432 dbname=notifier user=DB_USER password=DB_PASS
sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem
sslrootcert=/etc/clair/ssl/ca.pem
indexer_addr: http://clair-v4/
matcher_addr: http://clair-v4/
delivery_interval: 5s
migrations: true
poll_interval: 15s
webhook:
target: "http://webhook/"
callback: "http://clair-notifier/notifier/api/v1/notifications"
headers: ""
amqp: null
stomp: null
# ...
8.1.8.1. Clair webhook configuration fields
The following webhook fields are available for the Clair notifier environment.
Field | Type | Description |
---|---|---|
.webhook | Object | Configures the notifier for webhook delivery. |
.webhook.target | String | URL where the webhook will be delivered. |
.webhook.callback | String | The callback URL where notifications can be retrieved. The notification ID will be appended to this URL. This will typically be where the Clair notifier is hosted. |
.webhook.headers | String | A map associating a header name to a list of values. |
Example webhook configuration
# ...
notifier:
# ...
webhook:
target: "http://webhook/"
callback: "http://clair-notifier/notifier/api/v1/notifications"
# ...
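Because .webhook.headers maps a header name to a list of values, a deployment could, for example, attach an authorization header to each delivery. The following is a sketch; the token is a placeholder:

```yaml
notifier:
  webhook:
    target: "http://webhook/"
    callback: "http://clair-notifier/notifier/api/v1/notifications"
    # Hypothetical header map: each key takes a list of values.
    headers:
      Authorization:
        - "Bearer <token>"
```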
8.1.8.2. Clair amqp configuration fields
The following Advanced Message Queuing Protocol (AMQP) fields are available for the Clair notifier environment.
Field | Type | Description |
---|---|---|
.amqp | Object | Configures the notifier for AMQP delivery. Note: Clair does not declare any AMQP components on its own. All attempts to use an exchange or queue are passive only and will fail. Broker administrators should set up exchanges and queues ahead of time. |
.amqp.direct | Boolean | If `true`, the notifier delivers individual notifications (not a callback) to the configured AMQP broker. |
.amqp.rollup | Integer | When `amqp.direct` is set to `true`, this value informs the notifier of how many notifications to send in a single AMQP message. |
.amqp.exchange | Object | The AMQP exchange to connect to. |
.amqp.exchange.name | String | The name of the exchange to connect to. |
.amqp.exchange.type | String | The type of the exchange. Typically one of the following: direct, fanout, topic, headers. |
.amqp.exchange.durability | Boolean | Whether the configured queue is durable. |
.amqp.exchange.auto_delete | Boolean | Whether the configured queue uses an `auto_delete` policy. |
.amqp.routing_key | String | The name of the routing key each notification is sent with. |
.amqp.callback | String | If `amqp.direct` is set to `false`, this URL is provided in the notification callback. This URL should point to Clair’s notification API endpoint. |
.amqp.uris | String | A list of one or more AMQP brokers to connect to, in priority order. |
.amqp.tls | Object | Configures TLS/SSL connection to an AMQP broker. |
.amqp.tls.root_ca | String | The filesystem path where a root CA can be read. |
.amqp.tls.cert | String | The filesystem path where a TLS/SSL certificate can be read. Note: Clair also allows `SSL_CERT_DIR`, as documented for the Go `crypto/x509` package. |
.amqp.tls.key | String | The filesystem path where a TLS/SSL private key can be read. |
Example AMQP configuration
The following example shows a hypothetical AMQP configuration for Clair.
# ...
notifier:
# ...
amqp:
exchange:
name: ""
type: "direct"
durable: true
auto_delete: false
uris: ["amqp://user:pass@host:10000/vhost"]
direct: false
routing_key: "notifications"
callback: "http://clair-notifier/notifier/api/v1/notifications"
tls:
root_ca: "optional/path/to/rootca"
    cert: "mandatory/path/to/cert"
    key: "mandatory/path/to/key"
# ...
8.1.8.3. Clair STOMP configuration fields
The following Simple Text Oriented Message Protocol (STOMP) fields are available for the Clair notifier environment.
Field | Type | Description |
---|---|---|
.stomp | Object | Configures the notifier for STOMP delivery. |
.stomp.direct | Boolean | If `true`, the notifier delivers individual notifications (not a callback) to the configured STOMP broker. |
.stomp.rollup | Integer | If `stomp.direct` is set to `true`, this value limits the number of notifications sent in a single STOMP message. |
.stomp.callback | String | If `stomp.direct` is set to `false`, this URL is provided in the notification callback. This URL should point to Clair’s notification API endpoint. |
.stomp.destination | String | The STOMP destination to deliver notifications to. |
.stomp.uris | String | A list of one or more STOMP brokers to connect to in priority order. |
.stomp.tls | Object | Configures TLS/SSL connection to the STOMP broker. |
.stomp.tls.root_ca | String | The filesystem path where a root CA can be read. Note: Clair also respects `SSL_CERT_DIR`, as documented for the Go `crypto/x509` package. |
.stomp.tls.cert | String | The filesystem path where a TLS/SSL certificate can be read. |
.stomp.tls.key | String | The filesystem path where a TLS/SSL private key can be read. |
.stomp.user | String | Configures login details for the STOMP broker. |
.stomp.user.login | String | The STOMP login to connect with. |
.stomp.user.passcode | String | The STOMP passcode to connect with. |
Example STOMP configuration
The following example shows a hypothetical STOMP configuration for Clair.
# ...
notifier:
# ...
stomp:
    destination: "notifications"
direct: false
callback: "http://clair-notifier/notifier/api/v1/notifications"
    user:
login: "username"
passcode: "passcode"
tls:
root_ca: "optional/path/to/rootca"
      cert: "mandatory/path/to/cert"
      key: "mandatory/path/to/key"
# ...
8.1.9. Clair authorization configuration fields
The following authorization configuration fields are available for Clair.
Field | Type | Description |
---|---|---|
auth | Object | Defines Clair’s external and intra-service JWT based authentication. If multiple auth mechanisms are defined, Clair picks one. Currently, multiple mechanisms are unsupported. |
.psk | String | Defines pre-shared key authentication. |
.psk.key | String | A shared base64 encoded key distributed between all parties signing and verifying JWTs. |
.psk.iss | String | A list of JWT issuers to verify. An empty list accepts any issuer in a JWT claim. |
Example authorization configuration
The following authorization
snippet is for a minimal configuration.
# ...
auth:
psk:
key: MTU5YzA4Y2ZkNzJoMQ==
iss: ["quay"]
# ...
8.1.10. Clair trace configuration fields
The following trace configuration fields are available for Clair.
Field | Type | Description |
---|---|---|
trace | Object | Defines distributed tracing configuration based on OpenTelemetry. |
.name | String | The name of the application traces will belong to. |
.probability | Integer | The probability a trace will occur. |
.jaeger | Object | Defines values for Jaeger tracing. |
.jaeger.agent | Object | Defines values for configuring delivery to a Jaeger agent. |
.jaeger.agent.endpoint | String | An address in the `<host>:<port>` syntax where traces can be submitted. |
.jaeger.collector | Object | Defines values for configuring delivery to a Jaeger collector. |
.jaeger.collector.endpoint | String | An address in the `<host>:<port>` syntax where traces can be submitted. |
.jaeger.collector.username | String | A Jaeger username. |
.jaeger.collector.password | String | A Jaeger password. |
.jaeger.service_name | String | The service name registered in Jaeger. |
.jaeger.tags | String | Key-value pairs to provide additional metadata. |
.jaeger.buffer_max | Integer | The maximum number of spans that can be buffered in memory before they are sent to the Jaeger backend for storage and analysis. |
Example trace configuration
The following example shows a hypothetical trace configuration for Clair.
# ...
trace:
name: "jaeger"
probability: 1
jaeger:
agent:
endpoint: "localhost:6831"
service_name: "clair"
# ...
8.1.11. Clair metrics configuration fields
The following metrics configuration fields are available for Clair.
Field | Type | Description |
---|---|---|
metrics | Object | Defines the metrics configuration. |
.name | String | The name of the metrics in use. |
.prometheus | String | Configuration for a Prometheus metrics exporter. |
.prometheus.endpoint | String | Defines the path where metrics are served. |
Example metrics configuration
The following example shows a hypothetical metrics configuration for Clair.
# ...
metrics:
name: "prometheus"
prometheus:
endpoint: "/metricsz"
# ...