Manage Red Hat Quay
Preface
Once you have deployed a Red Hat Quay registry, there are many ways you can further configure and manage that deployment. Topics covered here include:
- Advanced Red Hat Quay configuration
- Setting notifications to alert you of a new Red Hat Quay release
- Securing connections with SSL/TLS certificates
- Directing action logs storage to Elasticsearch
- Configuring image security scanning with Clair
- Scanning pod images with the Container Security Operator
- Integrating Red Hat Quay into OpenShift Container Platform with the Quay Bridge Operator
- Mirroring images with repository mirroring
- Sharing Red Hat Quay images with a BitTorrent service
- Authenticating users with LDAP
- Enabling Quay for Prometheus and Grafana metrics
- Setting up geo-replication
- Troubleshooting Red Hat Quay
For a complete list of Red Hat Quay configuration fields, see the Configure Red Hat Quay page.
Chapter 1. Advanced Red Hat Quay configuration
You can configure your Red Hat Quay deployment after its initial deployment by using one of the following interfaces:
- The Red Hat Quay Config Tool. This tool provides a web-based interface for configuring the Red Hat Quay cluster when the Quay container is run in config mode. This method is recommended for configuring the Red Hat Quay service.
- Editing the config.yaml file. The config.yaml file contains most configuration information for the Red Hat Quay cluster. Editing the config.yaml file directly is possible, but it is only recommended for advanced tuning and performance features that are not available through the Config Tool.
- The Red Hat Quay API. Some Red Hat Quay features can be configured through the API.
The content in this section describes how to use each of these interfaces and how to configure your deployment with advanced features.
1.1. Using Red Hat Quay Config Tool to modify Red Hat Quay
The Red Hat Quay Config Tool is made available by running a Quay container in config mode alongside the regular Red Hat Quay service.
Use the following sections to run the Config Tool from the Red Hat Quay Operator, or to run the Config Tool on host systems from the command line interface (CLI).
1.1.1. Running the Config Tool from the Red Hat Quay Operator
When running the Red Hat Quay Operator on OpenShift Container Platform, the Config Tool is readily available to use. Use the following procedure to access the Red Hat Quay Config Tool.
Prerequisites
- You have deployed the Red Hat Quay Operator on OpenShift Container Platform.
Procedure
- On the OpenShift console, select the Red Hat Quay project, for example, quay-enterprise. In the navigation pane, select Networking → Routes. You should see routes to both the Red Hat Quay application and the Config Tool, as shown in the following image:
- Select the route to the Config Tool, for example, example-quayecosystem-quay-config. The Config Tool UI should open in your browser. Select Modify configuration for this cluster to bring up the Config Tool setup, for example:
- Make the desired changes, and then select Save Configuration Changes.
- Make any corrections needed by clicking Continue Editing, or select Next to continue.
- When prompted, select Download Configuration. This downloads a tarball of your new config.yaml file, as well as any certificates and keys used with your Red Hat Quay setup. The config.yaml file can be used to make advanced changes to your configuration or kept as a future reference.
- Select Go to deployment rollout → Populate the configuration to deployments. Wait for the Red Hat Quay pods to restart for the changes to take effect.
1.1.2. Running the Config Tool from the command line
If you are running Red Hat Quay from a host system, you can use the following procedure to make changes to your configuration after the initial deployment.
Prerequisites
- You have installed either podman or docker.
- Start Red Hat Quay in configuration mode. On the first Quay node, enter the following command:
$ podman run --rm -it --name quay_config -p 8080:8080 \
    -v path/to/config-bundle:/conf/stack \
    registry.redhat.io/quay/quay-rhel8:v3.9.7 config <my_secret_password>
Note: To modify an existing config bundle, you can mount your configuration directory into the Quay container.
- When the Red Hat Quay configuration tool starts, open your browser and navigate to the URL and port used in your configuration file, for example, quay-server.example.com:8080.
- Enter your username and password.
- Modify your Red Hat Quay cluster as desired.
1.1.3. Deploying the config tool using TLS certificates
You can deploy the config tool with secured TLS certificates by passing environment variables to the container at runtime. This ensures that sensitive data such as credentials for the database and storage backend are protected.
The public and private keys must contain valid Subject Alternative Names (SANs) for the route that you deploy the config tool on.
The paths can be specified using CONFIG_TOOL_PRIVATE_KEY and CONFIG_TOOL_PUBLIC_KEY.
If you are running your deployment from a container, the CONFIG_TOOL_PRIVATE_KEY and CONFIG_TOOL_PUBLIC_KEY values are the locations of the certificates inside of the container. For example:
$ podman run --rm -it --name quay_config -p 7070:8080 \
    -v ${PRIVATE_KEY_PATH}:/tls/localhost.key \
    -v ${PUBLIC_KEY_PATH}:/tls/localhost.crt \
    -e CONFIG_TOOL_PRIVATE_KEY=/tls/localhost.key \
    -e CONFIG_TOOL_PUBLIC_KEY=/tls/localhost.crt \
    -e DEBUGLOG=true \
    -ti config-app:dev
1.2. Using the API to modify Red Hat Quay
See the Red Hat Quay API Guide for information on how to access Red Hat Quay API.
1.3. Editing the config.yaml file to modify Red Hat Quay
Some advanced configuration features that are not available through the Config Tool can be implemented by editing the config.yaml file directly. Available settings are described in the Schema for Red Hat Quay configuration.
The following examples are settings you can change directly in the config.yaml file.
1.3.1. Add name and company to Red Hat Quay sign-in
By setting the following field, users are prompted for their name and company when they first sign in. This is an optional field, but it can provide you with extra data about your Red Hat Quay users.
---
FEATURE_USER_METADATA: true
---
1.3.2. Disable TLS Protocols
You can change the SSL_PROTOCOLS setting to remove SSL protocols that you do not want to support in your Red Hat Quay instance. For example, to remove TLS v1 support from the default SSL_PROTOCOLS:['TLSv1','TLSv1.1','TLSv1.2'], change it to the following:
---
SSL_PROTOCOLS: ['TLSv1.1','TLSv1.2']
---
1.3.3. Rate limit API calls
Adding the FEATURE_RATE_LIMITS parameter to the config.yaml file causes nginx to limit certain API calls to 30 per second. If FEATURE_RATE_LIMITS is not set, API calls are limited to 300 per second, effectively making them unlimited.
Rate limiting is important when you must ensure that the available resources are not overwhelmed with traffic.
Some namespaces might require unlimited access, for example, if they are important to CI/CD and take priority. In that scenario, those namespaces might be placed in a list in the config.yaml file using the NON_RATE_LIMITED_NAMESPACES field.
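For example, a sketch of these settings in the config.yaml file; the namespace names shown here are hypothetical:
---
FEATURE_RATE_LIMITS: true
NON_RATE_LIMITED_NAMESPACES:
  - ci-builds
  - release-engineering
---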
1.3.4. Adjust database connection pooling
Red Hat Quay is composed of many different processes which all run within the same container. Many of these processes interact with the database.
With the DB_CONNECTION_POOLING parameter, each process that interacts with the database will contain a connection pool. These per-process connection pools are configured to maintain a maximum of 20 connections. When under heavy load, it is possible to fill the connection pool for every process within a Red Hat Quay container. Under certain deployments and loads, this might require analysis to ensure that Red Hat Quay does not exceed the database’s configured maximum connection count.
Over time, the connection pools will release idle connections. To release all connections immediately, Red Hat Quay must be restarted.
Database connection pooling can be toggled by setting the DB_CONNECTION_POOLING parameter to true or false. For example:
---
DB_CONNECTION_POOLING: true
---
When DB_CONNECTION_POOLING is enabled, you can change the maximum size of the connection pool with the DB_CONNECTION_ARGS field in your config.yaml file. For example:
---
DB_CONNECTION_ARGS:
  max_connections: 10
---
1.3.4.1. Database connection arguments
You can customize your Red Hat Quay database connection settings within the config.yaml file. These are dependent on your deployment’s database driver, for example, psycopg2 for Postgres and pymysql for MySQL. You can also pass in arguments used by Peewee’s connection pooling mechanism. For example:
---
DB_CONNECTION_ARGS:
  max_connections: n  # Max Connection Pool size. (Connection Pooling only)
  timeout: n  # Number of seconds to block when the pool is full. (Connection Pooling only)
  stale_timeout: n  # Time to hold on to connections. (Connection Pooling only)
---
1.3.4.2. Database SSL configuration
Some key-value pairs defined under the DB_CONNECTION_ARGS
field are generic, while others are specific to the database. In particular, SSL configuration depends on the database that you are deploying.
1.3.4.2.1. PostgreSQL SSL connection arguments
The following YAML shows a sample PostgreSQL SSL configuration:
---
DB_CONNECTION_ARGS:
  sslmode: verify-ca
  sslrootcert: /path/to/cacert
---
The sslmode parameter determines whether, or with what priority, a secure SSL TCP/IP connection is negotiated with the server. There are six modes for the sslmode parameter:
- disable: Only try a non-SSL connection.
- allow: Try a non-SSL connection first. Upon failure, try an SSL connection.
- prefer: Default. Try an SSL connection first. Upon failure, try a non-SSL connection.
- require: Only try an SSL connection. If a root CA file is present, verify the connection in the same way as if verify-ca was specified.
- verify-ca: Only try an SSL connection, and verify that the server certificate is issued by a trusted certificate authority (CA).
- verify-full: Only try an SSL connection. Verify that the server certificate is issued by a trusted CA, and that the requested server hostname matches the name in the certificate.
For more information about the valid arguments for PostgreSQL, see Database Connection Control Functions.
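For example, to also verify that the server hostname matches the name in the certificate, a configuration might use verify-full with the same root certificate. This is a sketch based on the parameters described above:
---
DB_CONNECTION_ARGS:
  sslmode: verify-full
  sslrootcert: /path/to/cacert
---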
1.3.4.2.2. MySQL SSL connection arguments
The following YAML shows a sample MySQL SSL configuration:
---
DB_CONNECTION_ARGS:
  ssl:
    ca: /path/to/cacert
---
For more information about the valid connection arguments for MySQL, see Connecting to the Server Using URI-Like Strings or Key-Value Pairs.
1.3.4.3. HTTP connection counts
You can specify the quantity of simultaneous HTTP connections using environment variables. The variables can be set globally or for a specific component. The default for each is 50 parallel connections per process. See the following example environment variables:
---
WORKER_CONNECTION_COUNT_REGISTRY=n
WORKER_CONNECTION_COUNT_WEB=n
WORKER_CONNECTION_COUNT_SECSCAN=n
WORKER_CONNECTION_COUNT=n
---
Specifying a count for a specific component will override any value set in the WORKER_CONNECTION_COUNT configuration field.
1.3.4.4. Dynamic process counts
To estimate the quantity of dynamically sized processes, the following calculation is used by default.
Red Hat Quay queries the available CPU count from the entire machine. Any limits applied using Kubernetes or other non-virtualized mechanisms do not affect this behavior; Red Hat Quay makes its calculation based on the total number of processors on the node. The default values listed are targets, but the calculated value will not exceed the maximum or fall below the minimum.
Each of the following process quantities can be overridden using the environment variable specified below:
registry - Provides HTTP endpoints to handle registry action
- minimum: 8
- maximum: 64
- default: $CPU_COUNT x 4
- environment variable: WORKER_COUNT_REGISTRY
web - Provides HTTP endpoints for the web-based interface
- minimum: 2
- maximum: 32
- default: $CPU_COUNT x 2
- environment variable: WORKER_COUNT_WEB
secscan - Interacts with Clair
- minimum: 2
- maximum: 4
- default: $CPU_COUNT x 2
- environment variable: WORKER_COUNT_SECSCAN
1.3.4.5. Environment variables
Red Hat Quay allows overriding default behavior using environment variables. The following table lists and describes each variable and the values they can expect.
Variable | Description | Values |
---|---|---|
WORKER_COUNT_REGISTRY | Specifies the number of processes to handle registry requests within the Quay container. | Integer between 8 and 64 |
WORKER_COUNT_WEB | Specifies the number of processes to handle UI/Web requests within the container. | Integer between 2 and 32 |
WORKER_COUNT_SECSCAN | Specifies the number of processes to handle Security Scanning (for example, Clair) integration within the container. | Integer. Because the Operator specifies 2 vCPUs for resource requests and limits, setting this value between 2 and 4 is safe. |
DB_CONNECTION_POOLING | Toggle database connection pooling. | "true" or "false" |
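For a standalone deployment, one way to apply these overrides is to pass the variables to the Quay container at startup. The following is a sketch that reuses the run command used elsewhere in this guide; the values shown are illustrative only:
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
    --name=quay \
    -e WORKER_COUNT_REGISTRY=16 \
    -e WORKER_CONNECTION_COUNT_REGISTRY=100 \
    -v $QUAY/config:/conf/stack:Z \
    -v $QUAY/storage:/datastorage:Z \
    registry.redhat.io/quay/quay-rhel8:v3.9.7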
1.3.4.6. Turning off connection pooling
Red Hat Quay deployments with a large amount of user activity can regularly hit the 2k maximum database connection limit. In these cases, connection pooling, which is enabled by default for Red Hat Quay, can cause the database connection count to rise exponentially and require you to turn off connection pooling.
If turning off connection pooling is not enough to prevent hitting the 2k database connection limit, you might need to increase the maximum number of database connections to better suit your workload.
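To turn off connection pooling, set the field to false in the config.yaml file:
---
DB_CONNECTION_POOLING: false
---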
Chapter 2. Using the configuration API
The configuration tool exposes 4 endpoints that can be used to build, validate, bundle and deploy a configuration. The config-tool API is documented at https://github.com/quay/config-tool/blob/master/pkg/lib/editor/API.md. In this section, you will see how to use the API to retrieve the current configuration and how to validate any changes you make.
2.1. Retrieving the default configuration
If you are running the configuration tool for the first time, and do not have an existing configuration, you can retrieve the default configuration. Start the container in config mode:
$ sudo podman run --rm -it --name quay_config \
    -p 8080:8080 \
    registry.redhat.io/quay/quay-rhel8:v3.9.7 config secret
Use the config endpoint of the configuration API to get the default:
$ curl -X GET -u quayconfig:secret http://quay-server:8080/api/v1/config | jq
The value returned is the default configuration in JSON format:
{ "config.yaml": { "AUTHENTICATION_TYPE": "Database", "AVATAR_KIND": "local", "DB_CONNECTION_ARGS": { "autorollback": true, "threadlocals": true }, "DEFAULT_TAG_EXPIRATION": "2w", "EXTERNAL_TLS_TERMINATION": false, "FEATURE_ACTION_LOG_ROTATION": false, "FEATURE_ANONYMOUS_ACCESS": true, "FEATURE_APP_SPECIFIC_TOKENS": true, .... } }
2.2. Retrieving the current configuration
If you have already configured and deployed the Quay registry, stop the container and restart it in configuration mode, loading the existing configuration as a volume:
$ sudo podman run --rm -it --name quay_config \
    -p 8080:8080 \
    -v $QUAY/config:/conf/stack:Z \
    registry.redhat.io/quay/quay-rhel8:v3.9.7 config secret
Use the config endpoint of the API to get the current configuration:
$ curl -X GET -u quayconfig:secret http://quay-server:8080/api/v1/config | jq
The value returned is the current configuration in JSON format, including database and Redis configuration data:
{ "config.yaml": { .... "BROWSER_API_CALLS_XHR_ONLY": false, "BUILDLOGS_REDIS": { "host": "quay-server", "password": "strongpassword", "port": 6379 }, "DATABASE_SECRET_KEY": "4b1c5663-88c6-47ac-b4a8-bb594660f08b", "DB_CONNECTION_ARGS": { "autorollback": true, "threadlocals": true }, "DB_URI": "postgresql://quayuser:quaypass@quay-server:5432/quay", "DEFAULT_TAG_EXPIRATION": "2w", .... } }
2.3. Validating configuration using the API
You can validate a configuration by posting it to the config/validate endpoint:
curl -u quayconfig:secret --header 'Content-Type: application/json' --request POST --data '
{
  "config.yaml": {
    ....
    "BROWSER_API_CALLS_XHR_ONLY": false,
    "BUILDLOGS_REDIS": {
      "host": "quay-server",
      "password": "strongpassword",
      "port": 6379
    },
    "DATABASE_SECRET_KEY": "4b1c5663-88c6-47ac-b4a8-bb594660f08b",
    "DB_CONNECTION_ARGS": {
      "autorollback": true,
      "threadlocals": true
    },
    "DB_URI": "postgresql://quayuser:quaypass@quay-server:5432/quay",
    "DEFAULT_TAG_EXPIRATION": "2w",
    ....
  }
}' http://quay-server:8080/api/v1/config/validate | jq
The returned value is an array containing the errors found in the configuration. If the configuration is valid, an empty array [] is returned.
2.4. Determining the required fields
You can determine the required fields by posting an empty configuration structure to the config/validate endpoint:
curl -u quayconfig:secret --header 'Content-Type: application/json' --request POST --data '
{
  "config.yaml": {
  }
}' http://quay-server:8080/api/v1/config/validate | jq
The value returned is an array indicating which fields are required:
[ { "FieldGroup": "Database", "Tags": [ "DB_URI" ], "Message": "DB_URI is required." }, { "FieldGroup": "DistributedStorage", "Tags": [ "DISTRIBUTED_STORAGE_CONFIG" ], "Message": "DISTRIBUTED_STORAGE_CONFIG must contain at least one storage location." }, { "FieldGroup": "HostSettings", "Tags": [ "SERVER_HOSTNAME" ], "Message": "SERVER_HOSTNAME is required" }, { "FieldGroup": "HostSettings", "Tags": [ "SERVER_HOSTNAME" ], "Message": "SERVER_HOSTNAME must be of type Hostname" }, { "FieldGroup": "Redis", "Tags": [ "BUILDLOGS_REDIS" ], "Message": "BUILDLOGS_REDIS is required" } ]
Chapter 3. Getting Red Hat Quay release notifications
To keep up with the latest Red Hat Quay releases and other changes related to Red Hat Quay, you can sign up for update notifications on the Red Hat Customer Portal. After signing up for notifications, you will receive notifications letting you know when there is a new Red Hat Quay version, updated documentation, or other Red Hat Quay news.
- Log into the Red Hat Customer Portal with your Red Hat customer account credentials.
- Select your user name (upper-right corner) to see the Red Hat Account and Customer Portal selections:
- Select Notifications. Your profile activity page appears.
- Select the Notifications tab.
- Select Manage Notifications.
- Select Follow, then choose Products from the drop-down box.
- From the drop-down box next to Products, search for and select Red Hat Quay:
- Select the SAVE NOTIFICATION button. Going forward, you will receive notifications when there are changes to the Red Hat Quay product, such as a new release.
Chapter 4. Using SSL to protect connections to Red Hat Quay
4.1. Using SSL/TLS
To configure Red Hat Quay with a self-signed certificate, you must create a Certificate Authority (CA) and then generate the required key and certificate files.
The following examples assume you have configured the server hostname quay-server.example.com
using DNS or another naming mechanism, such as adding an entry in your /etc/hosts
file:
$ cat /etc/hosts ... 192.168.1.112 quay-server.example.com
4.2. Creating a certificate authority and signing a certificate
Use the following procedures to create a certificate file and a primary key file named ssl.cert
and ssl.key
.
4.2.1. Creating a certificate authority
Use the following procedure to create a certificate authority (CA).
Procedure
Generate the root CA key by entering the following command:
$ openssl genrsa -out rootCA.key 2048
Generate the root CA certificate by entering the following command:
$ openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem
Enter the information that will be incorporated into your certificate request, including the server hostname, for example:
Country Name (2 letter code) [XX]:IE
State or Province Name (full name) []:GALWAY
Locality Name (eg, city) [Default City]:GALWAY
Organization Name (eg, company) [Default Company Ltd]:QUAY
Organizational Unit Name (eg, section) []:DOCS
Common Name (eg, your name or your server's hostname) []:quay-server.example.com
4.2.2. Signing a certificate
Use the following procedure to sign a certificate.
Procedure
Generate the server key by entering the following command:
$ openssl genrsa -out ssl.key 2048
Generate a signing request by entering the following command:
$ openssl req -new -key ssl.key -out ssl.csr
Enter the information that will be incorporated into your certificate request, including the server hostname, for example:
Country Name (2 letter code) [XX]:IE
State or Province Name (full name) []:GALWAY
Locality Name (eg, city) [Default City]:GALWAY
Organization Name (eg, company) [Default Company Ltd]:QUAY
Organizational Unit Name (eg, section) []:DOCS
Common Name (eg, your name or your server's hostname) []:quay-server.example.com
Create a configuration file openssl.cnf, specifying the server hostname, for example:
openssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = quay-server.example.com
IP.1 = 192.168.1.112
Use the configuration file to generate the certificate ssl.cert:
$ openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf
4.3. Configuring SSL using the command line interface
Use the following procedure to configure SSL/TLS using the command line interface.
Prerequisites
- You have created a certificate authority and signed the certificate.
Procedure
Copy the certificate file and primary key file to your configuration directory, ensuring they are named ssl.cert and ssl.key respectively:
$ cp ~/ssl.cert ~/ssl.key $QUAY/config
Change into the $QUAY/config directory by entering the following command:
$ cd $QUAY/config
Edit the config.yaml file and specify that you want Red Hat Quay to handle TLS/SSL:
config.yaml
...
SERVER_HOSTNAME: quay-server.example.com
...
PREFERRED_URL_SCHEME: https
...
Optional: Append the contents of the rootCA.pem file to the end of the ssl.cert file by entering the following command:
$ cat rootCA.pem >> ssl.cert
Stop the Quay container by entering the following command:
$ sudo podman stop quay
Restart the registry by entering the following command:
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
    --name=quay \
    -v $QUAY/config:/conf/stack:Z \
    -v $QUAY/storage:/datastorage:Z \
    registry.redhat.io/quay/quay-rhel8:v3.9.7
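Optional: before testing logins, you can confirm that the registry is serving the new certificate by querying the registry API with the root CA created earlier. This check is a suggestion and is not part of the documented procedure:
$ curl --cacert rootCA.pem https://quay-server.example.com/v2/
If the TLS handshake succeeds, the command returns a registry API response rather than a certificate error.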
4.4. Configuring SSL/TLS using the Red Hat Quay UI
Use the following procedure to configure SSL/TLS using the Red Hat Quay UI.
To configure SSL using the command line interface, see "Configuring SSL/TLS using the command line interface".
Prerequisites
- You have created a certificate authority and signed the certificate.
Procedure
Start the Quay container in configuration mode:
$ sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 registry.redhat.io/quay/quay-rhel8:v3.9.7 config secret
- In the Server Configuration section, select Red Hat Quay handles TLS for SSL/TLS. Upload the certificate file and private key file created earlier, ensuring that the Server Hostname matches the value used when the certificates were created.
- Validate and download the updated configuration.
Stop the Quay container and then restart the registry by entering the following commands:
$ sudo podman rm -f quay
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
    --name=quay \
    -v $QUAY/config:/conf/stack:Z \
    -v $QUAY/storage:/datastorage:Z \
    registry.redhat.io/quay/quay-rhel8:v3.9.7
4.5. Testing SSL configuration using the command line
Use the podman login command to attempt to log in to the Quay registry with SSL/TLS enabled:
$ sudo podman login quay-server.example.com
Username: quayadmin
Password:
Error: error authenticating creds for "quay-server.example.com": error pinging docker registry quay-server.example.com: Get "https://quay-server.example.com/v2/": x509: certificate signed by unknown authority
Podman does not trust self-signed certificates. As a workaround, use the --tls-verify option:
$ sudo podman login --tls-verify=false quay-server.example.com
Username: quayadmin
Password:
Login Succeeded!
Configuring Podman to trust the root Certificate Authority (CA) is covered in a subsequent section.
4.6. Testing SSL configuration using the browser
When you attempt to access the Quay registry, in this case, https://quay-server.example.com
, the browser warns of the potential risk:
Proceed to the log in screen, and the browser will notify you that the connection is not secure:
Configuring the system to trust the root Certificate Authority (CA) is covered in the subsequent section.
4.7. Configuring podman to trust the Certificate Authority
Podman uses two paths to locate the CA file, namely, /etc/containers/certs.d/
and /etc/docker/certs.d/
.
Copy the root CA file to one of these locations, with the exact path determined by the server hostname, and name the file ca.crt:
$ sudo cp rootCA.pem /etc/containers/certs.d/quay-server.example.com/ca.crt
Alternatively, if you are using Docker, you can copy the root CA file to the equivalent Docker directory:
$ sudo cp rootCA.pem /etc/docker/certs.d/quay-server.example.com/ca.crt
You should no longer need to use the --tls-verify=false
option when logging in to the registry:
$ sudo podman login quay-server.example.com
Username: quayadmin
Password:
Login Succeeded!
4.8. Configuring the system to trust the certificate authority
Use the following procedure to configure your system to trust the certificate authority.
Procedure
Enter the following command to copy the rootCA.pem file to the consolidated system-wide trust store:
$ sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/
Enter the following command to update the system-wide trust store configuration:
$ sudo update-ca-trust extract
Optional: You can use the trust list command to ensure that the Quay server has been configured:
$ trust list | grep quay
    label: quay-server.example.com
Now, when you browse to the registry at https://quay-server.example.com, the lock icon shows that the connection is secure.
To remove the rootCA.pem file from system-wide trust, delete the file and update the configuration:
$ sudo rm /etc/pki/ca-trust/source/anchors/rootCA.pem
$ sudo update-ca-trust extract
$ trust list | grep quay
More information can be found in the RHEL 9 documentation in the chapter Using shared system certificates.
Chapter 5. Adding TLS Certificates to the Red Hat Quay Container
To add custom TLS certificates to Red Hat Quay, create a new directory named extra_ca_certs/
beneath the Red Hat Quay config directory. Copy any required site-specific TLS certificates to this new directory.
5.1. Add TLS certificates to Red Hat Quay
View certificate to be added to the container
$ cat storage.crt
-----BEGIN CERTIFICATE-----
MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV
[...]
-----END CERTIFICATE-----
Create certs directory and copy certificate there
$ mkdir -p quay/config/extra_ca_certs
$ cp storage.crt quay/config/extra_ca_certs/
$ tree quay/config/
├── config.yaml
├── extra_ca_certs
│   ├── storage.crt
Obtain the Quay container's CONTAINER ID with podman ps:
$ sudo podman ps
CONTAINER ID   IMAGE                           COMMAND           CREATED        STATUS        PORTS
5a3e82c4a75f   <registry>/<repo>/quay:v3.9.7   "/sbin/my_init"   24 hours ago   Up 18 hours   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 443/tcp   grave_keller
Restart the container with that ID:
$ sudo podman restart 5a3e82c4a75f
Examine the certificate copied into the container namespace:
$ sudo podman exec -it 5a3e82c4a75f cat /etc/ssl/certs/storage.pem
-----BEGIN CERTIFICATE-----
MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV
5.2. Adding custom SSL/TLS certificates when Red Hat Quay is deployed on Kubernetes
When deployed on Kubernetes, Red Hat Quay mounts in a secret as a volume to store config assets. Currently, this breaks the upload certificate function of the superuser panel.
As a temporary workaround, base64
encoded certificates can be added to the secret after Red Hat Quay has been deployed.
Use the following procedure to add custom SSL/TLS certificates when Red Hat Quay is deployed on Kubernetes.
Prerequisites
- Red Hat Quay has been deployed.
- You have a custom ca.crt file.
Procedure
Base64 encode the contents of an SSL/TLS certificate by entering the following command:
$ cat ca.crt | base64 -w 0
Example output
...c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
Enter the following kubectl command to edit the quay-enterprise-config-secret file:
$ kubectl --namespace quay-enterprise edit secret/quay-enterprise-config-secret
Add an entry for the certificate and paste the full base64 encoded string under the entry. For example:
custom-cert.crt: c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
Use the kubectl delete command to remove all Red Hat Quay pods. For example:
$ kubectl delete pod quay-operator.v3.7.1-6f9d859bd-p5ftc quayregistry-clair-postgres-7487f5bd86-xnxpr quayregistry-quay-app-upgrade-xq2v6 quayregistry-quay-config-editor-6dfdcfc44f-hlvwm quayregistry-quay-database-859d5445ff-cqthr quayregistry-quay-redis-84f888776f-hhgms
Afterwards, the Red Hat Quay deployment automatically schedules replacement pods with the new certificate data.
Chapter 6. Configuring action log storage for Elasticsearch and Splunk
By default, the previous three months of usage logs are stored in the Red Hat Quay database and exposed through the web UI on organization and repository levels. Appropriate administrative privileges are required to see log entries. For deployments with a large amount of logged operations, you can store the usage logs in Elasticsearch and Splunk instead of the Red Hat Quay database backend.
6.1. Configuring action log storage for Elasticsearch
To configure action log storage for Elasticsearch, you must provide your own Elasticsearch stack, as it is not included with Red Hat Quay as a customizable component.
Enabling Elasticsearch logging can be done during Red Hat Quay deployment or post-deployment using the configuration tool. The resulting configuration is stored in the config.yaml
file. When configured, usage log access continues to be provided through the web UI for repositories and organizations.
Use the following procedure to configure action log storage for Elasticsearch:
Procedure
- Obtain an Elasticsearch account.
- Open the Red Hat Quay Config Tool (either during or after Red Hat Quay deployment).
Scroll to the Action Log Storage Configuration setting and select Elasticsearch. The following figure shows the Elasticsearch settings that appear:
Fill in the following information for your Elasticsearch instance:
- Elasticsearch hostname: The hostname or IP address of the system providing the Elasticsearch service.
- Elasticsearch port: The port number providing the Elasticsearch service on the host you just entered. Note that the port must be accessible from all systems running the Red Hat Quay registry. The default is TCP port 9200.
- Elasticsearch access key: The access key needed to gain access to the Elasticsearch service, if required.
- Elasticsearch secret key: The secret key needed to gain access to the Elasticsearch service, if required.
- AWS region: If you are running on AWS, set the AWS region (otherwise, leave it blank).
- Index prefix: Choose a prefix to attach to log entries.
Logs Producer: Choose either Elasticsearch (default) or Kinesis to direct logs to an intermediate Kinesis stream on AWS. You need to set up your own pipeline to send logs from Kinesis to Elasticsearch (for example, Logstash). The following figure shows additional fields you would need to fill in for Kinesis:
If you chose Elasticsearch as the Logs Producer, no further configuration is needed. If you chose Kinesis, fill in the following:
- Stream name: The name of the Kinesis stream.
- AWS access key: The name of the AWS access key needed to gain access to the Kinesis stream, if required.
- AWS secret key: The name of the AWS secret key needed to gain access to the Kinesis stream, if required.
- AWS region: The AWS region.
- When you are done, save the configuration. The configuration tool checks your settings. If there is a problem connecting to the Elasticsearch or Kinesis services, you will see an error and have the opportunity to continue editing. Otherwise, logging will begin to be directed to your Elasticsearch configuration after the cluster restarts with the new configuration.
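For reference, the following is a rough sketch of how these Elasticsearch settings might appear in the config.yaml file, modeled on the LOGS_MODEL and LOGS_MODEL_CONFIG fields shown for Splunk later in this chapter. The hostname, keys, and index prefix are placeholder values:
---
LOGS_MODEL: elasticsearch
LOGS_MODEL_CONFIG:
  producer: elasticsearch
  elasticsearch_config:
    host: es.example.com
    port: 9200
    access_key: <access_key>
    secret_key: <secret_key>
    index_prefix: logentry_
---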
6.2. Configuring action log storage for Splunk
Splunk is an alternative to Elasticsearch that can provide log analyses for your Red Hat Quay data.
Enabling Splunk logging can be done during Red Hat Quay deployment or post-deployment using the configuration tool. The resulting configuration is stored in the config.yaml
file. When configured, usage log access continues to be provided through the Splunk web UI for repositories and organizations.
Use the following procedures to enable Splunk for your Red Hat Quay deployment.
6.2.1. Installing and creating a username for Splunk
Use the following procedure to install and create Splunk credentials.
Procedure
- Create a Splunk account by navigating to Splunk and entering the required credentials.
- Navigate to the Splunk Enterprise Free Trial page, select your platform and installation package, and then click Download Now.
- Install the Splunk software on your machine. When prompted, create a username, for example, splunk_admin, and a password.
- After creating a username and password, a localhost URL is provided for your Splunk deployment, for example, http://<sample_url>.remote.csb:8000/. Open the URL in your preferred browser.
- Log in with the username and password you created during installation. You are directed to the Splunk UI.
6.2.2. Generating a Splunk token
Use one of the following procedures to create a bearer token for Splunk.
6.2.2.1. Generating a Splunk token using the Splunk UI
Use the following procedure to create a bearer token for Splunk using the Splunk UI.
Prerequisites
- You have installed Splunk and created a username.
Procedure
- On the Splunk UI, navigate to Settings → Tokens.
- Click Enable Token Authentication.
- Ensure that Token Authentication is enabled by clicking Token Settings and selecting Token Authentication if necessary.
- Optional: Set the expiration time for your token. The default is 30 days.
- Click Save.
- Click New Token.
- Enter information for User and Audience.
- Optional: Set the Expiration and Not Before information.
Click Create. Your token appears in the Token box. Copy the token immediately.
Important: If you close out of the box before copying the token, you must create a new token. The token in its entirety is not available after closing the New Token window.
6.2.2.2. Generating a Splunk token using the CLI
Use the following procedure to create a bearer token for Splunk using the CLI.
Prerequisites
- You have installed Splunk and created a username.
Procedure
In your CLI, enter the following CURL command to enable token authentication, passing in your Splunk username and password:
$ curl -k -u <username>:<password> -X POST <scheme>://<host>:<port>/services/admin/token-auth/tokens_auth -d disabled=false
Create a token by entering the following CURL command, passing in your Splunk username and password:
$ curl -k -u <username>:<password> -X POST <scheme>://<host>:<port>/services/authorization/tokens?output_mode=json --data name=<username> --data audience=Users --data-urlencode expires_on=+30d
- Save the generated bearer token.
6.2.3. Configuring Red Hat Quay to use Splunk
Use the following procedure to configure Red Hat Quay to use Splunk.
Prerequisites
- You have installed Splunk and created a username.
- You have generated a Splunk bearer token.
Procedure
Open your Red Hat Quay config.yaml file and add the following configuration fields:
---
LOGS_MODEL: splunk
LOGS_MODEL_CONFIG:
  producer: splunk
  splunk_config:
    host: http://<user_name>.remote.csb 1
    port: 8089 2
    bearer_token: <bearer_token> 3
    url_scheme: <http/https> 4
    verify_ssl: False 5
    index_prefix: <splunk_log_index_name> 6
    ssl_ca_path: <location_to_ssl-ca-cert.pem> 7
---
1. String. The Splunk cluster endpoint.
2. Integer. The Splunk management cluster endpoint port. Differs from the Splunk GUI hosted port. Can be found on the Splunk UI under Settings → Server Settings → General Settings.
3. String. The generated bearer token for Splunk.
4. String. The URL scheme for accessing the Splunk service. If Splunk is configured to use TLS/SSL, this must be https.
5. Boolean. Whether to enable TLS/SSL. Defaults to true.
6. String. The Splunk index prefix. Can be a new or used index. Can be created from the Splunk UI.
7. String. The relative container path to a single .pem file containing a certificate authority (CA) for TLS/SSL validation.
If you are configuring ssl_ca_path, you must configure the SSL/TLS certificate so that Red Hat Quay will trust it.
- If you are using a standalone deployment of Red Hat Quay, SSL/TLS certificates can be provided by placing the certificate file inside of the extra_ca_certs directory, or inside of the relative container path specified by ssl_ca_path.
- If you are using the Red Hat Quay Operator, create a config bundle secret that includes the certificate authority (CA) of the Splunk server. For example:
$ oc create secret generic --from-file config.yaml=./config_390.yaml --from-file extra_ca_cert_splunkserver.crt=./splunkserver.crt config-bundle-secret
Then specify the conf/stack/extra_ca_certs/splunkserver.crt file in your config.yaml file. For example:
LOGS_MODEL: splunk
LOGS_MODEL_CONFIG:
  producer: splunk
  splunk_config:
    host: ec2-12-345-67-891.us-east-2.compute.amazonaws.com
    port: 8089
    bearer_token: eyJra
    url_scheme: https
    verify_ssl: true
    index_prefix: quay123456
    ssl_ca_path: conf/stack/splunkserver.crt
6.2.4. Creating an action log
Use the following procedure to create a user account that can forward action logs to Splunk.
You must use the Splunk UI to view Red Hat Quay action logs. At this time, viewing Splunk action logs on the Red Hat Quay Usage Logs page is unsupported, and returns the following message: Method not implemented. Splunk does not support log lookups.
Prerequisites
- You have installed Splunk and created a username.
- You have generated a Splunk bearer token.
- You have configured your Red Hat Quay config.yaml file to enable Splunk.
Procedure
- Log in to your Red Hat Quay deployment.
- Click on the name of the organization that you will use to create an action log for Splunk.
- In the navigation pane, click Robot Accounts → Create Robot Account.
- When prompted, enter a name for the robot account, for example, spunkrobotaccount, then click Create robot account.
- On your browser, open the Splunk UI.
- Click Search and Reporting.
In the search bar, enter the name of your index, for example, <splunk_log_index_name>, and press Enter.
The search results populate on the Splunk UI, showing information such as host and sourcetype. By clicking the > arrow, you can see metadata for the logs, such as the ip, JSON metadata, and account name.
Chapter 7. Clair for Red Hat Quay
Clair v4 (Clair) is an open source application that leverages static code analyses for parsing image content and reporting vulnerabilities affecting the content. Clair is packaged with Red Hat Quay and can be used in both standalone and Operator deployments. It can be run in highly scalable configurations, where components can be scaled separately as appropriate for enterprise environments.
7.1. Clair vulnerability databases
Clair uses the following vulnerability databases to report issues in your images:
- Ubuntu Oval database
- Debian Security Tracker
- Red Hat Enterprise Linux (RHEL) Oval database
- SUSE Oval database
- Oracle Oval database
- Alpine SecDB database
- VMWare Photon OS database
- Amazon Web Services (AWS) UpdateInfo
- Open Source Vulnerability (OSV) Database
For information about how Clair does security mapping with the different databases, see Claircore Severity Mapping.
7.1.1. Information about Open Source Vulnerability (OSV) database for Clair
Open Source Vulnerability (OSV) is a vulnerability database and monitoring service that focuses on tracking and managing security vulnerabilities in open source software.
OSV provides a comprehensive and up-to-date database of known security vulnerabilities in open source projects. It covers a wide range of open source software, including libraries, frameworks, and other components that are used in software development. For a full list of included ecosystems, see defined ecosystems.
Clair also reports vulnerability and security information for the golang, java, and ruby ecosystems through the Open Source Vulnerability (OSV) database.
By leveraging OSV, developers and organizations can proactively monitor and address security vulnerabilities in open source components that they use, which helps to reduce the risk of security breaches and data compromises in projects.
For more information about OSV, see the OSV website.
7.2. Setting up Clair on standalone Red Hat Quay deployments
For standalone Red Hat Quay deployments, you can set up Clair manually.
Procedure
In your Red Hat Quay installation directory, create a new directory for the Clair database data:
$ mkdir /home/<user-name>/quay-poc/postgres-clairv4
Set the appropriate permissions for the postgres-clairv4 directory by entering the following command:
$ setfacl -m u:26:-wx /home/<user-name>/quay-poc/postgres-clairv4
Deploy a Clair Postgres database by entering the following command:
$ sudo podman run -d --name postgresql-clairv4 \
    -e POSTGRESQL_USER=clairuser \
    -e POSTGRESQL_PASSWORD=clairpass \
    -e POSTGRESQL_DATABASE=clair \
    -e POSTGRESQL_ADMIN_PASSWORD=adminpass \
    -p 5433:5433 \
    -v /home/<user-name>/quay-poc/postgres-clairv4:/var/lib/pgsql/data:Z \
    registry.redhat.io/rhel8/postgresql-13:1-109
Install the Postgres uuid-ossp module for your Clair deployment:
$ podman exec -it postgresql-clairv4 /bin/bash -c 'echo "CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\"" | psql -d clair -U postgres'
Example output
CREATE EXTENSION
Note: Clair requires the uuid-ossp extension to be added to its Postgres database. For users with proper privileges, the extension is added automatically by Clair. If users do not have the proper privileges, the extension must be added before starting Clair.
If the extension is not present, the following error is displayed when Clair attempts to start: ERROR: Please load the "uuid-ossp" extension. (SQLSTATE 42501).
Stop the Quay container if it is running and restart it in configuration mode, loading the existing configuration as a volume:
$ sudo podman run --rm -it --name quay_config \
    -p 80:8080 -p 443:8443 \
    -v $QUAY/config:/conf/stack:Z \
    registry.redhat.io/quay/quay-rhel8:v3.9.7 config secret
- Log in to the configuration tool and click Enable Security Scanning in the Security Scanner section of the UI.
- Set the HTTP endpoint for Clair using a port that is not already in use on the quay-server system, for example, 8081. Create a pre-shared key (PSK) using the Generate PSK button.
Security Scanner UI
- Validate and download the config.yaml file for Red Hat Quay, and then stop the Quay container that is running the configuration editor.
Extract the new configuration bundle into your Red Hat Quay installation directory, for example:
$ tar xvf quay-config.tar.gz -C /home/<user-name>/quay-poc/
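The extracted config.yaml should now include the security scanner settings. The following is a rough sketch of the relevant fields, reusing the endpoint port and the PSK value shown in the Clair configuration later in this procedure; treat the values as examples:
---
FEATURE_SECURITY_SCANNER: true
SECURITY_SCANNER_V4_ENDPOINT: http://quay-server.example.com:8081
SECURITY_SCANNER_V4_PSK: "MTU5YzA4Y2ZkNzJoMQ=="
---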
Create a folder for your Clair configuration file, for example:
$ mkdir /etc/opt/clairv4/config/
Change into the Clair configuration folder:
$ cd /etc/opt/clairv4/config/
Create a Clair configuration file, for example:
http_listen_addr: :8081
introspection_addr: :8088
log_level: debug
indexer:
  connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable
  scanlock_retry: 10
  layer_scan_concurrency: 5
  migrations: true
matcher:
  connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable
  max_conn_pool: 100
  migrations: true
  indexer_addr: clair-indexer
notifier:
  connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable
  delivery_interval: 1m
  poll_interval: 5m
  migrations: true
auth:
  psk:
    key: "MTU5YzA4Y2ZkNzJoMQ=="
    iss: ["quay"]
# tracing and metrics
trace:
  name: "jaeger"
  probability: 1
  jaeger:
    agent:
      endpoint: "localhost:6831"
    service_name: "clair"
metrics:
  name: "prometheus"
For more information about Clair’s configuration format, see Clair configuration reference.
Start Clair by using the container image, mounting in the configuration from the file you created:
$ sudo podman run -d --name clairv4 \
    -p 8081:8081 -p 8088:8088 \
    -e CLAIR_CONF=/clair/config.yaml \
    -e CLAIR_MODE=combo \
    -v /etc/opt/clairv4/config:/clair:Z \
    registry.redhat.io/quay/clair-rhel8:v3.9.7
Note: Running multiple Clair containers is also possible, but for deployment scenarios beyond a single container, the use of a container orchestrator such as Kubernetes or OpenShift Container Platform is strongly recommended.
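Optional: to confirm that Clair started correctly, you can inspect the logs of the clairv4 container started above. This is a quick sanity check rather than a documented step:
$ sudo podman logs clairv4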
7.3. Clair on OpenShift Container Platform
To set up Clair v4 (Clair) on a Red Hat Quay deployment on OpenShift Container Platform, it is recommended to use the Red Hat Quay Operator. By default, the Red Hat Quay Operator will install or upgrade a Clair deployment along with your Red Hat Quay deployment and configure Clair automatically.
7.4. Testing Clair
Use the following procedure to test Clair on either a standalone Red Hat Quay deployment, or on an OpenShift Container Platform Operator-based deployment.
Prerequisites
- You have deployed the Clair container image.
Procedure
Pull a sample image by entering the following command:
$ podman pull ubuntu:20.04
Tag the image to your registry by entering the following command:
$ sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04
Push the image to your Red Hat Quay registry by entering the following command:
$ sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04
- Log in to your Red Hat Quay deployment through the UI.
- Click the repository name, for example, quayadmin/ubuntu.
In the navigation pane, click Tags.
Report summary
Click the image report, for example, 45 medium, to show a more detailed report:
Report details
Note: In some cases, Clair shows duplicate reports on images, for example, ubi8/nodejs-12 or ubi8/nodejs-16. This occurs because vulnerabilities with the same name exist for different packages. This behavior is expected with Clair vulnerability reporting and will not be addressed as a bug.
Chapter 8. Repository mirroring
8.1. Repository mirroring
Red Hat Quay repository mirroring lets you mirror images from external container registries, or another local registry, into your Red Hat Quay cluster. Using repository mirroring, you can synchronize images to Red Hat Quay based on repository names and tags.
From your Red Hat Quay cluster with repository mirroring enabled, you can perform the following:
- Choose a repository from an external registry to mirror
- Add credentials to access the external registry
- Identify specific container image repository names and tags to sync
- Set intervals at which a repository is synced
- Check the current state of synchronization
To use the mirroring functionality, you need to perform the following actions:
- Enable repository mirroring in the Red Hat Quay configuration file
- Run a repository mirroring worker
- Create mirrored repositories
All repository mirroring configurations can be performed using the configuration tool UI or by the Red Hat Quay API.
8.2. Repository mirroring compared to geo-replication
Red Hat Quay geo-replication mirrors the entire image storage backend data between 2 or more different storage backends while the database is shared, for example, one Red Hat Quay registry with two different blob storage endpoints. The primary use cases for geo-replication include the following:
- Speeding up access to the binary blobs for geographically dispersed setups
- Guaranteeing that the image content is the same across regions
Repository mirroring synchronizes selected repositories, or subsets of repositories, from one registry to another. The registries are distinct, with each registry having a separate database and separate image storage.
The primary use cases for mirroring include the following:
- Independent registry deployments in different data centers or regions, where a certain subset of the overall content is supposed to be shared across the data centers and regions
- Automatic synchronization or mirroring of selected (allowlisted) upstream repositories from external registries into a local Red Hat Quay deployment
Repository mirroring and geo-replication can be used simultaneously.
Feature / Capability | Geo-replication | Repository mirroring |
---|---|---|
What is the feature designed to do? | A shared, global registry | Distinct, different registries |
What happens if replication or mirroring has not been completed yet? | The remote copy is used (slower) | No image is served |
Is access to all storage backends in both regions required? | Yes (all Red Hat Quay nodes) | No (distinct storage) |
Can users push images from both sites to the same repository? | Yes | No |
Is all registry content and configuration identical across all regions (shared database)? | Yes | No |
Can users select individual namespaces or repositories to be mirrored? | No | Yes |
Can users apply filters to synchronization rules? | No | Yes |
Are individual / different role-based access control configurations allowed in each region? | No | Yes |
8.3. Using repository mirroring
The following list shows features and limitations of Red Hat Quay repository mirroring:
- With repository mirroring, you can mirror an entire repository or selectively limit which images are synced. Filters can be based on a comma-separated list of tags, a range of tags, or other means of identifying tags through Unix shell-style wildcards. For more information, see the documentation for wildcards.
- When a repository is set as mirrored, you cannot manually add other images to that repository.
- Because the mirrored repository is based on the repository and tags you set, it will hold only the content represented by the repository and tag pair. For example, if you change the tag so that some images in the repository no longer match, those images will be deleted.
- Only the designated robot can push images to a mirrored repository, superseding any role-based access control permissions set on the repository.
- Mirroring can be configured to roll back on failure, or to run on a best-effort basis.
- With a mirrored repository, a user with read permissions can pull images from the repository but cannot push images to the repository.
- Changing settings on your mirrored repository can be performed in the Red Hat Quay user interface, using the Repositories → Mirrors tab for the mirrored repository you create.
- Images are synced at set intervals, but can also be synced on demand.
8.4. Mirroring configuration UI
- Start the Quay container in configuration mode and select the Enable Repository Mirroring check box. If you want to require HTTPS communications and verify certificates during mirroring, select the HTTPS and cert verification check box.
- Validate and download the configuration file, and then restart Quay in registry mode using the updated config file.
8.5. Mirroring configuration fields
Field | Type | Description |
---|---|---|
FEATURE_REPO_MIRROR | Boolean | Enable or disable repository mirroring. Default: false |
REPO_MIRROR_INTERVAL | Number | The number of seconds between checking for repository mirror candidates |
REPO_MIRROR_SERVER_HOSTNAME | String | Replaces the SERVER_HOSTNAME as the destination for mirroring. Default: None |
REPO_MIRROR_TLS_VERIFY | Boolean | Require HTTPS and verify certificates of the Quay registry during mirror. |
REPO_MIRROR_ROLLBACK | Boolean | When set to true, the repository rolls back after a failed mirror attempt. Default: false |
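A minimal sketch of how these mirroring fields might be set in the config.yaml file; the interval shown is only an example value:
---
FEATURE_REPO_MIRROR: true
REPO_MIRROR_INTERVAL: 30
REPO_MIRROR_TLS_VERIFY: true
REPO_MIRROR_ROLLBACK: false
---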
8.6. Mirroring worker
Use the following procedure to start the repository mirroring worker.
Procedure
If you have not configured TLS communications using a /root/ca.crt certificate, enter the following command to start a Quay pod with the repomirror option:
$ sudo podman run -d --name mirroring-worker \
    -v $QUAY/config:/conf/stack:Z \
    registry.redhat.io/quay/quay-rhel8:v3.9.7 repomirror
If you have configured TLS communications using a /root/ca.crt certificate, enter the following command to start the repository mirroring worker:
$ sudo podman run -d --name mirroring-worker \
    -v $QUAY/config:/conf/stack:Z \
    -v /root/ca.crt:/etc/pki/ca-trust/source/anchors/ca.crt:Z \
    registry.redhat.io/quay/quay-rhel8:v3.9.7 repomirror
8.7. Creating a mirrored repository
When mirroring a repository from an external container registry, you must create a new private repository. Typically, the same name is used as the target repository, for example, quay-rhel8
.
8.7.1. Repository mirroring settings
Use the following procedure to adjust the settings of your mirrored repository.
Prerequisites
- You have enabled repository mirroring in your Red Hat Quay configuration file.
- You have deployed a mirroring worker.
Procedure
In the Settings tab, set the Repository State to Mirror.
In the Mirror tab, enter the details for connecting to the external registry, along with the tags, scheduling, and access information:
Enter the details as required in the following fields:
- Registry Location: The external repository you want to mirror, for example, registry.redhat.io/quay/quay-rhel8
- Tags: This field is required. You may enter a comma-separated list of individual tags or tag patterns. (See Tag Patterns section for details.)
- Start Date: The date on which mirroring begins. The current date and time is used by default.
- Sync Interval: Defaults to syncing every 24 hours. You can change that based on hours or days.
- Robot User: Create a new robot account or choose an existing robot account to do the mirroring.
- Username: The username for accessing the external registry holding the repository you are mirroring.
- Password: The password associated with the Username. Note that the password cannot include characters that require an escape character (\).
8.7.2. Advanced settings
In the Advanced Settings section, you can configure SSL/TLS and proxy with the following options:
- Verify TLS: Select this option if you want to require HTTPS and to verify certificates when communicating with the target remote registry.
- Accept Unsigned Images: Selecting this option allows unsigned images to be mirrored.
- HTTP Proxy: Identify the HTTP proxy server needed to access the remote site, if a proxy server is needed.
- HTTPS PROXY: Identify the HTTPS proxy server needed to access the remote site, if a proxy server is needed.
- No Proxy: List of locations that do not require proxy.
8.7.3. Synchronize now
Use the following procedure to initiate the mirroring operation.
Procedure
To perform an immediate mirroring operation, press the Sync Now button on the repository’s Mirroring tab. The logs are available on the Usage Logs tab:
When the mirroring is complete, the images will appear in the Tags tab:
Below is an example of a completed Repository Mirroring screen:
8.8. Event notifications for mirroring
There are three notification events for repository mirroring:
- Repository Mirror Started
- Repository Mirror Success
- Repository Mirror Unsuccessful
The events can be configured inside of the Settings tab for each repository, and all existing notification methods such as email, Slack, Quay UI, and webhooks are supported.
8.9. Mirroring tag patterns
At least one tag must be entered. The following table references possible image tag patterns.
8.9.1. Pattern syntax
Pattern | Description |
* | Matches all characters |
? | Matches any single character |
[seq] | Matches any character in seq |
[!seq] | Matches any character not in seq |
8.9.2. Example tag patterns
Example Pattern | Example Matches |
v3* | v32, v3.1, v3.2, v3.2-4beta, v3.3 |
v3.* | v3.1, v3.2, v3.2-4beta |
v3.? | v3.1, v3.2, v3.3 |
v3.[12] | v3.1, v3.2 |
v3.[12]* | v3.1, v3.2, v3.2-4beta |
v3.[!1]* | v3.2, v3.2-4beta, v3.3 |
8.10. Working with mirrored repositories
Once you have created a mirrored repository, there are several ways you can work with that repository. Select your mirrored repository from the Repositories page and do any of the following:
- Enable/disable the repository: Select the Mirroring button in the left column, then toggle the Enabled check box to enable or disable the repository temporarily.
Check mirror logs: To make sure the mirrored repository is working properly, you can check the mirror logs. To do that, select the Usage Logs button in the left column. Here’s an example:
- Sync mirror now: To immediately sync the images in your repository, select the Sync Now button.
- Change credentials: To change the username and password, select DELETE from the Credentials line. Then select None and add the username and password needed to log into the external registry when prompted.
- Cancel mirroring: To stop mirroring, which keeps the current images available but stops new ones from being synced, select the CANCEL button.
Set robot permissions: Red Hat Quay robot accounts are named tokens that hold credentials for accessing external repositories. By assigning credentials to a robot, that robot can be used across multiple mirrored repositories that need to access the same external registry.
You can assign an existing robot to a repository by going to Account Settings, then selecting the Robot Accounts icon in the left column. For the robot account, choose the link under the REPOSITORIES column. From the pop-up window, you can:
- Check which repositories are assigned to that robot.
-
Assign read, write or Admin privileges to that robot from the PERMISSION field shown in this figure:
Change robot credentials: Robots can hold credentials such as Kubernetes secrets, Docker login information, and Mesos bundles. To change robot credentials, select the Options gear on the robot’s account line on the Robot Accounts window and choose View Credentials. Add the appropriate credentials for the external repository the robot needs to access.
- Check and change general settings: Select the Settings button (gear icon) from the left column on the mirrored repository page. On the resulting page, you can change settings associated with the mirrored repository. In particular, you can change User and Robot Permissions to specify exactly which users and robots can read from or write to the repository.
8.11. Repository mirroring recommendations
Best practices for repository mirroring include the following:
- Repository mirroring pods can run on any node. This means that you can run mirroring on nodes where Red Hat Quay is already running.
- Repository mirroring is scheduled in the database and runs in batches. As a result, repository mirror workers check each repository mirror configuration and read when the next synchronization needs to occur. More mirror workers means more repositories can be mirrored at the same time. For example, running 10 mirror workers means that a user can run 10 mirroring operations in parallel. If a user has only 2 workers and 10 mirror configurations, only 2 operations can be performed at a time.
The optimal number of mirroring pods depends on the following conditions:
- The total number of repositories to be mirrored
- The number of images and tags in the repositories and the frequency of changes
Parallel batching
For example, if a user is mirroring a repository that has 100 tags, the mirror is still completed by a single worker. Users must consider how many repositories they want to mirror in parallel, and base the number of workers on that.
Multiple tags in the same repository cannot be mirrored in parallel.
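For a standalone deployment, one way to add capacity is to start additional Quay containers in repomirror mode, one per extra worker you want, as a sketch only; the container name, image tag, and configuration path below are assumptions that you should adapt to your environment:
$ sudo podman run -d --name mirroring-worker-2 \
  -v $QUAY/config:/conf/stack:Z \
  registry.redhat.io/quay/quay-rhel8:v3.9.7 repomirror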
Chapter 9. IPv6 and dual-stack deployments
Your standalone Red Hat Quay deployment can now be served in locations that only support IPv6, such as Telco and Edge environments. Support is also offered for dual-stack networking so your Red Hat Quay deployment can listen on IPv4 and IPv6 simultaneously.
For a list of known limitations, see IPv6 and dual-stack limitations.
9.1. Enabling the IPv6 protocol family
Use the following procedure to enable IPv6 support on your standalone Red Hat Quay deployment.
Prerequisites
- You have updated Red Hat Quay to 3.8.
- Your host and container software platform (Docker, Podman) must be configured to support IPv6.
Procedure
In your deployment’s config.yaml file, add the FEATURE_LISTEN_IP_VERSION parameter and set it to IPv6, for example:
---
FEATURE_GOOGLE_LOGIN: false
FEATURE_INVITE_ONLY_USER_CREATION: false
FEATURE_LISTEN_IP_VERSION: IPv6
FEATURE_MAILING: false
FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: false
---
- Start, or restart, your Red Hat Quay deployment.
Check that your deployment is listening to IPv6 by entering the following command:
$ curl <quay_endpoint>/health/instance
{"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200}
After enabling IPv6 in your deployment’s config.yaml file, all Red Hat Quay features can be used as normal, so long as your environment is configured to use IPv6 and is not hindered by the current limitations.
If your environment is configured for IPv4, but the FEATURE_LISTEN_IP_VERSION
configuration field is set to IPv6
, Red Hat Quay will fail to deploy.
9.2. Enabling the dual-stack protocol family
Use the following procedure to enable dual-stack (IPv4 and IPv6) support on your standalone Red Hat Quay deployment.
Prerequisites
- You have updated Red Hat Quay to 3.8.
- Your host and container software platform (Docker, Podman) must be configured to support IPv6.
Procedure
In your deployment’s config.yaml file, add the FEATURE_LISTEN_IP_VERSION parameter and set it to dual-stack, for example:
---
FEATURE_GOOGLE_LOGIN: false
FEATURE_INVITE_ONLY_USER_CREATION: false
FEATURE_LISTEN_IP_VERSION: dual-stack
FEATURE_MAILING: false
FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: false
---
- Start, or restart, your Red Hat Quay deployment.
Check that your deployment is listening to both channels by entering the following command:
For IPv4, enter the following command:
$ curl --ipv4 <quay_endpoint>
{"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200}
For IPv6, enter the following command:
$ curl --ipv6 <quay_endpoint>
{"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200}
After enabling dual-stack in your deployment’s config.yaml file, all Red Hat Quay features can be used as normal, so long as your environment is configured for dual-stack.
9.3. IPv6 and dual-stack limitations
Currently, attempting to configure your Red Hat Quay deployment with the common Azure Blob Storage configuration will not work on IPv6 single stack environments. Because the endpoint of Azure Blob Storage does not support IPv6, there is no workaround in place for this issue.
For more information, see PROJQUAY-4433.
Currently, attempting to configure your Red Hat Quay deployment with Amazon S3 CloudFront will not work on IPv6 single stack environments. Because the endpoint of Amazon S3 CloudFront does not support IPv6, there is no workaround in place for this issue.
For more information, see PROJQUAY-4470.
Chapter 10. LDAP Authentication Setup for Red Hat Quay
Lightweight Directory Access Protocol (LDAP) is an open, vendor-neutral, industry standard application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network. Red Hat Quay supports using LDAP as an identity provider.
10.1. Considerations when enabling LDAP
Prior to enabling LDAP for your Red Hat Quay deployment, you should consider the following.
Existing Red Hat Quay deployments
Conflicts between usernames can arise when you enable LDAP for an existing Red Hat Quay deployment that already has users configured. For example, one user, alice
, was manually created in Red Hat Quay prior to enabling LDAP. If the username alice
also exists in the LDAP directory, Red Hat Quay automatically creates a new user, alice-1
, when alice
logs in for the first time using LDAP. Red Hat Quay then automatically maps the LDAP credentials to the alice
account. For consistency reasons, this might be erroneous for your Red Hat Quay deployment. It is recommended that you remove any potentially conflicting local account names from Red Hat Quay prior to enabling LDAP.
Manual User Creation and LDAP authentication
When Red Hat Quay is configured for LDAP, LDAP-authenticated users are automatically created in Red Hat Quay’s database on first log in, if the configuration option FEATURE_USER_CREATION
is set to true
. If this option is set to false
, the automatic user creation for LDAP users fails, and the user is not allowed to log in. In this scenario, the superuser needs to create the desired user account first. Conversely, if FEATURE_USER_CREATION
is set to true
, this also means that a user can still create an account from the Red Hat Quay login screen, even if there is an equivalent user in LDAP.
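For example, a minimal config.yaml excerpt that requires a superuser to pre-create LDAP user accounts might look like the following sketch; all other required fields are omitted here:
AUTHENTICATION_TYPE: LDAP
FEATURE_USER_CREATION: false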
10.2. Configuring LDAP for Red Hat Quay
Use the following procedure to configure LDAP for your Red Hat Quay deployment.
Procedure
You can use the Red Hat Quay config tool to configure LDAP.
Using the Red Hat Quay config tool, locate the Authentication section. Select LDAP from the dropdown menu, and update the LDAP configuration fields as required.
Optional. In the Team synchronization box, click Enable Team Synchronization Support. With team synchronization enabled, Red Hat Quay administrators who are also superusers can set teams to have their membership synchronized with a backing group in LDAP.
-
For Resynchronization duration, enter 60m. This option sets how often a team must be resynchronized, and must be set to a duration similar to the following examples: 30m, 1h, 1d.
Optional. For Self-service team syncing setup, you can click Allow non-superusers to enable and manage team syncing to allow users who are not superusers to enable and manage team syncing under the organizations that they administer.
Locate the LDAP URI box and provide a full LDAP URI, including the ldap:// or ldaps:// prefix, for example,
ldap://117.17.8.101
.Under Base DN, provide a name which forms the base path for looking up all LDAP records, for example,
o=<organization_id>
,dc=<example_domain_component>
,dc=com
.Under User Relative DN, provide a list of Distinguished Name path(s), which form the secondary base path(s) for looking up all user LDAP records relative to the Base DN defined above. For example,
uid=<name>
,ou=Users
,o=<organization_id>
,dc=<example_domain_component>
,dc=com
. This path, or these paths, is tried if the user is not found through the primary relative DN.NoteUser Relative DN is relative to Base DN, for example,
ou=Users
and notou=Users,dc=<example_domain_component>,dc=com
.Optional. Provide Secondary User Relative DNs if there are multiple Organizational Units where user objects are located. You can type in the Organizational Units and click Add to add multiple RDNs. For example,
ou=Users,ou=NYC and ou=Users,ou=SFO
.The User Relative DN searches with subtree scope. For example, if your organization has Organization Units
NYC
andSFO
under the Users OU (that is,ou=SFO,ou=Users
andou=NYC,ou=Users
), Red Hat Quay can authenticate users from both theNYC
andSFO
Organizational Units if the User Relative DN is set toUsers
(ou=Users
Optional. Fill in the Additional User Filter Expression field for all user lookup queries if desired. Distinguished Names used in the filter must be fully specified; the Base DN is not automatically added to this field. You must wrap the text in parentheses, for example,
(memberOf=cn=developers,ou=groups,dc=<example_domain_component>,dc=com)
. Fill in the Administrator DN field for the Red Hat Quay administrator account. This account must be able to log in and view the records for all user accounts. For example:
uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com
.Fill in the Administrator DN Password field. This is the password for the administrator distinguished name.
ImportantThe password for this field is stored in plaintext inside of the
config.yaml
file. Setting up a dedicated account or using a password hash is highly recommended.
Optional. Fill in the UID Attribute field. This is the name of the property field in the LDAP user records that stores your user’s username. Most commonly, uid is entered for this field. This field can be used to log into your Red Hat Quay deployment.
Optional. Fill in the Mail Attribute field. This is the name of the property field in your LDAP user records that stores your user’s e-mail addresses. Most commonly, mail is entered for this field. This field can be used to log into your Red Hat Quay deployment.
Note- The username to log in must exist in the User Relative DN.
-
If you are using Microsoft Active Directory to set up your LDAP deployment, you must use
sAMAccountName
for your UID attribute.
Optional. You can add a custom SSL/TLS certificate by clicking Choose File under the Custom TLS Certificate option. Additionally, you can enable fallbacks to insecure, non-TLS connections by checking the Allow fallback to non-TLS connections box.
If you upload an SSL/TLS certificate, you must provide an ldaps:// prefix, for example,
LDAP_URI: ldaps://ldap_provider.example.org
.
Alternatively, you can update your
config.yaml
file directly to include all relevant information. For example:--- AUTHENTICATION_TYPE: LDAP --- LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com LDAP_ADMIN_PASSWD: ABC123 LDAP_ALLOW_INSECURE_FALLBACK: false LDAP_BASE_DN: - o=<organization_id> - dc=<example_domain_component> - dc=com LDAP_EMAIL_ATTR: mail LDAP_UID_ATTR: uid LDAP_URI: ldap://<example_url>.com LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,dc=<domain_name>,dc=com) LDAP_USER_RDN: - ou=<example_organization_unit> - o=<organization_id> - dc=<example_domain_component> - dc=com
- After you have added all required LDAP fields, click the Save Configuration Changes button to validate the configuration. All validation must succeed before proceeding. Additional configuration can be performed by selecting the Continue Editing button.
10.3. Enabling the LDAP_RESTRICTED_USER_FILTER configuration field
The LDAP_RESTRICTED_USER_FILTER
configuration field is a subset of the LDAP_USER_FILTER
configuration field. When configured, this option allows Red Hat Quay administrators the ability to configure LDAP users as restricted users when Red Hat Quay uses LDAP as its authentication provider.
Use the following procedure to enable LDAP restricted users on your Red Hat Quay deployment.
Prerequisites
- Your Red Hat Quay deployment uses LDAP as its authentication provider.
-
You have configured the
LDAP_USER_FILTER
field in yourconfig.yaml
file.
Procedure
In your deployment’s config.yaml file, add the LDAP_RESTRICTED_USER_FILTER parameter and specify the group of restricted users, for example, members:
---
AUTHENTICATION_TYPE: LDAP
---
LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com
LDAP_ADMIN_PASSWD: ABC123
LDAP_ALLOW_INSECURE_FALLBACK: false
LDAP_BASE_DN:
  - o=<organization_id>
  - dc=<example_domain_component>
  - dc=com
LDAP_EMAIL_ATTR: mail
LDAP_UID_ATTR: uid
LDAP_URI: ldap://<example_url>.com
LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com)
LDAP_RESTRICTED_USER_FILTER: (<filterField>=<value>)
LDAP_USER_RDN:
  - ou=<example_organization_unit>
  - o=<organization_id>
  - dc=<example_domain_component>
  - dc=com
- Start, or restart, your Red Hat Quay deployment.
After enabling the LDAP_RESTRICTED_USER_FILTER
feature, your LDAP Red Hat Quay users are restricted from reading and writing content, and creating organizations.
10.4. Enabling the LDAP_SUPERUSER_FILTER configuration field
With the LDAP_SUPERUSER_FILTER
field configured, Red Hat Quay administrators can configure Lightweight Directory Access Protocol (LDAP) users as superusers if Red Hat Quay uses LDAP as its authentication provider.
Use the following procedure to enable LDAP superusers on your Red Hat Quay deployment.
Prerequisites
- Your Red Hat Quay deployment uses LDAP as its authentication provider.
-
You have configured the
LDAP_USER_FILTER
field in your config.yaml
file.
Procedure
In your deployment’s config.yaml file, add the LDAP_SUPERUSER_FILTER parameter and add the group of users you want configured as superusers, for example, root:
---
AUTHENTICATION_TYPE: LDAP
---
LDAP_ADMIN_DN: uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com
LDAP_ADMIN_PASSWD: ABC123
LDAP_ALLOW_INSECURE_FALLBACK: false
LDAP_BASE_DN:
  - o=<organization_id>
  - dc=<example_domain_component>
  - dc=com
LDAP_EMAIL_ATTR: mail
LDAP_UID_ATTR: uid
LDAP_URI: ldap://<example_url>.com
LDAP_USER_FILTER: (memberof=cn=developers,ou=Users,o=<example_organization_unit>,dc=<example_domain_component>,dc=com)
LDAP_SUPERUSER_FILTER: (<filterField>=<value>)
LDAP_USER_RDN:
  - ou=<example_organization_unit>
  - o=<organization_id>
  - dc=<example_domain_component>
  - dc=com
- Start, or restart, your Red Hat Quay deployment.
After enabling the LDAP_SUPERUSER_FILTER
feature, your LDAP Red Hat Quay users have superuser privileges. The following options are available to superusers:
- Manage users
- Manage organizations
- Manage service keys
- View the change log
- Query the usage logs
- Create globally visible user messages
10.5. Common LDAP configuration issues
The following errors might be returned with an invalid configuration.
- Invalid credentials. If you receive this error, the Administrator DN or Administrator DN password values are incorrect. Ensure that you are providing accurate Administrator DN and password values.
- Verification of superuser %USERNAME% failed. This error is returned for the following reasons:
- The username has not been found.
- The user does not exist in the remote authentication system.
- LDAP authorization is configured improperly.
Cannot find the current logged in user. When configuring LDAP for Red Hat Quay, there may be situations where the LDAP connection is established successfully using the username and password provided in the Administrator DN fields. However, if the current logged-in user cannot be found within the specified User Relative DN path using the UID Attribute or Mail Attribute fields, there are typically two potential reasons for this:
- The current logged in user does not exist in the User Relative DN path.
The Administrator DN does not have rights to search or read the specified LDAP path.
To fix this issue, ensure that the logged in user is included in the User Relative DN path, or provide the correct permissions to the Administrator DN account.
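To help narrow down the second cause, you can verify outside of Red Hat Quay that the Administrator DN is able to bind and search the User Relative DN path. The following ldapsearch sketch uses placeholder values that mirror the examples above; adjust the URI, bind DN, search base, and filter for your directory:
$ ldapsearch -x \
  -H ldap://<ldap_server> \
  -D "uid=<name>,ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com" \
  -W \
  -b "ou=Users,o=<organization_id>,dc=<example_domain_component>,dc=com" \
  "(uid=<username_to_check>)"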
10.6. LDAP configuration fields
For a full list of LDAP configuration fields, see LDAP configuration fields
Chapter 11. Configuring OIDC for Red Hat Quay
Configuring OpenID Connect (OIDC) for Red Hat Quay can provide several benefits to your Red Hat Quay deployment. For example, OIDC allows users to authenticate to Red Hat Quay using their existing credentials from an OIDC provider, such as Red Hat Single Sign-On, Google, GitHub, Microsoft, or others. Other benefits of OIDC include centralized user management, enhanced security, and single sign-on (SSO). Overall, OIDC configuration can simplify user authentication and management, enhance security, and provide a seamless user experience for Red Hat Quay users.
The following procedures show you how to configure Red Hat Single Sign-On and Azure AD. Collectively, these procedures include configuring OIDC on the Red Hat Quay Operator, and on standalone deployments by using the Red Hat Quay config tool.
By following these procedures, you will be able to add any OIDC provider to Red Hat Quay, regardless of which identity provider you choose to use.
11.1. Configuring Red Hat Single Sign-On for Red Hat Quay
Based on the Keycloak project, Red Hat Single Sign-On (RH-SSO) is an open source identity and access management (IAM) solution provided by Red Hat. RH-SSO allows organizations to manage user identities, secure applications, and enforce access control policies across their systems and applications. It also provides a unified authentication and authorization framework, which allows users to log in one time and gain access to multiple applications and resources without needing to re-authenticate. For more information, see Red Hat Single Sign-On.
By configuring Red Hat Single Sign-On on Red Hat Quay, you can create a seamless authentication integration between Red Hat Quay and other application platforms like OpenShift Container Platform.
11.1.1. Configuring the Red Hat Single Sign-On Operator for the Red Hat Quay Operator
Use the following procedure to configure Red Hat Single Sign-On for the Red Hat Quay Operator on OpenShift Container Platform.
Prerequisites
- You have configured Red Hat Single Sign-On for the Red Hat Quay Operator. For more information, see Red Hat Single Sign-On Operator.
- You have configured TLS/SSL for your Red Hat Quay deployment and for Red Hat Single Sign-On.
- You have generated a single Certificate Authority (CA) and uploaded it to your Red Hat Single Sign-On Operator and to your Red Hat Quay configuration.
- You are logged into your OpenShift Container Platform cluster.
-
You have installed the OpenShift CLI (
oc
).
Procedure
Navigate to the Red Hat Single Sign-On Admin Console.
- On the OpenShift Container Platform Web Console, navigate to Network → Route.
- Select the Red Hat Single Sign-On project from the drop-down list.
- Find the Red Hat Single Sign-On Admin Console in the Routes table.
- Select the Realm that you will use to configure Red Hat Quay.
- Click Clients under the Configure section of the navigation panel, and then click the Create button to add a new OIDC client for Red Hat Quay.
Enter the following information.
-
Client ID:
quay-enterprise
-
Client Protocol:
openid-connect
-
Root URL:
https://<quay endpoint>/
-
Client ID:
- Click Save. This results in a redirect to the Clients setting panel.
- Navigate to Access Type and select Confidential.
Navigate to Valid Redirect URIs. You must provide three redirect URIs. The value should be the fully qualified domain name of the Red Hat Quay registry appended with
/oauth2/redhatsso/callback
. For example:
- Click Save and navigate to the new Credentials setting.
- Copy the value of the Secret.
11.1.2. Configuring the Red Hat Quay Operator to use Red Hat Single Sign-On
Use the following procedure to configure Red Hat Single Sign-On with the Red Hat Quay Operator.
Prerequisites
- You have configured the Red Hat Single Sign-On Operator for the Red Hat Quay Operator.
Procedure
- Enter the Red Hat Quay config editor tool by navigating to Operators → Installed Operators. Click Red Hat Quay → Quay Registry. Then, click the name of your Red Hat Quay registry, and the URL listed with Config Editor Endpoint.
Upload a custom SSL/TLS certificate to your OpenShift Container Platform deployment.
- Navigate to the Red Hat Quay config tool UI.
- Under Custom SSL Certificates, click Select file and upload your custom SSL/TLS certificates.
- Reconfigure your Red Hat Quay deployment.
- Scroll down to the External Authorization (OAuth) section.
- Click Add OIDC Provider.
-
When prompted, enter
redhatsso
. Enter the following information:
-
OIDC Server: The fully qualified domain name (FQDN) of the Red Hat Single Sign-On instance, appended with
/auth/realms/
and the Realm name. You must include the forward slash at the end, for example,https://sso-redhat.example.com//auth/realms/<keycloak_realm_name>/
. -
Client ID: The client ID of the application that is being registered with the identity provider, for example,
quay-enterprise
. -
Client Secret: The Secret from the Credentials tab of the
quay-enterprise
OIDC client settings. -
Service Name: The name that is displayed on the Red Hat Quay login page, for example,
Red Hat Single Sign-On
. - Verified Email Address Claim: The name of the claim that is used to verify the email address of the user.
-
Login Scopes: The scopes to send to the OIDC provider when performing the login flow, for example,
openid
. After configuration, you must click Add.
-
OIDC Server: The fully qualified domain name (FQDN) of the Red Hat Single Sign-On instance, appended with
- Scroll down and click Validate Configuration Changes. Then, click Restart Now to deploy the Red Hat Quay Operator with OIDC enabled.
11.2. Configuring Azure AD OIDC for Red Hat Quay
By integrating Azure AD authentication with Red Hat Quay, your organization can take advantage of the centralized user management and security features offered by Azure AD. Some features include the ability to manage user access to Red Hat Quay repositories based on their Azure AD roles and permissions, and the ability to enable multi-factor authentication and other security features provided by Azure AD.
Azure Active Directory (Azure AD) authentication for Red Hat Quay allows users to authenticate and access Red Hat Quay using their Azure AD credentials.
11.2.1. Configuring Azure AD by using the Red Hat Quay config tool
The following procedure configures Azure AD for Red Hat Quay using the config tool.
Procedure
Enter the Red Hat Quay config editor tool.
If you are running a standalone Red Hat Quay deployment, you can enter the following command:
$ sudo podman run --rm -it --name quay_config -p 80:8080 -p 443:8443 registry.redhat.io/quay/quay-rhel8:v3.9.7 config secret
Use your browser to navigate to the user interface for the configuration tool and log in.
- If you are on the Red Hat Quay Operator, navigate to Operators → Installed Operators. Click Red Hat Quay → Quay Registry. Then, click the name of your Red Hat Quay registry, and the URL listed with Config Editor Endpoint.
- Scroll down to the External Authorization (OAuth) section.
- Click Add OIDC Provider.
When prompted, enter the ID for the OIDC provider.
NoteYour OIDC server must end with
/
After the OIDC provider has been added, Red Hat Quay lists three callback URLs that must be registered on Azure. These addresses allow Azure to direct back to Red Hat Quay after authentication is confirmed. For example:
-
https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback
-
https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback/attach
-
https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback/cli
-
- After all required fields have been set, validate your settings by clicking Validate Configuration Changes. If any errors are reported, continue editing your configuration until the settings are valid and Red Hat Quay can connect to your database and Redis servers.
11.2.2. Configuring Azure AD by updating the Red Hat Quay config.yaml file
Use the following procedure to configure Azure AD by updating the Red Hat Quay config.yaml
file directly.
- Using the following procedure, you can add any OIDC provider to Red Hat Quay, regardless of which identity provider is being added.
-
If your system uses a firewall, or has a proxy enabled, you must whitelist all Azure API endpoints for each OAuth application that is created. Otherwise, the following error is returned:
x509: certificate signed by unknown authority
.
Add the following information to your Red Hat Quay
config.yaml
file:AZURE_LOGIN_CONFIG: 1 CLIENT_ID: <client_id> 2 CLIENT_SECRET: <client_secret> 3 OIDC_SERVER: <oidc_server_address_> 4 SERVICE_NAME: Azure AD 5 VERIFIED_EMAIL_CLAIM_NAME: <verified_email> 6
- 1
- The parent key that holds the OIDC configuration settings. In this example, the parent key used is
AZURE_LOGIN_CONFIG
, however, the stringAZURE
can be replaced with any arbitrary string based on your specific needs, for exampleABC123
.However, the following strings are not accepted:GOOGLE
,GITHUB
. These strings are reserved for their respecitve identity platforms and require a specificconfig.yaml
entry contingent upon when platform you are using. - 2
- The client ID of the application that is being registered with the identity provider.
- 3
- The client secret of the application that is being registered with the identity provider.
- 4
- The address of the OIDC server that is being used for authentication. In this example, you must use
sts.windows.net
as the issuer identifier. Usinghttps://login.microsoftonline.com
results in the following error:Could not create provider for AzureAD. Error: oidc: issuer did not match the issuer returned by provider, expected "https://login.microsoftonline.com/73f2e714-xxxx-xxxx-xxxx-dffe1df8a5d5" got "https://sts.windows.net/73f2e714-xxxx-xxxx-xxxx-dffe1df8a5d5/"
. - 5
- The name of the service that is being authenticated.
- 6
- The name of the claim that is used to verify the email address of the user.
Proper configuration of Azure AD results in three redirects with the following format:
-
https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback
-
https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback/attach
-
https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback/cli
-
- Restart your Red Hat Quay deployment.
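On a standalone deployment, the restart in the previous step might be as simple as restarting the Quay container; the container name quay below is an assumption and should match your own deployment:
$ sudo podman restart quay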
Chapter 12. Prometheus and Grafana metrics under Red Hat Quay
Red Hat Quay exports a Prometheus- and Grafana-compatible endpoint on each instance to allow for easy monitoring and alerting.
12.1. Exposing the Prometheus endpoint
12.1.1. Standalone Red Hat Quay
When using podman run
to start the Quay
container, expose the metrics port 9091
:
$ sudo podman run -d --rm -p 80:8080 -p 443:8443 -p 9091:9091 \
  --name=quay \
  -v $QUAY/config:/conf/stack:Z \
  -v $QUAY/storage:/datastorage:Z \
  registry.redhat.io/quay/quay-rhel8:v3.9.7
The metrics will now be available:
$ curl quay.example.com:9091/metrics
See Monitoring Quay with Prometheus and Grafana for details on configuring Prometheus and Grafana to monitor Quay repository counts.
12.1.2. Red Hat Quay Operator
Determine the cluster IP for the quay-metrics
service:
$ oc get services -n quay-enterprise
NAME                                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                             AGE
example-registry-clair-app            ClusterIP   172.30.61.161    <none>        80/TCP,8089/TCP                     18h
example-registry-clair-postgres       ClusterIP   172.30.122.136   <none>        5432/TCP                            18h
example-registry-quay-app             ClusterIP   172.30.72.79     <none>        443/TCP,80/TCP,8081/TCP,55443/TCP   18h
example-registry-quay-config-editor   ClusterIP   172.30.185.61    <none>        80/TCP                              18h
example-registry-quay-database        ClusterIP   172.30.114.192   <none>        5432/TCP                            18h
example-registry-quay-metrics         ClusterIP   172.30.37.76     <none>        9091/TCP                            18h
example-registry-quay-redis           ClusterIP   172.30.157.248   <none>        6379/TCP                            18h
Connect to your cluster and access the metrics using the cluster IP and port for the quay-metrics
service:
$ oc debug node/master-0
sh-4.4# curl 172.30.37.76:9091/metrics
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 4.0447e-05
go_gc_duration_seconds{quantile="0.25"} 6.2203e-05
...
12.1.3. Setting up Prometheus to consume metrics
Prometheus needs a way to access all Red Hat Quay instances running in a cluster. In the typical setup, this is done by listing all the Red Hat Quay instances in a single named DNS entry, which is then given to Prometheus.
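As a sketch, a Prometheus scrape configuration that consumes such a DNS entry might look like the following; the job name, the DNS name quay.example.com, and the scrape interval are assumptions used only for illustration:
scrape_configs:
  - job_name: quay
    scrape_interval: 30s
    dns_sd_configs:
      - names:
          - quay.example.com
        type: A
        port: 9091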
12.1.4. DNS configuration under Kubernetes
A simple Kubernetes service can be configured to provide the DNS entry for Prometheus.
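A minimal sketch of such a Service is shown below. It assumes the Red Hat Quay pods run in the quay-enterprise namespace and carry a quay-component: quay-app label, both of which you should adjust to match your deployment; setting clusterIP: None makes the Service headless so its DNS name resolves to every matching pod.
apiVersion: v1
kind: Service
metadata:
  name: quay-metrics-dns
  namespace: quay-enterprise
spec:
  clusterIP: None
  selector:
    quay-component: quay-app
  ports:
    - name: metrics
      port: 9091
      targetPort: 9091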
12.1.5. DNS configuration for a manual cluster
SkyDNS is a simple solution for managing this DNS record when not using Kubernetes. SkyDNS can run on an etcd cluster. Entries for each Red Hat Quay instance in the cluster can be added and removed in the etcd store. SkyDNS will regularly read them from there and update the list of Quay instances in the DNS record accordingly.
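For illustration only, registering one Red Hat Quay instance with SkyDNS might look like the following sketch; the key layout follows SkyDNS conventions (a reversed domain under /skydns), and the domain, instance name, and IP address are assumptions. Depending on your etcd version, the command is etcdctl set (v2 API) or etcdctl put (v3 API).
$ etcdctl put /skydns/com/example/quay/instance-1 '{"host": "192.0.2.10"}'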
12.2. Introduction to metrics
Red Hat Quay provides metrics to help monitor the registry, including metrics for general registry usage, uploads, downloads, garbage collection, and authentication.
12.2.1. General registry statistics
General registry statistics can indicate how large the registry has grown.
Metric name | Description |
---|---|
quay_user_rows | Number of users in the database |
quay_robot_rows | Number of robot accounts in the database |
quay_org_rows | Number of organizations in the database |
quay_repository_rows | Number of repositories in the database |
quay_security_scanning_unscanned_images_remaining_total | Number of images that are not scanned by the latest security scanner |
Sample metrics output
# HELP quay_user_rows number of users in the database # TYPE quay_user_rows gauge quay_user_rows{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="65",process_name="globalpromstats.py"} 3 # HELP quay_robot_rows number of robot accounts in the database # TYPE quay_robot_rows gauge quay_robot_rows{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="65",process_name="globalpromstats.py"} 2 # HELP quay_org_rows number of organizations in the database # TYPE quay_org_rows gauge quay_org_rows{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="65",process_name="globalpromstats.py"} 2 # HELP quay_repository_rows number of repositories in the database # TYPE quay_repository_rows gauge quay_repository_rows{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="65",process_name="globalpromstats.py"} 4 # HELP quay_security_scanning_unscanned_images_remaining number of images that are not scanned by the latest security scanner # TYPE quay_security_scanning_unscanned_images_remaining gauge quay_security_scanning_unscanned_images_remaining{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 5
12.2.2. Queue items
The queue items metrics provide information on the multiple queues used by Quay for managing work.
Metric name | Description |
---|---|
quay_queue_items_available | Number of items in a specific queue |
quay_queue_items_locked | Number of items that are running |
quay_queue_items_available_unlocked | Number of items that are waiting to be processed |
Metric labels
queue_name: The name of the queue. One of:
- exportactionlogs: Queued requests to export action logs. These logs are then processed and put in storage. A link is then sent to the requester via email.
- namespacegc: Queued namespaces to be garbage collected
- notification: Queue for repository notifications to be sent out
- repositorygc: Queued repositories to be garbage collected
- secscanv4: Notification queue specific for Clair V4
- dockerfilebuild: Queue for Quay docker builds
- imagestoragereplication: Queued blob to be replicated across multiple storages
- chunk_cleanup: Queued blob segments that need to be deleted. This is only used by some storage implementations, for example, Swift.
For example, the queue labelled repositorygc contains the repositories marked for deletion by the repository garbage collection worker. For metrics with a queue_name label of repositorygc:
- quay_queue_items_locked is the number of repositories currently being deleted.
- quay_queue_items_available_unlocked is the number of repositories waiting to get processed by the worker.
Sample metrics output
# HELP quay_queue_items_available number of queue items that have not expired # TYPE quay_queue_items_available gauge quay_queue_items_available{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="63",process_name="exportactionlogsworker.py",queue_name="exportactionlogs"} 0 ... # HELP quay_queue_items_available_unlocked number of queue items that have not expired and are not locked # TYPE quay_queue_items_available_unlocked gauge quay_queue_items_available_unlocked{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="63",process_name="exportactionlogsworker.py",queue_name="exportactionlogs"} 0 ... # HELP quay_queue_items_locked number of queue items that have been acquired # TYPE quay_queue_items_locked gauge quay_queue_items_locked{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="63",process_name="exportactionlogsworker.py",queue_name="exportactionlogs"} 0
12.2.3. Garbage collection metrics
These metrics show you how many resources have been removed by garbage collection (gc). They show how many times the gc workers have run and how many namespaces, repositories, and blobs were removed.
Metric name | Description |
---|---|
quay_gc_iterations_total | Number of iterations by the GCWorker |
quay_gc_namespaces_purged_total | Number of namespaces purged by the NamespaceGCWorker |
quay_gc_repos_purged_total | Number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker |
quay_gc_storage_blobs_deleted_total | Number of storage blobs deleted |
Sample metrics output
# TYPE quay_gc_iterations_created gauge quay_gc_iterations_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189714e+09 ... # HELP quay_gc_iterations_total number of iterations by the GCWorker # TYPE quay_gc_iterations_total counter quay_gc_iterations_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ... # TYPE quay_gc_namespaces_purged_created gauge quay_gc_namespaces_purged_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189433e+09 ... # HELP quay_gc_namespaces_purged_total number of namespaces purged by the NamespaceGCWorker # TYPE quay_gc_namespaces_purged_total counter quay_gc_namespaces_purged_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 .... # TYPE quay_gc_repos_purged_created gauge quay_gc_repos_purged_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.631782319018925e+09 ... # HELP quay_gc_repos_purged_total number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker # TYPE quay_gc_repos_purged_total counter quay_gc_repos_purged_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ... # TYPE quay_gc_storage_blobs_deleted_created gauge quay_gc_storage_blobs_deleted_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189059e+09 ... # HELP quay_gc_storage_blobs_deleted_total number of storage blobs deleted # TYPE quay_gc_storage_blobs_deleted_total counter quay_gc_storage_blobs_deleted_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ...
12.2.3.1. Multipart uploads metrics
The multipart uploads metrics show the number of blob uploads to storage (S3, Rados, GoogleCloudStorage, RHOCS). These can help identify issues when Quay is unable to correctly upload blobs to storage.
Metric name | Description |
---|---|
quay_multipart_uploads_started_total | Number of multipart uploads to Quay storage that started |
quay_multipart_uploads_completed_total | Number of multipart uploads to Quay storage that completed |
Sample metrics output
# TYPE quay_multipart_uploads_completed_created gauge quay_multipart_uploads_completed_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823308284895e+09 ... # HELP quay_multipart_uploads_completed_total number of multipart uploads to Quay storage that completed # TYPE quay_multipart_uploads_completed_total counter quay_multipart_uploads_completed_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 # TYPE quay_multipart_uploads_started_created gauge quay_multipart_uploads_started_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823308284352e+09 ... # HELP quay_multipart_uploads_started_total number of multipart uploads to Quay storage that started # TYPE quay_multipart_uploads_started_total counter quay_multipart_uploads_started_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ...
12.2.4. Image push / pull metrics
A number of metrics are available related to pushing and pulling images.
12.2.4.1. Image pulls total
Metric name | Description |
---|---|
quay_registry_image_pulls_total | The number of images downloaded from the registry. |
Metric labels
- protocol: the registry protocol used (should always be v2)
- ref: ref used to pull - tag, manifest
- status: http return code of the request
12.2.4.2. Image bytes pulled
Metric name | Description |
---|---|
quay_registry_image_pulled_estimated_bytes_total | The number of bytes downloaded from the registry |
Metric labels
- protocol: the registry protocol used (should always be v2)
12.2.4.3. Image pushes total
Metric name | Description |
---|---|
quay_registry_image_pushes_total | The number of images uploaded to the registry. |
Metric labels
- protocol: the registry protocol used (should always be v2)
- pstatus: http return code of the request
- pmedia_type: the uploaded manifest type
12.2.4.4. Image bytes pushed
Metric name | Description |
---|---|
quay_registry_image_pushed_bytes_total | The number of bytes uploaded to the registry |
Sample metrics output
# HELP quay_registry_image_pushed_bytes_total number of bytes pushed to the registry # TYPE quay_registry_image_pushed_bytes_total counter quay_registry_image_pushed_bytes_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="221",process_name="registry:application"} 0 ...
12.2.5. Authentication metrics
The authentication metrics provide the number of authentication requests, labeled by type and whether it succeeded or not. For example, this metric could be used to monitor failed basic authentication requests.
Metric name | Description |
---|---|
quay_authentication_attempts_total | Number of authentication attempts across the registry and API |
Metric labels
auth_kind: The type of auth used, including:
- basic
- oauth
- credentials
- success: true or false
Sample metrics output
# TYPE quay_authentication_attempts_created gauge quay_authentication_attempts_created{auth_kind="basic",host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="221",process_name="registry:application",success="True"} 1.6317843039374158e+09 ... # HELP quay_authentication_attempts_total number of authentication attempts across the registry and API # TYPE quay_authentication_attempts_total counter quay_authentication_attempts_total{auth_kind="basic",host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="221",process_name="registry:application",success="True"} 2 ...
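For example, to watch for failed basic authentication attempts as suggested above, you can query the rate of failures through the Prometheus HTTP API. This is a sketch only: the Prometheus hostname is an assumption, and the success="False" label value mirrors the capitalization shown in the sample output.
$ curl -G 'http://prometheus.example.com:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(quay_authentication_attempts_total{auth_kind="basic",success="False"}[5m]))'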
Chapter 13. Red Hat Quay quota management and enforcement overview
With Red Hat Quay, users have the ability to report storage consumption and to contain registry growth by establishing configured storage quota limits. On-premise Red Hat Quay users are now equipped with the following capabilities to manage the capacity limits of their environment:
- Quota reporting: With this feature, a superuser can track the storage consumption of all their organizations. Additionally, users can track the storage consumption of their assigned organization.
- Quota management: With this feature, a superuser can define soft and hard checks for Red Hat Quay users. Soft checks tell users if the storage consumption of an organization reaches their configured threshold. Hard checks prevent users from pushing to the registry when storage consumption reaches the configured limit.
Together, these features allow service owners of a Red Hat Quay registry to define service level agreements and support a healthy resource budget.
13.1. Quota management architecture
With the quota management feature enabled, individual blob sizes are summed at the repository and namespace level. For example, if two tags in the same repository reference the same blob, the size of that blob is only counted once towards the repository total. Additionally, manifest list totals are counted toward the repository total.
Because manifest list totals are counted toward the repository total, the total quota consumed when upgrading from a previous version of Red Hat Quay might be reported differently in Red Hat Quay 3.9. In some cases, the new total might go over a repository’s previously-set limit. Red Hat Quay administrators might have to adjust the allotted quota of a repository to account for these changes.
The quota management feature works by calculating the size of existing repositories and namespaces with a backfill worker, and then adding or subtracting from the total for every image that is pushed or garbage collected afterwards. Additionally, the subtraction from the total happens when the manifest is garbage collected.
Because subtraction occurs from the total when the manifest is garbage collected, there is a delay in the size calculation until it is able to be garbage collected. For more information about garbage collection, see Red Hat Quay garbage collection.
The following database tables hold the quota repository size, quota namespace size, and quota registry size, in bytes, of a Red Hat Quay repository within an organization:
-
QuotaRepositorySize
-
QuotaNameSpaceSize
-
QuotaRegistrySize
The organization size is calculated by the backfill worker to ensure that it is not duplicated. When an image push is initialized, the user’s organization storage is validated to check if it is beyond the configured quota limits. If an image push exceeds defined quota limitations, a soft or hard check occurs:
- For a soft check, users are notified.
- For a hard check, the push is stopped.
If storage consumption is within configured quota limits, the push is allowed to proceed.
Image manifest deletion follows a similar flow, whereby the links between associated image tags and the manifest are deleted. Additionally, after the image manifest is deleted, the repository size is recalculated and updated in the QuotaRepositorySize
, QuotaNameSpaceSize
, and QuotaRegistrySize
tables.
13.2. Quota management limitations
Quota management helps organizations to maintain resource consumption. One limitation of quota management is that calculating resource consumption on push results in the calculation becoming part of the push’s critical path. Without this, usage data might drift.
The maximum storage quota size is dependent on the selected database:
Database | Maximum quota size |
---|---|
Postgres | 8388608 TB |
MySQL | 8388608 TB |
SQL Server | 16777216 TB |
13.3. Quota management for Red Hat Quay 3.9
If you are upgrading to Red Hat Quay 3.9, you must reconfigure the quota management feature. This is because with Red Hat Quay 3.9, calculation is done differently. As a result, totals prior to Red Hat Quay 3.9 are no longer valid. There are two methods for configuring quota management in Red Hat Quay 3.9, which are detailed in the following sections.
- This is a one time calculation that must be done after you have upgraded to Red Hat Quay 3.9.
- Superuser privileges are required to create, update and delete quotas. While quotas can be set for users as well as organizations, you cannot reconfigure the user quota using the Red Hat Quay UI and you must use the API instead.
13.3.1. Option A: Configuring quota management for Red Hat Quay 3.9 by adjusting the QUOTA_TOTAL_DELAY feature flag
Use the following procedure to recalculate Red Hat Quay 3.9 quota management by adjusting the QUOTA_TOTAL_DELAY
feature flag.
With this recalculation option, the totals appear as 0.00 KB until the allotted time designated for QUOTA_TOTAL_DELAY
.
Prerequisites
- You have upgraded to Red Hat Quay 3.9.
- You are logged into Red Hat Quay 3.9 as a superuser.
Procedure
Deploy Red Hat Quay 3.9 with the following config.yaml settings:
FEATURE_QUOTA_MANAGEMENT: true
FEATURE_GARBAGE_COLLECTION: true
PERMANENTLY_DELETE_TAGS: true
QUOTA_TOTAL_DELAY_SECONDS: 1800 1
RESET_CHILD_MANIFEST_EXPIRATION: true
- 1
- The
QUOTA_TOTAL_DELAY_SECONDS
flag defaults to1800
seconds, or 30 minutes. This allows Red Hat Quay 3.9 to successfully deploy before the quota management feature begins calculating storage consumption for every blob that has been pushed. Setting this flag to a lower number might result in miscalculation; it must be set to a number that is greater than the time it takes your Red Hat Quay deployment to start.1800
is the recommended setting, however larger deployments that take longer than 30 minutes to start might require a longer duration than1800
.
- Navigate to the Red Hat Quay UI and click the name of your Organization.
- The Total Quota Consumed should read 0.00 KB. Additionally, the Backfill Queued indicator should be present.
- After the allotted time, for example, 30 minutes, refresh your Red Hat Quay deployment page and return to your Organization. Now, the Total Quota Consumed should be present.
13.3.2. Option B: Configuring quota management for Red Hat Quay 3.9 by setting QUOTA_TOTAL_DELAY_SECONDS to 0
Use the following procedure to recalculate Red Hat Quay 3.9 quota management by setting QUOTA_TOTAL_DELAY_SECONDS
to 0
.
Using this option prevents the possibility of miscalculations, however, it is more time intensive. Use the following procedure when your Red Hat Quay deployment swaps the FEATURE_QUOTA_MANAGEMENT parameter from false to true.
Prerequisites
- You have upgraded to Red Hat Quay 3.9.
- You are logged into Red Hat Quay 3.9 as a superuser.
Procedure
Deploy Red Hat Quay 3.9 with the following config.yaml settings:
FEATURE_GARBAGE_COLLECTION: true
FEATURE_QUOTA_MANAGEMENT: true
QUOTA_BACKFILL: false
QUOTA_TOTAL_DELAY_SECONDS: 0
PERMANENTLY_DELETE_TAGS: true
RESET_CHILD_MANIFEST_EXPIRATION: true
- Navigate to the Red Hat Quay UI and click the name of your Organization.
- The Total Quota Consumed should read 0.00 KB.
Redeploy Red Hat Quay and set the QUOTA_BACKFILL flag to true. For example:
QUOTA_BACKFILL: true
NoteIf you choose to disable quota management after it has calculated totals, Red Hat Quay marks those totals as stale. If you re-enable the quota management feature again in the future, those namespaces and repositories are recalculated by the backfill worker.
13.4. Testing quota management for Red Hat Quay 3.9
With quota management configured for Red Hat Quay 3.9, duplicative images are now only counted once towards the repository total.
Use the following procedure to test that a duplicative image is only counted once toward the repository total.
Prerequisites
- You have configured quota management for Red Hat Quay 3.9.
Procedure
Pull a sample image, for example,
ubuntu:18.04
, by entering the following command:$ podman pull ubuntu:18.04
Tag the same image twice by entering the following command:
$ podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag1
$ podman tag docker.io/library/ubuntu:18.04 quay-server.example.com/quota-test/ubuntu:tag2
Push the sample image to your organization by entering the following commands:
$ podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag1
$ podman push --tls-verify=false quay-server.example.com/quota-test/ubuntu:tag2
On the Red Hat Quay UI, navigate to Organization and click the Repository Name, for example, quota-test/ubuntu. Then, click Tags. There should be two repository tags,
tag1
andtag2
, each with the same manifest. For example:
However, by clicking on the Organization link, we can see that the Total Quota Consumed is 27.94 MB, meaning that the Ubuntu image has only been accounted for once:
If you delete one of the Ubuntu tags, the Total Quota Consumed remains the same.
NoteIf you have configured the Red Hat Quay time machine to be longer than
0
seconds, subtraction will not happen until those tags pass the time machine window. If you want to expedite permanent deletion, see Permanently deleting an image tag in Red Hat Quay 3.9.
13.5. Setting default quota
To specify a system-wide default storage quota that is applied to every organization and user, you can use the DEFAULT_SYSTEM_REJECT_QUOTA_BYTES configuration flag.
If you configure a specific quota for an organization or user, and then delete that quota, the system-wide default quota will apply if one has been set. Similarly, if you have configured a specific quota for an organization or user, and then modify the system-wide default quota, the updated system-wide default will override any specific settings.
For more information about the DEFAULT_SYSTEM_REJECT_QUOTA_BYTES flag, see the Red Hat Quay configuration fields documentation.
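For example, the relevant config.yaml lines for rejecting pushes once any organization or user exceeds roughly 100 GB might look like the following excerpt; the value shown is only an illustration:
FEATURE_QUOTA_MANAGEMENT: true
DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: 107374182400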
13.6. Establishing quota in Red Hat Quay UI
The following procedure describes how you can report storage consumption and establish storage quota limits.
Prerequisites
- A Red Hat Quay registry.
- A superuser account.
- Enough storage to meet the demands of quota limitations.
Procedure
Create a new organization or choose an existing one. Initially, no quota is configured, as can be seen on the Organization Settings tab:
Log in to the registry as a superuser and navigate to the Manage Organizations tab on the Super User Admin Panel. Click the Options icon of the organization for which you want to create storage quota limits:
Click Configure Quota and enter the initial quota, for example, 10 MB. Then click Apply and Close:
Check that the quota consumed shows 0 of 10 MB on the Manage Organizations tab of the superuser panel:
The consumed quota information is also available directly on the Organization page:
Initial consumed quota
To increase the quota to 100MB, navigate to the Manage Organizations tab on the superuser panel. Click the Options icon and select Configure Quota, setting the quota to 100 MB. Click Apply and then Close:
Pull a sample image by entering the following command:
$ podman pull ubuntu:18.04
Tag the sample image by entering the following command:
$ podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04
Push the sample image to the organization by entering the following command:
$ podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04
On the superuser panel, the quota consumed per organization is displayed:
The Organization page shows the total proportion of the quota used by the image:
Total Quota Consumed for first image
Pull a second sample image by entering the following command:
$ podman pull nginx
Tag the second image by entering the following command:
$ podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx
Push the second image to the organization by entering the following command:
$ podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx
The Organization page shows the total proportion of the quota used by each repository in that organization:
Total Quota Consumed for each repository
Create reject and warning limits:
From the superuser panel, navigate to the Manage Organizations tab. Click the Options icon for the organization and select Configure Quota. In the Quota Policy section, with the Action type set to Reject, set the Quota Threshold to 80 and click Add Limit:
To create a warning limit, select Warning as the Action type, set the Quota Threshold to 70 and click Add Limit:
Click Close on the quota popup. The limits are viewable, but not editable, on the Settings tab of the Organization page:
Push an image where the reject limit is exceeded:
Because the reject limit (80%) has been set to below the current repository size (~83%), the next pushed image is rejected automatically.
Sample image push
$ podman pull ubuntu:20.04
$ podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04
$ podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04
Sample output when quota exceeded
Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace
When limits are exceeded, notifications are displayed in the UI:
Quota notifications
13.7. Establishing quota with the Red Hat Quay API
When an organization is first created, it does not have a quota applied. Use the /api/v1/organization/{organization}/quota endpoint:
Sample command
$ curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq
Sample output
[]
13.7.1. Setting the quota
To set a quota for an organization, POST data to the /api/v1/organization/{orgname}/quota endpoint:
Sample command
$ curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"limit_bytes": 10485760}' https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/api/v1/organization/testorg/quota | jq
Sample output
"Created"
13.7.2. Viewing the quota
To see the applied quota, GET data from the /api/v1/organization/{orgname}/quota endpoint:
Sample command
$ curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq
Sample output
[ { "id": 1, "limit_bytes": 10485760, "default_config": false, "limits": [], "default_config_exists": false } ]
13.7.3. Modifying the quota
To change the existing quota, in this instance from 10 MB to 100 MB, PUT data to the /api/v1/organization/{orgname}/quota/{quota_id} endpoint:
Sample command
$ curl -k -X PUT -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"limit_bytes": 104857600}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1 | jq
Sample output
{ "id": 1, "limit_bytes": 104857600, "default_config": false, "limits": [], "default_config_exists": false }
13.7.4. Pushing images
To see the storage consumed, push various images to the organization.
13.7.4.1. Pushing ubuntu:18.04
Push ubuntu:18.04 to the organization from the command line:
Sample commands
$ podman pull ubuntu:18.04

$ podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04

$ podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04
13.7.4.2. Using the API to view quota usage
To view the storage consumed, GET data from the /api/v1/repository endpoint:
Sample command
$ curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true"a=true' | jq
Sample output
{ "repositories": [ { "namespace": "testorg", "name": "ubuntu", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 27959066, "configured_quota": 104857600 }, "last_modified": 1651225630, "popularity": 0, "is_starred": false } ] }
13.7.4.3. Pushing another image
Pull, tag, and push a second image, for example, nginx:
Sample commands
$ podman pull nginx

$ podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx

$ podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx
To view the quota report for the repositories in the organization, use the /api/v1/repository endpoint:
Sample command
$ curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true"a=true'
Sample output
{ "repositories": [ { "namespace": "testorg", "name": "ubuntu", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 27959066, "configured_quota": 104857600 }, "last_modified": 1651225630, "popularity": 0, "is_starred": false }, { "namespace": "testorg", "name": "nginx", "description": null, "is_public": false, "kind": "image", "state": "NORMAL", "quota_report": { "quota_bytes": 59231659, "configured_quota": 104857600 }, "last_modified": 1651229507, "popularity": 0, "is_starred": false } ] }
To view the quota information in the organization details, use the /api/v1/organization/{orgname} endpoint:
Sample command
$ curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg' | jq
Sample output
{ "name": "testorg", ... "quotas": [ { "id": 1, "limit_bytes": 104857600, "limits": [] } ], "quota_report": { "quota_bytes": 87190725, "configured_quota": 104857600 } }
13.7.5. Rejecting pushes using quota limits
If an image push exceeds defined quota limitations, a soft or hard check occurs:
- For a soft check, or warning, users are notified.
- For a hard check, or reject, the push is terminated.
13.7.5.1. Setting reject and warning limits
To set reject and warning limits, POST data to the /api/v1/organization/{orgname}/quota/{quota_id}/limit endpoint:
Sample reject limit command
$ curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"type":"Reject","threshold_percent":80}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit
Sample warning limit command
$ curl -k -X POST -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' -d '{"type":"Warning","threshold_percent":50}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit
13.7.5.2. Viewing reject and warning limits
To view the reject and warning limits, use the /api/v1/organization/{orgname}/quota endpoint:
View quota limits
$ curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq
Sample output for quota limits
[ { "id": 1, "limit_bytes": 104857600, "default_config": false, "limits": [ { "id": 2, "type": "Warning", "limit_percent": 50 }, { "id": 1, "type": "Reject", "limit_percent": 80 } ], "default_config_exists": false } ]
13.7.5.3. Pushing an image when the reject limit is exceeded
In this example, the reject limit (80%) has been set to below the current repository size (~83%), so the next push should automatically be rejected.
Push a sample image to the organization from the command line:
Sample image push
$ podman pull ubuntu:20.04

$ podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04

$ podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04
Sample output when quota exceeded
Getting image source signatures
Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB
Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB
Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB
WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace
Getting image source signatures
Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB
Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB
Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB
WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace
Getting image source signatures
Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB
Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB
Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB
WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace
Getting image source signatures
Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB
Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB
Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB
Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace
13.7.5.4. Notifications for limits exceeded
When limits are exceeded, a notification appears:
Quota notifications
13.8. Calculating the total registry size in Red Hat Quay 3.9
Use the following procedure to queue a registry total calculation.
Calculating the registry total is performed on demand and is database intensive. Use it with caution.
Prerequisites
- You have upgraded to Red Hat Quay 3.9.
- You are logged in as a Red Hat Quay superuser.
Procedure
- On the Red Hat Quay UI, click your username → Super User Admin Panel.
- In the navigation pane, click Manage Organizations.
- Click Calculate, next to Total Registry Size: 0.00 KB, Updated: Never, Calculation required. Then, click Ok.
After a few minutes, depending on the size of your registry, refresh the page. Now, the Total Registry Size should be calculated. For example:
13.9. Permanently deleting an image tag
In some cases, users might want to delete an image tag outside of the time machine window. Use the following procedure to manually delete an image tag permanently.
The results of the following procedure cannot be undone. Use with caution.
13.9.1. Permanently deleting an image tag using the Red Hat Quay v2 UI
Use the following procedure to permanently delete an image tag using the Red Hat Quay v2 UI.
Prerequisites
- You have set FEATURE_UI_V2 to true in your config.yaml file.
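For example, a minimal config.yaml fragment that satisfies this prerequisite looks like the following; the rest of your configuration file remains unchanged:

FEATURE_UI_V2: true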
Procedure
Ensure that the PERMANENTLY_DELETE_TAGS and RESET_CHILD_MANIFEST_EXPIRATION parameters are set to true in your config.yaml file. For example:

PERMANENTLY_DELETE_TAGS: true
RESET_CHILD_MANIFEST_EXPIRATION: true
- In the navigation pane, click Repositories.
- Click the name of the repository, for example, quayadmin/busybox.
- Check the box of the image tag that will be deleted, for example, test.
Click Actions → Permanently Delete.
Important: This action is permanent and cannot be undone.
13.9.2. Permanently deleting an image tag using the Red Hat Quay legacy UI
Use the following procedure to permanently delete an image tag using the Red Hat Quay legacy UI.
Procedure
Ensure that the PERMANENTLY_DELETE_TAGS and RESET_CHILD_MANIFEST_EXPIRATION parameters are set to true in your config.yaml file. For example:

PERMANENTLY_DELETE_TAGS: true
RESET_CHILD_MANIFEST_EXPIRATION: true
- On the Red Hat Quay UI, click Repositories and the name of the repository that contains the image tag you will delete, for example, quayadmin/busybox.
- In the navigation pane, click Tags.
- Check the box of the name of the tag you want to delete, for example, test.
- Click the Actions drop down menu and select Delete Tags → Delete Tag.
- Click Tag History in the navigation pane.
On the name of the tag that was just deleted, for example, test, click Delete test under the Permanently Delete category. For example:
Permanently delete image tag
Important: This action is permanent and cannot be undone.
Chapter 14. Geo-replication
Geo-replication allows multiple, geographically distributed Red Hat Quay deployments to work as a single registry from the perspective of a client or user. It significantly improves push and pull performance in a globally-distributed Red Hat Quay setup. Image data is asynchronously replicated in the background with transparent failover and redirect for clients.
Geo-replication is supported on both standalone and Operator-based deployments of Red Hat Quay.
14.1. Geo-replication features
- When geo-replication is configured, container image pushes will be written to the preferred storage engine for that Red Hat Quay instance. This is typically the nearest storage backend within the region.
- After the initial push, image data will be replicated in the background to other storage engines.
- The list of replication locations is configurable and those can be different storage backends.
- An image pull will always use the closest available storage engine, to maximize pull performance.
- If replication has not been completed yet, the pull will use the source storage backend instead.
14.2. Geo-replication requirements and constraints
- In geo-replicated setups, Red Hat Quay requires that all regions are able to read and write to every other region's object storage. Object storage must be geographically accessible by all other regions.
- In case of an object storage system failure of one geo-replicating site, that site’s Red Hat Quay deployment must be shut down so that clients are redirected to the remaining site with intact storage systems by a global load balancer. Otherwise, clients will experience pull and push failures.
- Red Hat Quay has no internal awareness of the health or availability of the connected object storage system. Users must configure a global load balancer (LB) to monitor the health of your distributed system and to route traffic to different sites based on their storage status.
- To check the status of your geo-replication deployment, you must use the /health/endtoend checkpoint, which is used for global health monitoring. You must configure the redirect manually using the /health/endtoend endpoint. The /health/instance endpoint only checks local instance health. A sample health check is shown after this list.
- If the object storage system of one site becomes unavailable, there will be no automatic redirect to the remaining storage system, or systems, of the remaining site, or sites.
- Geo-replication is asynchronous. The permanent loss of a site incurs the loss of the data that has been saved in that site's object storage system but has not yet been replicated to the remaining sites at the time of failure.
A single database, and therefore all metadata and Red Hat Quay configuration, is shared across all regions.
Geo-replication does not replicate the database. In the event of an outage, Red Hat Quay with geo-replication enabled will not failover to another database.
- A single Redis cache is shared across the entire Red Hat Quay setup and needs to be accessible by all Red Hat Quay pods.
- The exact same configuration should be used across all regions, with the exception of the storage backend, which can be configured explicitly using the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable.
- Geo-replication requires object storage in each region. It does not work with local storage.
- Each region must be able to access every storage engine in each region, which requires a network path.
- Alternatively, the storage proxy option can be used.
- The entire storage backend, for example, all blobs, is replicated. Repository mirroring, by contrast, can be limited to a repository, or an image.
- All Red Hat Quay instances must share the same entrypoint, typically through a load balancer.
- All Red Hat Quay instances must have the same set of superusers, as they are defined inside the common configuration file.
- Geo-replication requires your Clair configuration to be set to unmanaged. An unmanaged Clair database allows the Red Hat Quay Operator to work in a geo-replicated environment, where multiple instances of the Red Hat Quay Operator must communicate with the same database. For more information, see Advanced Clair configuration.
- Geo-replication requires SSL/TLS certificates and keys. For more information, see Using SSL/TLS to protect connections to Red Hat Quay.
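For illustration, a minimal external check of the global health endpoint might look like the following; the hostname is a placeholder for your own site or global load balancer, and an HTTP 200 response indicates that the checked site is healthy:

$ curl -k https://<quay_site_or_load_balancer_hostname>/health/endtoend

The same command can be pointed at each individual site to compare their status.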
If the above requirements cannot be met, you should instead use two or more distinct Red Hat Quay deployments and take advantage of repository mirroring functions.
14.3. Geo-replication using standalone Red Hat Quay
In the following image, Red Hat Quay is running standalone in two separate regions, with a common database and a common Redis instance. Localized image storage is provided in each region and image pulls are served from the closest available storage engine. Container image pushes are written to the preferred storage engine for the Red Hat Quay instance, and will then be replicated, in the background, to the other storage engines.
If Clair fails in one cluster, for example, the US cluster, US users would not see vulnerability reports in Red Hat Quay for the second cluster (EU). This is because all Clair instances have the same state. When Clair fails, it is usually because of a problem within the cluster.
Geo-replication architecture
14.3.1. Enable storage replication - standalone Quay
Use the following procedure to enable storage replication on Red Hat Quay.
Procedure
- In your Red Hat Quay config editor, locate the Registry Storage section.
- Click Enable Storage Replication.
- Add each of the storage engines to which data will be replicated. All storage engines to be used must be listed.
If complete replication of all images to all storage engines is required, click Replicate to storage engine by default under each storage engine configuration. This ensures that all images are replicated to that storage engine.
Note: To enable per-namespace replication, contact Red Hat Quay support.
- When finished, click Save Configuration Changes. The configuration changes will take effect after Red Hat Quay restarts.
After adding storage and enabling Replicate to storage engine by default for geo-replication, you must sync existing image data across all storage. To do this, you must oc exec (alternatively, docker exec or kubectl exec) into the container and enter the following commands:

# scl enable python27 bash
# python -m util.backfillreplication

Note: This is a one-time operation to sync content after adding new storage.
14.3.2. Run Red Hat Quay with storage preferences
- Copy the config.yaml to all machines running Red Hat Quay
For each machine in each region, add a QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable with the preferred storage engine for the region in which the machine is running.
For example, for a machine running in Europe with the config directory on the host available from $QUAY/config:

$ sudo podman run -d --rm -p 80:8080 -p 443:8443 \
   --name=quay \
   -v $QUAY/config:/conf/stack:Z \
   -e QUAY_DISTRIBUTED_STORAGE_PREFERENCE=europestorage \
   registry.redhat.io/quay/quay-rhel8:v3.9.7

Note: The value of the environment variable specified must match the name of a Location ID as defined in the config panel.
- Restart all Red Hat Quay containers
14.3.3. Removing a geo-replicated site from your standalone Red Hat Quay deployment
By using the following procedure, Red Hat Quay administrators can remove sites in a geo-replicated setup.
Prerequisites
- You have configured Red Hat Quay geo-replication with at least two sites, for example, usstorage and eustorage.
- Each site has its own Organization, Repository, and image tags.
Procedure
Sync the blobs between all of your defined sites by running the following command:
$ python -m util.backfillreplication
Warning: Prior to removing storage engines from your Red Hat Quay config.yaml file, you must ensure that all blobs are synced between all defined sites. Complete this step before proceeding.
- In your Red Hat Quay config.yaml file for site usstorage, remove the DISTRIBUTED_STORAGE_CONFIG entry for the eustorage site.
Enter the following command to obtain a list of running containers:
$ podman ps
Example output
CONTAINER ID  IMAGE                                                                      COMMAND         CREATED         STATUS             PORTS                                        NAMES
92c5321cde38  registry.redhat.io/rhel8/redis-5:1                                         run-redis       11 days ago     Up 11 days ago     0.0.0.0:6379->6379/tcp                       redis
4e6d1ecd3811  registry.redhat.io/rhel8/postgresql-13:1-109                               run-postgresql  33 seconds ago  Up 34 seconds ago  0.0.0.0:5432->5432/tcp                       postgresql-quay
d2eadac74fda  registry-proxy.engineering.redhat.com/rh-osbs/quay-quay-rhel8:v3.9.0-131   registry        4 seconds ago   Up 4 seconds ago   0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp  quay
Enter the following command to execute a shell inside of the PostgreSQL container:
$ podman exec -it postgresql-quay -- /bin/bash
Enter psql by running the following command:
bash-4.4$ psql
Enter the following command to reveal a list of sites in your geo-replicated deployment:
quay=# select * from imagestoragelocation;
Example output
 id |   name
----+-----------
  1 | usstorage
  2 | eustorage
Enter the following command to exit the postgres CLI to re-enter bash-4.4:
\q
Enter the following command to permanently remove the eustorage site:
Important: The following action cannot be undone. Use with caution.
bash-4.4$ python -m util.removelocation eustorage
Example output
WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y
Deleted placement 30
Deleted placement 31
Deleted placement 32
Deleted placement 33
Deleted location eustorage
14.4. Geo-replication using the Red Hat Quay Operator
In the example shown above, the Red Hat Quay Operator is deployed in two separate regions, with a common database and a common Redis instance. Localized image storage is provided in each region and image pulls are served from the closest available storage engine. Container image pushes are written to the preferred storage engine for the Quay instance, and will then be replicated, in the background, to the other storage engines.
Because the Operator now manages the Clair security scanner and its database separately, geo-replication setups can be leveraged so that they do not manage the Clair database. Instead, an external shared database would be used. Red Hat Quay and Clair support several providers and vendors of PostgreSQL, which can be found in the Red Hat Quay 3.x test matrix. Additionally, the Operator also supports custom Clair configurations that can be injected into the deployment, which allows users to configure Clair with the connection credentials for the external database.
14.4.1. Setting up geo-replication on OpenShift Container Platform
Use the following procedure to set up geo-replication on OpenShift Container Platform.
Procedure
- Deploy a postgres instance for Red Hat Quay.
Login to the database by entering the following command:
psql -U <username> -h <hostname> -p <port> -d <database_name>
Create a database for Red Hat Quay named quay. For example:

CREATE DATABASE quay;
Enable the pg_trgm extension inside the database:

\c quay;
CREATE EXTENSION IF NOT EXISTS pg_trgm;
Deploy a Redis instance:
Note:
- Deploying a Redis instance might be unnecessary if your cloud provider has its own service.
- Deploying a Redis instance is required if you are leveraging Builders.
- Deploy a VM for Redis
- Verify that it is accessible from the clusters where Red Hat Quay is running
- Port 6379/TCP must be open
Run Redis inside the instance:

sudo dnf install -y podman
podman run -d --name redis -p 6379:6379 redis
- Create two object storage backends, one for each cluster. Ideally, one object storage bucket will be close to the first, or primary, cluster, and the other will run closer to the second, or secondary, cluster.
- Deploy the clusters with the same config bundle, using environment variable overrides to select the appropriate storage backend for an individual cluster.
- Configure a load balancer to provide a single entry point to the clusters.
14.4.1.1. Configuring geo-replication for the Red Hat Quay Operator on OpenShift Container Platform
Use the following procedure to configure geo-replication for the Red Hat Quay Operator.
Procedure
Create a config.yaml file that is shared between clusters. This config.yaml file contains the details for the common PostgreSQL, Redis and storage backends:
Geo-replication config.yaml file

SERVER_HOSTNAME: <georep.quayteam.org or any other name> 1
DB_CONNECTION_ARGS:
  autorollback: true
  threadlocals: true
DB_URI: postgresql://postgres:password@10.19.0.1:5432/quay 2
BUILDLOGS_REDIS:
  host: 10.19.0.2
  port: 6379
USER_EVENTS_REDIS:
  host: 10.19.0.2
  port: 6379
DISTRIBUTED_STORAGE_CONFIG:
  usstorage:
    - GoogleCloudStorage
    - access_key: GOOGQGPGVMASAAMQABCDEFG
      bucket_name: georep-test-bucket-0
      secret_key: AYWfEaxX/u84XRA2vUX5C987654321
      storage_path: /quaygcp
  eustorage:
    - GoogleCloudStorage
    - access_key: GOOGQGPGVMASAAMQWERTYUIOP
      bucket_name: georep-test-bucket-1
      secret_key: AYWfEaxX/u84XRA2vUX5Cuj12345678
      storage_path: /quaygcp
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
  - usstorage
  - eustorage
DISTRIBUTED_STORAGE_PREFERENCE:
  - usstorage
  - eustorage
FEATURE_STORAGE_REPLICATION: true
1. A proper SERVER_HOSTNAME must be used for the route and must match the hostname of the global load balancer.
2. To retrieve the configuration file for a Clair instance deployed using the OpenShift Container Platform Operator, see Retrieving the Clair config.
Create the configBundleSecret by entering the following command:

$ oc create secret generic --from-file config.yaml=./config.yaml georep-config-bundle
In each of the clusters, set the configBundleSecret and use the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environmental variable override to configure the appropriate storage for that cluster. For example:
Note: The config.yaml file between both deployments must match. If making a change to one cluster, it must also be changed in the other.
US cluster QuayRegistry example

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: georep-config-bundle
  components:
    - kind: objectstorage
      managed: false
    - kind: route
      managed: true
    - kind: tls
      managed: false
    - kind: postgres
      managed: false
    - kind: clairpostgres
      managed: false
    - kind: redis
      managed: false
    - kind: quay
      managed: true
      overrides:
        env:
          - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE
            value: usstorage
    - kind: mirror
      managed: true
      overrides:
        env:
          - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE
            value: usstorage
Note: Because SSL/TLS is unmanaged and the route is managed, you must supply the certificates either with the config tool or directly in the config bundle. For more information, see Configuring TLS and routes.
European cluster
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: georep-config-bundle
  components:
    - kind: objectstorage
      managed: false
    - kind: route
      managed: true
    - kind: tls
      managed: false
    - kind: postgres
      managed: false
    - kind: clairpostgres
      managed: false
    - kind: redis
      managed: false
    - kind: quay
      managed: true
      overrides:
        env:
          - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE
            value: eustorage
    - kind: mirror
      managed: true
      overrides:
        env:
          - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE
            value: eustorage
Note: Because SSL/TLS is unmanaged and the route is managed, you must supply the certificates either with the config tool or directly in the config bundle. For more information, see Configuring TLS and routes.
14.4.2. Removing a geo-replicated site from your Red Hat Quay Operator deployment
By using the following procedure, Red Hat Quay administrators can remove sites in a geo-replicated setup.
Prerequisites
- You are logged into OpenShift Container Platform.
- You have configured Red Hat Quay geo-replication with at least two sites, for example, usstorage and eustorage.
- Each site has its own Organization, Repository, and image tags.
Procedure
Sync the blobs between all of your defined sites by running the following command:
$ python -m util.backfillreplication
Warning: Prior to removing storage engines from your Red Hat Quay config.yaml file, you must ensure that all blobs are synced between all defined sites.
When running this command, replication jobs are created and picked up by the replication worker. If there are blobs that need to be replicated, the script returns the UUIDs of the blobs that will be replicated. If you run this command multiple times and the output is empty, it does not mean that the replication process is complete; it means that there are no more blobs to be queued for replication. Use appropriate judgement before proceeding, because the time that replication takes depends on the number of blobs detected.
Alternatively, you could use a third-party cloud tool, such as Microsoft Azure, to check the synchronization status; a simple bucket comparison is shown below.
This step must be completed before proceeding.
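For illustration only, assuming S3-compatible storage and the aws CLI configured for each site, you can compare the object counts and total sizes of the two buckets before proceeding. The bucket names are placeholders, and matching summaries suggest, but do not guarantee, that replication has caught up:

$ aws s3 ls s3://<usstorage_bucket> --recursive --summarize | tail -2

$ aws s3 ls s3://<eustorage_bucket> --recursive --summarize | tail -2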
- In your Red Hat Quay config.yaml file for site usstorage, remove the DISTRIBUTED_STORAGE_CONFIG entry for the eustorage site.
Enter the following command to identify your Quay application pods:
Example output
quay390usstorage-quay-app-5779ddc886-2drh2
quay390eustorage-quay-app-66969cd859-n2ssm
Enter the following command to open an interactive shell session in the usstorage pod:

$ oc rsh quay390usstorage-quay-app-5779ddc886-2drh2
Enter the following command to permanently remove the eustorage site:
Important: The following action cannot be undone. Use with caution.
sh-4.4$ python -m util.removelocation eustorage
Example output
WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y
Deleted placement 30
Deleted placement 31
Deleted placement 32
Deleted placement 33
Deleted location eustorage
14.5. Mixed storage for geo-replication
Red Hat Quay geo-replication supports the use of different and multiple replication targets, for example, using AWS S3 storage on public cloud and using Ceph storage on premise. This complicates the key requirement of granting access to all storage backends from all Red Hat Quay pods and cluster nodes. As a result, it is recommended that you use the following:
- A VPN to prevent visibility of the internal storage, or
- A token pair that only allows access to the specified bucket used by Red Hat Quay
This results in the public cloud instance of Red Hat Quay having access to on-premise storage, but the network will be encrypted, protected, and will use ACLs, thereby meeting security requirements.
If you cannot implement these security measures, it might be preferable to deploy two distinct Red Hat Quay registries and to use repository mirroring as an alternative to geo-replication.
Chapter 15. Backing up and restoring Red Hat Quay on a standalone deployment
Use the content within this section to back up and restore Red Hat Quay in standalone deployments.
15.1. Backing up Red Hat Quay on standalone deployments
This procedure describes how to create a backup of Red Hat Quay on standalone deployments.
Procedure
Create a temporary backup directory, for example, quay-backup:

$ mkdir /tmp/quay-backup
The following example command denotes the local directory that the Red Hat Quay deployment was started in, for example, /opt/quay-install:

$ podman run --name quay-app \
   -v /opt/quay-install/config:/conf/stack:Z \
   -v /opt/quay-install/storage:/datastorage:Z \
   registry.redhat.io/quay/quay-rhel8:v3.9.7
Change into the directory that bind-mounts to /conf/stack inside of the container, for example, /opt/quay-install, by running the following command:

$ cd /opt/quay-install
Compress the contents of your Red Hat Quay deployment into an archive in the quay-backup directory by entering the following command:

$ tar cvf /tmp/quay-backup/quay-backup.tar.gz *
Example output:
config.yaml
config.yaml.bak
extra_ca_certs/
extra_ca_certs/ca.crt
ssl.cert
ssl.key
Back up the Quay container service by entering the following command:
$ podman inspect quay-app | jq -r '.[0].Config.CreateCommand | .[]' | paste -s -d ' ' -

/usr/bin/podman run --name quay-app \
  -v /opt/quay-install/config:/conf/stack:Z \
  -v /opt/quay-install/storage:/datastorage:Z \
  registry.redhat.io/quay/quay-rhel8:v3.9.7
Redirect the contents of your conf/stack/config.yaml file to your temporary quay-config.yaml file by entering the following command:

$ podman exec -it quay cat /conf/stack/config.yaml > /tmp/quay-backup/quay-config.yaml
Obtain the DB_URI located in your temporary quay-config.yaml by entering the following command:

$ grep DB_URI /tmp/quay-backup/quay-config.yaml
Example output:
postgresql://<username>:test123@172.24.10.50/quay
Extract the PostgreSQL contents to your temporary backup directory in a backup .sql file by entering the following command:
$ pg_dump -h 172.24.10.50 -p 5432 -d quay -U <username> -W -O > /tmp/quay-backup/quay-backup.sql
Print the contents of your DISTRIBUTED_STORAGE_CONFIG:

DISTRIBUTED_STORAGE_CONFIG:
  default:
    - S3Storage
    - s3_bucket: <bucket_name>
      storage_path: /registry
      s3_access_key: <s3_access_key>
      s3_secret_key: <s3_secret_key>
      host: <host_name>
Export the AWS_ACCESS_KEY_ID by using the access_key credential obtained in Step 7:

$ export AWS_ACCESS_KEY_ID=<access_key>
Export the AWS_SECRET_ACCESS_KEY by using the secret_key obtained in Step 7:

$ export AWS_SECRET_ACCESS_KEY=<secret_key>
Sync the quay bucket to the /tmp/quay-backup/blob-backup/ directory from the hostname of your DISTRIBUTED_STORAGE_CONFIG:

$ aws s3 sync s3://<bucket_name> /tmp/quay-backup/blob-backup/ --source-region us-east-2
Example output:
download: s3://<user_name>/registry/sha256/9c/9c3181779a868e09698b567a3c42f3744584ddb1398efe2c4ba569a99b823f7a to registry/sha256/9c/9c3181779a868e09698b567a3c42f3744584ddb1398efe2c4ba569a99b823f7a
download: s3://<user_name>/registry/sha256/e9/e9c5463f15f0fd62df3898b36ace8d15386a6813ffb470f332698ecb34af5b0d to registry/sha256/e9/e9c5463f15f0fd62df3898b36ace8d15386a6813ffb470f332698ecb34af5b0d
It is recommended that you delete the quay-config.yaml file after syncing the quay bucket because it contains sensitive information. The quay-config.yaml file will not be lost because it is backed up in the quay-backup.tar.gz file.
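For example, assuming the paths used in this procedure, the temporary file can be removed after the sync completes:

$ rm /tmp/quay-backup/quay-config.yaml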
15.2. Restoring Red Hat Quay on standalone deployments
This procedure describes how to restore Red Hat Quay on standalone deployments.
Prerequisites
- You have backed up your Red Hat Quay deployment.
Procedure
Create a new directory that will bind-mount to /conf/stack inside of the Red Hat Quay container:

$ mkdir /opt/new-quay-install
Copy the contents of your temporary backup directory created in "Backing up Red Hat Quay on standalone deployments" to the new-quay-install directory created in Step 1:

$ cp /tmp/quay-backup/quay-backup.tar.gz /opt/new-quay-install/
Change into the new-quay-install directory by entering the following command:

$ cd /opt/new-quay-install/
Extract the contents of your Red Hat Quay directory:
$ tar xvf /tmp/quay-backup/quay-backup.tar.gz *
Example output:
config.yaml
config.yaml.bak
extra_ca_certs/
extra_ca_certs/ca.crt
ssl.cert
ssl.key
Recall the DB_URI from your backed-up config.yaml file by entering the following command:

$ grep DB_URI config.yaml
Example output:
postgresql://<username>:test123@172.24.10.50/quay
Run the following command to enter the PostgreSQL database server:
$ sudo postgres
Enter psql and create a new database in 172.24.10.50 to restore the quay databases, for example, example_restore_registry_quay_database, by entering the following command:

$ psql "host=172.24.10.50 port=5432 dbname=postgres user=<username> password=test123"
postgres=> CREATE DATABASE example_restore_registry_quay_database;
Example output:
CREATE DATABASE
Connect to the database by running the following command:
postgres=# \c "example_restore_registry_quay_database";
Example output:
You are now connected to database "example_restore_registry_quay_database" as user "postgres".
Create a pg_trgm extension for your Quay database by running the following command:

example_restore_registry_quay_database=> CREATE EXTENSION IF NOT EXISTS pg_trgm;
Example output:
CREATE EXTENSION
Exit the postgres CLI by entering the following command:
\q
Import the database backup to your new database by running the following command:
$ psql "host=172.24.10.50 port=5432 dbname=example_restore_registry_quay_database user=<username> password=test123" -W < /tmp/quay-backup/quay-backup.sql
Example output:
SET
SET
SET
SET
SET
Update the value of DB_URI in your config.yaml from postgresql://<username>:test123@172.24.10.50/quay to postgresql://<username>:test123@172.24.10.50/example_restore_registry_quay_database before restarting the Red Hat Quay deployment.
Note: The DB_URI format is postgresql://<login_user_name>:<login_user_password>@<postgresql_host>/<quay_database>. If you are moving from one PostgreSQL server to another PostgreSQL server, update the value of <login_user_name>, <login_user_password>, and <postgresql_host> at the same time.
In the /opt/new-quay-install directory, print the contents of your DISTRIBUTED_STORAGE_CONFIG bundle:

$ cat config.yaml | grep DISTRIBUTED_STORAGE_CONFIG -A10
Example output:
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - S3Storage
    - s3_bucket: <bucket_name>
      storage_path: /registry
      s3_access_key: <s3_access_key>
      s3_secret_key: <s3_secret_key>
      host: <host_name>
Note: Your DISTRIBUTED_STORAGE_CONFIG in /opt/new-quay-install must be updated before restarting your Red Hat Quay deployment.
Export the AWS_ACCESS_KEY_ID by using the access_key credential obtained in Step 13:

$ export AWS_ACCESS_KEY_ID=<access_key>
Export the AWS_SECRET_ACCESS_KEY by using the secret_key obtained in Step 13:

$ export AWS_SECRET_ACCESS_KEY=<secret_key>
Create a new s3 bucket by entering the following command:
$ aws s3 mb s3://<new_bucket_name> --region us-east-2
Example output:
make_bucket: quay
Upload all blobs to the new s3 bucket by entering the following command:
$ aws s3 sync --no-verify-ssl \
  --endpoint-url <example_endpoint_url> 1
  /tmp/quay-backup/blob-backup/. s3://quay/

1. The Red Hat Quay registry endpoint must be the same before backup and after restore.
Example output:
upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d to s3://quay/datastorage/registry/sha256/50/505edb46ea5d32b5cbe275eb766d960842a52ee77ac225e4dc8abb12f409a30d
upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0 to s3://quay/datastorage/registry/sha256/27/27930dc06c2ee27ac6f543ba0e93640dd21eea458eac47355e8e5989dea087d0
upload: ../../tmp/quay-backup/blob-backup/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec to s3://quay/datastorage/registry/sha256/8c/8c7daf5e20eee45ffe4b36761c4bb6729fb3ee60d4f588f712989939323110ec
...
Before restarting your Red Hat Quay deployment, update the storage settings in your config.yaml:
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - S3Storage
    - s3_bucket: <new_bucket_name>
      storage_path: /registry
      s3_access_key: <s3_access_key>
      s3_secret_key: <s3_secret_key>
      host: <host_name>
Chapter 16. Migrating a standalone Red Hat Quay deployment to a Red Hat Quay Operator deployment
The following procedures allow you to back up a standalone Red Hat Quay deployment and migrate it to the Red Hat Quay Operator on OpenShift Container Platform.
16.1. Backing up a standalone deployment of Red Hat Quay
Procedure
Back up the config.yaml of your standalone Red Hat Quay deployment:

$ mkdir /tmp/quay-backup
$ cp /path/to/Quay/config/directory/config.yaml /tmp/quay-backup
Create a backup of the database that your standalone Red Hat Quay deployment is using:
$ pg_dump -h DB_HOST -p 5432 -d QUAY_DATABASE_NAME -U QUAY_DATABASE_USER -W -O > /tmp/quay-backup/quay-database-backup.sql
- Install the AWS CLI if you do not have it already.
Create an ~/.aws/ directory:

$ mkdir ~/.aws/
Obtain the access_key and secret_key from the config.yaml of your standalone deployment:

$ grep -i DISTRIBUTED_STORAGE_CONFIG -A10 /tmp/quay-backup/config.yaml
Example output:
DISTRIBUTED_STORAGE_CONFIG:
  minio-1:
    - RadosGWStorage
    - access_key: ##########
      bucket_name: quay
      hostname: 172.24.10.50
      is_secure: false
      port: "9000"
      secret_key: ##########
      storage_path: /datastorage/registry
Store the access_key and secret_key from the config.yaml file in your ~/.aws directory:

$ touch ~/.aws/credentials
Optional: Check that your access_key and secret_key are stored:

$ cat > ~/.aws/credentials << EOF
[default]
aws_access_key_id = ACCESS_KEY_FROM_QUAY_CONFIG
aws_secret_access_key = SECRET_KEY_FROM_QUAY_CONFIG
EOF
Example output:
aws_access_key_id = ACCESS_KEY_FROM_QUAY_CONFIG
aws_secret_access_key = SECRET_KEY_FROM_QUAY_CONFIG
Note: If the aws cli does not automatically collect the access_key and secret_key from the ~/.aws/credentials file, you can configure these by running aws configure and manually inputting the credentials.
In your quay-backup directory, create a bucket_backup directory:

$ mkdir /tmp/quay-backup/bucket-backup
Backup all blobs from the S3 storage:
$ aws s3 sync --no-verify-ssl --endpoint-url https://PUBLIC_S3_ENDPOINT:PORT s3://QUAY_BUCKET/ /tmp/quay-backup/bucket-backup/
Note: The PUBLIC_S3_ENDPOINT can be read from the Red Hat Quay config.yaml file under hostname in the DISTRIBUTED_STORAGE_CONFIG. If the endpoint is insecure, use http instead of https in the endpoint URL.
Up to this point, you should have a complete backup of all Red Hat Quay data, blobs, the database, and the config.yaml file stored locally. In the following section, you will migrate the standalone deployment backup to Red Hat Quay on OpenShift Container Platform.
16.2. Using backed up standalone content to migrate to OpenShift Container Platform
Prerequisites
- Your standalone Red Hat Quay data, blobs, database, and config.yaml have been backed up.
- Red Hat Quay is deployed on OpenShift Container Platform using the Red Hat Quay Operator.
- A QuayRegistry with all components set to managed.
The procedure in this document uses the following namespace: quay-enterprise.
Scale down the Red Hat Quay Operator:
$ oc scale --replicas=0 deployment quay-operator.v3.6.2 -n openshift-operators
Scale down the application and mirror deployments:
$ oc scale --replicas=0 deployment QUAY_MAIN_APP_DEPLOYMENT QUAY_MIRROR_DEPLOYMENT
Copy the database SQL backup to the Quay PostgreSQL database instance:

$ oc cp /tmp/user/quay-backup/quay-database-backup.sql quay-enterprise/quayregistry-quay-database-54956cdd54-p7b2w:/var/lib/pgsql/data/userdata
Obtain the database password from the Operator-created config.yaml file:

$ oc get deployment quay-quay-app -o json | jq '.spec.template.spec.volumes[].projected.sources' | grep -i config-secret
Example output:
"name": "QUAY_CONFIG_SECRET_NAME"
$ oc get secret quay-quay-config-secret-9t77hb84tb -o json | jq '.data."config.yaml"' | cut -d '"' -f2 | base64 -d -w0 > /tmp/quay-backup/operator-quay-config-yaml-backup.yaml
$ cat /tmp/quay-backup/operator-quay-config-yaml-backup.yaml | grep -i DB_URI
Example output:
postgresql://QUAY_DATABASE_OWNER:PASSWORD@DATABASE_HOST/QUAY_DATABASE_NAME
Execute a shell inside of the database pod:
# oc exec -it quay-postgresql-database-pod -- /bin/bash
Enter psql:
bash-4.4$ psql
Drop the database:
postgres=# DROP DATABASE "example-restore-registry-quay-database";
Example output:
DROP DATABASE
Create a new database and set the owner as the same name:
postgres=# CREATE DATABASE "example-restore-registry-quay-database" OWNER "example-restore-registry-quay-database";
Example output:
CREATE DATABASE
Connect to the database:
postgres=# \c "example-restore-registry-quay-database";
Example output:
You are now connected to database "example-restore-registry-quay-database" as user "postgres".
Create a pg_trgm extension of your Quay database:

example-restore-registry-quay-database=# create extension pg_trgm;
Example output:
CREATE EXTENSION
Exit the postgres CLI to re-enter bash-4.4:
\q
Import the database backup into your PostgreSQL deployment:
bash-4.4$ psql -h localhost -d "QUAY_DATABASE_NAME" -U QUAY_DATABASE_OWNER -W < /var/lib/pgsql/data/userdata/quay-database-backup.sql
Example output:
SET
SET
SET
SET
SET
Exit bash mode:
bash-4.4$ exit
Create a new configuration bundle for the Red Hat Quay Operator.
$ touch config-bundle.yaml
In your new config-bundle.yaml, include all of the information that the registry requires, such as LDAP configuration, keys, and other modifications that your old registry had. Run the following command to move the secret_key to your config-bundle.yaml:

$ cat /tmp/quay-backup/config.yaml | grep SECRET_KEY > /tmp/quay-backup/config-bundle.yaml
Note: You must manually copy all of the LDAP, OIDC, and other information and add it to the /tmp/quay-backup/config-bundle.yaml file.
Create a configuration bundle secret inside of your OpenShift cluster:
$ oc create secret generic new-custom-config-bundle --from-file=config.yaml=/tmp/quay-backup/config-bundle.yaml
Scale up the Quay pods:

$ oc scale --replicas=1 deployment quayregistry-quay-app
deployment.apps/quayregistry-quay-app scaled
Scale up the mirror pods:
$ oc scale --replicas=1 deployment quayregistry-quay-mirror
deployment.apps/quayregistry-quay-mirror scaled
Patch the QuayRegistry CRD so that it contains the reference to the new custom configuration bundle:

$ oc patch quayregistry QUAY_REGISTRY_NAME --type=merge -p '{"spec":{"configBundleSecret":"new-custom-config-bundle"}}'
Note: If Red Hat Quay returns a 500 internal server error, you might have to update the location of your DISTRIBUTED_STORAGE_CONFIG to default.
Create a new AWS credentials.yaml in your /.aws/ directory and include the access_key and secret_key from the Operator-created config.yaml file:

$ touch credentials.yaml
$ grep -i DISTRIBUTED_STORAGE_CONFIG -A10 /tmp/quay-backup/operator-quay-config-yaml-backup.yaml
$ cat > ~/.aws/credentials << EOF
[default]
aws_access_key_id = ACCESS_KEY_FROM_QUAY_CONFIG
aws_secret_access_key = SECRET_KEY_FROM_QUAY_CONFIG
EOF
Note: If the aws cli does not automatically collect the access_key and secret_key from the ~/.aws/credentials file, you can configure these by running aws configure and manually inputting the credentials.
Record NooBaa's publicly available endpoint:
$ oc get route s3 -n openshift-storage -o yaml -o jsonpath="{.spec.host}{'\n'}"
Sync the backup data to the NooBaa backend storage:
$ aws s3 sync --no-verify-ssl --endpoint-url https://NOOBAA_PUBLIC_S3_ROUTE /tmp/quay-backup/bucket-backup/* s3://QUAY_DATASTORE_BUCKET_NAME
Scale the Operator back up to 1 pod:
$ oc scale --replicas=1 deployment quay-operator.v3.6.4 -n openshift-operators
The Operator uses the custom configuration bundle provided and reconciles all secrets and deployments. Your new Red Hat Quay deployment on OpenShift Container Platform should contain all of the information that the old deployment had. You should be able to pull all images.
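As an informal verification after the Operator finishes reconciling, you can log in to the new registry route and pull an image that existed in the old deployment. The hostname, organization, repository, and tag below are placeholders:

$ podman login <quay_route_hostname>

$ podman pull <quay_route_hostname>/<organization>/<repository>:<tag>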
Chapter 17. Configuring artifact types
As a Red Hat Quay administrator, you can configure Open Container Initiative (OCI) artifact types and other experimental artifact types through the FEATURE_GENERAL_OCI_SUPPORT, ALLOWED_OCI_ARTIFACT_TYPES, and IGNORE_UNKNOWN_MEDIATYPES configuration fields.
The following Open Container Initiative (OCI) artifact types are built into Red Hat Quay by default and are enabled through the FEATURE_GENERAL_OCI_SUPPORT configuration field:
Field | Media Type | Supported content types |
---|---|---|
Helm |
|
|
Cosign |
|
|
SPDX |
|
|
Syft |
|
|
CycloneDX |
|
|
In-toto |
|
|
Unknown |
|
|
Additionally, Red Hat Quay uses the ZStandard (zstd) compression algorithm to reduce the size of container images or other related artifacts. Zstd helps optimize storage and improve transfer speeds when working with container images.
Use the following procedures to configure support for the default and experimental OCI media types.
17.1. Configuring OCI artifact types
Use the following procedure to configure artifact types that are embedded in Red Hat Quay by default.
Prerequisites
- You have Red Hat Quay administrator privileges.
Procedure
In your Red Hat Quay config.yaml file, enable general OCI support by setting the FEATURE_GENERAL_OCI_SUPPORT field to true. For example:

FEATURE_GENERAL_OCI_SUPPORT: true
With FEATURE_GENERAL_OCI_SUPPORT set to true, Red Hat Quay users can now push and pull artifacts of the default artifact types to their Red Hat Quay deployment.
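For example, assuming a Helm CLI of version 3.8 or later, a chart can be pushed to the registry as an OCI artifact. The registry hostname, organization, and chart archive name below are placeholders:

$ helm registry login <quay_registry_hostname>

$ helm push mychart-0.1.0.tgz oci://<quay_registry_hostname>/<organization>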
17.2. Configuring additional artifact types
Use the following procedure to configure additional, and specific, artifact types for your Red Hat Quay deployment.
Using the ALLOWED_OCI_ARTIFACT_TYPES
configuration field, you can restrict which artifact types are accepted by your Red Hat Quay registry. If you want your Red Hat Quay deployment to accept all artifact types, see "Configuring unknown media types".
Prerequisites
- You have Red Hat Quay administrator privileges.
Procedure
Add the ALLOWED_OCI_ARTIFACT_TYPES configuration field, along with the configuration and layer types:

FEATURE_GENERAL_OCI_SUPPORT: true
ALLOWED_OCI_ARTIFACT_TYPES:
  <oci config type 1>:
    - <oci layer type 1>
    - <oci layer type 2>
  <oci config type 2>:
    - <oci layer type 3>
    - <oci layer type 4>
For example, you can add Singularity Image Format (SIF) support by adding the following to your config.yaml file:

ALLOWED_OCI_ARTIFACT_TYPES:
  application/vnd.oci.image.config.v1+json:
    - application/vnd.dev.cosign.simplesigning.v1+json
  application/vnd.cncf.helm.config.v1+json:
    - application/tar+gzip
  application/vnd.sylabs.sif.config.v1+json:
    - application/vnd.sylabs.sif.layer.v1+tar
Note: When adding OCI artifact types that are not configured by default, Red Hat Quay administrators will also need to manually add support for Cosign and Helm if desired.
Now, users can tag SIF images for their Red Hat Quay registry.
17.3. Configuring unknown media types
Use the following procedure to enable all artifact types for your Red Hat Quay deployment.
With this field enabled, your Red Hat Quay deployment accepts all artifact types.
Prerequisites
- You have Red Hat Quay administrator privileges.
Procedure
Add the IGNORE_UNKNOWN_MEDIATYPES configuration field to your Red Hat Quay config.yaml file:

IGNORE_UNKNOWN_MEDIATYPES: true
With this field enabled, your Red Hat Quay deployment accepts unknown and unrecognized artifact types.
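As an illustration, an arbitrary file could then be pushed as an artifact with a generic OCI client such as the oras CLI. The registry hostname, organization, repository, and file name below are placeholders:

$ oras login <quay_registry_hostname>

$ oras push <quay_registry_hostname>/<organization>/<repository>:v1 ./report.txt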
Chapter 18. Red Hat Quay garbage collection
Red Hat Quay includes automatic and continuous image garbage collection. Garbage collection ensures efficient use of resources for active objects by removing objects that occupy sizeable amounts of disk space, such as dangling or untagged images, repositories, and blobs, including layers and manifests. Garbage collection performed by Red Hat Quay can reduce downtime in your organization’s environment.
18.1. Red Hat Quay garbage collection in practice
Currently, all garbage collection happens discreetly, and there are no commands to manually run garbage collection. Red Hat Quay provides metrics that track the status of the different garbage collection workers.
For namespace and repository garbage collection, the progress is tracked based on the size of their respective queues. Namespace and repository garbage collection workers require a global lock to work. As a result, and for performance reasons, only one worker runs at a time.
Red Hat Quay shares blobs between namespaces and repositories in order to conserve disk space. For example, if the same image is pushed 10 times, only one copy of that image will be stored.
It is possible that tags can share their layers with different images already stored somewhere in Red Hat Quay. In that case, blobs will stay in storage, because deleting shared blobs would make other images unusable.
Blob expiration is independent of the time machine. If you push a tag to Red Hat Quay and the time machine is set to 0 seconds, and then you delete a tag immediately, garbage collection deletes the tag and everything related to that tag, but will not delete the blob storage until the blob expiration time is reached.
Garbage collecting tagged images works differently than garbage collection on namespaces or repositories. Rather than having a queue of items to work with, the garbage collection workers for tagged images actively search for a repository with inactive or expired tags to clean up. Each instance of garbage collection workers will grab a repository lock, which results in one worker per repository.
- In Red Hat Quay, inactive or expired tags are manifests without tags because the last tag was deleted or it expired. The manifest stores information about how the image is composed and stored in the database for each individual tag. When a tag is deleted and the allotted time from Time Machine has been met, Red Hat Quay garbage collects the blobs that are not connected to any other manifests in the registry. If a particular blob is connected to a manifest, then it is preserved in storage and only its connection to the manifest that is being deleted is removed.
- Expired images will disappear after the allotted time, but are still stored in Red Hat Quay. The time in which an image is completely deleted, or collected, depends on the Time Machine setting of your organization. The default time for garbage collection is 14 days unless otherwise specified. Until that time, tags can still point to expired or deleted images.
For each type of garbage collection, Red Hat Quay provides metrics for the number of rows per table deleted by each garbage collection worker. The following image shows an example of how Red Hat Quay monitors garbage collection with the same metrics:
18.1.1. Measuring storage reclamation
Red Hat Quay does not have a way to track how much space is freed up by garbage collection. Currently, the best indicator of this is by checking how many blobs have been deleted in the provided metrics.
The UploadedBlob table in the Red Hat Quay metrics tracks the various blobs that are associated with a repository. When a blob is uploaded, it will not be garbage collected before the time designated by the PUSH_TEMP_TAG_EXPIRATION_SEC parameter. This is to avoid prematurely deleting blobs that are part of an ongoing push. For example, if garbage collection is set to run often, and a tag is deleted in the span of less than one hour, then it is possible that the associated blobs will not get cleaned up immediately. Instead, and assuming that the time designated by the PUSH_TEMP_TAG_EXPIRATION_SEC parameter has passed, the associated blobs will be removed the next time garbage collection runs on that same repository.
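For example, assuming the default Quay metrics port of 9091, you can sample the blob deletion counter directly from the metrics endpoint and compare values between garbage collection runs; the hostname and port are assumptions that depend on your deployment:

$ curl -s http://<quay_host>:9091/metrics | grep quay_gc_storage_blobs_deleted_total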
18.2. Garbage collection configuration fields
The following configuration fields are available to customize what is garbage collected, and the frequency at which garbage collection occurs:
Name | Description | Schema |
---|---|---|
FEATURE_GARBAGE_COLLECTION | Whether garbage collection is enabled for image tags. Defaults to true. | Boolean
FEATURE_NAMESPACE_GARBAGE_COLLECTION | Whether garbage collection is enabled for namespaces. Defaults to true. | Boolean
FEATURE_REPOSITORY_GARBAGE_COLLECTION | Whether garbage collection is enabled for repositories. Defaults to true. | Boolean
GARBAGE_COLLECTION_FREQUENCY | The frequency, in seconds, at which the garbage collection worker runs. Affects only garbage collection workers. Defaults to 30 seconds. | String
PUSH_TEMP_TAG_EXPIRATION_SEC | The number of seconds that blobs will not be garbage collected after being uploaded. This feature prevents garbage collection from cleaning up blobs that are not referenced yet, but still used as part of an ongoing push. | String
TAG_EXPIRATION_OPTIONS | List of valid tag expiration values. | String
DEFAULT_TAG_EXPIRATION | Tag expiration time for time machine. | String
CLEAN_BLOB_UPLOAD_FOLDER | Automatically cleans stale blobs left over from an S3 multipart upload. By default, blob files older than two days are cleaned up every hour. | Boolean; Default: true
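The following config.yaml fragment is an illustrative sketch of how these fields fit together; the values are examples only, not recommendations, and should be adjusted for your environment:

FEATURE_GARBAGE_COLLECTION: true
FEATURE_NAMESPACE_GARBAGE_COLLECTION: true
FEATURE_REPOSITORY_GARBAGE_COLLECTION: true
GARBAGE_COLLECTION_FREQUENCY: "30"
PUSH_TEMP_TAG_EXPIRATION_SEC: "900"
TAG_EXPIRATION_OPTIONS:
  - 1d
  - 1w
  - 2w
DEFAULT_TAG_EXPIRATION: 2w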
18.3. Disabling garbage collection
The garbage collection features for image tags, namespaces, and repositories are stored in the config.yaml file. These features default to true.
In rare cases, you might want to disable garbage collection, for example, to control when garbage collection is performed. You can disable garbage collection by setting the GARBAGE_COLLECTION features to false. When disabled, dangling or untagged images, repositories, namespaces, layers, and manifests are not removed. This might increase the downtime of your environment.
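For example, to disable all three garbage collection workers, the following fields would be set in your config.yaml file (shown only as a sketch of the syntax):

FEATURE_GARBAGE_COLLECTION: false
FEATURE_NAMESPACE_GARBAGE_COLLECTION: false
FEATURE_REPOSITORY_GARBAGE_COLLECTION: false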
There is no command to manually run garbage collection. Instead, you would disable, and then re-enable, the garbage collection feature.
18.4. Garbage collection and quota management
Red Hat Quay introduced quota management in 3.7. With quota management, users have the ability to report storage consumption and to contain registry growth by establishing configured storage quota limits.
As of Red Hat Quay 3.7, garbage collection reclaims memory that was allocated to images, repositories, and blobs after deletion. Because the garbage collection feature reclaims memory after deletion, there is a discrepancy between what is stored in an environment’s disk space and what quota management is reporting as the total consumption. There is currently no workaround for this issue.
18.5. Garbage collection in practice
Use the following procedure to check your Red Hat Quay logs to ensure that garbage collection is working.
Procedure
Enter the following command to ensure that garbage collection is properly working:
$ sudo podman logs <container_id>
Example output:
gcworker stdout | 2022-11-14 18:46:52,458 [63] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2022-11-14 18:47:22 UTC)" executed successfully
- Delete an image tag.
Enter the following command to ensure that the tag was deleted:
$ podman logs quay-app
Example output:
gunicorn-web stdout | 2022-11-14 19:23:44,574 [233] [INFO] [gunicorn.access] 192.168.0.38 - - [14/Nov/2022:19:23:44 +0000] "DELETE /api/v1/repository/quayadmin/busybox/tag/test HTTP/1.0" 204 0 "http://quay-server.example.com/repository/quayadmin/busybox?tab=tags" "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0"
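The `DELETE /api/v1/repository/quayadmin/busybox/tag/test` request in the log above is the same tag deletion operation that is exposed by the Red Hat Quay API. As a hedged sketch, you could trigger the deletion from the command line with curl, assuming you have an OAuth access token with permission to modify the repository; the token and hostname below are placeholders:

$ curl -X DELETE \
    -H "Authorization: Bearer <oauth_access_token>" \
    https://quay-server.example.com/api/v1/repository/quayadmin/busybox/tag/test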
18.6. Red Hat Quay garbage collection metrics
The following metrics show how many resources have been removed by garbage collection. These metrics show how many times the garbage collection workers have run and how many namespaces, repositories, and blobs were removed.
Metric name | Description |
---|---|
quay_gc_iterations_total | Number of iterations by the GCWorker |
quay_gc_namespaces_purged_total | Number of namespaces purged by the NamespaceGCWorker |
quay_gc_repos_purged_total | Number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker |
quay_gc_storage_blobs_deleted_total | Number of storage blobs deleted |
Sample metrics output
# TYPE quay_gc_iterations_created gauge
quay_gc_iterations_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189714e+09
...
# HELP quay_gc_iterations_total number of iterations by the GCWorker
# TYPE quay_gc_iterations_total counter
quay_gc_iterations_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0
...
# TYPE quay_gc_namespaces_purged_created gauge
quay_gc_namespaces_purged_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189433e+09
...
# HELP quay_gc_namespaces_purged_total number of namespaces purged by the NamespaceGCWorker
# TYPE quay_gc_namespaces_purged_total counter
quay_gc_namespaces_purged_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0
...
# TYPE quay_gc_repos_purged_created gauge
quay_gc_repos_purged_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.631782319018925e+09
...
# HELP quay_gc_repos_purged_total number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker
# TYPE quay_gc_repos_purged_total counter
quay_gc_repos_purged_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0
...
# TYPE quay_gc_storage_blobs_deleted_created gauge
quay_gc_storage_blobs_deleted_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189059e+09
...
# HELP quay_gc_storage_blobs_deleted_total number of storage blobs deleted
# TYPE quay_gc_storage_blobs_deleted_total counter
quay_gc_storage_blobs_deleted_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0
...
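On a standalone deployment, one way to inspect these counters is to query the Prometheus metrics endpoint directly and filter for the garbage collection metrics. The following is a minimal sketch, assuming the metrics are exposed on the default metrics port of 9091:

$ curl -s http://quay-server.example.com:9091/metrics | grep 'quay_gc_'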
Chapter 19. Using the Red Hat Quay v2 UI
Use the following procedures to configure and use the Red Hat Quay v2 UI.
19.1. v2 user interface configuration
With `FEATURE_UI_V2` enabled, you can toggle between the current version of the user interface and the new version of the user interface.
- This UI is currently in beta and subject to change. In its current state, users can only create, view, and delete organizations, repositories, and image tags.
- When running Red Hat Quay in the old UI, timed-out sessions would require that the user input their password again in the pop-up window. With the new UI, users are returned to the main page and required to input their username and password credentials. This is a known issue and will be fixed in a future version of the new UI.
- There is a discrepancy in how image manifest sizes are reported between the legacy UI and the new UI. In the legacy UI, image manifests were reported in mebibytes. In the new UI, Red Hat Quay uses the standard definition of megabyte (MB) to report image manifest sizes.
Procedure
In your deployment's `config.yaml` file, add the `FEATURE_UI_V2` parameter and set it to `true`, for example:

---
FEATURE_TEAM_SYNCING: false
FEATURE_UI_V2: true
FEATURE_USER_CREATION: true
---
- Log in to your Red Hat Quay deployment.
In the navigation pane of your Red Hat Quay deployment, you are given the option to toggle between Current UI and New UI. Click the toggle button to set it to new UI, and then click Use Beta Environment, for example:
19.1.1. Creating a new organization in the Red Hat Quay v2 UI
Prerequisites
- You have toggled your Red Hat Quay deployment to use the v2 UI.
Use the following procedure to create an organization using the Red Hat Quay v2 UI.
Procedure
- Click Organization in the navigation pane.
- Click Create Organization.
- Enter an Organization Name, for example, `testorg`.
- Click Create.
Now, your example organization should appear on the Organizations page.
19.1.2. Deleting an organization using the Red Hat Quay v2 UI
Use the following procedure to delete an organization using the Red Hat Quay v2 UI.
Procedure
- On the Organizations page, select the name of the organization you want to delete, for example, `testorg`.
- Click the More Actions drop-down menu.
Click Delete.
Note: On the Delete page, there is a Search input box. With this box, users can search for specific organizations to ensure that they are properly scheduled for deletion. For example, if a user is deleting 10 organizations and wants to ensure that a specific organization was deleted, they can use the Search input box to confirm that the organization is marked for deletion.
- Confirm that you want to permanently delete the organization by typing confirm in the box.
Click Delete.
After deletion, you are returned to the Organizations page.
Note: You can delete more than one organization at a time by selecting multiple organizations, and then clicking More Actions → Delete.
19.1.3. Creating a new repository using the Red Hat Quay v2 UI
Use the following procedure to create a repository using the Red Hat Quay v2 UI.
Procedure
- Click Repositories on the navigation pane.
- Click Create Repository.
- Select a namespace, for example, `quayadmin`, and then enter a Repository name, for example, `testrepo`.
- Click Create.
Now, your example repository should appear on the Repositories page.
19.1.4. Deleting a repository using the Red Hat Quay v2 UI
Prerequisites
- You have created a repository.
Procedure
- On the Repositories page of the Red Hat Quay v2 UI, click the name of the repository you want to delete, for example, `quayadmin/busybox`.
- Click the More Actions drop-down menu.
Click Delete.
Note: If desired, you can click Make Public or Make Private.
- Type confirm in the box, and then click Delete.
- After deletion, you are returned to the Repositories page.
19.1.5. Pushing an image to the Red Hat Quay v2 UI
Use the following procedure to push an image to the Red Hat Quay v2 UI.
Procedure
Pull a sample image from an external registry:
$ podman pull busybox
Tag the image:
$ podman tag docker.io/library/busybox quay-server.example.com/quayadmin/busybox:test
Push the image to your Red Hat Quay registry:
$ podman push quay-server.example.com/quayadmin/busybox:test
- Navigate to the Repositories page on the Red Hat Quay UI and ensure that your image has been properly pushed.
- You can check the security details by selecting your image tag, and then navigating to the Security Report page.
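If the push is rejected with an authentication error, you might need to log in to the registry first. The following is a minimal sketch, assuming the same example hostname and a self-signed certificate:

$ podman login --tls-verify=false quay-server.example.com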
19.1.6. Deleting an image using the Red Hat Quay v2 UI
Use the following procedure to delete an image using the Red Hat Quay v2 UI.
Prerequisites
- You have pushed an image to your Red Hat Quay registry.
Procedure
- On the Repositories page of the Red Hat Quay v2 UI, click the name of the image you want to delete, for example, `quayadmin/busybox`.
- Click the More Actions drop-down menu.
Click Delete.
Note: If desired, you can click Make Public or Make Private.
- Type confirm in the box, and then click Delete.
- After deletion, you are returned to the Repositories page.
19.1.7. Creating a robot account using the Red Hat Quay v2 UI
Use the following procedure to create a robot account using the Red Hat Quay v2 UI.
Procedure
- On the Red Hat Quay v2 UI, click Organizations.
- Click the name of the organization that you will create the robot account for, for example, `test-org`.
- Click the Robot accounts tab → Create robot account.
- In the Provide a name for your robot account box, enter a name, for example, `robot1`.
- Optional. The following options are available if desired:
- Add the robot to a team.
- Add the robot to a repository.
- Adjust the robot’s permissions.
- On the Review and finish page, review the information you have provided, then click Review and finish.
- Optional. You can click Expand or Collapse to reveal descriptive information about the robot account.
- Optional. You can change permissions of the robot account by clicking the kebab menu → Set repository permissions.
- Optional. To delete your robot account, check the box of the robot account and click the trash can icon. A popup box appears. Type confirm in the text box, then, click Delete. Alternatively, you can click the kebab menu → Delete.
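After the robot account is created, you can use its generated token to authenticate to the registry, for example with Podman. The following is a minimal sketch, assuming the example organization and robot names above and a placeholder token; robot usernames take the form <organization>+<robot_name>:

$ podman login -u="test-org+robot1" -p="<robot_account_token>" quay-server.example.com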
19.1.8. Organization settings for the Red Hat Quay v2 UI
Use the following procedure to alter your organization settings using the Red Hat Quay v2 UI.
Procedure
- On the Red Hat Quay v2 UI, click Organizations.
- Click the name of the organization whose settings you want to change, for example, `test-org`.
- Click the Settings tab.
- Optional. Enter the email address associated with the organization.
Optional. Set the allotted time for the Time Machine feature to one of the following:
- 1 week
- 1 month
- 1 year
- Never
- Click Save.
19.1.9. Viewing image tag information using the Red Hat Quay v2 UI
Use the following procedure to view image tag information using the Red Hat Quay v2 UI.
Procedure
- On the Red Hat Quay v2 UI, click Repositories.
- Click the name of a repository, for example, `quayadmin/busybox`.
- Click the name of the tag, for example, `test`. You are taken to the Details page of the tag. The page reveals the following information:
- Name
- Repository
- Digest
- Vulnerabilities
- Creation
- Modified
- Size
- Labels
- How to fetch the image tag
- Optional. Click Security Report to view the tag’s vulnerabilities. You can expand an advisory column to open up CVE data.
- Optional. Click Packages to view the tag’s packages.
- Click the name of the repository, for example, `busybox`, to return to the Tags page.
- Optional. Hover over the Pull icon to reveal the ways to fetch the tag.
- Check the box of the tag, or multiple tags, click the Actions drop-down menu, and then click Delete to delete the tag or tags. Confirm deletion by clicking Delete in the popup box.
19.1.10. Adjusting repository settings using the Red Hat Quay v2 UI
Use the following procedure to adjust various settings for a repository using the Red Hat Quay v2 UI.
Procedure
- On the Red Hat Quay v2 UI, click Repositories.
- Click the name of a repository, for example, `quayadmin/busybox`.
- Click the Settings tab.
- Optional. Click User and robot permissions. You can adjust the settings for a user or robot account by clicking the dropdown menu option under Permissions. You can change the settings to Read, Write, or Admin.
Optional. Click Events and notifications. You can create an event and notification by clicking Create Notification. The following event options are available:
- Push to Repository
- Package Vulnerability Found
- Image build failed
- Image build queued
- Image build started
- Image build success
Image build cancelled
Then, issue a notification. The following options are available:
- Email Notification
- Flowdock Team Notification
- HipChat Room Notification
- Slack Notification
Webhook POST
After selecting an event option and a notification method, provide the required fields, for example, a Room ID # and a Room Notification Token, and then click Submit.
- Optional. Click Repository visibility. You can make the repository private, or public, by clicking Make Public.
- Optional. Click Delete repository. You can delete the repository by clicking Delete Repository.
19.2. Enabling the Red Hat Quay legacy UI
In the navigation pane of your Red Hat Quay deployment, you are given the option to toggle between Current UI and New UI. Click the toggle button to set it to Current UI.
Chapter 20. Performing health checks on Red Hat Quay deployments
Health check mechanisms are designed to assess the health and functionality of a system, service, or component. Health checks help ensure that everything is working correctly, and can be used to identify potential issues before they become critical problems. By monitoring the health of a system, Red Hat Quay administrators can address abnormalities or potential failures for things like geo-replication deployments, Operator deployments, standalone Red Hat Quay deployments, object storage issues, and so on. Performing health checks can also help reduce the likelihood of encountering troubleshooting scenarios.
Health check mechanisms can play a role in diagnosing issues by providing valuable information about the system’s current state. By comparing health check results with expected benchmarks or predefined thresholds, deviations or anomalies can be identified more quickly.
20.1. Red Hat Quay health check endpoints
Links contained herein to any external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or its entities, products, or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.
Red Hat Quay has several health check endpoints. The following table shows you the health check, a description, an endpoint, and an example output.
| Health check | Description | Endpoint | Example output |
|---|---|---|---|
| `instance` | The `instance` endpoint returns the status of the specific Red Hat Quay instance, with `true` or `false` values for services such as `auth`, `database`, `disk_space`, `registry_gunicorn`, `service_key`, and `web_gunicorn`, and a `status_code` of `200` if the instance is healthy or `503` if it is not. | `https://{quay-ip-endpoint}/health/instance` | `{"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200}` |
| `endtoend` | The `endtoend` endpoint runs checks on all services of your Red Hat Quay instance and returns the same style of per-service `true` or `false` values with a `status_code` field. | `https://{quay-ip-endpoint}/health/endtoend` | A JSON document in the same format as the `instance` output. |
| `warning` | The `warning` endpoint runs checks that only report warnings, returning per-service values and a `status_code` field. | `https://{quay-ip-endpoint}/health/warning` | A JSON document in the same format as the `instance` output. |
20.2. Navigating to a Red Hat Quay health check endpoint
Use the following procedure to navigate to the `instance` endpoint. This procedure can be repeated for the `endtoend` and `warning` endpoints.
Procedure
- On your web browser, navigate to `https://{quay-ip-endpoint}/health/instance`. You are taken to the health instance page, which returns information like the following:
{"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200}
For Red Hat Quay, `"status_code": 200` means that the instance is healthy. Conversely, if you receive `"status_code": 503`, there is an issue with your deployment.
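You can also query the endpoint from the command line, for example with curl. The following is a minimal sketch, assuming the example hostname used elsewhere in this guide and a self-signed certificate:

$ curl -k https://quay-server.example.com/health/instance
{"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200}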
Chapter 21. Branding a Red Hat Quay deployment on the legacy UI
You can brand the UI of your Red Hat Quay deployment by changing the registry title, logo, footer image, and by directing users to a website embedded in the footer image.
Procedure
Update your Red Hat Quay
config.yaml
file to add the following parameters:BRANDING: logo: 1 footer_img: 2 footer_url: 3 --- REGISTRY_TITLE: 4 REGISTRY_TITLE_SHORT: 5
- 1
- The URL of the image that will appear at the top of your Red Hat Quay deployment.
- 2
- The URL of the image that will appear at the bottom of your Red Hat Quay deployment.
- 3
- The URL of the website that users will be directed to when clicking the footer image.
- 4
- The long-form title for the registry. This is displayed in the frontend of your Red Hat Quay deployment, for example, at the sign in page of your organization.
- 5
- The short-form title for the registry. The title is displayed on various pages of your organization, for example, as the title of the tutorial on your organization’s Tutorial page.
- Restart your Red Hat Quay deployment. After restarting, your Red Hat Quay deployment is updated with a new logo, footer image, and footer image URL.
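A filled-in version of the snippet might look like the following; the URLs and titles are hypothetical placeholders, not values that ship with Red Hat Quay:

BRANDING:
  logo: https://www.example.com/images/logo.png
  footer_img: https://www.example.com/images/footer.png
  footer_url: https://www.example.com
REGISTRY_TITLE: Example Container Registry
REGISTRY_TITLE_SHORT: Example Registry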
Chapter 22. Schema for Red Hat Quay configuration
Most Red Hat Quay configuration information is stored in the `config.yaml` file that is created using the browser-based config tool when Red Hat Quay is first deployed.
The configuration options are described in the Red Hat Quay Configuration Guide.