Red Hat Quay Operator features


Red Hat Quay 3

Advanced Red Hat Quay Operator features

Red Hat OpenShift Documentation Team

Abstract

Advanced Red Hat Quay Operator features

Preface

You can use this guide to configure and manage advanced capabilities in Red Hat Quay on OpenShift Container Platform. It covers features such as vulnerability scanning with Clair, geo-replication, backup and restore operations, FIPS compliance, monitoring and alerting, and custom SSL certificates.

Chapter 1. Federal Information Processing Standard (FIPS) readiness and compliance

The Federal Information Processing Standard (FIPS) developed by the National Institute of Standards and Technology (NIST) is widely regarded as the benchmark for securing and encrypting sensitive data, notably in highly regulated areas such as banking, healthcare, and the public sector. Red Hat Enterprise Linux (RHEL) and OpenShift Container Platform support FIPS by providing a FIPS mode, in which the system only allows usage of specific FIPS-validated cryptographic modules, such as openssl. This ensures FIPS compliance.

1.1. Enabling FIPS compliance

To enable FIPS compliance for your Red Hat Quay deployment, you can set the FEATURE_FIPS configuration field to True in your config.yaml file. This ensures that Red Hat Quay uses only FIPS-validated cryptographic modules for securing sensitive data.

Prerequisite

  • If you are running a standalone deployment of Red Hat Quay, your Red Hat Enterprise Linux (RHEL) deployment is version 8 or later and FIPS-enabled.
  • If you are deploying Red Hat Quay on OpenShift Container Platform, OpenShift Container Platform is version 4.10 or later.
  • Your Red Hat Quay version is 3.5.0 or later.
  • If you are using the Red Hat Quay on OpenShift Container Platform on an IBM Power or IBM Z cluster:

    • OpenShift Container Platform version 4.14 or later is required
    • Red Hat Quay version 3.10 or later is required
  • You have administrative privileges for your Red Hat Quay deployment.

Procedure

  • In your Red Hat Quay config.yaml file, set the FEATURE_FIPS configuration field to True. For example:

    # ...
    FEATURE_FIPS: true
    # ...

    With FEATURE_FIPS set to True, Red Hat Quay runs using FIPS-compliant hash functions.

Chapter 2. Console monitoring and alerting

Red Hat Quay provides monitoring and alerting features in the OpenShift Container Platform console for instances deployed by the Operator. You can use Grafana dashboards, individual metrics, and alerts to monitor registry performance and receive notifications when Quay pods restart frequently.

Note

To enable the monitoring features, you must select All namespaces on the cluster as the installation mode when installing the Red Hat Quay Operator.

2.1. Dashboard

On the OpenShift Container Platform console, click Monitoring → Dashboards and search for the dashboard of your desired Red Hat Quay registry instance:

Choose Quay dashboard

The dashboard shows various statistics including the following:

  • The number of Organizations, Repositories, Users, and Robot accounts
  • CPU Usage
  • Max memory usage
  • Rates of pulls and pushes, and authentication requests
  • API request rate
  • Latencies

Console dashboard

2.2. Metrics

You can see the underlying metrics behind the Red Hat Quay dashboard by accessing Monitoring → Metrics in the UI. In the Expression field, enter the text quay_ to see the list of metrics available:

Quay metrics

Select a sample metric, for example, quay_org_rows:

Number of Quay organizations

This metric shows the number of organizations in the registry. It is also directly surfaced in the dashboard.
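The Expression field accepts full PromQL, so raw metrics can also be aggregated or graphed over time. For example, two illustrative queries against the quay_org_rows metric shown above:

```promql
# Total number of organization rows reported across Quay pods
sum(quay_org_rows)

# Change in organization count over the last 24 hours
delta(quay_org_rows[24h])
```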

2.3. Alerting

An alert is raised if the Quay pods restart too often. The alert can be configured by accessing the Alerting rules tab from Monitoring → Alerting in the console UI and searching for the Quay-specific alert:

Alerting rules

Select the QuayPodFrequentlyRestarting rule detail to configure the alert:

Alerting rule details
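If the built-in rule does not fit your thresholds, additional alerts can be defined declaratively with a PrometheusRule resource. The following is a minimal sketch; the rule name, namespace, threshold, and labels are illustrative, and user workload monitoring must be enabled for rules in application namespaces:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: quay-additional-alerts    # illustrative name
  namespace: quay-enterprise      # adjust to your Quay namespace
spec:
  groups:
    - name: quay.additional.rules
      rules:
        - alert: QuayHighPodRestarts
          # Fires when any container in the namespace restarts more than
          # 5 times within 10 minutes (threshold is illustrative).
          expr: increase(kube_pod_container_status_restarts_total{namespace="quay-enterprise"}[10m]) > 5
          labels:
            severity: warning
          annotations:
            summary: Quay pods are restarting frequently
```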

Chapter 3. Clair security scanner

Clair is an open source security scanner that analyzes container images and reports vulnerabilities. You can use Clair to automatically scan images and identify security issues in your container registry.

3.1. Clair vulnerability databases

Clair uses multiple vulnerability databases to identify security issues in container images. These databases provide comprehensive coverage across different operating systems and programming languages.

Clair uses the following vulnerability databases to report issues in your images:

  • Ubuntu Oval database
  • Debian Security Tracker
  • Red Hat Enterprise Linux (RHEL) Oval database
  • SUSE Oval database
  • Oracle Oval database
  • Alpine SecDB database
  • VMware Photon OS database
  • Amazon Web Services (AWS) UpdateInfo
  • Open Source Vulnerability (OSV) Database

For information about how Clair does security mapping with the different databases, see Claircore Severity Mapping.

Open Source Vulnerability (OSV) is a vulnerability database and monitoring service that focuses on tracking and managing security vulnerabilities in open source software.

OSV provides a comprehensive and up-to-date database of known security vulnerabilities in open source projects. It covers a wide range of open source software, including libraries, frameworks, and other components that are used in software development. For a full list of included ecosystems, see defined ecosystems.

Clair also reports vulnerability and security information for golang, java, and ruby ecosystems through the Open Source Vulnerability (OSV) database.

By leveraging OSV, developers and organizations can proactively monitor and address security vulnerabilities in open source components that they use, which helps to reduce the risk of security breaches and data compromises in projects.

For more information about OSV, see the OSV website.

3.2. Clair on OpenShift Container Platform

The Red Hat Quay Operator automatically installs and configures Clair when you deploy Red Hat Quay on OpenShift Container Platform. This simplifies setup by eliminating the need for manual Clair configuration.

3.3. Testing Clair

To verify that Clair is working correctly on your Red Hat Quay deployment, you can pull, tag, and push a sample image to your registry, then view the vulnerability report in the UI.

Prerequisites

  • You have deployed the Clair container image.

Procedure

  1. Pull a sample image by entering the following command:

    $ podman pull ubuntu:20.04
  2. Tag the image to your registry by entering the following command:

    $ sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04
  3. Push the image to your Red Hat Quay registry by entering the following command:

    $ sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04
  4. Log in to your Red Hat Quay deployment through the UI.
  5. Click the repository name, for example, quayadmin/ubuntu.
  6. In the navigation pane, click Tags.

    Security scan information appears for scanned repository images

  7. Click the image report, for example, 45 medium, to show a more detailed report:

    See all vulnerabilities or only those that are fixable

    Note

    In some cases, Clair shows duplicate reports on images, for example, ubi8/nodejs-12 or ubi8/nodejs-16. This occurs because vulnerabilities with the same name exist for different packages. This behavior is expected with Clair vulnerability reporting and will not be addressed as a bug.

3.4. Advanced Clair configuration

Advanced Clair configuration lets you customize Clair settings beyond the default installation. You can use these options to adjust scanning behavior, database connections, and other advanced features to meet specific security and performance requirements.

3.4.1. Unmanaged Clair configuration

Unmanaged Clair configuration lets you run a custom Clair setup or use an external Clair database with the Red Hat Quay Operator. You can use this configuration for geo-replicated environments where multiple Operator instances share the same database, or when you need a highly available database outside your cluster.

3.4.1.1. Setting a Clair database to unmanaged

To run a custom Clair configuration with an unmanaged Clair database, you can set the clairpostgres component to unmanaged in your QuayRegistry custom resource. This lets you use an external database for geo-replicated environments or highly available setups outside your cluster.

Important

You must not use the same externally managed PostgreSQL database for both Red Hat Quay and Clair deployments. Your PostgreSQL database must also not be shared with other workloads, as it might exhaust the natural connection limit on the PostgreSQL side when connection-intensive workloads, like Red Hat Quay or Clair, contend for resources. Additionally, pgBouncer is not supported with Red Hat Quay or Clair, so it is not an option to resolve this issue.

Procedure

  • In the Quay Operator, set the clairpostgres component of the QuayRegistry custom resource to managed: false:

    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: quay370
    spec:
      configBundleSecret: config-bundle-secret
      components:
        - kind: objectstorage
          managed: false
        - kind: route
          managed: true
        - kind: tls
          managed: false
        - kind: clairpostgres
          managed: false

3.4.1.2. Configuring a custom Clair database with an unmanaged Clair database

To configure a custom Clair database with SSL/TLS certificates for your Red Hat Quay deployment, you can create a Quay configuration bundle secret that includes the clair-config.yaml file. This lets you use your own external database with secure connections for Clair vulnerability scanning.

Note

The following procedure sets up Clair with SSL/TLS certificates. To view a similar procedure that does not set up Clair with SSL/TLS certificates, see "Configuring a custom Clair database with a managed Clair configuration".

Procedure

  1. Create a Quay configuration bundle secret that includes the clair-config.yaml by entering the following command:

    $ oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret

    Example Clair config.yaml file

    indexer:
        connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca
        layer_scan_concurrency: 6
        migrations: true
        scanlock_retry: 11
    log_level: debug
    matcher:
        connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca
        migrations: true
    metrics:
        name: prometheus
    notifier:
        connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca
        migrations: true

    Note
    • The database certificate is mounted under /run/certs/rds-ca-2019-root.pem on the Clair application pod in the clair-config.yaml. It must be specified when configuring your clair-config.yaml.
    • An example clair-config.yaml can be found at Clair on OpenShift config.
  2. Add the clair-config.yaml file to your bundle secret, for example:

    apiVersion: v1
    kind: Secret
    metadata:
      name: config-bundle-secret
      namespace: quay-enterprise
    data:
      config.yaml: <base64 encoded Quay config>
      clair-config.yaml: <base64 encoded Clair config>
      extra_ca_cert_<name>: <base64 encoded ca cert>
      ssl.crt: <base64 encoded SSL certificate>
      ssl.key: <base64 encoded SSL private key>
    Note

    When updated, the provided clair-config.yaml file is mounted into the Clair pod. Any fields not provided are automatically populated with defaults using the Clair configuration module.

  3. You can check the status of your Clair pod by clicking the commit in the Build History page, or by running oc get pods -n <namespace>. For example:

    $ oc get pods -n <namespace>

    Example output

    NAME                                               READY   STATUS    RESTARTS   AGE
    f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2   1/1     Running   0          7s
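The values stored under data in a bundle secret like the one above are plain base64 of the corresponding files; oc create secret generic --from-file performs the encoding for you. A local round-trip illustrates the encoding (the snippet content is arbitrary):

```shell
# Encode a config snippet the way Kubernetes stores Secret data, then decode it back.
printf 'FEATURE_FIPS: true\n' | base64 | base64 -d
# prints: FEATURE_FIPS: true
```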

3.4.2. Running a custom Clair configuration with a managed Clair database

Running a custom Clair configuration with a managed Clair database lets you customize Clair settings while the Operator manages the database. You can use this approach to disable specific updater resources or configure Clair for disconnected environments.

Note
  • If you are running Red Hat Quay in a disconnected environment, the airgap parameter of your clair-config.yaml must be set to True.
  • If you are running Red Hat Quay in a disconnected environment, you should disable all updater components.
3.4.2.1. Setting a Clair database to managed

To have the Red Hat Quay Operator manage your Clair database, you can set the clairpostgres component to managed in your QuayRegistry custom resource. This simplifies deployment and maintenance by letting the Operator handle database provisioning and configuration.

Procedure

  • In the Quay Operator, set the clairpostgres component of the QuayRegistry custom resource to managed: true:

    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: quay370
    spec:
      configBundleSecret: config-bundle-secret
      components:
        - kind: objectstorage
          managed: false
        - kind: route
          managed: true
        - kind: tls
          managed: false
        - kind: clairpostgres
          managed: true

3.4.2.2. Configuring a custom Clair database with a managed Clair configuration

To configure a custom Clair database while keeping the Clair configuration managed by the Operator, you can create a Quay configuration bundle secret that includes the clair-config.yaml file. This lets you use your own external database while the Operator continues to manage Clair settings.

Procedure

  1. Create a Quay configuration bundle secret that includes the clair-config.yaml by entering the following command:

    $ oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml config-bundle-secret

    Example Clair config.yaml file

    indexer:
        connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable
        layer_scan_concurrency: 6
        migrations: true
        scanlock_retry: 11
    log_level: debug
    matcher:
        connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable
        migrations: true
    metrics:
        name: prometheus
    notifier:
        connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable
        migrations: true

    Note
    • The database certificate is mounted under /run/certs/rds-ca-2019-root.pem on the Clair application pod in the clair-config.yaml. It must be specified when configuring your clair-config.yaml.
    • An example clair-config.yaml can be found at Clair on OpenShift config.
  2. Add the clair-config.yaml file to your bundle secret, for example:

    apiVersion: v1
    kind: Secret
    metadata:
      name: config-bundle-secret
      namespace: quay-enterprise
    data:
      config.yaml: <base64 encoded Quay config>
      clair-config.yaml: <base64 encoded Clair config>
    Note
    • When updated, the provided clair-config.yaml file is mounted into the Clair pod. Any fields not provided are automatically populated with defaults using the Clair configuration module.
  3. You can check the status of your Clair pod by clicking the commit in the Build History page, or by running oc get pods -n <namespace>. For example:

    $ oc get pods -n <namespace>

    Example output

    NAME                                               READY   STATUS    RESTARTS   AGE
    f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2   1/1     Running   0          7s

3.4.3. Clair in disconnected environments

Clair supports disconnected environments where your Red Hat Quay deployment has no direct internet access. You can use the clairctl tool to transfer vulnerability database updates from an open host to your isolated environment, enabling Clair to scan images without internet connectivity.

Clair uses a set of components called updaters to handle the fetching and parsing of data from various vulnerability databases. Updaters are set up by default to pull vulnerability data directly from the internet and work for immediate use.

Note

Currently, Clair enrichment data is CVSS data. Enrichment data is unsupported in disconnected environments.

For more information about Clair updaters, see "Clair updaters".

3.4.3.1. Setting up Clair in a disconnected OpenShift Container Platform cluster

To install the clairctl command line utility for disconnected OpenShift Container Platform deployments, you can extract the tool from a running Clair pod and set its execution permissions. This lets you use clairctl to manage vulnerability database updates in disconnected environments.

Procedure

  1. Install the clairctl program for a Clair deployment in an OpenShift Container Platform cluster by entering the following command:

    $ oc -n quay-enterprise exec example-registry-clair-app-64dd48f866-6ptgw -- cat /usr/bin/clairctl > clairctl
    Note

    Unofficially, the clairctl tool can be downloaded.

  2. Set the permissions of the clairctl file so that it can be executed and run by the user, for example:

    $ chmod u+x ./clairctl

To configure Clair for disconnected environments on OpenShift Container Platform, you can retrieve and decode the Clair configuration secret, then update the clair-config.yaml file to set disable_updaters and airgap parameters to True. This prepares Clair to work without direct internet access.

Prerequisites

  • You have installed the clairctl command line utility tool.

Procedure

  1. Enter the following command to retrieve and decode the configuration secret, and then save it to a Clair configuration YAML:

    $ oc get secret -n quay-enterprise example-registry-clair-config-secret  -o "jsonpath={$.data['config\.yaml']}" | base64 -d > clair-config.yaml
  2. Update the clair-config.yaml file so that the disable_updaters and airgap parameters are set to True, for example:

    # ...
    indexer:
      airgap: true
    # ...
    matcher:
      disable_updaters: true
    # ...
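After editing clair-config.yaml, the file must be written back to the cluster for the change to take effect. One way, assuming the secret name from the previous step and that the configuration is stored under the config.yaml key, is to regenerate the secret in place:

```shell
# Rebuild the Clair config secret from the edited file and apply it over the old one.
oc -n quay-enterprise create secret generic example-registry-clair-config-secret \
  --from-file=config.yaml=./clair-config.yaml \
  --dry-run=client -o yaml | oc apply -f -
```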

To export vulnerability database updates from a connected Clair instance for use in disconnected environments, you can use the clairctl tool with your configuration file to export the updaters bundle. This creates a bundle file that you can transfer to your isolated environment.

Prerequisites

  • You have installed the clairctl command line utility tool.
  • You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file.
  • The disable_updaters and airgap parameters are set to True in your Clair config.yaml file.

Procedure

  • From a Clair instance that has access to the internet, use the clairctl CLI tool with your configuration file to export the updaters bundle. For example:

    $ ./clairctl --config ./config.yaml export-updaters updates.gz

To configure access to the Clair database in your disconnected OpenShift Container Platform cluster, you can determine the database service, forward the database port, and update your Clair config.yaml file to use localhost. This lets you import the updaters bundle into the database using the clairctl tool.

Prerequisites

  • You have installed the clairctl command line utility tool.
  • You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file.
  • The disable_updaters and airgap parameters are set to True in your Clair config.yaml file.
  • You have exported the updaters bundle from a Clair instance that has access to the internet.

Procedure

  1. Determine your Clair database service by using the oc CLI tool, for example:

    $ oc get svc -n quay-enterprise

    Example output

    NAME                                  TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                             AGE
    example-registry-clair-app            ClusterIP      172.30.224.93    <none>        80/TCP,8089/TCP                     4d21h
    example-registry-clair-postgres       ClusterIP      172.30.246.88    <none>        5432/TCP                            4d21h
    ...

  2. Forward the Clair database port so that it is accessible from the local machine. For example:

    $ oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432
  3. Update your Clair config.yaml file, for example:

    indexer:
        connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable
        layer_scan_concurrency: 5
        migrations: true
        scanlock_retry: 10
        airgap: true
        scanner:
          repo:
            rhel-repository-scanner:
              repo2cpe_mapping_file: /data/repository-to-cpe.json
          package:
            rhel_containerscanner:
              name2repos_mapping_file: /data/container-name-repos-map.json

    where:

    connstring
    Specifies the connection string for the database.

    rhel-repository-scanner
    Specifies the repository scanner configuration.

    rhel_containerscanner
    Specifies the container scanner configuration.

To import vulnerability database updates into your disconnected OpenShift Container Platform cluster, you can use the clairctl tool with your Clair configuration file to import the updaters bundle. This populates the Clair database with vulnerability data so Clair can scan images without internet access.

Prerequisites

  • You have installed the clairctl command line utility tool.
  • You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file.
  • The disable_updaters and airgap parameters are set to True in your Clair config.yaml file.
  • You have exported the updaters bundle from a Clair instance that has access to the internet.
  • You have transferred the updaters bundle into your disconnected environment.

Procedure

  • Use the clairctl CLI tool to import the updaters bundle into the Clair database that is deployed by OpenShift Container Platform. For example:

    $ ./clairctl --config ./clair-config.yaml import-updaters updates.gz

3.4.3.2. Setting up a self-managed deployment of Clair for a disconnected OpenShift Container Platform cluster

To install the clairctl command line utility for a self-managed Clair deployment on OpenShift Container Platform, you can copy the tool from a Clair container using podman and set its execution permissions. This lets you use clairctl to manage vulnerability database updates in disconnected environments.

Procedure

  1. Install the clairctl program for a self-managed Clair deployment by using the podman cp command, for example:

    $ sudo podman cp clairv4:/usr/bin/clairctl ./clairctl
  2. Set the permissions of the clairctl file so that it can be executed and run by the user, for example:

    $ chmod u+x ./clairctl

To deploy a self-managed Clair container for disconnected OpenShift Container Platform clusters, you can create a configuration directory, configure a Clair configuration file with disable_updaters enabled, and start the container using podman. This lets you run Clair independently in environments without direct internet access.

Prerequisites

  • You have installed the clairctl command line utility tool.

Procedure

  1. Create a folder for your Clair configuration file, for example:

    $ mkdir /etc/clairv4/config/
  2. Create a Clair configuration file with the disable_updaters parameter set to True, for example:

    # ...
    indexer:
      airgap: true
    # ...
    matcher:
      disable_updaters: true
    # ...
  3. Start Clair by using the container image, mounting in the configuration from the file you created:

    $ sudo podman run -it --rm --name clairv4 \
    -p 8081:8081 -p 8088:8088 \
    -e CLAIR_CONF=/clair/config.yaml \
    -e CLAIR_MODE=combo \
    -v /etc/clairv4/config:/clair:Z \
    registry.redhat.io/quay/clair-rhel9:v3.16.1

To export vulnerability database updates from a connected self-managed Clair instance for use in disconnected environments, you can use the clairctl tool with your configuration file to export the updaters bundle. This creates a bundle file that you can transfer to your isolated environment.

Prerequisites

  • You have installed the clairctl command line utility tool.
  • You have deployed Clair.
  • The disable_updaters and airgap parameters are set to True in your Clair config.yaml file.

Procedure

  • From a Clair instance that has access to the internet, use the clairctl CLI tool with your configuration file to export the updaters bundle. For example:

    $ ./clairctl --config ./config.yaml export-updaters updates.gz

To configure access to the Clair database in your disconnected OpenShift Container Platform cluster for a self-managed deployment, you can determine the database service, forward the database port, and update your Clair config.yaml file to use localhost. This lets you import the updaters bundle into the database using the clairctl tool.

Prerequisites

  • You have installed the clairctl command line utility tool.
  • You have deployed Clair.
  • The disable_updaters and airgap parameters are set to True in your Clair config.yaml file.
  • You have exported the updaters bundle from a Clair instance that has access to the internet.

Procedure

  1. Determine your Clair database service by using the oc CLI tool, for example:

    $ oc get svc -n quay-enterprise

    Example output

    NAME                                  TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                             AGE
    example-registry-clair-app            ClusterIP      172.30.224.93    <none>        80/TCP,8089/TCP                     4d21h
    example-registry-clair-postgres       ClusterIP      172.30.246.88    <none>        5432/TCP                            4d21h
    ...

  2. Forward the Clair database port so that it is accessible from the local machine. For example:

    $ oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432
  3. Update your Clair config.yaml file, for example:

    indexer:
        connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable
        layer_scan_concurrency: 5
        migrations: true
        scanlock_retry: 10
        airgap: true
        scanner:
          repo:
            rhel-repository-scanner:
              repo2cpe_mapping_file: /data/repository-to-cpe.json
          package:
            rhel_containerscanner:
              name2repos_mapping_file: /data/container-name-repos-map.json

    where:

    connstring
    Specifies the connection string for the database.

    rhel-repository-scanner
    Specifies the repository scanner configuration.

    rhel_containerscanner
    Specifies the container scanner configuration.

To import vulnerability database updates into your disconnected OpenShift Container Platform cluster for a self-managed deployment, you can use the clairctl tool with your Clair configuration file to import the updaters bundle. This populates the Clair database with vulnerability data so Clair can scan images without internet access.

Prerequisites

  • You have installed the clairctl command line utility tool.
  • You have deployed Clair.
  • The disable_updaters and airgap parameters are set to True in your Clair config.yaml file.
  • You have exported the updaters bundle from a Clair instance that has access to the internet.
  • You have transferred the updaters bundle into your disconnected environment.

Procedure

  • Use the clairctl CLI tool to import the updaters bundle into the Clair database that is deployed by OpenShift Container Platform:

    $ ./clairctl --config ./clair-config.yaml import-updaters updates.gz

3.4.4. Common Product Enumeration mapping in Clair

Clair uses Common Product Enumeration (CPE) mapping files to map RPM packages to security data for accurate vulnerability scanning of Red Hat Enterprise Linux (RHEL) container images. Understanding how Clair utilizes these files ensures that your vulnerability reports remain accurate and comprehensive.

The scanner requires the CPE mapping files to be present and accessible to process RPM packages properly. If these files are missing or inaccessible, RPM packages installed in the container image are skipped during the scanning process.

By default, the Clair indexer includes the repos2cpe and names2repos data files within the Clair container. This allows you to reference local paths such as /data/repository-to-cpe.json without additional external configuration.

Important

While Red Hat Product Security updates CPE files regularly, the versions bundled within the Clair container are only updated during Red Hat Quay releases. This can lead to temporary discrepancies between the latest security data and the versions bundled with your current installation.
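To work around stale bundled data, the mapping files can be refreshed manually. A sketch, assuming the files are published at their current Red Hat Product Security locations and that /data matches the paths in your configuration:

```shell
# Download the latest CPE mapping files into the directory Clair reads from.
curl -sSfo /data/repository-to-cpe.json \
  https://access.redhat.com/security/data/metrics/repository-to-cpe.json
curl -sSfo /data/container-name-repos-map.json \
  https://access.redhat.com/security/data/metrics/container-name-repos-map.json
```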

3.4.4.1. CPE mapping configuration reference

Common Product Enumeration (CPE) mapping configuration defines the fields and file paths used by Clair to associate packages with standardized product identifiers.

Table 3.1. Clair CPE mapping files

  CPE Type       Link to JSON mapping file
  repos2cpe      Red Hat Repository-to-CPE JSON
  names2repos    Red Hat Name-to-Repos JSON

Example configuration

indexer:
  scanner:
    repo:
      rhel-repository-scanner:
        repo2cpe_mapping_file: /data/repository-to-cpe.json
    package:
      rhel_containerscanner:
        name2repos_mapping_file: /data/container-name-repos-map.json

where:

repo2cpe_mapping_file
Specifies the path to the JSON file mapping Red Hat repositories to CPEs.
name2repos_mapping_file
Specifies the path to the JSON file mapping container names to repositories.

3.5. Resizing Managed Storage

To expand storage capacity for your Red Hat Quay on OpenShift Container Platform deployment, you can use the OpenShift Container Platform console to resize the PostgreSQL and Clair PostgreSQL persistent volume claims. This lets you increase storage beyond the default 50 GiB allocation when your registry needs more space.

When deploying Red Hat Quay on OpenShift Container Platform, three distinct persistent volume claims (PVCs) are deployed:

  • One for the PostgreSQL 15 registry.
  • One for the Clair PostgreSQL 15 registry.
  • One that uses NooBaa as backend storage.
Note

The connection between Red Hat Quay and NooBaa is done through the S3 API and the ObjectBucketClaim API in OpenShift Container Platform. Red Hat Quay leverages that API group to create a bucket in NooBaa, obtain access keys, and automatically set everything up. On the backend, or NooBaa, side, that bucket is created inside of the backing store. As a result, NooBaa PVCs are not mounted or connected to Red Hat Quay pods.

Prerequisites

  • You have cluster admin privileges on OpenShift Container Platform.

Procedure

  1. Log in to the OpenShift Container Platform console and select Storage → Persistent Volume Claims.
  2. Select the desired PersistentVolumeClaim for either the PostgreSQL or Clair PostgreSQL database, for example, example-registry-quay-postgres-13.
  3. From the Action menu, select Expand PVC.
  4. Enter the new size of the Persistent Volume Claim and select Expand.

    After a few minutes, the expanded size should reflect in the PVC’s Capacity field.
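The console steps above can also be performed from the CLI with `oc patch`. The following sketch builds the strategic-merge patch payload; the PVC name, namespace, and the 100Gi target size are placeholders for illustration:

```python
import json

# Build the strategic-merge patch that requests a larger volume.
# Apply it with, for example:
#   oc patch pvc <pvc_name> -n <namespace> -p '<patch_json>'
# (<pvc_name>, <namespace>, and the 100Gi size are placeholders.)
patch = {"spec": {"resources": {"requests": {"storage": "100Gi"}}}}
patch_json = json.dumps(patch)
print(patch_json)
```

Note that PVC expansion also requires the storage class to allow volume expansion; shrinking a PVC is not supported.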

3.6. Customizing Default Operator Images

Note

Currently, customizing default Operator images is not supported on IBM Power and IBM Z.

Customizing default Operator images lets you override the default container images used by the Red Hat Quay Operator by setting environment variables in the ClusterServiceVersion object.

Important

Customizing default Operator images is not supported for production Red Hat Quay environments and is only recommended for development or testing purposes. There is no guarantee your deployment will work correctly when using non-default images with the Red Hat Quay Operator.

3.6.1. Environment Variables

The Red Hat Quay Operator uses environment variables to override default container images for components such as base, clair, postgres, and redis. You can set these variables in the ClusterServiceVersion object to customize which images the Operator uses for each component.

Table 3.2. ClusterServiceVersion environment variables

Environment Variable               Component
RELATED_IMAGE_COMPONENT_QUAY       base
RELATED_IMAGE_COMPONENT_CLAIR      clair
RELATED_IMAGE_COMPONENT_POSTGRES   postgres and clair databases
RELATED_IMAGE_COMPONENT_REDIS      redis

Note

Overridden images must be referenced by manifest digest (@sha256:), not by tag (for example, :latest).
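Before editing the ClusterServiceVersion, it can help to check each override value for a digest. The following is an illustrative sketch, not part of the Operator; the regular expression is an assumption that accepts only references pinned with a 64-character sha256 digest:

```python
import re

# Accept only image references pinned by manifest digest (@sha256:<64 hex chars>).
DIGEST_RE = re.compile(r"^[^@\s]+@sha256:[0-9a-f]{64}$")

def is_digest_pinned(image):
    return bool(DIGEST_RE.match(image))

print(is_digest_pinned(
    "quay.io/projectquay/quay@sha256:"
    "c35f5af964431673f4ff5c9e90bdf45f19e38b8742b5903d41c10cc7f6339a6d"))  # pinned by digest
print(is_digest_pinned("quay.io/projectquay/quay:latest"))  # tag reference, rejected
```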

3.6.2. Applying overrides to a running Operator

To override container images for a running Red Hat Quay Operator, you can modify the ClusterServiceVersion object to add environment variables that point to your custom images. This applies the overrides at the Operator level, so all QuayRegistry instances use the same custom images.

Procedure

  1. The ClusterServiceVersion object is Operator Lifecycle Manager’s representation of a running Operator in the cluster. Find the Red Hat Quay Operator’s ClusterServiceVersion by using a Kubernetes UI or the kubectl/oc CLI tool. For example:

    $ oc get clusterserviceversions -n <namespace>
  2. Using the UI, oc edit, or another method, modify the ClusterServiceVersion object to include the environment variables outlined above to point to the override images:

    JSONPath: spec.install.spec.deployments[0].spec.template.spec.containers[0].env

    - name: RELATED_IMAGE_COMPONENT_QUAY
      value: quay.io/projectquay/quay@sha256:c35f5af964431673f4ff5c9e90bdf45f19e38b8742b5903d41c10cc7f6339a6d
    - name: RELATED_IMAGE_COMPONENT_CLAIR
      value: quay.io/projectquay/clair@sha256:70c99feceb4c0973540d22e740659cd8d616775d3ad1c1698ddf71d0221f3ce6
    - name: RELATED_IMAGE_COMPONENT_POSTGRES
      value: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
    - name: RELATED_IMAGE_COMPONENT_REDIS
      value: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
    Copy to Clipboard Toggle word wrap

3.7. AWS S3 CloudFront

To configure AWS S3 CloudFront for your Red Hat Quay backend registry storage, you can create a secret that includes your config.yaml file and the CloudFront signing key. This enables CloudFront content delivery for your registry storage.

Procedure

  • Create a secret that includes your config.yaml file and the CloudFront signing key by entering the following command:

    $ oc create secret generic --from-file config.yaml=./config_awss3cloudfront.yaml --from-file default-cloudfront-signing-key.pem=./default-cloudfront-signing-key.pem test-config-bundle

Chapter 4. Geo-replication

Geo-replication lets multiple geographically distributed Red Hat Quay deployments work as a single registry from the client perspective. This improves push and pull performance in globally distributed setups and provides transparent failover and redirect for clients.

Geo-replication is supported on standalone and Operator-based deployments.

4.1. Geo-replication features

Geo-replication features optimize image push and pull operations by routing pushes to the nearest storage backend and replicating data in the background to other locations. Pulls automatically use the closest available storage engine to maximize performance, with fallback to the source storage if replication is incomplete.

The following are the key features of geo-replication:

  • When geo-replication is configured, container image pushes are written to the preferred storage engine for that Red Hat Quay instance. This is typically the nearest storage backend within the region.
  • After the initial push, image data is replicated in the background to other storage engines.
  • The list of replication locations is configurable, and those locations can be different storage backends.
  • An image pull always uses the closest available storage engine to maximize pull performance.
  • If replication has not been completed yet, the pull uses the source storage backend instead.

4.2. Geo-replication requirements and constraints

To run geo-replication reliably in Red Hat Quay, you must meet strict requirements for shared object storage, networking, load balancing, and external health monitoring. Geo-replication does not provide automatic failover, database replication, or storage awareness, so you must design and manage these behaviors outside of Red Hat Quay.

The following are the requirements and constraints for geo-replication:

  • In geo-replicated setups, Red Hat Quay requires that all regions are able to read and write to every other region's object storage. Object storage must be geographically accessible by all regions.
  • In case of an object storage system failure of one geo-replicating site, that site’s Red Hat Quay deployment must be shut down so that clients are redirected to the remaining site with intact storage systems by a global load balancer. Otherwise, clients will experience pull and push failures.
  • Red Hat Quay has no internal awareness of the health or availability of the connected object storage system. You must configure a global load balancer (LB) to monitor the health of the distributed system and to route traffic to different sites based on their storage status.
  • To check the status of your geo-replication deployment, use the /health/endtoend endpoint, which performs global health monitoring. You must configure the redirect manually using this endpoint. The /health/instance endpoint only checks the health of the local instance.
  • If the object storage system of one site becomes unavailable, there will be no automatic redirect to the remaining storage system, or systems, of the remaining site, or sites.
  • Geo-replication is asynchronous. The permanent loss of a site incurs the loss of the data that has been saved in that site's object storage system but has not yet been replicated to the remaining sites at the time of failure.
  • A single database, and therefore all metadata and Red Hat Quay configuration, is shared across all regions.

    Geo-replication does not replicate the database. In the event of an outage, Red Hat Quay with geo-replication enabled will not failover to another database.

  • A single Redis cache is shared across the entire Red Hat Quay setup and needs to be accessible by all Red Hat Quay pods.
  • The exact same configuration should be used across all regions, with the exception of the storage backend, which can be configured explicitly by using the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable.
  • Geo-replication requires object storage in each region. It does not work with local storage.
  • Each region must be able to access every storage engine in each region, which requires a network path.
  • Alternatively, the storage proxy option can be used.
  • The entire storage backend, for example, all blobs, is replicated. Repository mirroring, by contrast, can be limited to a repository, or an image.
  • All Red Hat Quay instances must share the same entrypoint, typically through a load balancer.
  • All Red Hat Quay instances must have the same set of superusers, as they are defined inside the common configuration file.
  • In geo-replication environments, your Clair configuration can be set to unmanaged. An unmanaged Clair database allows the Red Hat Quay Operator to work in a geo-replicated environment where multiple instances of the Operator must communicate with the same database. For more information, see Advanced Clair configuration.

    If you keep your Clair configuration managed, you must retrieve the configuration file for the deployed Clair instance that is deployed by the Operator. For more information, see Retrieving and decoding the Clair configuration secret for Clair deployments on OpenShift Container Platform.

  • Geo-replication requires SSL/TLS certificates and keys. For more information, see Proof of concept deployment using SSL/TLS certificates.

If the above requirements cannot be met, you should instead use two or more distinct Red Hat Quay deployments and take advantage of repository mirroring functions.
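The health-based routing requirement described above can be sketched as a load-balancer check: a site stays in rotation only while its /health/endtoend endpoint reports success. The response shape used here, a status_code field and per-service booleans under data.services, is an assumption; verify it against your deployment's actual payload:

```python
# Decide whether a site should stay in the global load balancer rotation,
# given a parsed /health/endtoend response. The payload shape is an assumption.
def site_is_healthy(health):
    services = health.get("data", {}).get("services", {})
    return (health.get("status_code") == 200
            and bool(services)
            and all(services.values()))

print(site_is_healthy({"status_code": 200,
                       "data": {"services": {"database": True, "redis": True}}}))
print(site_is_healthy({"status_code": 200,
                       "data": {"services": {"database": True, "redis": False}}}))
```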

4.2.1. Preparing your OpenShift Container Platform environment for geo-replication

To prepare your OpenShift Container Platform environment for geo-replication, you must deploy shared PostgreSQL and Redis instances, create object storage backends for each cluster, and configure a load balancer. This sets up the infrastructure needed for multiple Red Hat Quay deployments to work as a single registry.

Procedure

  1. Deploy a PostgreSQL instance for Red Hat Quay.
  2. Log in to the database by entering the following command:

    psql -U <username> -h <hostname> -p <port> -d <database_name>
  3. Create a database for Red Hat Quay named quay. For example:

    CREATE DATABASE quay;
  4. Enable the pg_trgm extension inside the database:

    \c quay;
    CREATE EXTENSION IF NOT EXISTS pg_trgm;
  5. Deploy a Redis instance:

    Note
    • Deploying a Redis instance might be unnecessary if your cloud provider has its own service.
    • Deploying a Redis instance is required if you are leveraging Builders.
    1. Deploy a VM for Redis.
    2. Verify that it is accessible from the clusters where Red Hat Quay is running.
    3. Ensure that port 6379/TCP is open.
    4. Run Redis inside the instance:

      sudo dnf install -y podman
      podman run -d --name redis -p 6379:6379 redis
  6. Create two object storage backends, one for each cluster. Ideally, one object storage bucket will be close to the first, or primary, cluster, and the other will be closer to the second, or secondary, cluster.
  7. Deploy the clusters with the same config bundle, using environment variable overrides to select the appropriate storage backend for an individual cluster.
  8. Configure a load balancer to provide a single entry point to the clusters.

4.2.2. Configuring geo-replication for Red Hat Quay on OpenShift Container Platform

To configure geo-replication for Red Hat Quay on OpenShift Container Platform, you can create a shared config.yaml file with PostgreSQL, Redis, and storage backend details, create a configBundleSecret, and deploy QuayRegistry resources in each cluster with storage preference overrides. This enables multiple Red Hat Quay deployments to work as a single registry with improved performance across geographic regions.

Prerequisites

  • You have prepared your OpenShift Container Platform environment for geo-replication by following the "Preparing your OpenShift Container Platform environment for geo-replication" procedure.

Procedure

  1. Create a config.yaml file that is shared between clusters. This config.yaml file contains the details for the common PostgreSQL, Redis and storage backends:

    Geo-replication config.yaml file

    SERVER_HOSTNAME: <georep.quayteam.org or any other name> 
    
    DB_CONNECTION_ARGS:
      autorollback: true
      threadlocals: true
    DB_URI: postgresql://postgres:password@10.19.0.1:5432/quay
    BUILDLOGS_REDIS:
      host: 10.19.0.2
      port: 6379
    USER_EVENTS_REDIS:
      host: 10.19.0.2
      port: 6379
    DATABASE_SECRET_KEY: 0ce4f796-c295-415b-bf9d-b315114704b8
    DISTRIBUTED_STORAGE_CONFIG:
      usstorage:
        - GoogleCloudStorage
        - access_key: GOOGQGPGVMASAAMQABCDEFG
          bucket_name: georep-test-bucket-0
          secret_key: AYWfEaxX/u84XRA2vUX5C987654321
          storage_path: /quaygcp
      eustorage:
        - GoogleCloudStorage
        - access_key: GOOGQGPGVMASAAMQWERTYUIOP
          bucket_name: georep-test-bucket-1
          secret_key: AYWfEaxX/u84XRA2vUX5Cuj12345678
          storage_path: /quaygcp
    DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
      - usstorage
      - eustorage
    DISTRIBUTED_STORAGE_PREFERENCE:
      - usstorage
      - eustorage
    FEATURE_STORAGE_REPLICATION: true

    where:

    SERVER_HOSTNAME:: Specifies the hostname of the global load balancer.

  2. Create the configBundleSecret custom resource (CR) by entering the following command:

    $ oc create secret generic --from-file config.yaml=./config.yaml georep-config-bundle
  3. In each of the clusters, set the configBundleSecret and use the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable override to configure the appropriate storage for that cluster. For example:

    Note

    The config.yaml file between both deployments must match. If making a change to one cluster, it must also be changed in the other.

    US cluster QuayRegistry example

    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: example-registry
      namespace: quay-enterprise
    spec:
      configBundleSecret: georep-config-bundle
      components:
        - kind: objectstorage
          managed: false
        - kind: route
          managed: true
        - kind: tls
          managed: false
        - kind: postgres
          managed: false
        - kind: clairpostgres
          managed: false
        - kind: redis
          managed: false
        - kind: quay
          managed: true
          overrides:
            env:
            - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE
              value: usstorage
        - kind: mirror
          managed: true
          overrides:
            env:
            - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE
              value: usstorage

    Note

    Because SSL/TLS is unmanaged, and the route is managed, you must supply the certificates directly in the config bundle. For more information, see Configuring SSL/TLS and Routes.

    European cluster

    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: example-registry
      namespace: quay-enterprise
    spec:
      configBundleSecret: georep-config-bundle
      components:
        - kind: objectstorage
          managed: false
        - kind: route
          managed: true
        - kind: tls
          managed: false
        - kind: postgres
          managed: false
        - kind: clairpostgres
          managed: false
        - kind: redis
          managed: false
        - kind: quay
          managed: true
          overrides:
            env:
            - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE
              value: eustorage
        - kind: mirror
          managed: true
          overrides:
            env:
            - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE
              value: eustorage

    Note

    Because SSL/TLS is unmanaged, and the route is managed, you must supply the certificates directly in the config bundle. For more information, see Configuring SSL and TLS for Red Hat Quay.
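Because the config.yaml file must match across clusters, it can help to sanity-check it before creating the configBundleSecret. The following sketch verifies that every name in DISTRIBUTED_STORAGE_PREFERENCE and DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS refers to a configured storage engine; the helper and its checks are illustrative, not part of Red Hat Quay:

```python
# Illustrative consistency check for the shared geo-replication config,
# modeled on the example config.yaml above.
config = {
    "DISTRIBUTED_STORAGE_CONFIG": {"usstorage": {}, "eustorage": {}},
    "DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS": ["usstorage", "eustorage"],
    "DISTRIBUTED_STORAGE_PREFERENCE": ["usstorage", "eustorage"],
    "FEATURE_STORAGE_REPLICATION": True,
}

def unknown_locations(cfg):
    """Return referenced storage names that have no configured engine."""
    engines = set(cfg["DISTRIBUTED_STORAGE_CONFIG"])
    referenced = (cfg.get("DISTRIBUTED_STORAGE_PREFERENCE", [])
                  + cfg.get("DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS", []))
    return sorted({name for name in referenced if name not in engines})

print(unknown_locations(config))  # an empty list means the names line up
```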

4.2.3. Mixed storage for geo-replication

Geo-replication supports using different storage backends for replication targets, such as AWS S3 in a public cloud and Ceph on premise. All Red Hat Quay pods and cluster nodes must have access to all storage backends.

Because geo-replication supports multiple replication targets, it is recommended to use a VPN, or a token pair with access restricted to the specified buckets only, to meet security requirements. This gives the public cloud instance of Red Hat Quay access to on-premise storage while keeping the network encrypted, protected, and governed by ACLs. If you cannot implement these security measures, it might be preferable to deploy two distinct Red Hat Quay registries and to use repository mirroring as an alternative to geo-replication.

4.2.4. Upgrading a geo-replicated Red Hat Quay on OpenShift Container Platform deployment

To upgrade your geo-replicated Red Hat Quay on OpenShift Container Platform deployment, you must stop operations, scale down secondary systems, upgrade the primary system, then upgrade secondary systems. This ensures a safe upgrade process with minimal downtime across your geo-replicated registry.

Important
  • When upgrading a geo-replicated Red Hat Quay on OpenShift Container Platform deployment to the next y-stream release (for example, from one Red Hat Quay 3.y release to the next), you must stop operations before upgrading.
  • There is intermittent downtime when upgrading from one y-stream release to the next.
  • It is highly recommended to back up your Red Hat Quay on OpenShift Container Platform deployment before upgrading.

The following procedure assumes that you are running the Red Hat Quay registry on three or more systems. For this procedure, three systems named System A, System B, and System C are used. System A serves as the primary system in which the Red Hat Quay Operator is deployed.

Procedure

  1. On System B and System C, scale down your Red Hat Quay registry. This is done by disabling auto scaling and overriding the replica count for Red Hat Quay, mirror workers, and Clair, if it is managed. Use the following quayregistry.yaml file as a reference:

    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: registry
      namespace: ns
    spec:
      components:
        - kind: horizontalpodautoscaler
          managed: false
        - kind: quay
          managed: true
          overrides:
            replicas: 0
        - kind: clair
          managed: true
          overrides:
            replicas: 0
        - kind: mirror
          managed: true
          overrides:
            replicas: 0

    where:

    managed: false:: Disables auto scaling of the Quay, Clair, and mirror workers.
    overrides:: Sets the replica count to 0 for components that access the database and object storage.

    Note

    You must keep the Red Hat Quay registry running on System A. Do not update the quayregistry.yaml file on System A.

  2. Wait for the registry-quay-app, registry-quay-mirror, and registry-clair-app pods to disappear. Enter the following command to check their status:

    $ oc get pods -n <quay-namespace>

    Example output

    quay-operator.v3.7.1-6f9d859bd-p5ftc               1/1     Running     0             12m
    quayregistry-clair-postgres-7487f5bd86-xnxpr       1/1     Running     1 (12m ago)   12m
    quayregistry-quay-app-upgrade-xq2v6                0/1     Completed   0             12m
    quayregistry-quay-redis-84f888776f-hhgms           1/1     Running     0             12m

  3. On System A, initiate a Red Hat Quay upgrade to the latest y-stream version. This is a manual process. For more information about upgrading installed Operators, see Upgrading installed Operators. For more information about Red Hat Quay upgrade paths, see Upgrading the Red Hat Quay Operator.
  4. After the new Red Hat Quay registry is installed, the necessary upgrades on the cluster are automatically completed. Afterwards, new Red Hat Quay pods are scheduled and started with the latest y-stream version.
  5. Confirm that the update has properly worked by navigating to the Red Hat Quay UI:

    1. In the OpenShift console, navigate to Operators → Installed Operators, and click the Registry Endpoint link.

      Important

      Do not execute the following step until the Red Hat Quay UI is available. Do not upgrade the Red Hat Quay registry on System B and on System C until the UI is available on System A.

  6. After confirming that the update has properly worked on System A, initiate the Red Hat Quay upgrade on System B and on System C. The Operator upgrade results in an upgraded Red Hat Quay installation, and the pods are restarted.

    Note

    Because the database schema is correct for the new y-stream installation, the new pods on System B and on System C should quickly start.

  7. After updating, revert the changes made in step 1 of this procedure by removing overrides for the components. For example:

    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: registry
      namespace: ns
    spec:
      components:
        - kind: horizontalpodautoscaler
          managed: true
        - kind: quay
          managed: true
        - kind: clair
          managed: true
        - kind: mirror
          managed: true

    where:

    kind: horizontalpodautoscaler:: Set managed back to true if it was true before the upgrade procedure, or if you want Red Hat Quay to scale automatically in case of a resource shortage.

4.2.5. Removing a geo-replicated site from your Red Hat Quay on OpenShift Container Platform deployment

To remove a geo-replicated site from your Red Hat Quay on OpenShift Container Platform deployment, you must sync all blobs between sites, remove the storage configuration entry, and use the removelocation utility to permanently delete the site. This action cannot be undone, so ensure that all data is synchronized before proceeding.

Prerequisites

  • You are logged into OpenShift Container Platform.
  • You have configured Red Hat Quay geo-replication with at least two sites, for example, usstorage and eustorage.
  • Each site has its own Organization, Repository, and image tags.

Procedure

  1. Sync the blobs between all of your defined sites by running the following command:

    $ python -m util.backfillreplication
    Warning

    Prior to removing storage engines from your Red Hat Quay config.yaml file, you must ensure that all blobs are synced between all defined sites.

    When running this command, replication jobs are created and picked up by the replication worker. If there are blobs that need to be replicated, the script returns the UUIDs of the blobs that will be replicated. If you run this command multiple times and the output is empty, it does not mean that the replication process is done; it means that there are no more blobs left to queue for replication. Use appropriate judgment before proceeding, because the time replication takes depends on the number of blobs detected.

    Alternatively, you can use a third-party cloud tool, such as Microsoft Azure, to check the synchronization status.

    This step must be completed before proceeding.

  2. In your Red Hat Quay config.yaml file for site usstorage, remove the DISTRIBUTED_STORAGE_CONFIG entry for the eustorage site.
  3. Identify your Red Hat Quay application pods by entering the following command:

    $ oc get pod -n <quay_namespace>

    Example output

    quay390usstorage-quay-app-5779ddc886-2drh2
    quay390eustorage-quay-app-66969cd859-n2ssm

  4. Open an interactive shell session in the usstorage pod by entering the following command:

    $ oc rsh quay390usstorage-quay-app-5779ddc886-2drh2
  5. Permanently remove the eustorage site by entering the following command:

    Important

    The following action cannot be undone. Use with caution.

    sh-4.4$ python -m util.removelocation eustorage

    Example output

    WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y
    Deleted placement 30
    Deleted placement 31
    Deleted placement 32
    Deleted placement 33
    Deleted location eustorage
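The re-run guidance in step 1 can be sketched as a loop: keep invoking the backfill script until it stops returning blob UUIDs to queue. Here run_backfill is a hypothetical stand-in for invoking `python -m util.backfillreplication` and collecting the returned UUIDs; as noted above, an empty result only means nothing more is queued, not that queued replication has finished:

```python
# Illustrative drain loop around a backfill runner (a stand-in, not Quay code).
def drain_backfill(run_backfill):
    """Call run_backfill() until it returns no UUIDs; return the total queued."""
    queued = 0
    while True:
        uuids = run_backfill()
        if not uuids:
            return queued
        queued += len(uuids)

# Simulated runs: two batches of blobs to queue, then nothing left.
batches = iter([["uuid-1", "uuid-2"], ["uuid-3"], []])
print(drain_backfill(lambda: next(batches)))
```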

Chapter 5. Backing up and restoring Red Hat Quay

To protect Red Hat Quay data and enable recovery from failures, you can back up and restore Red Hat Quay instances managed by the Red Hat Quay Operator on OpenShift Container Platform.

5.1. Red Hat Quay read-only mode

To maintain service availability during backup and restore operations, you can enable read-only mode for your Red Hat Quay deployment on OpenShift Container Platform. Read-only mode restricts write access to ensure data integrity while keeping the registry online.

When backing up and restoring, you are required to scale down your Red Hat Quay on OpenShift Container Platform deployment. This results in service unavailability during the backup period which, in some cases, might be unacceptable. Enabling read-only mode ensures service availability during the backup and restore procedure for Red Hat Quay on OpenShift Container Platform deployments.

Note

In some cases, a read-only option for Red Hat Quay is not possible since it requires inserting a service key and other manual configuration changes. As an alternative to read-only mode, Red Hat Quay administrators might consider enabling the DISABLE_PUSHES feature. When this field is set to True, users are unable to push images or image tags to the registry when using the CLI. Enabling DISABLE_PUSHES differs from read-only mode because the database is not set as read-only when it is enabled.

This field might be useful in some situations, such as when Red Hat Quay administrators want to calculate their registry’s quota and disable image pushing until the calculation has completed. With this method, administrators can avoid putting the whole registry in read-only mode, which affects the database, so that most operations can still be done.

For information about enabling this configuration field, see Miscellaneous configuration fields.
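A minimal sketch of the alternative described in the note, as a config.yaml fragment:

```yaml
# config.yaml fragment: reject pushes while keeping the database writable.
# Unlike read-only mode, pulls and most other operations continue to work.
DISABLE_PUSHES: true
```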

5.1.1. Prerequisites for enabling read-only mode

You must meet the following prerequisites to enable read-only mode for Red Hat Quay on OpenShift Container Platform:

  • If you are using Red Hat Enterprise Linux (RHEL) 7.x:

    • You have enabled the Red Hat Software Collections List (RHSCL).
    • You have installed Python 3.6.
    • You have downloaded the virtualenv package.
    • You have installed the git CLI.
  • If you are using Red Hat Enterprise Linux (RHEL) 8:

    • You have installed Python 3 on your machine.
    • You have downloaded the python3-virtualenv package.
    • You have installed the git CLI.
  • You have cloned the https://github.com/quay/quay.git repository.
  • You have installed the oc CLI.
  • You have access to the cluster with cluster-admin privileges.

5.1.2. Creating service keys

To enable Red Hat Quay to communicate with components and to sign completed requests, such as for image scanning and login, you can create service keys. Access the Quay container pod and run the key pair generation script to create the necessary keys.

Procedure

  1. Enter the following command to obtain a list of Red Hat Quay pods:

    $ oc get pods -n <namespace>

    Example output

    example-registry-clair-app-7dc7ff5844-4skw5           0/1     Error                    0             70d
    example-registry-clair-app-7dc7ff5844-nvn4f           1/1     Running                  0             31d
    example-registry-clair-app-7dc7ff5844-x4smw           0/1     ContainerStatusUnknown   6 (70d ago)   70d
    example-registry-clair-app-7dc7ff5844-xjnvt           1/1     Running                  0             60d
    example-registry-clair-postgres-547d75759-75c49       1/1     Running                  0             70d
    example-registry-quay-app-76c8f55467-52wjz            1/1     Running                  0             70d
    example-registry-quay-app-76c8f55467-hwz4c            1/1     Running                  0             70d
    example-registry-quay-app-upgrade-57ghs               0/1     Completed                1             70d
    example-registry-quay-database-7c55899f89-hmnm6       1/1     Running                  0             70d
    example-registry-quay-mirror-6cccbd76d-btsnb          1/1     Running                  0             70d
    example-registry-quay-mirror-6cccbd76d-x8g42          1/1     Running                  0             70d
    example-registry-quay-redis-85cbdf96bf-4vk5m          1/1     Running                  0             70d

  2. Open a remote shell session to the Quay container by entering the following command:

    $ oc rsh example-registry-quay-app-76c8f55467-52wjz
  3. Create the necessary service keys by entering the following command:

    sh-4.4$ python3 tools/generatekeypair.py quay-readonly

    Example output

    Writing public key to quay-readonly.jwk
    Writing key ID to quay-readonly.kid
    Writing private key to quay-readonly.pem

5.1.3. Adding keys to the PostgreSQL database

To enable read-only mode configuration in Red Hat Quay, you can add service keys to the PostgreSQL database. Use SQL INSERT statements to store the keys and their approval information.

Prerequisites

  • You have created the service keys.

Procedure

  1. Enter the following command to enter your Red Hat Quay database environment:

    $ oc rsh example-registry-quay-database-7c55899f89-hmnm6 psql -U <database_username> -d <database_name>
  2. Display the approval types and associated notes of the servicekeyapproval table by entering the following command:

    quay=# select * from servicekeyapproval;

    Example output

     id | approver_id |          approval_type           |       approved_date        | notes
    ----+-------------+----------------------------------+----------------------------+-------
      1 |             | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:47:48.181347 |
      2 |             | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:47:55.808087 |
      3 |             | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:49:04.27095  |
      4 |             | ServiceKeyApprovalType.AUTOMATIC | 2024-05-07 03:49:05.46235  |
      5 |           1 | ServiceKeyApprovalType.SUPERUSER | 2024-05-07 04:05:10.296796 |
    ...

  3. Add the service key to your Red Hat Quay database by entering the following query:

    quay=# INSERT INTO servicekey
      (name, service, metadata, kid, jwk, created_date, expiration_date)
      VALUES ('quay-readonly',
               'quay',
               '{}',
               '{<contents_of_.kid_file>}',
               '{<contents_of_.jwk_file>}',
               '{<created_date_of_read-only>}',
               '{<expiration_date_of_read-only>}');

    Example output

    INSERT 0 1

  4. Next, add the key approval with the following query:

    quay=# INSERT INTO servicekeyapproval (approval_type, approved_date, notes)
      VALUES ('ServiceKeyApprovalType.SUPERUSER', CURRENT_DATE,
               '<include_notes_here_on_why_this_is_being_added>');

    Example output

    INSERT 0 1

  5. Set the approval_id field on the created service key row to the id field from the created service key approval. The following UPDATE statement uses a subquery to look up the necessary ID:

    UPDATE servicekey
    SET approval_id = (SELECT id FROM servicekeyapproval WHERE approval_type = 'ServiceKeyApprovalType.SUPERUSER')
    WHERE name = 'quay-readonly';

    Example output

    UPDATE 1
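The INSERT statement from step 3 can be generated from the key files with shell command substitution rather than pasting file contents by hand. The following is a minimal sketch: the file contents below are dummies standing in for the real output of generatekeypair.py, and the one-year expiration interval is an assumption, not a value required by Red Hat Quay.

```shell
# Stand-ins for the real files written by generatekeypair.py:
printf 'example-kid' > quay-readonly.kid
printf '{"kty":"RSA"}' > quay-readonly.jwk

# Splice the file contents into the INSERT statement. The unquoted
# heredoc delimiter allows $(cat ...) to expand:
cat > insert-servicekey.sql <<SQL
INSERT INTO servicekey
  (name, service, metadata, kid, jwk, created_date, expiration_date)
  VALUES ('quay-readonly', 'quay', '{}',
          '$(cat quay-readonly.kid)',
          '$(cat quay-readonly.jwk)',
          NOW(), NOW() + INTERVAL '1 year');
SQL
```

The resulting file can then be run with psql -f insert-servicekey.sql from inside the database pod.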

To enable read-only mode in Red Hat Quay and safely manage registry operations such as backup and restore, you can modify the configuration secret and restart the Quay container.

Important

Deploying Red Hat Quay on OpenShift Container Platform in read-only mode requires you to modify the secrets stored inside of your OpenShift Container Platform cluster. It is highly recommended that you create a backup of the secret prior to making changes to it.

Prerequisites

  • You have created the service keys and added them to your PostgreSQL database.

Procedure

  1. Read the secret name of your Red Hat Quay on OpenShift Container Platform deployment by entering the following command:

    $ oc get deployment -o yaml <quay_main_app_deployment_name>
  2. Use the base64 command to encode the quay-readonly.kid and quay-readonly.pem files by entering the following commands:

    $ base64 -w0 quay-readonly.kid

    Example output

    ZjUyNDFm...

    $ base64 -w0 quay-readonly.pem

    Example output

    LS0tLS1CRUdJTiBSU0E...

  3. Obtain the current configuration bundle and secret by entering the following command. Save the output to a file called config.yaml:

    $ oc get secret quay-config-secret-name -o json | jq '.data."config.yaml"' | cut -d '"' -f2 | base64 -d > config.yaml
  4. Edit the config.yaml file and add the following information to enable read-only mode:

    # ...
    REGISTRY_STATE: readonly
    INSTANCE_SERVICE_KEY_KID_LOCATION: 'conf/stack/quay-readonly.kid'
    INSTANCE_SERVICE_KEY_LOCATION: 'conf/stack/quay-readonly.pem'
    # ...
  5. Save the file and base64 encode it by entering the following command:

    $ base64 -w0 config.yaml
  6. Scale down the Red Hat Quay Operator pods to 0 by entering the following command. This ensures that the Operator does not reconcile the secret after editing it.

    $ oc scale --replicas=0 deployment quay-operator -n openshift-operators
  7. Edit the secret to include the new content by entering the following command:

    $ oc edit secret quay-config-secret-name -n quay-namespace
    # ...
    data:
      "quay-readonly.kid": "ZjUyNDFm..."
      "quay-readonly.pem": "LS0tLS1CRUdJTiBSU0E..."
      "config.yaml": "QUNUSU9OX0xPR19..."
    # ...

    With your Red Hat Quay on OpenShift Container Platform deployment in read-only mode, you can safely manage your registry’s operations and perform such actions as backup and restore.
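The encode and decode steps above can be exercised end to end without a cluster. The following sketch mimics the secret's JSON layout with a placeholder config file, and uses python3's standard library in place of jq in case jq is not installed; the file names and contents are illustrative only.

```shell
# Placeholder standing in for the real config.yaml:
printf 'REGISTRY_STATE: readonly\n' > config.yaml

# Encode with -w0 (no line wrapping), as in the steps above, and wrap
# the result in a secret-style JSON document:
printf '{"data": {"config.yaml": "%s"}}\n' "$(base64 -w0 config.yaml)" > secret.json

# Extract and decode the config.yaml entry, mirroring the jq pipeline:
python3 - <<'EOF'
import base64, json

with open("secret.json") as f:
    secret = json.load(f)
decoded = base64.b64decode(secret["data"]["config.yaml"])
with open("extracted-config.yaml", "wb") as f:
    f.write(decoded)
EOF

# The decoded file matches the original byte for byte:
cmp config.yaml extracted-config.yaml && echo "round-trip OK"
```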

To exit read-only mode and restore normal operations in Red Hat Quay, you can remove the read-only settings from the config.yaml file and scale the Operator deployment back up.

Note

Depending on your needs, you might want to wait to scale the Red Hat Quay deployment back up until after you have backed up and restored your registry.

Procedure

  1. Edit the config.yaml file and remove the following information:

    # ...
    REGISTRY_STATE: readonly
    INSTANCE_SERVICE_KEY_KID_LOCATION: 'conf/stack/quay-readonly.kid'
    INSTANCE_SERVICE_KEY_LOCATION: 'conf/stack/quay-readonly.pem'
    # ...
  2. Scale the Red Hat Quay Operator back up by entering the following command:

    $ oc scale --replicas=1 deployment quay-operator -n openshift-operators

To create backups of your Red Hat Quay on OpenShift Container Platform deployment for disaster recovery, you can back up the configuration, PostgreSQL database, and object storage. Regular backups ensure you can restore your registry to a previous state if needed.

Database backups should be performed regularly using either the supplied tools on the PostgreSQL image or your own backup infrastructure. The Red Hat Quay Operator does not ensure that the PostgreSQL database is backed up.

Note

This procedure covers backing up your Red Hat Quay PostgreSQL database. It does not cover backing up the Clair PostgreSQL database. Backing up the Clair PostgreSQL database is not needed because it can be recreated. If you opt to recreate it from scratch, you must wait for the information to be repopulated after all images inside of your Red Hat Quay deployment are scanned. During this downtime, security reports are unavailable.

If you are considering backing up the Clair PostgreSQL database, you must consider that its size is dependent upon the number of images stored inside of Red Hat Quay. As a result, the database can be extremely large.

Prerequisites

  • A healthy Red Hat Quay deployment on OpenShift Container Platform using the Red Hat Quay Operator. The status condition Available is set to True.
  • The components quay, postgres, and objectstorage are set to managed: true.
  • If the component clair is set to managed: true, the component clairpostgres is also set to managed: true (starting with Red Hat Quay v3.7 or later).
Note

If your deployment contains partially unmanaged database or storage components and you are using external services for PostgreSQL or S3-compatible object storage to run your Red Hat Quay deployment, you must refer to the service provider or vendor documentation to create a backup of the data. You can refer to the tools described in this guide as a starting point for how to back up your external PostgreSQL database or object storage.

5.2.2. Red Hat Quay configuration backup

To back up your Red Hat Quay configuration for disaster recovery, you can export the QuayRegistry custom resource, back up the managed secret keys, and save the config bundle and config.yaml files. This procedure creates backup files that you can use to restore your registry configuration.

Procedure

  1. Back up the QuayRegistry custom resource by exporting it. Enter the following command:

    $ oc get quayregistry <quay_registry_name> -n <quay_namespace> -o yaml > quay-registry.yaml
  2. Edit the resulting quay-registry.yaml file and remove the status section and the following metadata fields:

      metadata.creationTimestamp
      metadata.finalizers
      metadata.generation
      metadata.resourceVersion
      metadata.uid
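If you export the resource as JSON instead of YAML, the status section and the volatile metadata fields can be stripped with python3's standard library rather than by hand. The following is a sketch using a sample manifest in place of real oc output; in actual use, replace the sample file with the output of oc get quayregistry <quay_registry_name> -n <quay_namespace> -o json.

```shell
# Sample export standing in for 'oc get quayregistry ... -o json':
cat > quay-registry.json <<'EOF'
{"apiVersion": "quay.redhat.com/v1", "kind": "QuayRegistry",
 "metadata": {"name": "registry", "creationTimestamp": "2024-01-01T00:00:00Z",
              "finalizers": ["quay-operator/finalizer"], "generation": 2,
              "resourceVersion": "12345", "uid": "abc-123"},
 "spec": {"configBundleSecret": "config-bundle-secret"},
 "status": {"conditions": []}}
EOF

python3 - <<'EOF'
import json

with open("quay-registry.json") as f:
    obj = json.load(f)
obj.pop("status", None)                       # drop the status section
for field in ("creationTimestamp", "finalizers", "generation",
              "resourceVersion", "uid"):      # drop volatile metadata
    obj["metadata"].pop(field, None)
with open("quay-registry-clean.json", "w") as f:
    json.dump(obj, f, indent=2)
EOF
```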
  3. Back up the managed keys secret by entering the following command:

    Note

    If you are running a version older than Red Hat Quay 3.7.0, this step can be skipped. Some secrets are automatically generated while deploying Red Hat Quay for the first time. These are stored in a secret called <quay_registry_name>-quay-registry-managed-secret-keys in the namespace of the QuayRegistry resource.

    $ oc get secret -n <quay_namespace> <quay_registry_name>-quay-registry-managed-secret-keys -o yaml > managed_secret_keys.yaml
  4. Edit the resulting managed_secret_keys.yaml file and remove the entry metadata.ownerReferences. Your managed_secret_keys.yaml file should look similar to the following:

    apiVersion: v1
    kind: Secret
    type: Opaque
    metadata:
      name: <quay_registry_name>-quay-registry-managed-secret-keys
      namespace: <quay_namespace>
    data:
      CONFIG_EDITOR_PW: <redacted>
      DATABASE_SECRET_KEY: <redacted>
      DB_ROOT_PW: <redacted>
      DB_URI: <redacted>
      SECRET_KEY: <redacted>
      SECURITY_SCANNER_V4_PSK: <redacted>

    All information under the data property should remain the same.

  5. Export the current Quay configuration secret to a file by entering the following command:

    $ oc get secret -n <quay-namespace>  $(oc get quayregistry <quay_registry_name> -n <quay_namespace>  -o jsonpath='{.spec.configBundleSecret}') -o yaml > config-bundle.yaml
  6. Back up the /conf/stack/config.yaml file mounted inside of the Quay pods by entering the following command:

    $ oc exec -it <quay_pod_name> -- cat /conf/stack/config.yaml > quay_config.yaml
  7. Obtain the Quay database name:

    $ oc -n <quay_namespace> rsh $(oc get pod -l app=quay -o NAME -n <quay_namespace> |head -n 1) cat /conf/stack/config.yaml|awk -F"/" '/^DB_URI/ {print $4}'

    Example output

    quayregistry-quay-database
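The awk invocation in step 7 splits the DB_URI line on / characters, which makes the fourth field the database name. You can verify the extraction locally against a sample line; the credentials and host below are placeholders, not real values.

```shell
# Sample DB_URI line of the form found in config.yaml:
printf 'DB_URI: postgresql://quayuser:secret@quay-db:5432/quayregistry-quay-database\n' > config-sample.yaml

# Fields after splitting on "/": 1 = "DB_URI: postgresql:", 2 = "",
# 3 = "quayuser:secret@quay-db:5432", 4 = database name.
awk -F"/" '/^DB_URI/ {print $4}' config-sample.yaml
# prints: quayregistry-quay-database
```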

5.2.3. Scaling down the Red Hat Quay deployment

To create a consistent backup of your Red Hat Quay deployment, you must scale down the deployment by disabling auto scaling and setting replica counts to zero. This ensures the registry is in a quiescent state before backing up.

Important

This step is needed to create a consistent backup of the state of your Red Hat Quay deployment. Do not omit this step, including in setups where PostgreSQL databases and/or S3-compatible object storage are provided by external services (unmanaged by the Red Hat Quay Operator).

Procedure

  1. Scale down the Red Hat Quay deployment by disabling auto scaling and overriding the replica count for Red Hat Quay, mirror workers, and Clair (if managed). For example:

    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: registry
      namespace: ns
    spec:
      components:
        - kind: horizontalpodautoscaler
          managed: false
        - kind: quay
          managed: true
          overrides:
            replicas: 0
        - kind: clair
          managed: true
          overrides:
            replicas: 0
        - kind: mirror
          managed: true
          overrides:
            replicas: 0

    where:

    managed: false:: Disables auto scaling of Quay, Clair and Mirroring workers.

    overrides:: Sets the replica count to 0 for components accessing the database and objectstorage.

  2. Wait for the registry-quay-app, registry-quay-mirror and registry-clair-app pods (depending on which components you set to be managed by the Red Hat Quay Operator) to disappear. You can check their status by entering the following command:

    $ oc get pods -n <quay_namespace>

    Example output

    quay-operator.v3.7.1-6f9d859bd-p5ftc               1/1     Running     0             12m
    quayregistry-clair-postgres-7487f5bd86-xnxpr       1/1     Running     1 (12m ago)   12m
    quayregistry-quay-app-upgrade-xq2v6                0/1     Completed   0             12m
    quayregistry-quay-database-859d5445ff-cqthr        1/1     Running     0             12m
    quayregistry-quay-redis-84f888776f-hhgms           1/1     Running     0             12m

To back up your Red Hat Quay managed database for disaster recovery, you can identify the PostgreSQL pod and use pg_dump to create a backup SQL file. This procedure creates a backup that you can use to restore your database.

Note

If your Red Hat Quay deployment is configured with external, or unmanaged, PostgreSQL databases, refer to your vendor’s documentation for how to create a consistent backup of these databases.

Procedure

  1. Identify the Red Hat Quay PostgreSQL pod name by entering the following command:

    $ oc get pod -l quay-component=postgres -n <quay_namespace> -o jsonpath='{.items[0].metadata.name}'

    Example output:

    quayregistry-quay-database-59f54bb7-58xs7

  2. Dump the database to a local backup file by entering the following command:

    $ oc -n <quay_namespace> exec quayregistry-quay-database-59f54bb7-58xs7 -- /usr/bin/pg_dump -C quayregistry-quay-database  > backup.sql

To back up your Red Hat Quay managed object storage for disaster recovery, you can export AWS credentials from secrets and use the aws s3 sync command to copy all blobs to a local directory. This procedure creates a backup of your registry’s object storage data.

The instructions in this section apply to the following configurations:

  • Standalone, multi-cloud object gateway configurations
  • OpenShift Data Foundation storage from which the Red Hat Quay Operator provisioned an S3 object storage bucket, through the ObjectBucketClaim API.
Note

You can also use rclone or s3cmd instead of the AWS command line utility.

Procedure

  1. Decode and export the AWS_ACCESS_KEY_ID by entering the following command:

    $ export AWS_ACCESS_KEY_ID=$(oc get secret -l app=noobaa -n <quay-namespace>  -o jsonpath='{.items[0].data.AWS_ACCESS_KEY_ID}' |base64 -d)
  2. Decode and export the AWS_SECRET_ACCESS_KEY by entering the following command:

    $ export AWS_SECRET_ACCESS_KEY=$(oc get secret -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.AWS_SECRET_ACCESS_KEY}' |base64 -d)
  3. Create a new directory by entering the following command:

    $ mkdir blobs
  4. Copy all blobs to the directory by entering the following command:

    $ aws s3 sync --no-verify-ssl --endpoint https://$(oc get route s3 -n openshift-storage  -o jsonpath='{.spec.host}')  s3://$(oc get cm -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.BUCKET_NAME}') ./blobs

5.2.6. Scaling up the Red Hat Quay deployment

To restore your Red Hat Quay deployment to normal operation after scaling down, you can re-enable auto scaling and remove replica overrides for quay, mirror workers, and Clair. This restores your registry to full capacity after completing backup or maintenance tasks.

Procedure

  1. Scale up the Red Hat Quay deployment by re-enabling auto scaling, if desired, and removing the replica overrides for Quay, mirror workers and Clair as applicable. For example:

    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: registry
      namespace: ns
    spec:
      components:
        - kind: horizontalpodautoscaler
          managed: true
        - kind: quay
          managed: true
        - kind: clair
          managed: true
        - kind: mirror
          managed: true

    where:

    managed: true:: Re-enables auto scaling of Quay, Clair, and mirror workers.

    overrides:: Removing the replica overrides scales the Quay components back up.

  2. Check the status of the Red Hat Quay deployment by entering the following command:

    $ oc wait quayregistry registry --for=condition=Available=true -n <quay_namespace>

    Example output:

    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      ...
      name: registry
      namespace: <quay-namespace>
      ...
    spec:
      ...
    status:
      - lastTransitionTime: '2022-06-20T05:31:17Z'
        lastUpdateTime: '2022-06-20T17:31:13Z'
        message: All components reporting as healthy
        reason: HealthChecksPassing
        status: 'True'
        type: Available

5.3. Restoring Red Hat Quay

To restore your Red Hat Quay registry when the Operator manages the database, you can restore the configuration, database, and object storage from backups. This procedure restores your registry to a previous state after performing the backup process.

5.3.1. Prerequisites for restoring Red Hat Quay

The following prerequisites are required to restore Red Hat Quay:

  • Red Hat Quay is deployed on OpenShift Container Platform using the Red Hat Quay Operator.
  • A backup of the Red Hat Quay configuration managed by the Red Hat Quay Operator has been created following the instructions in the Backing up Red Hat Quay section.
  • The object storage bucket used by Red Hat Quay has been backed up.
  • The components quay, postgres, and objectstorage are set to managed: true.
  • If the component clair is set to managed: true, the component clairpostgres is also set to managed: true.
  • There is no running Red Hat Quay deployment managed by the Red Hat Quay Operator in the target namespace on your OpenShift Container Platform cluster.
Note

If your deployment contains partially unmanaged database or storage components and you are using external services for PostgreSQL or S3-compatible object storage to run your Red Hat Quay deployment, you must refer to the service provider or vendor documentation to restore their data from a backup prior to restoring Red Hat Quay.

5.3.2. Restoring Red Hat Quay from a backup

To restore your Red Hat Quay registry and configuration from a backup, you can restore the configuration bundle, managed secret keys, and QuayRegistry custom resource. This procedure restores your registry to a previous state using backup files created with the backup process.

Prerequisites

  • You have backed up your Red Hat Quay registry and configuration.
  • You have the backup files config-bundle.yaml, managed_secret_keys.yaml, and quay-registry.yaml.

Procedure

  1. Restore the backed up Red Hat Quay configuration by entering the following command:

    $ oc create -f ./config-bundle.yaml
    Important

    If you receive the error Error from server (AlreadyExists): error when creating "./config-bundle.yaml": secrets "config-bundle-secret" already exists, you must delete your existing resource with $ oc delete Secret config-bundle-secret -n <quay-namespace> and recreate it with $ oc create -f ./config-bundle.yaml.

  2. Restore the generated keys from the backup by entering the following command:

    $ oc create -f ./managed_secret_keys.yaml
  3. Restore the QuayRegistry custom resource by entering the following command:

    $ oc create -f ./quay-registry.yaml
  4. Check the status of the Red Hat Quay deployment by entering the following command. Wait for it to be available:

    $ oc wait quayregistry registry --for=condition=Available=true -n <quay-namespace>

5.3.3. Scaling down the Red Hat Quay deployment

To scale down your Red Hat Quay deployment, you can disable auto scaling and set replica counts to zero. This reduces resource consumption and stops registry operations temporarily.

Procedure

  1. Scale down the Red Hat Quay deployment by disabling auto scaling and overriding the replica count for Quay, mirror workers and Clair (if managed). For example:

    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: registry
      namespace: ns
    spec:
      components:
        - kind: horizontalpodautoscaler
          managed: false
        - kind: quay
          managed: true
          overrides:
            replicas: 0
        - kind: clair
          managed: true
          overrides:
            replicas: 0
        - kind: mirror
          managed: true
          overrides:
            replicas: 0

    where:

    managed: false:: Specifies that the component is not managed by the Red Hat Quay Operator.

    overrides:: Specifies the replica count for the component.

  2. Wait for the registry-quay-app, registry-quay-mirror and registry-clair-app pods (depending on which components you set to be managed by the Red Hat Quay Operator) to disappear. You can check their status by running the following command:

    $ oc get pods -n <quay-namespace>

    Example output:

    registry-quay-config-editor-77847fc4f5-nsbbv   1/1     Running            0          9m1s
    registry-quay-database-66969cd859-n2ssm        1/1     Running            0          6d1h
    registry-quay-redis-7cc5f6c977-956g8           1/1     Running            0          5d21h

5.3.4. Restoring your Red Hat Quay database

To restore your Red Hat Quay database from a backup, you can identify the database pod, upload the backup file, drop the existing database, and restore from the backup using psql. This procedure restores your database to a previous state using a backup SQL file.

Procedure

  1. Identify your Quay database pod by entering the following command:

    $ oc get pod -l quay-component=postgres -n  <quay_namespace> -o jsonpath='{.items[0].metadata.name}'

    Example output:

    quayregistry-quay-database-59f54bb7-58xs7

  2. Upload the backup by copying it from the local environment and into the pod by entering the following command:

    $ oc cp ./backup.sql -n <quay_namespace> registry-quay-database-66969cd859-n2ssm:/tmp/backup.sql
  3. Open a remote terminal to the database by entering the following command:

    $ oc rsh -n <quay_namespace> registry-quay-database-66969cd859-n2ssm
  4. Enter psql by entering the following command:

    bash-4.4$ psql
  5. List the databases by entering the following command:

    postgres=# \l

    Example output

                                                      List of databases
               Name            |           Owner            | Encoding |  Collate   |   Ctype    |   Access privileges
    ----------------------------+----------------------------+----------+------------+------------+-----------------------
    postgres                   | postgres                   | UTF8     | en_US.utf8 | en_US.utf8 |
    quayregistry-quay-database | quayregistry-quay-database | UTF8     | en_US.utf8 | en_US.utf8 |

  6. Drop the existing database by entering the following command:

    postgres=# DROP DATABASE "quayregistry-quay-database";

    Example output

    DROP DATABASE

  7. Exit the postgres CLI by entering the following command:

    \q
  8. Restore the database from the backup file by entering the following command:

    sh-4.4$ psql < /tmp/backup.sql
  9. Exit bash by entering the following command:

    sh-4.4$ exit

To restore your Red Hat Quay object storage data from a backup, you can export AWS credentials from secrets and use the aws s3 sync command to upload blobs to your storage bucket. This procedure restores your registry’s object storage data using backup files.

Note

You can also use rclone or s3cmd instead of the AWS command line utility.

Procedure

  1. Export the AWS_ACCESS_KEY_ID by entering the following command:

    $ export AWS_ACCESS_KEY_ID=$(oc get secret -l app=noobaa -n <quay-namespace>  -o jsonpath='{.items[0].data.AWS_ACCESS_KEY_ID}' |base64 -d)
  2. Export the AWS_SECRET_ACCESS_KEY by entering the following command:

    $ export AWS_SECRET_ACCESS_KEY=$(oc get secret -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.AWS_SECRET_ACCESS_KEY}' |base64 -d)
  3. Upload all blobs to the bucket by running the following command:

    $ aws s3 sync --no-verify-ssl --endpoint https://$(oc get route s3 -n openshift-storage  -o jsonpath='{.spec.host}') ./blobs  s3://$(oc get cm -l app=noobaa -n <quay-namespace> -o jsonpath='{.items[0].data.BUCKET_NAME}')

5.3.6. Scaling up the Red Hat Quay deployment

To scale up your Red Hat Quay deployment, you can re-enable auto scaling and remove replica overrides. This restores your registry to normal operation after scaling down.

Procedure

  • Scale up the Red Hat Quay deployment by re-enabling auto scaling, if desired, and removing the replica overrides for Quay, mirror workers and Clair as applicable. For example:

    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: registry
      namespace: ns
    spec:
      components:
        - kind: horizontalpodautoscaler
          managed: true
        - kind: quay
          managed: true
        - kind: clair
          managed: true
        - kind: mirror
          managed: true

    where:

    managed: true:: Re-enables auto scaling of Red Hat Quay, Clair, and mirror workers.

    overrides:: Removing the replica overrides scales the Red Hat Quay components back up.

Chapter 6. Volume size overrides

Volume size overrides let you specify the desired size of storage resources for managed components in your Red Hat Quay deployment. You can set larger volumes upfront for performance reasons or when your storage backend does not support resizing.

The default size for Clair and the PostgreSQL databases is 50Gi.

In the following example, the volume size for the Clair and the Quay PostgreSQL databases has been set to 70Gi:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: quay-example
  namespace: quay-enterprise
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: objectstorage
      managed: false
    - kind: route
      managed: true
    - kind: tls
      managed: false
    - kind: clair
      managed: true
      overrides:
        volumeSize: 70Gi
    - kind: postgres
      managed: true
      overrides:
        volumeSize: 70Gi
    - kind: clairpostgres
      managed: true
      overrides:
        volumeSize: 70Gi

Chapter 7. Container Security Operator

Important

The Container Security Operator has been deprecated and is planned for removal in a future release of Red Hat Quay and OpenShift Container Platform. The official replacement for the Container Security Operator is Red Hat Advanced Cluster Security for Kubernetes.

The Container Security Operator (CSO) is an addon for the Clair security scanner that scans container images associated with active pods for known vulnerabilities. You can use CSO to identify security issues in your running containers and expose vulnerability information through the Kubernetes API.

Note

The CSO does not work without Red Hat Quay and Clair.

The Container Security Operator (CSO) includes the following features:

  • Watches containers associated with pods on either specified or all namespaces.
  • Queries the container registry where the containers came from for vulnerability information, provided that an image’s registry supports image scanning, such as a Red Hat Quay registry with Clair scanning.
  • Exposes vulnerabilities through the ImageManifestVuln object in the Kubernetes API.
Note

To see instructions on installing the CSO on Kubernetes, select the Install button on the Container Security Operator page at OperatorHub.io.

To scan container images for vulnerabilities in your OpenShift Container Platform cluster, you can install the Container Security Operator from the OpenShift Container Platform OperatorHub and view vulnerability information in the dashboard. This procedure sets up the CSO to monitor pods and expose vulnerability data through the Kubernetes API.

Note

In the following procedure, the CSO is installed in the marketplace-operators namespace. This allows the CSO to be used in all namespaces of your OpenShift Container Platform cluster.

After executing this procedure, you are made aware of what images are vulnerable, what you must do to fix those vulnerabilities, and every namespace that the image was run in. Knowing this, you can perform the following actions:

  • Alert users who are running the image that they need to correct the vulnerability.
  • Stop the images from running by deleting the deployment or the object that started the pod that the image is in.

Procedure

  1. On the OpenShift Container Platform console page, select Operators → OperatorHub and search for Container Security Operator.
  2. Select the Container Security Operator, then select Install to go to the Create Operator Subscription page.
  3. Check the settings (all namespaces and automatic approval strategy, by default), and select Subscribe. The Container Security Operator appears after a few moments on the Installed Operators screen.
  4. Optional: you can add custom certificates to the CSO. In this example, create a certificate named quay.crt in the current directory. Then, run the following command to add the certificate to the CSO:

    $ oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators
    Note

    You must restart the Operator pod for the new certificates to take effect.

  5. Navigate to Home → Overview. A link to Image Vulnerabilities appears under the status section, with a listing of the number of vulnerabilities found so far. Select the link to see a security breakdown, as shown in the following image:

    Access CSO scanning data from the OpenShift Container Platform dashboard

    Important

    The Container Security Operator currently provides broken links for Red Hat Security advisories. For example, the following link might be provided: https://access.redhat.com/errata/RHSA-2023:1842%20https://access.redhat.com/security/cve/CVE-2023-23916. The %20 in the URL represents a space character, but it currently results in the two URLs being combined into one incomplete URL rather than the two intended URLs, https://access.redhat.com/errata/RHSA-2023:1842 and https://access.redhat.com/security/cve/CVE-2023-23916. As a temporary workaround, you can copy each URL into your browser to navigate to the proper page. This is a known issue and will be fixed in a future version of Red Hat Quay.

  6. You can do one of two things at this point to follow up on any detected vulnerabilities:

    1. Select the link to the vulnerability. You are taken to the container registry, Red Hat Quay or other registry where the container came from, where you can see information about the vulnerability. The following figure shows an example of detected vulnerabilities from a Quay.io registry:

      The CSO points you to a registry containing the vulnerable image

    2. Select the namespaces link to go to the Image Manifest Vulnerabilities page, where you can see the name of the selected image and all namespaces where that image is running. The following figure indicates that a particular vulnerable image is running in two namespaces:

      View namespaces a vulnerable image is running in
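The broken-link workaround described in the Important note above can also be scripted. The following Python sketch (the helper name is hypothetical, not part of the Container Security Operator) splits a combined advisory link on the encoded space and returns the individual URLs:

```python
from urllib.parse import unquote

def split_advisory_links(combined_url: str) -> list[str]:
    """Split a CSO advisory link joined by an encoded space ("%20")
    into the separate, working URLs."""
    # unquote() turns %20 back into a literal space, then split on it
    return [part for part in unquote(combined_url).split(" ") if part]

broken = ("https://access.redhat.com/errata/RHSA-2023:1842"
          "%20https://access.redhat.com/security/cve/CVE-2023-23916")
for url in split_advisory_links(broken):
    print(url)
```

Each printed URL can then be opened directly in a browser, as the workaround suggests.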

7.1.1. Querying image vulnerabilities from the CLI

To check for security vulnerabilities in your OpenShift Container Platform container images, you can query vulnerability information from the command line by using the oc get vuln command. You can also view detailed information about specific vulnerabilities by using the oc describe vuln command.

Procedure

  1. Enter the following command to query for detected vulnerabilities:

    $ oc get vuln --all-namespaces

    Example output

    NAMESPACE     NAME              AGE
    default       sha256.ca90...    6m56s
    skynet        sha256.ca90...    9m37s

  2. Optional. To display details for a particular vulnerability, identify a specific vulnerability and its namespace, and use the oc describe command. The following example shows an active container whose image includes an RPM package with a vulnerability:

    $ oc describe vuln --namespace <namespace> sha256.ac50e3752...
Name:         sha256.ac50e3752...
Namespace:    quay-enterprise
...
Spec:
  Features:
    Name:            nss-util
    Namespace Name:  centos:7
    Version:         3.44.0-3.el7
    Versionformat:   rpm
    Vulnerabilities:
      Description: Network Security Services (NSS) is a set of libraries...
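The structure shown in the oc describe output can also be processed programmatically, for example after exporting a resource with oc get vuln -o json. The following Python sketch assumes the field shape shown above (spec.features[].vulnerabilities[]); the helper name is hypothetical:

```python
def vulnerable_packages(manifest: dict) -> list[tuple[str, str, int]]:
    """Return (package, version, vulnerability_count) for each feature
    in an imagemanifestvuln-like document that has vulnerabilities."""
    results = []
    for feature in manifest.get("spec", {}).get("features", []):
        vulns = feature.get("vulnerabilities", [])
        if vulns:
            results.append((feature["name"], feature["version"], len(vulns)))
    return results

# Minimal example mirroring the `oc describe` output above
sample = {
    "spec": {
        "features": [
            {"name": "nss-util", "version": "3.44.0-3.el7",
             "vulnerabilities": [
                 {"description": "Network Security Services (NSS) is a set of libraries..."}
             ]}
        ]
    }
}
print(vulnerable_packages(sample))
```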

7.2. Uninstalling the Container Security Operator

To uninstall the Container Security Operator from your OpenShift Container Platform deployment, you must uninstall the Operator and delete the imagemanifestvulns.secscan.quay.redhat.com custom resource definition (CRD). Without removing the CRD, image vulnerabilities are still reported on the OpenShift Container Platform Overview page.

Procedure

  1. On the OpenShift Container Platform web console, click Operators → Installed Operators.
  2. Click the kebab menu for the Container Security Operator.
  3. Click Uninstall Operator. Confirm your decision by clicking Uninstall in the popup window.
  4. Remove the imagemanifestvulns.secscan.quay.redhat.com custom resource definition by entering the following command:

    $ oc delete customresourcedefinition imagemanifestvulns.secscan.quay.redhat.com

    Example output

    customresourcedefinition.apiextensions.k8s.io "imagemanifestvulns.secscan.quay.redhat.com" deleted

Chapter 8. Configuring AWS STS for Red Hat Quay

AWS Security Token Service (STS) is a web service for requesting temporary, limited-privilege credentials for AWS IAM users. You can configure AWS STS with Red Hat Quay to authenticate with Amazon S3 using temporary credentials.

AWS STS enhances security and ensures proper authentication and authorization for object storage access. It is available for standalone Red Hat Quay deployments, Red Hat Quay on OpenShift Container Platform, and Red Hat Quay on Red Hat OpenShift Service on AWS (ROSA). AWS STS is useful for clusters that use Amazon S3 as object storage. It allows Red Hat Quay to use STS protocols to authenticate with Amazon S3, which can enhance the overall security of the cluster and help to ensure that access to sensitive data is properly authenticated and authorized.

Configuring AWS STS for OpenShift Container Platform or ROSA requires creating an AWS IAM user, creating an S3 role, and configuring your Red Hat Quay config.yaml file to include the proper resources.

8.1. Creating an IAM user

To configure AWS STS authentication for your Red Hat Quay deployment, you can create an IAM user in the AWS console, copy the user ARN, and create access keys. This procedure sets up the IAM user that Red Hat Quay uses to authenticate with Amazon S3 using temporary credentials.

Procedure

  1. Log in to the Amazon Web Services (AWS) console and navigate to the Identity and Access Management (IAM) console.
  2. In the navigation pane, under Access management click Users.
  3. Click Create User and enter the following information:

    1. Enter a valid username, for example, quay-user.
    2. For Permissions options, click Add user to group.
  4. On the review and create page, click Create user. You are redirected to the Users page.
  5. Click the username, for example, quay-user.
  6. Copy the ARN of the user, for example, arn:aws:iam::123456:user/quay-user.
  7. On the same page, click the Security credentials tab.
  8. Navigate to Access keys.
  9. Click Create access key.
  10. On the Access key best practices & alternatives page, click Command Line Interface (CLI) and check the confirmation box. Then click Next.
  11. Optional. On the Set description tag - optional page, enter a description.
  12. Click Create access key.
  13. Copy and store the access key and the secret access key.

    Important

    This is the only time that the secret access key can be viewed or downloaded. You cannot recover it later. However, you can create a new access key any time.

  14. Click Done.

8.2. Creating an S3 role

To enable AWS STS authentication for your Red Hat Quay deployment, you can create an S3 role in the AWS IAM console with a custom trust policy that allows your IAM user to assume the role. This procedure sets up the role that grants S3 access permissions for temporary credential authentication.

Prerequisites

  • You have created an IAM user and stored the access key and the secret access key.

Procedure

  1. Navigate to the IAM dashboard.
  2. In the navigation pane, click Roles under Access management.
  3. Click Create role → Custom Trust Policy.
  4. Under the Principal configuration field, add your AWS ARN information. For example:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "Statement1",
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::123456:user/quay-user"
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }
  5. Click Next.
  6. On the Add permissions page, type AmazonS3FullAccess in the search box. Check the box to add that policy to the S3 role, then click Next.
  7. On the Name, review, and create page, enter the following information:

    1. Enter a role name, for example, example-role.
    2. Optional. Add a description.
  8. Click the Create role button. You are navigated to the Roles page. Under Role name, the newly created S3 role should be available.
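Before pasting a trust policy into the console, you can sanity-check it locally. This Python sketch (the helper name is hypothetical) verifies that a policy like the one above allows the expected IAM user to assume the role; nothing here calls AWS:

```python
import json

def check_trust_policy(policy_json: str, expected_user_arn: str) -> bool:
    """Return True if any statement allows the given user ARN
    to perform sts:AssumeRole."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if (stmt.get("Effect") == "Allow"
                and stmt.get("Action") == "sts:AssumeRole"
                and stmt.get("Principal", {}).get("AWS") == expected_user_arn):
            return True
    return False

policy = """
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456:user/quay-user"},
            "Action": "sts:AssumeRole"
        }
    ]
}
"""
print(check_trust_policy(policy, "arn:aws:iam::123456:user/quay-user"))
```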

To configure your Red Hat Quay on OpenShift Container Platform deployment to use AWS STS for S3 authentication, you can edit the config.yaml file through the OpenShift Container Platform UI and update the DISTRIBUTED_STORAGE_CONFIG fields with your role ARN, bucket name, and access keys.

This procedure enables temporary credential authentication for object storage access.

Note

You can also edit and re-deploy your Red Hat Quay on OpenShift Container Platform config.yaml file directly instead of using the OpenShift Container Platform UI.

Prerequisites

  • You have configured a Role ARN.
  • You have generated a User Access Key.
  • You have generated a User Secret Key.

Procedure

  1. On the Home page of your OpenShift Container Platform deployment, click OperatorsInstalled Operators.
  2. Click Red Hat Quay.
  3. Click Quay Registry and then the name of your Red Hat Quay registry.
  4. Under Config Bundle Secret, click the name of your registry configuration bundle, for example, quay-registry-config-bundle-qet56.
  5. On the configuration bundle page, click Actions to reveal a drop-down menu. Then click Edit Secret.
  6. Update the DISTRIBUTED_STORAGE_CONFIG fields of your config.yaml file with the following information:

    # ...
    DISTRIBUTED_STORAGE_CONFIG:
       default:
        - STSS3Storage
        - sts_role_arn: <role_arn>
          s3_bucket: <s3_bucket_name>
          storage_path: <storage_path>
          s3_region: <region>
          sts_user_access_key: <s3_user_access_key>
          sts_user_secret_key: <s3_user_secret_key>
    # ...

    where:

    sts_role_arn:: The unique Amazon Resource Name (ARN) required when configuring AWS STS.

    s3_bucket:: The name of your S3 bucket.

    storage_path:: The storage path for data. Usually /datastorage.

    s3_region:: The Amazon Web Services region. Defaults to us-east-1.

    sts_user_access_key:: The generated AWS S3 user access key required when configuring AWS STS.

    sts_user_secret_key:: The generated AWS S3 user secret key required when configuring AWS STS.

  7. Click Save.
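The required fields can be checked before saving the secret. This Python sketch mirrors the config.yaml fragment above as native data and reports any missing STS fields; the helper is illustrative, not part of Red Hat Quay, and s3_region is omitted from the required set because it defaults to us-east-1:

```python
# Fields this section requires for an STSS3Storage entry
REQUIRED_STS_FIELDS = {
    "sts_role_arn", "s3_bucket", "storage_path",
    "sts_user_access_key", "sts_user_secret_key",
}

def missing_sts_fields(storage_entry: list) -> set:
    """Return the set of required STS fields absent from the entry."""
    driver, params = storage_entry[0], storage_entry[1]
    if driver != "STSS3Storage":
        raise ValueError(f"not an STS storage entry: {driver}")
    return REQUIRED_STS_FIELDS - set(params)

# Equivalent of the YAML fragment above, as Python data
entry = ["STSS3Storage", {
    "sts_role_arn": "arn:aws:iam::123456:role/example-role",
    "s3_bucket": "my-bucket",
    "storage_path": "/datastorage",
    "s3_region": "us-east-1",
    "sts_user_access_key": "AKIA...",
    "sts_user_secret_key": "secret",
}]
print(missing_sts_fields(entry))  # empty set when the entry is complete
```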

Verification

  1. Tag a sample image, for example busybox, to push to the repository:

    $ podman tag docker.io/library/busybox <quay-server.example.com>/<organization_name>/busybox:test
  2. Push the sample image by running the following command:

    $ podman push <quay-server.example.com>/<organization_name>/busybox:test
  3. Verify that the push was successful by navigating to the Organization that you pushed the image to in your Red Hat Quay registry → Tags.
  4. Navigate to the Amazon Web Services (AWS) console and locate your s3 bucket.
  5. Click the name of your s3 bucket.
  6. On the Objects page, click datastorage/.
  7. On the datastorage/ page, the following resources should be visible:

    • sha256/
    • uploads/

      These resources indicate that the push was successful, and that AWS STS is properly configured.

To configure your Red Hat Quay deployment on Red Hat OpenShift Service on AWS to use AWS STS for S3 authentication, you can update the IAM role trust policy to use federated identity, configure the config.yaml file, and annotate the service account with the role ARN.

This procedure enables web identity federation for temporary credential authentication.

Prerequisites

  • You have created an IAM user.
  • You have created an s3 Role ARN.
  • You have created a Custom Trust Policy that uses the Role ARN.

Procedure

  1. Get the serviceAccountIssuer resource by entering the following command:

    $ oc get authentication.config.openshift.io cluster -o json | jq -r .spec.serviceAccountIssuer | sed -e "s/^https:\/\///"

    Example output

    oidc.op1.openshiftapps.com/123456

  2. In the Amazon Web Services (AWS) Identity and Access Management (IAM) console:

    1. Click Roles.
    2. Click the name of the Role to be used with AWS STS, for example, example-role.
    3. Click the Trust relationships tab, which shows the JSON policy created during "Creating an S3 role". Update the JSON policy as follows:

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "Statement1",
                  "Effect": "Allow",
                  "Principal": {
                      "Federated": "arn:aws:iam::123456:oidc-provider/oidc.op1.openshiftapps.com/123456"
                  },
                  "Action": "sts:AssumeRoleWithWebIdentity",
                  "Condition": {
                      "StringEquals": {
                          "oidc.op1.openshiftapps.com/123456:sub": "system:serviceaccount:quay:registry-quay-app"
                      }
                  }
              }
          ]
      }

      where:

      Federated:: Updates the Principal parameter of the JSON policy to Federated:<your_user_ARN>:<serviceAccountIssuer_domain_path>

      Action:: Updates the Action parameter of the JSON policy to sts:AssumeRoleWithWebIdentity.

      Condition:: Updates the Condition parameter of the JSON policy to "StringEquals": { "<serviceAccountIssuer>:sub": "system:serviceaccount:<quay_namespace>:<quay_registry_using_serviceAccount>" }.

    4. Verify that your User ARN is configured correctly, then click Next.
    5. On the Add permissions page, select AmazonS3FullAccess, then click Next.
    6. On the Name, review, and create page, give the role a name, add an optional description, verify your configuration, and add any optional tags. Then click Create Role.
  3. On the Roles page, click the new role and store the Role ARN resource. For example:

    arn:aws:iam::123456:role/test_s3_access
  4. On the Red Hat Quay web console:

    1. Click Operators → Installed Operators.
    2. Click Red Hat Quay.
    3. Click Quay Registry and then the name of your Red Hat Quay registry.
    4. Under Config Bundle Secret, click the name of your registry configuration bundle, for example, quay-registry-config-bundle-12345.
    5. On the configuration bundle page, click Actions to reveal a drop-down menu. Then click Edit Secret.
    6. Update the DISTRIBUTED_STORAGE_CONFIG fields of your config.yaml file with the following information:

      # ...
      DISTRIBUTED_STORAGE_CONFIG:
         default:
          - STSS3Storage
          - s3_bucket: <s3_bucket_name>
            storage_path: <storage_path>
            s3_region: <region>
      # ...

      where:

      s3_bucket:: The name of your S3 bucket.

      storage_path:: The storage path for data. Usually /datastorage.

      s3_region:: The Amazon Web Services region. Defaults to us-east-1.

  5. Click Save. Your QuayRegistry custom resource (CR) automatically restarts.
  6. Annotate the Service Account (SA) that executes pods with the EKS configuration values. For example:

    $ oc annotate sa registry-quay-app "eks.amazonaws.com/role-arn"="arn:aws:iam::123456:role/test_s3_access" "eks.amazonaws.com/audience"="sts.amazonaws.com" "eks.amazonaws.com/sts-regional-endpoints"="true"
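The transformations performed by the oc/jq/sed pipeline in step 1 and by the trust policy Condition can be reproduced in plain Python for scripting or verification; the helper names are hypothetical:

```python
def issuer_host_path(service_account_issuer: str) -> str:
    """Strip the https:// scheme, as the sed command in step 1 does."""
    prefix = "https://"
    if service_account_issuer.startswith(prefix):
        return service_account_issuer[len(prefix):]
    return service_account_issuer

def condition_subject(namespace: str, service_account: str) -> str:
    """Build the StringEquals subject used in the trust policy Condition."""
    return f"system:serviceaccount:{namespace}:{service_account}"

# Using the example values from this procedure
print(issuer_host_path("https://oidc.op1.openshiftapps.com/123456"))
print(condition_subject("quay", "registry-quay-app"))
```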

Verification

  1. Tag a sample image, for example busybox, to push to the repository:

    $ podman tag docker.io/library/busybox <quay-server.example.com>/<organization_name>/busybox:test
  2. Push the sample image by running the following command:

    $ podman push <quay-server.example.com>/<organization_name>/busybox:test
  3. Verify that the push was successful by navigating to the Organization that you pushed the image to in your Red Hat Quay registry → Tags.
  4. Navigate to the Amazon Web Services (AWS) console and locate your s3 bucket.
  5. Click the name of your s3 bucket.
  6. On the Objects page, click datastorage/.
  7. On the datastorage/ page, the following resources should be visible:

    • sha256/
    • uploads/

      These resources indicate that the push was successful, and that AWS STS is properly configured.

IPv6 support lets you deploy Red Hat Quay on OpenShift Container Platform in IPv6-only environments such as Telco and Edge deployments. You can enable IPv6 by setting the FEATURE_LISTEN_IP_VERSION parameter to IPv6 in your config.yaml file.

For a list of known limitations, see IPv6 limitations.

Note

Currently, deploying Red Hat Quay on OpenShift Container Platform with IPv6 is not supported on IBM Power and IBM Z.

9.1. Enabling the IPv6 protocol family

To enable IPv6 support on your Red Hat Quay deployment, you can add the FEATURE_LISTEN_IP_VERSION parameter to your config.yaml file and set it to IPv6. Restart your deployment to apply the change.

Warning

If your environment is configured for IPv4, but the FEATURE_LISTEN_IP_VERSION configuration field is set to IPv6, Red Hat Quay fails to deploy.

Prerequisites

  • Your host and container software platform (Docker, Podman) must be configured to support IPv6.

Procedure

  1. In your deployment’s config.yaml file, add the FEATURE_LISTEN_IP_VERSION parameter and set it to IPv6, for example:

    # ...
    FEATURE_GOOGLE_LOGIN: false
    FEATURE_INVITE_ONLY_USER_CREATION: false
    FEATURE_LISTEN_IP_VERSION: IPv6
    FEATURE_MAILING: false
    FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: false
    # ...
  2. Start, or restart, your Red Hat Quay deployment.
  3. Check that your deployment is listening to IPv6 by entering the following command:

    $ curl <quay_endpoint>/health/instance
    {"data":{"services":{"auth":true,"database":true,"disk_space":true,"registry_gunicorn":true,"service_key":true,"web_gunicorn":true}},"status_code":200}
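The health response shown above can also be checked programmatically, for example in a readiness script. This Python sketch parses the JSON returned by the /health/instance endpoint and confirms every service reports healthy; the helper name is hypothetical and the response shape is taken from the output above:

```python
import json

def is_healthy(health_json: str) -> bool:
    """Return True if the status code is 200 and all services are healthy."""
    payload = json.loads(health_json)
    services = payload.get("data", {}).get("services", {})
    return payload.get("status_code") == 200 and all(services.values())

# The example response from the curl command above
response = ('{"data":{"services":{"auth":true,"database":true,'
            '"disk_space":true,"registry_gunicorn":true,"service_key":true,'
            '"web_gunicorn":true}},"status_code":200}')
print(is_healthy(response))
```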

9.2. IPv6 limitations

IPv6 single stack environments have limitations that prevent certain storage backends from working with Red Hat Quay. Microsoft Azure Blob Storage and Amazon S3 CloudFront do not support IPv6 endpoints, so they cannot be used in IPv6-only deployments.

  • Currently, attempting to configure your Red Hat Quay deployment with the common Microsoft Azure Blob Storage configuration will not work on IPv6 single stack environments. Because the endpoint of Microsoft Azure Blob Storage does not support IPv6, there is no workaround in place for this issue.

    For more information, see PROJQUAY-4433.

  • Currently, attempting to configure your Red Hat Quay deployment with Amazon S3 CloudFront will not work on IPv6 single stack environments. Because the endpoint of Amazon S3 CloudFront does not support IPv6, there is no workaround in place for this issue.

    For more information, see PROJQUAY-4470.

To add custom SSL/TLS certificates to your Red Hat Quay deployment on Kubernetes, you can base64 encode the certificate, add it to the config secret, and restart the pods. This procedure works around the limitation where the superuser panel certificate upload function does not work with Kubernetes deployments.

Prerequisites

  • Red Hat Quay has been deployed.
  • You have a custom ca.crt file.

Procedure

  1. Base64 encode the contents of an SSL/TLS certificate by entering the following command:

    $ cat ca.crt | base64 -w 0

    Example output

    ...c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=

  2. Enter the following kubectl command to edit the quay-enterprise-config-secret file:

    $ kubectl --namespace quay-enterprise edit secret/quay-enterprise-config-secret
  3. Add an entry for the certificate and paste the full base64 encoded string under the entry. For example:

      custom-cert.crt:
    c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  4. Use the kubectl delete command to remove all Red Hat Quay pods. For example:

    $ kubectl delete pod quay-operator.v3.7.1-6f9d859bd-p5ftc quayregistry-clair-postgres-7487f5bd86-xnxpr quayregistry-quay-app-upgrade-xq2v6  quayregistry-quay-database-859d5445ff-cqthr quayregistry-quay-redis-84f888776f-hhgms

    Afterwards, the Red Hat Quay deployment automatically schedules replacement pods with the new certificate data.
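The encoding step at the start of this procedure can also be done in Python, producing the same single-line output as base64 -w 0; the helper name is hypothetical:

```python
import base64

def encode_cert(pem_bytes: bytes) -> str:
    """Base64-encode certificate bytes as a single line (no wrapping)."""
    return base64.b64encode(pem_bytes).decode("ascii")

# With a real ca.crt you would read the file:
# encoded = encode_cert(open("ca.crt", "rb").read())
print(encode_cert(b"-----BEGIN CERTIFICATE-----\n"))
```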

Legal Notice

Copyright © Red Hat.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.