Chapter 3. Clair security scanner


Clair is an open source security scanner that analyzes container images and reports vulnerabilities. You can use Clair to automatically scan images and identify security issues in your container registry.

3.1. Clair vulnerability databases

Clair uses multiple vulnerability databases to identify security issues in container images. These databases provide comprehensive coverage across different operating systems and programming languages.

Clair uses the following vulnerability databases to report issues in your images:

  • Ubuntu OVAL database
  • Debian Security Tracker
  • Red Hat Enterprise Linux (RHEL) OVAL database
  • SUSE OVAL database
  • Oracle OVAL database
  • Alpine SecDB database
  • VMware Photon OS database
  • Amazon Web Services (AWS) UpdateInfo
  • Open Source Vulnerability (OSV) Database

For information about how Clair does security mapping with the different databases, see Claircore Severity Mapping.

Open Source Vulnerability (OSV) is a vulnerability database and monitoring service that focuses on tracking and managing security vulnerabilities in open source software.

OSV provides a comprehensive and up-to-date database of known security vulnerabilities in open source projects. It covers a wide range of open source software, including libraries, frameworks, and other components that are used in software development. For a full list of included ecosystems, see defined ecosystems.

Clair also reports vulnerability and security information for golang, java, and ruby ecosystems through the Open Source Vulnerability (OSV) database.

By leveraging OSV, developers and organizations can proactively monitor and address security vulnerabilities in open source components that they use, which helps to reduce the risk of security breaches and data compromises in projects.

For more information about OSV, see the OSV website.

3.2. Clair on OpenShift Container Platform

The Red Hat Quay Operator automatically installs and configures Clair when you deploy Red Hat Quay on OpenShift Container Platform. This simplifies setup by eliminating the need for manual Clair configuration.
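
For reference, the following QuayRegistry snippet is a minimal sketch that shows the clair component left in its default managed state; the metadata values are placeholders:

    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: example-registry
      namespace: quay-enterprise
    spec:
      components:
        - kind: clair
          managed: true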

3.3. Testing Clair

To verify that Clair is working correctly on your Red Hat Quay deployment, you can pull, tag, and push a sample image to your registry, then view the vulnerability report in the UI.

Prerequisites

  • You have deployed the Clair container image.

Procedure

  1. Pull a sample image by entering the following command:

    $ podman pull ubuntu:20.04
  2. Tag the image to your registry by entering the following command:

    $ sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04
  3. Push the image to your Red Hat Quay registry by entering the following command:

    $ sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04
  4. Log in to your Red Hat Quay deployment through the UI.
  5. Click the repository name, for example, quayadmin/ubuntu.
  6. In the navigation pane, click Tags.

    Security scan information appears for scanned repository images.

  7. Click the image report, for example, 45 medium, to show a more detailed report:

    You can view all vulnerabilities, or only those that are fixable.

    Note

    In some cases, Clair shows duplicate reports on images, for example, ubi8/nodejs-12 or ubi8/nodejs-16. This occurs because vulnerabilities with the same name exist for different packages. This behavior is expected with Clair vulnerability reporting and will not be addressed as a bug.

3.4. Advanced Clair configuration

Advanced Clair configuration lets you customize Clair settings beyond the default installation. You can use these options to adjust scanning behavior, database connections, and other advanced features to meet specific security and performance requirements.

3.4.1. Unmanaged Clair configuration

Unmanaged Clair configuration lets you run a custom Clair setup or use an external Clair database with the Red Hat Quay Operator. You can use this configuration for geo-replicated environments where multiple Operator instances share the same database, or when you need a highly available database outside your cluster.

To run a custom Clair configuration with an unmanaged Clair database, you can set the clairpostgres component to unmanaged in your QuayRegistry custom resource. This lets you use an external database for geo-replicated environments or highly available setups outside your cluster.

Important

You must not use the same externally managed PostgreSQL database for both Red Hat Quay and Clair deployments. Your PostgreSQL database must also not be shared with other workloads, as it might exhaust the natural connection limit on the PostgreSQL side when connection-intensive workloads, like Red Hat Quay or Clair, contend for resources. Additionally, pgBouncer is not supported with Red Hat Quay or Clair, so it is not an option to resolve this issue.

Procedure

  • In the Quay Operator, set the clairpostgres component of the QuayRegistry custom resource to managed: false:

    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: quay370
    spec:
      configBundleSecret: config-bundle-secret
      components:
        - kind: objectstorage
          managed: false
        - kind: route
          managed: true
        - kind: tls
          managed: false
        - kind: clairpostgres
          managed: false

To configure a custom Clair database with SSL/TLS certificates for your Red Hat Quay deployment, you can create a Quay configuration bundle secret that includes the clair-config.yaml file. This lets you use your own external database with secure connections for Clair vulnerability scanning.

Note

The following procedure sets up Clair with SSL/TLS certificates. To view a similar procedure that does not set up Clair with SSL/TLS certificates, see "Configuring a custom Clair database with a managed Clair configuration".

Procedure

  1. Create a Quay configuration bundle secret that includes the clair-config.yaml by entering the following command:

    $ oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret

    Example Clair config.yaml file

    indexer:
        connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca
        layer_scan_concurrency: 6
        migrations: true
        scanlock_retry: 11
    log_level: debug
    matcher:
        connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca
        migrations: true
    metrics:
        name: prometheus
    notifier:
        connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca
        migrations: true

    Note
    • The database certificate is mounted under /run/certs/rds-ca-2019-root.pem on the Clair application pod. You must specify this path when configuring your clair-config.yaml file.
    • An example clair-config.yaml can be found at Clair on OpenShift config.
  2. Add the clair-config.yaml file to your bundle secret, for example:

    apiVersion: v1
    kind: Secret
    metadata:
      name: config-bundle-secret
      namespace: quay-enterprise
    data:
      config.yaml: <base64 encoded Quay config>
      clair-config.yaml: <base64 encoded Clair config>
      extra_ca_cert_<name>: <base64 encoded ca cert>
      ssl.cert: <base64 encoded SSL certificate>
      ssl.key: <base64 encoded SSL private key>
    Note

    When updated, the provided clair-config.yaml file is mounted into the Clair pod. Any fields not provided are automatically populated with defaults using the Clair configuration module.

  3. You can check the status of your Clair pod by clicking the commit in the Build History page, or by running oc get pods -n <namespace>. For example:

    $ oc get pods -n <namespace>

    Example output

    NAME                                               READY   STATUS    RESTARTS   AGE
    f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2   1/1     Running   0          7s

3.4.2. Running a custom Clair configuration with a managed Clair database

Running a custom Clair configuration with a managed Clair database lets you customize Clair settings while the Operator manages the database. You can use this approach to disable specific updater resources or configure Clair for disconnected environments.

Note
  • If you are running Red Hat Quay in a disconnected environment, the airgap parameter of your clair-config.yaml must be set to True.
  • If you are running Red Hat Quay in a disconnected environment, you should also disable all updater components, as shown in the sketch after this note.
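
The following clair-config.yaml fragment is a minimal sketch of those two settings; all other fields are omitted:

    # ...
    indexer:
      airgap: true
    # ...
    matcher:
      disable_updaters: true
    # ...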

3.4.2.1. Setting a Clair database to managed

To have the Red Hat Quay Operator manage your Clair database, you can set the clairpostgres component to managed in your QuayRegistry custom resource. This simplifies deployment and maintenance by letting the Operator handle database provisioning and configuration.

Procedure

  • In the Quay Operator, set the clairpostgres component of the QuayRegistry custom resource to managed: true:

    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: quay370
    spec:
      configBundleSecret: config-bundle-secret
      components:
        - kind: objectstorage
          managed: false
        - kind: route
          managed: true
        - kind: tls
          managed: false
        - kind: clairpostgres
          managed: true

To configure a custom Clair database while keeping the Clair configuration managed by the Operator, you can create a Quay configuration bundle secret that includes the clair-config.yaml file. This lets you use your own external database while the Operator continues to manage Clair settings.

Procedure

  1. Create a Quay configuration bundle secret that includes the clair-config.yaml by entering the following command:

    $ oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml config-bundle-secret

    Example Clair config.yaml file

    indexer:
        connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable
        layer_scan_concurrency: 6
        migrations: true
        scanlock_retry: 11
    log_level: debug
    matcher:
        connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable
        migrations: true
    metrics:
        name: prometheus
    notifier:
        connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable
        migrations: true

    Note
    • The database certificate is mounted under /run/certs/rds-ca-2019-root.pem on the Clair application pod. You must specify this path when configuring your clair-config.yaml file.
    • An example clair-config.yaml can be found at Clair on OpenShift config.
  2. Add the clair-config.yaml file to your bundle secret, for example:

    apiVersion: v1
    kind: Secret
    metadata:
      name: config-bundle-secret
      namespace: quay-enterprise
    data:
      config.yaml: <base64 encoded Quay config>
      clair-config.yaml: <base64 encoded Clair config>
    Note
    • When updated, the provided clair-config.yaml file is mounted into the Clair pod. Any fields not provided are automatically populated with defaults using the Clair configuration module.
  3. You can check the status of your Clair pod by clicking the commit in the Build History page, or by running oc get pods -n <namespace>. For example:

    $ oc get pods -n <namespace>

    Example output

    NAME                                               READY   STATUS    RESTARTS   AGE
    f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2   1/1     Running   0          7s

3.4.3. Clair in disconnected environments

Clair supports disconnected environments where your Red Hat Quay deployment has no direct internet access. You can use the clairctl tool to transfer vulnerability database updates from an open host to your isolated environment, enabling Clair to scan images without internet connectivity.

Clair uses a set of components called updaters to handle the fetching and parsing of data from various vulnerability databases. Updaters are set up by default to pull vulnerability data directly from the internet and work for immediate use.

Note

Currently, Clair enrichment data consists of CVSS data, which is not supported in disconnected environments.

For more information about Clair updaters, see "Clair updaters".
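
If you only need vulnerability data for a subset of ecosystems, you can restrict which updaters run through the Clair configuration. The following fragment is a minimal sketch, assuming the top-level updaters block and the rhel and osv updater set names; see "Clair updaters" for the sets available in your version:

    # ...
    updaters:
      sets:
        - rhel
        - osv
    # ...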

To install the clairctl command line utility for disconnected OpenShift Container Platform deployments, you can extract the tool from a running Clair pod and set its execution permissions. This lets you use clairctl to manage vulnerability database updates in disconnected environments.

Procedure

  1. Install the clairctl program for a Clair deployment in an OpenShift Container Platform cluster by entering the following command:

    $ oc -n quay-enterprise exec example-registry-clair-app-64dd48f866-6ptgw -- cat /usr/bin/clairctl > clairctl
    Note

    Unofficially, the clairctl tool can be downloaded.

  2. Set the permissions of the clairctl file so that it can be executed and run by the user, for example:

    $ chmod u+x ./clairctl

To configure Clair for disconnected environments on OpenShift Container Platform, you can retrieve and decode the Clair configuration secret, then update the clair-config.yaml file to set disable_updaters and airgap parameters to True. This prepares Clair to work without direct internet access.

Prerequisites

  • You have installed the clairctl command line utility tool.

Procedure

  1. Enter the following command to retrieve and decode the configuration secret, and then save it to a Clair configuration YAML:

    $ oc get secret -n quay-enterprise example-registry-clair-config-secret  -o "jsonpath={$.data['config\.yaml']}" | base64 -d > clair-config.yaml
  2. Update the clair-config.yaml file so that the disable_updaters and airgap parameters are set to True, for example:

    # ...
    indexer:
      airgap: true
    # ...
    matcher:
      disable_updaters: true
    # ...

To export vulnerability database updates from a connected Clair instance for use in disconnected environments, you can use the clairctl tool with your configuration file to export the updaters bundle. This creates a bundle file that you can transfer to your isolated environment.

Prerequisites

  • You have installed the clairctl command line utility tool.
  • You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file.
  • The disable_updaters and airgap parameters are set to True in your Clair config.yaml file.

Procedure

  • From a Clair instance that has access to the internet, use the clairctl CLI tool with your configuration file to export the updaters bundle. For example:

    $ ./clairctl --config ./config.yaml export-updaters updates.gz

To configure access to the Clair database in your disconnected OpenShift Container Platform cluster, you can determine the database service, forward the database port, and update your Clair config.yaml file to use localhost. This lets you import the updaters bundle into the database using the clairctl tool.

Prerequisites

  • You have installed the clairctl command line utility tool.
  • You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file.
  • The disable_updaters and airgap parameters are set to True in your Clair config.yaml file.
  • You have exported the updaters bundle from a Clair instance that has access to the internet.

Procedure

  1. Determine your Clair database service by using the oc CLI tool, for example:

    $ oc get svc -n quay-enterprise

    Example output

    NAME                                  TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                             AGE
    example-registry-clair-app            ClusterIP      172.30.224.93    <none>        80/TCP,8089/TCP                     4d21h
    example-registry-clair-postgres       ClusterIP      172.30.246.88    <none>        5432/TCP                            4d21h
    ...

  2. Forward the Clair database port so that it is accessible from the local machine. For example:

    $ oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432
  3. Update your Clair config.yaml file, for example:

    indexer:
        connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable
        layer_scan_concurrency: 5
        migrations: true
        scanlock_retry: 10
        airgap: true
        scanner:
          repo:
            rhel-repository-scanner:
              repo2cpe_mapping_file: /data/repository-to-cpe.json
          package:
            rhel_containerscanner:
              name2repos_mapping_file: /data/container-name-repos-map.json

    where:

    connstring
    Specifies the connection string for the database.
    rhel-repository-scanner
    Specifies the repository scanner configuration.
    rhel_containerscanner
    Specifies the container scanner configuration.

To import vulnerability database updates into your disconnected OpenShift Container Platform cluster, you can use the clairctl tool with your Clair configuration file to import the updaters bundle. This populates the Clair database with vulnerability data so Clair can scan images without internet access.

Prerequisites

  • You have installed the clairctl command line utility tool.
  • You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file.
  • The disable_updaters and airgap parameters are set to True in your Clair config.yaml file.
  • You have exported the updaters bundle from a Clair instance that has access to the internet.
  • You have transferred the updaters bundle into your disconnected environment.

Procedure

  • Use the clairctl CLI tool to import the updaters bundle into the Clair database that is deployed by OpenShift Container Platform. For example:

    $ ./clairctl --config ./clair-config.yaml import-updaters updates.gz

To install the clairctl command line utility for a self-managed Clair deployment on OpenShift Container Platform, you can copy the tool from a Clair container using podman and set its execution permissions. This lets you use clairctl to manage vulnerability database updates in disconnected environments.

Procedure

  1. Install the clairctl program for a self-managed Clair deployment by using the podman cp command, for example:

    $ sudo podman cp clairv4:/usr/bin/clairctl ./clairctl
  2. Set the permissions of the clairctl file so that it can be executed and run by the user, for example:

    $ chmod u+x ./clairctl

To deploy a self-managed Clair container for disconnected OpenShift Container Platform clusters, you can create a configuration directory, configure a Clair configuration file with disable_updaters enabled, and start the container using podman. This lets you run Clair independently in environments without direct internet access.

Prerequisites

  • You have installed the clairctl command line utility tool.

Procedure

  1. Create a folder for your Clair configuration file, for example:

    $ mkdir /etc/clairv4/config/
  2. Create a Clair configuration file with the disable_updaters parameter set to True, for example:

    # ...
    indexer:
      airgap: true
    # ...
    matcher:
      disable_updaters: true
    # ...
  3. Start Clair by using the container image, mounting in the configuration from the file you created:

    $ sudo podman run -it --rm --name clairv4 \
    -p 8081:8081 -p 8088:8088 \
    -e CLAIR_CONF=/clair/config.yaml \
    -e CLAIR_MODE=combo \
    -v /etc/clairv4/config:/clair:Z \
    registry.redhat.io/quay/clair-rhel9:v3.16.1

To export vulnerability database updates from a connected self-managed Clair instance for use in disconnected environments, you can use the clairctl tool with your configuration file to export the updaters bundle. This creates a bundle file that you can transfer to your isolated environment.

Prerequisites

  • You have installed the clairctl command line utility tool.
  • You have deployed Clair.
  • The disable_updaters and airgap parameters are set to True in your Clair config.yaml file.

Procedure

  • From a Clair instance that has access to the internet, use the clairctl CLI tool with your configuration file to export the updaters bundle. For example:

    $ ./clairctl --config ./config.yaml export-updaters updates.gz

To configure access to the Clair database in your disconnected OpenShift Container Platform cluster for a self-managed deployment, you can determine the database service, forward the database port, and update your Clair config.yaml file to use localhost. This lets you import the updaters bundle into the database using the clairctl tool.

Prerequisites

  • You have installed the clairctl command line utility tool.
  • You have deployed Clair.
  • The disable_updaters and airgap parameters are set to True in your Clair config.yaml file.
  • You have exported the updaters bundle from a Clair instance that has access to the internet.

Procedure

  1. Determine your Clair database service by using the oc CLI tool, for example:

    $ oc get svc -n quay-enterprise

    Example output

    NAME                                  TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                             AGE
    example-registry-clair-app            ClusterIP      172.30.224.93    <none>        80/TCP,8089/TCP                     4d21h
    example-registry-clair-postgres       ClusterIP      172.30.246.88    <none>        5432/TCP                            4d21h
    ...

  2. Forward the Clair database port so that it is accessible from the local machine. For example:

    $ oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432
  3. Update your Clair config.yaml file, for example:

    indexer:
        connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable
        layer_scan_concurrency: 5
        migrations: true
        scanlock_retry: 10
        airgap: true
        scanner:
          repo:
            rhel-repository-scanner:
              repo2cpe_mapping_file: /data/repository-to-cpe.json
          package:
            rhel_containerscanner:
              name2repos_mapping_file: /data/container-name-repos-map.json

    where:

    connstring
    Specifies the connection string for the database.
    rhel-repository-scanner
    Specifies the repository scanner configuration.
    rhel_containerscanner
    Specifies the container scanner configuration.

To import vulnerability database updates into your disconnected OpenShift Container Platform cluster for a self-managed deployment, you can use the clairctl tool with your Clair configuration file to import the updaters bundle. This populates the Clair database with vulnerability data so Clair can scan images without internet access.

Prerequisites

  • You have installed the clairctl command line utility tool.
  • You have deployed Clair.
  • The disable_updaters and airgap parameters are set to True in your Clair config.yaml file.
  • You have exported the updaters bundle from a Clair instance that has access to the internet.
  • You have transferred the updaters bundle into your disconnected environment.

Procedure

  • Use the clairctl CLI tool to import the updaters bundle into the Clair database that is deployed by OpenShift Container Platform:

    $ ./clairctl --config ./clair-config.yaml import-updaters updates.gz

3.4.4. Common Platform Enumeration mapping in Clair

Clair uses Common Platform Enumeration (CPE) mapping files to map RPM packages to security data for accurate vulnerability scanning of Red Hat Enterprise Linux (RHEL) container images. Understanding how Clair uses these files ensures that your vulnerability reports remain accurate and comprehensive.

The scanner requires the CPE mapping files to be present and accessible to process RPM packages properly. If these files are missing or inaccessible, RPM packages installed in the container image are skipped during the scanning process.

By default, the Clair indexer includes the repos2cpe and names2repos data files within the Clair container. This allows you to reference local paths such as /data/repository-to-cpe.json without additional external configuration.

Important

While Red Hat Product Security updates CPE files regularly, the versions bundled within the Clair container are only updated during Red Hat Quay releases. This can lead to temporary discrepancies between the latest security data and the versions bundled with your current installation.

3.4.4.1. CPE mapping configuration reference

Common Platform Enumeration (CPE) mapping configuration defines the fields and file paths used by Clair to associate packages with standardized product identifiers.

Table 3.1. Clair CPE mapping files

  CPE Type       Link to JSON mapping file
  repos2cpe      Red Hat Repository-to-CPE JSON
  names2repos    Red Hat Name-to-Repos JSON

Example configuration

indexer:
  scanner:
    repo:
      rhel-repository-scanner:
        repo2cpe_mapping_file: /data/repository-to-cpe.json
    package:
      rhel_containerscanner:
        name2repos_mapping_file: /data/container-name-repos-map.json

where:

repo2cpe_mapping_file
Specifies the path to the JSON file mapping Red Hat repositories to CPEs.
name2repos_mapping_file
Specifies the path to the JSON file mapping container names to repositories.
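
If you need fresher mapping data than the versions bundled in the container, one option when running Clair as a standalone container is to mount your own copies of the JSON files over the default /data paths. The following podman invocation is a sketch based on the earlier self-managed example; the host directory /etc/clairv4/cpe-data and its contents are assumptions:

    $ sudo podman run -it --rm --name clairv4 \
    -p 8081:8081 -p 8088:8088 \
    -e CLAIR_CONF=/clair/config.yaml \
    -e CLAIR_MODE=combo \
    -v /etc/clairv4/config:/clair:Z \
    -v /etc/clairv4/cpe-data/repository-to-cpe.json:/data/repository-to-cpe.json:Z \
    -v /etc/clairv4/cpe-data/container-name-repos-map.json:/data/container-name-repos-map.json:Z \
    registry.redhat.io/quay/clair-rhel9:v3.16.1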

3.5. Resizing Managed Storage

To expand storage capacity for your Red Hat Quay on OpenShift Container Platform deployment, you can use the OpenShift Container Platform console to resize the PostgreSQL and Clair PostgreSQL persistent volume claims. This lets you increase storage beyond the default 50 GiB allocation when your registry needs more space.

When deploying Red Hat Quay on OpenShift Container Platform, three distinct persistent volume claims (PVCs) are deployed:

  • One for the PostgreSQL 15 database used by the Red Hat Quay registry.
  • One for the PostgreSQL 15 database used by Clair.
  • One that uses NooBaa as backend storage.
Note

The connection between Red Hat Quay and NooBaa is made through the S3 API and the ObjectBucketClaim API in OpenShift Container Platform. Red Hat Quay leverages that API group to create a bucket in NooBaa, obtain access keys, and automatically set everything up. On the backend (NooBaa) side, the bucket is created inside of the backing store. As a result, NooBaa PVCs are not mounted or connected to Red Hat Quay pods.
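
To confirm that the bucket claim was created for your deployment, you can list the ObjectBucketClaim resources. This sketch assumes the ObjectBucketClaim CRD provided by OpenShift Data Foundation (NooBaa) and the quay-enterprise namespace:

    $ oc get objectbucketclaims -n quay-enterprise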

Prerequisites

  • You have cluster admin privileges on OpenShift Container Platform.

Procedure

  1. Log into the OpenShift Container Platform console and select Storage → Persistent Volume Claims.
  2. Select the desired PersistentVolumeClaim for either PostgreSQL or Clair PostgreSQL, for example, example-registry-quay-postgres-13.
  3. From the Action menu, select Expand PVC.
  4. Enter the new size of the Persistent Volume Claim and select Expand.

    After a few minutes, the expanded size should be reflected in the PVC’s Capacity field.
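
    Alternatively, the same expansion can be requested from the CLI with oc patch, provided that the storage class allows volume expansion; the namespace, PVC name, and target size in this sketch are examples:

    $ oc -n quay-enterprise patch pvc example-registry-quay-postgres-13 \
      -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'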

3.6. Customizing Default Operator Images

Note

Currently, customizing default Operator images is not supported on IBM Power and IBM Z.

Customizing default Operator images lets you override the default container images used by the Red Hat Quay Operator by setting environment variables in the ClusterServiceVersion object.

Important

Customizing default Operator images is not supported for production Red Hat Quay environments and is only recommended for development or testing purposes. There is no guarantee your deployment will work correctly when using non-default images with the Red Hat Quay Operator.

3.6.1. Environment Variables

The Red Hat Quay Operator uses environment variables to override default container images for components such as base, clair, postgres, and redis. You can set these variables in the ClusterServiceVersion object to customize which images the Operator uses for each component.

Table 3.2. ClusterServiceVersion environment variables

  Environment Variable                Component
  RELATED_IMAGE_COMPONENT_QUAY        base
  RELATED_IMAGE_COMPONENT_CLAIR       clair
  RELATED_IMAGE_COMPONENT_POSTGRES    postgres and clair databases
  RELATED_IMAGE_COMPONENT_REDIS       redis

Note

Overridden images must be referenced by manifest (@sha256:) and not by tag (:latest).

3.6.2. Applying overrides to a running Operator

To override container images for a running Red Hat Quay Operator, you can modify the ClusterServiceVersion object to add environment variables that point to your custom images. This applies the overrides at the Operator level, so all QuayRegistry instances use the same custom images.

Procedure

  1. The ClusterServiceVersion object is Operator Lifecycle Manager’s representation of a running Operator in the cluster. Find the Red Hat Quay Operator’s ClusterServiceVersion by using a Kubernetes UI or the kubectl/oc CLI tool. For example:

    $ oc get clusterserviceversions -n <namespace>
  2. Using the UI, oc edit, or another method, modify the ClusterServiceVersion object to include the environment variables outlined above to point to the override images:

    JSONPath: spec.install.spec.deployments[0].spec.template.spec.containers[0].env

    - name: RELATED_IMAGE_COMPONENT_QUAY
      value: quay.io/projectquay/quay@sha256:c35f5af964431673f4ff5c9e90bdf45f19e38b8742b5903d41c10cc7f6339a6d
    - name: RELATED_IMAGE_COMPONENT_CLAIR
      value: quay.io/projectquay/clair@sha256:70c99feceb4c0973540d22e740659cd8d616775d3ad1c1698ddf71d0221f3ce6
    - name: RELATED_IMAGE_COMPONENT_POSTGRES
      value: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
    - name: RELATED_IMAGE_COMPONENT_REDIS
      value: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542

3.7. AWS S3 CloudFront

To configure AWS S3 CloudFront for your Red Hat Quay backend registry storage, you can create a secret that includes your config.yaml file and the CloudFront signing key. This enables CloudFront content delivery for your registry storage.

Procedure

  • Create a secret that includes your config.yaml file and the CloudFront signing key by entering the following command:

    $ oc create secret generic --from-file config.yaml=./config_awss3cloudfront.yaml --from-file default-cloudfront-signing-key.pem=./default-cloudfront-signing-key.pem test-config-bundle
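
    For reference, the config_awss3cloudfront.yaml referenced in the command typically carries a DISTRIBUTED_STORAGE_CONFIG entry for the CloudFront-backed S3 driver. The following fragment is a sketch only, assuming the CloudFrontedS3Storage driver; all values are placeholders:

    DISTRIBUTED_STORAGE_CONFIG:
      default:
        - CloudFrontedS3Storage
        - cloudfront_distribution_domain: <cloudfront_distribution_domain>
          cloudfront_key_id: <cloudfront_key_id>
          cloudfront_privatekey_filename: /conf/stack/default-cloudfront-signing-key.pem
          host: s3.<region>.amazonaws.com
          s3_access_key: <access_key>
          s3_secret_key: <secret_key>
          s3_bucket: <bucket_name>
          storage_path: /datastorage/registry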