Chapter 8. Advanced configuration


Advanced configuration options let you customize your Red Hat Quay deployment when default settings do not meet your needs. You can configure external databases, custom ingress, monitoring, and other components to integrate with existing infrastructure.

8.1. Using an external PostgreSQL database

Using an external PostgreSQL database with Red Hat Quay lets you manage your own database infrastructure instead of using the Operator-managed database. You must ensure that required configuration and extensions, such as pg_trgm, are in place before deployment.

Important

Do not share the same PostgreSQL database between Red Hat Quay and Clair deployments. Each service must use its own database instance. Sharing databases with other workloads is also not supported, because connection-intensive components such as Red Hat Quay and Clair can quickly exceed PostgreSQL’s connection limits.

Connection poolers such as pgBouncer are not supported with Red Hat Quay or Clair.

When managing your own PostgreSQL database for use with Red Hat Quay, the following best practices are recommended:

  • pg_trgm extension: The pg_trgm extension must be enabled on the database for a successful deployment.
  • Backups: Perform regular database backups using PostgreSQL-native tools or your existing backup infrastructure. The Red Hat Quay Operator does not manage database backups.
  • Restores: When restoring a backup, ensure that all Red Hat Quay pods are stopped before beginning the restore process.
  • Storage sizing: When using the Operator-managed PostgreSQL database, the default storage allocation is 50 GiB. For external databases, you must ensure sufficient storage capacity for your environment, as the Operator does not handle volume resizing.
  • Monitoring: Monitor disk usage, connection limits, and query performance to prevent outages caused by resource exhaustion.
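Before deploying, you can enable and verify the pg_trgm extension yourself. The following SQL is a sketch to run as a PostgreSQL superuser; the database name quay is an example, not a required name:

```sql
-- Connect to the database that Red Hat Quay will use
-- (the database name "quay" is an example).
\c quay
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Verify that the extension is installed:
SELECT extname, extversion FROM pg_extension WHERE extname = 'pg_trgm';
```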

8.1.1. Integrating an existing PostgreSQL database

To integrate an existing PostgreSQL database with your Red Hat Quay registry, you can set the postgres component to unmanaged and configure the DB_URI in the configBundleSecret. This lets you leverage your current database infrastructure instead of using the Operator-managed database.

Note

The following procedure uses the OpenShift Container Platform web console to configure the Red Hat Quay registry to use an external PostgreSQL database. For most users, using the web console is simpler.

You can also complete this procedure by using the oc CLI. For more information, see "Modifying the QuayRegistry CR by using the CLI" and "Modifying the configuration file by using the CLI".

Procedure

  1. In the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
  2. Click Red Hat Quay.
  3. Click QuayRegistry.
  4. Click the name of your Red Hat Quay registry, for example, example-registry.
  5. Click YAML.
  6. Set the postgres field of the QuayRegistry CR to managed: false. For example:

        - kind: postgres
          managed: false
  7. Click Save.
  8. On the Details page, click the name of your Config Bundle Secret resource.
  9. On the Secret details page, click Actions → Edit Secret.
  10. Add the DB_URI field to your config.yaml file. For example:

    DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database
  11. Optional: Add additional database configuration fields, such as DB_CONNECTION_ARGS or SSL/TLS connection arguments. For more information, see Database connection arguments.
  12. Click Save.
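The DB_URI follows the standard PostgreSQL URI form postgresql://user:password@host:port/dbname. As a sketch (not part of Red Hat Quay), the following helper splits a DB_URI and flags missing parts, which can help catch typos before saving the secret:

```python
from urllib.parse import urlsplit

def check_db_uri(uri: str) -> dict:
    """Split a DB_URI of the form postgresql://user:password@host:port/dbname
    and return its parts, raising ValueError if a required part is missing."""
    parts = urlsplit(uri)
    if parts.scheme != "postgresql":
        raise ValueError(f"unexpected scheme: {parts.scheme!r}")
    if not (parts.username and parts.hostname and parts.path.lstrip("/")):
        raise ValueError("DB_URI must include a user, host, and database name")
    return {
        "user": parts.username,
        "host": parts.hostname,
        "port": parts.port or 5432,  # PostgreSQL default port
        "database": parts.path.lstrip("/"),
    }

print(check_db_uri(
    "postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database"
))
```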

8.2. Using an external Redis database

Using an external Redis database with Red Hat Quay lets you manage your own Redis infrastructure instead of using the Operator-managed Redis. You must ensure that Redis is properly configured and available before deployment, and use a dedicated instance separate from Clair.

Important

Do not share the same Redis instance between Red Hat Quay and Clair deployments. Each service must use its own dedicated Redis instance. Sharing Redis with other workloads is not supported, because connection-intensive components such as Red Hat Quay and Clair can quickly exhaust available Redis connections and degrade performance.

8.2.1. Integrating an external Redis database

To integrate an existing Redis database with your Red Hat Quay registry, you can set the redis component to unmanaged and configure BUILDLOGS_REDIS and USER_EVENTS_REDIS in the configBundleSecret. This lets you use your own Redis infrastructure for build logs and user event processing.

Note

The following procedure uses the OpenShift Container Platform web console to configure Red Hat Quay to use an external Redis database. For most users, using the web console is simpler.

You can also complete this procedure by using the oc CLI. For more information, see "Modifying the QuayRegistry CR by using the CLI" and "Modifying the configuration file by using the CLI".

Procedure

  1. In the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
  2. Click Red Hat Quay.
  3. Click QuayRegistry.
  4. Click the name of your Red Hat Quay registry, for example, example-registry.
  5. Click YAML.
  6. Set the redis component to unmanaged by adding the following entry under spec.components:

        - kind: redis
          managed: false
  7. Click Save.
  8. On the Details page, click the name of your Config Bundle Secret resource.
  9. On the Secret details page, click Actions → Edit Secret.
  10. In the config.yaml section, add entries for your external Redis instance. For example:

    BUILDLOGS_REDIS:
      host: redis.example.com
      port: 6379
      ssl: false
    
    USER_EVENTS_REDIS:
      host: redis.example.com
      port: 6379
      ssl: false
    Important

    If both the BUILDLOGS_REDIS and USER_EVENTS_REDIS fields reference the same Redis deployment, ensure that your Redis service can handle the combined connection load. For large or high-throughput registries, use separate Redis databases or clusters for these components.

  11. Optional: Add additional Redis configuration fields, such as authentication or SSL/TLS settings. For more information, see Redis configuration fields.
  12. Click Save.
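When both fields point at the same Redis endpoint, the combined connection load falls on one instance, as the admonition above notes. The following sketch (not part of Red Hat Quay; the field names mirror the config.yaml entries) validates the two entries and reports whether they share an endpoint:

```python
# A small sketch that checks the two Redis entries from config.yaml
# and reports whether they share a single host:port endpoint.
REQUIRED_KEYS = {"host", "port"}

def shared_redis_endpoint(config: dict) -> bool:
    """Return True if BUILDLOGS_REDIS and USER_EVENTS_REDIS point at the
    same host:port, after validating that required keys are present."""
    endpoints = []
    for field in ("BUILDLOGS_REDIS", "USER_EVENTS_REDIS"):
        entry = config.get(field)
        if not entry or not REQUIRED_KEYS <= entry.keys():
            raise ValueError(f"{field} must define host and port")
        endpoints.append((entry["host"], entry["port"]))
    return endpoints[0] == endpoints[1]

config = {
    "BUILDLOGS_REDIS": {"host": "redis.example.com", "port": 6379, "ssl": False},
    "USER_EVENTS_REDIS": {"host": "redis.example.com", "port": 6379, "ssl": False},
}
if shared_redis_endpoint(config):
    print("warning: both fields use one Redis endpoint; size it for the combined load")
```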

8.3. About Horizontal Pod Autoscaling (HPA)

Horizontal Pod Autoscalers (HPAs) automatically adjust the number of running pods based on CPU and memory utilization. Red Hat Quay deployments include managed HPAs for key components to ensure availability and performance during load spikes or maintenance events.

A typical Red Hat Quay deployment includes the following pods:

  • Two pods for the Red Hat Quay application (example-registry-quay-app-*)
  • One Redis pod for Red Hat Quay logging (example-registry-quay-redis-*)
  • One PostgreSQL pod for metadata storage (example-registry-quay-database-*)
  • Two Quay mirroring pods (example-registry-quay-mirror-*)
  • Two pods for Clair (example-registry-clair-app-*)
  • One PostgreSQL pod for Clair (example-registry-clair-postgres-*)

HPAs are managed by default for the Quay, Clair, and Mirror components, each starting with two replicas to prevent downtime during upgrades, reconfigurations, or pod rescheduling events.

8.3.1. Managing Horizontal Pod Autoscaling

To customize scaling thresholds or replica limits for your Red Hat Quay registry, you can set the horizontalpodautoscaler component to unmanaged in the QuayRegistry custom resource. You can then explicitly set replica counts for the quay, clair, and mirror components.

Note

The following procedure uses the oc CLI to set the horizontalpodautoscaler component to unmanaged and to create a custom HorizontalPodAutoscaler resource. For more information about editing the QuayRegistry CR, see "Modifying the QuayRegistry CR by using the CLI".

Procedure

  1. Edit your QuayRegistry CR:

    $ oc edit quayregistry <quay_registry_name> -n <quay_namespace>
    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: quay-registry
      namespace: quay-enterprise
    spec:
      components:
        - kind: horizontalpodautoscaler
          managed: false
        - kind: quay
          managed: true
          overrides:
            replicas: null
        - kind: clair
          managed: true
          overrides:
            replicas: null
        - kind: mirror
          managed: true
          overrides:
            replicas: null
    # ...
  2. Create a custom HorizontalPodAutoscaler resource with your desired configuration, for example:

    kind: HorizontalPodAutoscaler
    apiVersion: autoscaling/v2
    metadata:
      name: quay-registry-quay-app
      namespace: quay-enterprise
    spec:
      scaleTargetRef:
        kind: Deployment
        name: quay-registry-quay-app
        apiVersion: apps/v1
      minReplicas: 3
      maxReplicas: 20
      metrics:
        - type: Resource
          resource:
            name: memory
            target:
              type: Utilization
              averageUtilization: 90
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 90
  3. Apply the new HPA configuration to your cluster:

    $ oc apply -f <custom_hpa>.yaml
    horizontalpodautoscaler.autoscaling/quay-registry-quay-app created

Verification

  1. Verify that your Red Hat Quay application pods are running:

    $ oc get pod | grep quay-app
    quay-registry-quay-app-5b8fd49d6b-7wvbk         1/1     Running     0          34m
    quay-registry-quay-app-5b8fd49d6b-jslq9         1/1     Running     0          3m42s
    quay-registry-quay-app-5b8fd49d6b-pskpz         1/1     Running     0          43m
  2. Verify that your custom HPA is active:

    $ oc get hpa
    NAME                     REFERENCE                           TARGETS            MINPODS   MAXPODS   REPLICAS   AGE
    quay-registry-quay-app   Deployment/quay-registry-quay-app   67%/90%, 54%/90%   3         20        3          51m

8.4. Configuring custom ingress

You can configure custom ingress for Red Hat Quay by disabling the Operator-managed route component and managing your own routes or ingress controllers. This configuration is useful when your environment requires a custom SSL/TLS setup, specific DNS naming conventions, or when Red Hat Quay is deployed behind a load balancer or proxy that handles TLS termination.

The Red Hat Quay Operator separates route management from SSL/TLS configuration by introducing a distinct tls component. You can therefore manage each independently, depending on whether Red Hat Quay or the cluster should handle TLS termination. For more information about using SSL/TLS certificates with your deployment, see "Securing Red Hat Quay".

Note

If you disable the managed route, you are responsible for creating and managing a Route, Ingress, or Service to expose Red Hat Quay. Ensure that your DNS entry matches the SERVER_HOSTNAME configured in config.yaml.

8.4.1. Disabling the Route component

To prevent the Red Hat Quay Operator from creating a route, you can set the route component to unmanaged in the QuayRegistry custom resource. You must then configure SSL/TLS handling in your config.yaml file.

Procedure

  1. In your quayregistry.yaml file, set the route component as managed: false:

    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: example-registry
      namespace: quay-enterprise
    spec:
      components:
        - kind: route
          managed: false
  2. In your config.yaml file, configure Red Hat Quay to handle SSL/TLS. For example:

    # ...
    EXTERNAL_TLS_TERMINATION: false
    SERVER_HOSTNAME: example-registry-quay-quay-enterprise.apps.user1.example.com
    PREFERRED_URL_SCHEME: https
    # ...

    If the configuration is incomplete, the following error might appear:

    {
      "reason":"ConfigInvalid",
      "message":"required component `route` marked as unmanaged, but `configBundleSecret` is missing necessary fields"
    }

8.4.2. Configuring SSL/TLS and routes

Configuring SSL/TLS and routes for Red Hat Quay lets you control how TLS termination and route management work together. The tls component provides support for OpenShift Container Platform edge termination routes and enables independent control of route management and TLS certificate handling.

EXTERNAL_TLS_TERMINATION: true is the default, opinionated setting, which assumes the cluster manages TLS termination.

Note
  • When tls is managed, the cluster’s default wildcard certificate is used.
  • When tls is unmanaged, you must supply your own SSL/TLS certificate and key pair.

Multiple valid configurations are possible, as shown in the following table:

Table 8.1. Valid configuration options for TLS and routes

  Option                             Route      TLS        Certs provided   Result
  My own load balancer handles TLS   Managed    Managed    No               Edge route using default cluster wildcard certificate
  Red Hat Quay handles TLS           Managed    Unmanaged  Yes              Passthrough route with certificates mounted in the Red Hat Quay pod
  Red Hat Quay handles TLS           Unmanaged  Unmanaged  Yes              Certificates set inside the Red Hat Quay pod; user must manually create a route
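For the last row in the table, where both route and tls are unmanaged, you must create the route yourself. The following YAML is a sketch of such a manually created passthrough route; the service name, target port, and hostname are assumptions based on the example-registry examples used throughout this chapter, so adjust them to your deployment:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-registry-quay        # example name
  namespace: quay-enterprise
spec:
  host: example-registry-quay-quay-enterprise.apps.user1.example.com
  to:
    kind: Service
    name: example-registry-quay-app  # Operator-created Quay application service
  port:
    targetPort: https                # assumed HTTPS port name on the service
  tls:
    termination: passthrough         # Red Hat Quay handles TLS inside the pod
```

Ensure the host field matches the SERVER_HOSTNAME value in config.yaml.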

8.5. Disabling the monitoring component

To disable monitoring, set the monitoring component to unmanaged in the QuayRegistry custom resource. You must disable monitoring when you install the Red Hat Quay Operator in a single namespace; in multi-namespace installations, you can disable it to use your own monitoring stack.

Note

Monitoring cannot be enabled when the Red Hat Quay Operator is installed in a single namespace.

You might also disable monitoring in multi-namespace deployments if you use an external Prometheus or Grafana instance, want to reduce resource overhead, or require custom observability integration. The following example sets the monitoring component to unmanaged:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: monitoring
      managed: false

8.6. Disabling the mirroring component

Repository mirroring in Red Hat Quay allows you to automatically synchronize container images from remote registries into your local Red Hat Quay instance. The Red Hat Quay Operator deploys a separate mirroring worker component that handles these synchronization tasks.

You can disable the managed mirroring component by setting it to managed: false in the QuayRegistry custom resource, as shown in the following example.

Note

Disabling managed mirroring means that the Operator does not deploy or reconcile any mirroring pods. You are responsible for creating, scheduling, and maintaining mirroring jobs manually. For most production deployments, leaving mirroring as managed: true is recommended.

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: mirroring
      managed: false

8.7. Configuring QuayRegistry CR resources

Configuring resources for managed components lets you adjust CPU and memory requests for quay, clair, mirroring, and database pods. You can configure resources to run smaller test clusters or request more resources upfront to avoid performance issues.

The following components should not be set lower than their minimum requirements. Setting resources too low can cause issues with your deployment and, in some cases, result in failure of the pod’s deployment.

  • quay: Minimum of 6 GB of memory, 2 vCPUs
  • clair: Recommended minimum of 2 GB of memory, 2 vCPUs
  • clairpostgres: Minimum of 200 MB of memory

You can configure resource requests in the OpenShift Container Platform web console or by updating the QuayRegistry CR directly with the CLI.

Important

The default values set for these components are the suggested values. Setting resource requests too high might lead to inefficient resource utilization; setting them too low might degrade performance.

8.7.1. Configuring resource requests by using the web console

To configure resource requests for your Red Hat Quay registry components, you can use the OpenShift Container Platform web console to edit the QuayRegistry custom resource. You can set CPU and memory limits and requests for quay, clair, mirroring, and database pods.

Procedure

  1. In the OpenShift Container Platform web console, navigate to Operators → Installed Operators → Red Hat Quay.
  2. Click QuayRegistry.
  3. Click the name of your registry, for example, example-registry.
  4. Click YAML.
  5. In the spec.components field, you can override the resources of all components by setting values for the .overrides.resources.limits and .overrides.resources.requests fields. You can also specify a storageClassName for the postgres and clairpostgres components; however, these fields must be defined during the initial installation of the component. For example:

    spec:
      components:
        - kind: clair
          managed: true
          overrides:
            resources:
              limits:
                cpu: "5"     # Limiting to 5 CPU (equivalent to 5000m or 5000 millicpu)
                memory: "18Gi"  # Limiting to 18 Gibibytes of memory
              requests:
                cpu: "4"     # Requesting 4 CPU
                memory: "4Gi"   # Requesting 4 Gibibytes of memory
        - kind: postgres
          managed: true
          overrides:
            storageClassName: "local-path"
            resources:
              limits: {}
              requests:
                cpu: "700m"   # Requesting 700 millicpu or 0.7 CPU
                memory: "4Gi"   # Requesting 4 Gibibytes of memory
        - kind: mirror
          managed: true
          overrides:
            resources:
              limits:
              requests:
                cpu: "800m"   # Requesting 800 millicpu or 0.8 CPU
                memory: "1Gi"   # Requesting 1 Gibibyte of memory
        - kind: quay
          managed: true
          overrides:
            resources:
              limits:
                cpu: "4"    # Limiting to 4 CPU
                memory: "10Gi"   # Limiting to 10 Gibibytes of memory
              requests:
                cpu: "4"   # Requesting 4 CPU
                memory: "10Gi"   # Requesting 10 Gibibytes of memory
        - kind: clairpostgres
          managed: true
          overrides:
            storageClassName: "local-path"
            resources:
              limits:
                cpu: "800m"   # Limiting to 800 millicpu or 0.8 CPU
                memory: "3Gi"   # Limiting to 3 Gibibytes of memory
              requests: {}
    • Setting the limits or requests fields to {} uses the default values for these resources.
    • Leaving the limits or requests field empty puts no limitations on these resources.

8.7.2. Configuring resource requests by using the CLI

To configure resource requests for your Red Hat Quay registry components after deployment, you can edit the QuayRegistry custom resource using the CLI. You can set CPU and memory limits and requests for quay, clair, mirroring, and database pods.

Procedure

  1. Edit the QuayRegistry CR by entering the following command:

    $ oc edit quayregistry <registry_name> -n <namespace>
  2. Make any desired changes. For example:

        - kind: quay
          managed: true
          overrides:
            resources:
              limits: {}
              requests:
                cpu: "0.7"   # Requesting 0.7 CPU (equivalent to 700m or 700 millicpu)
                memory: "512Mi"   # Requesting 512 Mebibytes of memory
  3. Save the changes.
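Kubernetes accepts CPU quantities either as whole or fractional cores ("4", "0.7") or in millicpu ("800m"), which is why the comments above treat "0.7" and "700m" as equivalent. The following sketch (not part of Red Hat Quay or the Kubernetes API) illustrates the conversion:

```python
# Sketch: convert Kubernetes CPU quantities to millicpu, to illustrate
# equivalences such as "0.7" == "700m" used in the examples above.
def to_millicpu(quantity: str) -> int:
    """Parse a CPU quantity like "4", "0.7", or "800m" into millicpu."""
    if quantity.endswith("m"):
        return int(quantity[:-1])
    return round(float(quantity) * 1000)

print(to_millicpu("0.7"))   # 700
print(to_millicpu("800m"))  # 800
print(to_millicpu("4"))     # 4000
```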