Chapter 8. Advanced configuration
Advanced configuration options let you customize your Red Hat Quay deployment when default settings do not meet your needs. You can configure external databases, custom ingress, monitoring, and other components to integrate with existing infrastructure.
8.1. Using an external PostgreSQL database
Using an external PostgreSQL database with Red Hat Quay lets you manage your own database infrastructure instead of using the Operator-managed database. You must ensure that required configuration and extensions, such as pg_trgm, are in place before deployment.
Do not share the same PostgreSQL database between Red Hat Quay and Clair deployments. Each service must use its own database instance. Sharing databases with other workloads is also not supported, because connection-intensive components such as Red Hat Quay and Clair can quickly exceed PostgreSQL’s connection limits.
Connection poolers such as pgBouncer are not supported with Red Hat Quay or Clair.
When managing your own PostgreSQL database for use with Red Hat Quay, the following best practices are recommended:

- pg_trgm extension: The pg_trgm extension must be enabled on the database for a successful deployment.
- Backups: Perform regular database backups using PostgreSQL-native tools or your existing backup infrastructure. The Red Hat Quay Operator does not manage database backups.
- Restores: When restoring a backup, ensure that all Red Hat Quay pods are stopped before beginning the restore process.
- Storage sizing: When using the Operator-managed PostgreSQL database, the default storage allocation is 50 GiB. For external databases, you must ensure sufficient storage capacity for your environment, because the Operator does not handle volume resizing.
- Monitoring: Monitor disk usage, connection limits, and query performance to prevent outages caused by resource exhaustion.
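The pg_trgm requirement above can be satisfied before deployment with standard PostgreSQL tooling. The following commands are a sketch: `<database_host>` and `<quay_database>` are placeholders for your environment, and a superuser (or similarly privileged) connection is assumed.

```shell
# Sketch: prepare an external PostgreSQL database for Red Hat Quay.
# <database_host> and <quay_database> are placeholders.

# Enable the pg_trgm extension (requires sufficient privileges):
psql -h <database_host> -U postgres -d <quay_database> \
  -c "CREATE EXTENSION IF NOT EXISTS pg_trgm;"

# Confirm the extension is installed:
psql -h <database_host> -U postgres -d <quay_database> \
  -c "SELECT extname FROM pg_extension WHERE extname = 'pg_trgm';"

# Check the server's connection limit when sizing for
# connection-intensive components such as Quay and Clair:
psql -h <database_host> -U postgres -d <quay_database> \
  -c "SHOW max_connections;"
```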
8.1.1. Integrating an existing PostgreSQL database
To integrate an existing PostgreSQL database with your Red Hat Quay registry, you can set the postgres component to unmanaged and configure the DB_URI in the configBundleSecret. This lets you leverage your current database infrastructure instead of using the Operator-managed database.
The following procedure uses the OpenShift Container Platform web console to configure the Red Hat Quay registry to use an external PostgreSQL database. For most users, using the web console is simpler.
This procedure can also be done by using the oc CLI and following the instructions in "Modifying the QuayRegistry CR by using the CLI" and "Modifying the configuration file by using the CLI".
Procedure
- On the OpenShift Container Platform web console, click Operators → Installed Operators.
- Click Red Hat Quay.
- Click Quay Registry.
- Click the name of your Red Hat Quay registry, for example, example-registry.
- Click YAML.
- Set the postgres field of the QuayRegistry CR to managed: false. For example:

  ```yaml
  - kind: postgres
    managed: false
  ```

- Click Save.
- Click Details → the name of your Config Bundle Secret resource.
- On the Secret Details page, click Actions → Edit Secret.
- Add the DB_URI field to your config.yaml file. For example:

  ```yaml
  DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database
  ```

- Optional: Add additional database configuration fields, such as DB_CONNECTION_ARGS or SSL/TLS connection arguments. For more information, see Database connection arguments.
- Click Save.
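As a consolidated sketch, the external database settings in config.yaml might look like the following. The hostname, credentials, database name, and certificate path are hypothetical examples; verify the exact connection-argument field names for your Red Hat Quay version against the "Database connection arguments" reference.

```yaml
# Sketch: external PostgreSQL settings in config.yaml.
# Hostname, credentials, and paths are hypothetical examples.
DB_URI: postgresql://quayuser:quaypass@postgres.example.com:5432/quay
DB_CONNECTION_ARGS:
  autorollback: true
  threadlocals: true
  # Optional SSL/TLS verification against a CA certificate:
  sslmode: verify-ca
  sslrootcert: /path/to/ca-certificate.pem
```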
8.2. Using an external Redis database
Using an external Redis database with Red Hat Quay lets you manage your own Redis infrastructure instead of using the Operator-managed Redis. You must ensure that Redis is properly configured and available before deployment, and use a dedicated instance separate from Clair.
Do not share the same Redis instance between Red Hat Quay and Clair deployments. Each service must use its own dedicated Redis instance. Sharing Redis with other workloads is not supported, because connection-intensive components such as Red Hat Quay and Clair can quickly exhaust available Redis connections and degrade performance.
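Before pointing Red Hat Quay at an external Redis instance, you can confirm reachability with standard Redis tooling. The following is a sketch; redis.example.com and the password are placeholders.

```shell
# Sketch: verify that the external Redis instance is reachable.
# redis.example.com and <password> are placeholders.
redis-cli -h redis.example.com -p 6379 PING
# A healthy instance replies: PONG

# If the instance requires authentication, pass the password:
redis-cli -h redis.example.com -p 6379 -a <password> PING
```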
8.2.1. Integrating an external Redis database
To integrate an existing Redis database with your Red Hat Quay registry, you can set the redis component to unmanaged and configure BUILDLOGS_REDIS and USER_EVENTS_REDIS in the configBundleSecret. This lets you use your own Redis infrastructure for build logs and user event processing.
The following procedure uses the OpenShift Container Platform web console to configure Red Hat Quay to use an external Redis database. For most users, using the web console is simpler.
You can also complete this procedure by using the oc CLI. For more information, see "Modifying the QuayRegistry CR by using the CLI" and "Modifying the configuration file by using the CLI".
Procedure
- In the OpenShift Container Platform web console, navigate to Operators → Installed Operators.
- Click Red Hat Quay.
- Click QuayRegistry.
- Click the name of your Red Hat Quay registry, for example, example-registry.
- Click YAML.
- Set the redis component to unmanaged by adding the following entry under spec.components:

  ```yaml
  - kind: redis
    managed: false
  ```

- Click Save.
- Click Details → the name of your Config Bundle Secret resource.
- On the Secret details page, click Actions → Edit Secret.
- In the config.yaml section, add entries for your external Redis instance. For example:

  ```yaml
  BUILDLOGS_REDIS:
    host: redis.example.com
    port: 6379
    ssl: false
  USER_EVENTS_REDIS:
    host: redis.example.com
    port: 6379
    ssl: false
  ```

  Important: If both the BUILDLOGS_REDIS and USER_EVENTS_REDIS fields reference the same Redis deployment, ensure that your Redis service can handle the combined connection load. For large or high-throughput registries, use separate Redis databases or clusters for these components.

- Optional: Add additional Redis configuration fields, such as SSL/TLS connection settings. For more information, see Redis configuration fields.
- Click Save.
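For large or high-throughput registries, the build-logs and user-events workloads can be split across dedicated Redis instances. The following config.yaml sketch shows that layout; the hostnames, passwords, and ssl values are hypothetical examples for your environment.

```yaml
# Sketch: dedicated Redis endpoints for each component.
# Hostnames and credentials are hypothetical examples.
BUILDLOGS_REDIS:
  host: redis-buildlogs.example.com
  port: 6379
  password: <buildlogs_redis_password>
  ssl: true
USER_EVENTS_REDIS:
  host: redis-userevents.example.com
  port: 6379
  password: <user_events_redis_password>
  ssl: true
```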
8.3. About Horizontal Pod Autoscaling (HPA)
Horizontal Pod Autoscalers (HPAs) automatically adjust the number of running pods based on CPU and memory utilization. Red Hat Quay deployments include managed HPAs for key components to ensure availability and performance during load spikes or maintenance events.
A typical Red Hat Quay deployment includes the following pods:
- Two pods for the Red Hat Quay application (example-registry-quay-app-*)
- One Redis pod for Red Hat Quay logging (example-registry-quay-redis-*)
- One PostgreSQL pod for metadata storage (example-registry-quay-database-*)
- Two Quay mirroring pods (example-registry-quay-mirror-*)
- Two pods for Clair (example-registry-clair-app-*)
- One PostgreSQL pod for Clair (example-registry-clair-postgres-*)
HPAs are managed by default for the Quay, Clair, and Mirror components, each starting with two replicas to prevent downtime during upgrades, reconfigurations, or pod rescheduling events.
8.3.1. Managing Horizontal Pod Autoscaling
To customize scaling thresholds or replica limits for your Red Hat Quay registry, you can set the horizontalpodautoscaler component to unmanaged in the QuayRegistry custom resource. You can then explicitly set replica counts for the quay, clair, and mirror components.
The following procedure uses the oc CLI to customize Horizontal Pod Autoscaling for your Red Hat Quay registry. For more information about editing the QuayRegistry CR, see "Modifying the QuayRegistry CR by using the CLI" and "Modifying the configuration file by using the CLI".
Procedure
- Edit your QuayRegistry CR:

  ```shell
  $ oc edit quayregistry <quay_registry_name> -n <quay_namespace>
  ```

  ```yaml
  apiVersion: quay.redhat.com/v1
  kind: QuayRegistry
  metadata:
    name: quay-registry
    namespace: quay-enterprise
  spec:
    components:
      - kind: horizontalpodautoscaler
        managed: false
      - kind: quay
        managed: true
        overrides:
          replicas: null
      - kind: clair
        managed: true
        overrides:
          replicas: null
      - kind: mirror
        managed: true
        overrides:
          replicas: null
  # ...
  ```

- Create a custom HorizontalPodAutoscaler resource with your desired configuration, for example:

  ```yaml
  kind: HorizontalPodAutoscaler
  apiVersion: autoscaling/v2
  metadata:
    name: quay-registry-quay-app
    namespace: quay-enterprise
  spec:
    scaleTargetRef:
      kind: Deployment
      name: quay-registry-quay-app
      apiVersion: apps/v1
    minReplicas: 3
    maxReplicas: 20
    metrics:
      - type: Resource
        resource:
          name: memory
          target:
            type: Utilization
            averageUtilization: 90
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 90
  ```

- Apply the new HPA configuration to your cluster:

  ```shell
  $ oc apply -f <custom_hpa>.yaml
  ```

  Example output:

  ```
  horizontalpodautoscaler.autoscaling/quay-registry-quay-app created
  ```
Verification
- Verify that your Red Hat Quay application pods are running:

  ```shell
  $ oc get pod | grep quay-app
  ```

  Example output:

  ```
  quay-registry-quay-app-5b8fd49d6b-7wvbk   1/1   Running   0   34m
  quay-registry-quay-app-5b8fd49d6b-jslq9   1/1   Running   0   3m42s
  quay-registry-quay-app-5b8fd49d6b-pskpz   1/1   Running   0   43m
  ```

- Verify that your custom HPA is active:

  ```shell
  $ oc get hpa
  ```

  Example output:

  ```
  NAME                     REFERENCE                           TARGETS            MINPODS   MAXPODS   REPLICAS   AGE
  quay-registry-quay-app   Deployment/quay-registry-quay-app   67%/90%, 54%/90%   3         20        3          51m
  ```
8.4. Configuring custom ingress
You can configure custom ingress for Red Hat Quay by disabling the Operator-managed route component and managing your own routes or ingress controllers. This configuration is useful when your environment requires a custom SSL/TLS setup, specific DNS naming conventions, or when Red Hat Quay is deployed behind a load balancer or proxy that handles TLS termination.
The Red Hat Quay Operator separates route management from SSL/TLS configuration by introducing a distinct tls component. You can therefore manage each independently, depending on whether Red Hat Quay or the cluster should handle TLS termination. For more information about using SSL/TLS certificates with your deployment, see "Securing Red Hat Quay".
If you disable the managed route, you are responsible for creating and managing a Route, Ingress, or Service to expose Red Hat Quay. Ensure that your DNS entry matches the SERVER_HOSTNAME configured in config.yaml.
8.4.1. Disabling the Route component
To prevent the Red Hat Quay Operator from creating a route, you can set the route component to unmanaged in the QuayRegistry custom resource. You must then configure SSL/TLS handling in your config.yaml file.
Procedure
- In your quayregistry.yaml file, set the route component as managed: false:

  ```yaml
  apiVersion: quay.redhat.com/v1
  kind: QuayRegistry
  metadata:
    name: example-registry
    namespace: quay-enterprise
  spec:
    components:
      - kind: route
        managed: false
  ```

- In your config.yaml file, configure Red Hat Quay to handle SSL/TLS. For example:

  ```yaml
  # ...
  EXTERNAL_TLS_TERMINATION: false
  SERVER_HOSTNAME: example-registry-quay-quay-enterprise.apps.user1.example.com
  PREFERRED_URL_SCHEME: https
  # ...
  ```

  If the configuration is incomplete, the following error might appear:

  ```json
  {
    "reason": "ConfigInvalid",
    "message": "required component `route` marked as unmanaged, but `configBundleSecret` is missing necessary fields"
  }
  ```
8.4.2. Configuring SSL/TLS and routes
Configuring SSL/TLS and routes for Red Hat Quay lets you control how TLS termination and route management work together. The tls component provides support for OpenShift Container Platform edge termination routes and enables independent control of route management and TLS certificate handling.
EXTERNAL_TLS_TERMINATION: true is the default, opinionated setting, which assumes the cluster manages TLS termination.
- When tls is managed, the cluster's default wildcard certificate is used.
- When tls is unmanaged, you must supply your own SSL/TLS certificate and key pair.
Multiple valid configurations are possible, as shown in the following table:
| Option | Route | TLS | Certs provided | Result |
|---|---|---|---|---|
| My own load balancer handles TLS | Managed | Managed | No | Edge route using default cluster wildcard certificate |
| Red Hat Quay handles TLS | Managed | Unmanaged | Yes | Passthrough route with certificates mounted in the Red Hat Quay pod |
| Red Hat Quay handles TLS | Unmanaged | Unmanaged | Yes | Certificates set inside the Red Hat Quay pod; user must manually create a route |
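For the unmanaged-route case in the last table row, a manually created passthrough route might look like the following sketch. The resource name, hostname, and service name are hypothetical; the host must match the SERVER_HOSTNAME in config.yaml, and the service name must match the one created for your deployment.

```yaml
# Sketch: a manually managed passthrough route for an unmanaged
# route component. Names and hostname are hypothetical examples.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-registry-quay
  namespace: quay-enterprise
spec:
  host: example-registry-quay-quay-enterprise.apps.user1.example.com
  to:
    kind: Service
    name: example-registry-quay-app
  port:
    targetPort: https
  tls:
    termination: passthrough
```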
8.5. Disabling the monitoring component
To disable the monitoring component, set it to unmanaged in the QuayRegistry custom resource. You must disable monitoring when you install the Red Hat Quay Operator in a single namespace, and you can disable it in multi-namespace installations to use your own monitoring stack.
Monitoring cannot be enabled when the Red Hat Quay Operator is installed in a single namespace.
You might also disable monitoring in multi-namespace deployments if you use an external Prometheus or Grafana instance, want to reduce resource overhead, or require custom observability integration.
```yaml
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: monitoring
      managed: false
```
8.6. Disabling the mirroring component
Repository mirroring in Red Hat Quay allows you to automatically synchronize container images from remote registries into your local Red Hat Quay instance. The Red Hat Quay Operator deploys a separate mirroring worker component that handles these synchronization tasks.
You can disable the managed mirroring component by setting it to managed: false in the QuayRegistry custom resource.
Disabling managed mirroring means that the Operator does not deploy or reconcile any mirroring pods. You are responsible for creating, scheduling, and maintaining mirroring jobs manually. For most production deployments, leaving mirroring as managed: true is recommended.
```yaml
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: mirror
      managed: false
```
8.7. Configuring QuayRegistry CR resources
Configuring resources for managed components lets you adjust CPU and memory requests for quay, clair, mirroring, and database pods. You can configure resources to run smaller test clusters or request more resources upfront to avoid performance issues.
The following components should not be set lower than their minimum requirements. Setting resources too low can cause issues with your deployment and, in some cases, result in failure of the pod’s deployment.
- quay: minimum of 6 GB memory, 2 vCPUs
- clair: recommended 2 GB memory, 2 vCPUs
- clairpostgres: minimum of 200 MB memory
You can configure resource requests on the OpenShift Container Platform UI or directly by updating the QuayRegistry CR via the CLI.
The default values set for these components are the suggested values. Setting resource requests too high might lead to inefficient resource utilization, while setting them too low might cause performance degradation.
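As a sketch, the stated minimums map onto QuayRegistry resource overrides like the following. Treat these as starting points rather than tuned production values; the GB-to-Gi mapping is approximate.

```yaml
# Sketch: resource requests at the documented minimums.
# Starting points only, not tuned production values.
spec:
  components:
    - kind: quay
      managed: true
      overrides:
        resources:
          requests:
            cpu: "2"        # 2 vCPUs minimum
            memory: "6Gi"   # approximately the 6 GB minimum
    - kind: clair
      managed: true
      overrides:
        resources:
          requests:
            cpu: "2"        # 2 vCPUs recommended
            memory: "2Gi"   # approximately the 2 GB recommendation
    - kind: clairpostgres
      managed: true
      overrides:
        resources:
          requests:
            memory: "200Mi" # approximately the 200 MB minimum
```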
8.7.1. Configuring resource requests by using the OpenShift Container Platform web console
To configure resource requests for your Red Hat Quay registry components, you can use the OpenShift Container Platform web console to edit the QuayRegistry custom resource. You can set CPU and memory limits and requests for quay, clair, mirroring, and database pods.
Procedure
- On the OpenShift Container Platform web console, click Operators → Installed Operators → Red Hat Quay.
- Click QuayRegistry.
- Click the name of your registry, for example, example-registry.
- Click YAML.
- In the spec.components field, you can override the resources of all components by setting values for the overrides.resources.limits and overrides.resources.requests fields. You can also specify a storageClassName for postgres and clairpostgres resources; however, these fields must be defined during the initial installation of the component. For example:

  ```yaml
  spec:
    components:
      - kind: clair
        managed: true
        overrides:
          resources:
            limits:
              cpu: "5"        # Limiting to 5 CPUs (equivalent to 5000m, or 5000 millicpu)
              memory: "18Gi"  # Limiting to 18 gibibytes of memory
            requests:
              cpu: "4"        # Requesting 4 CPUs
              memory: "4Gi"   # Requesting 4 gibibytes of memory
      - kind: postgres
        managed: true
        overrides:
          storageClassName: "local-path"
          resources:
            limits: {}
            requests:
              cpu: "700m"     # Requesting 700 millicpu, or 0.7 CPU
              memory: "4Gi"   # Requesting 4 gibibytes of memory
      - kind: mirror
        managed: true
        overrides:
          resources:
            limits: {}
            requests:
              cpu: "800m"     # Requesting 800 millicpu, or 0.8 CPU
              memory: "1Gi"   # Requesting 1 gibibyte of memory
      - kind: quay
        managed: true
        overrides:
          resources:
            limits:
              cpu: "4"        # Limiting to 4 CPUs
              memory: "10Gi"  # Limiting to 10 gibibytes of memory
            requests:
              cpu: "4"        # Requesting 4 CPUs
              memory: "10Gi"  # Requesting 10 gibibytes of memory
      - kind: clairpostgres
        managed: true
        overrides:
          storageClassName: "local-path"
          resources:
            limits:
              cpu: "800m"     # Limiting to 800 millicpu, or 0.8 CPU
              memory: "3Gi"   # Limiting to 3 gibibytes of memory
            requests: {}
  ```

- Setting the limits or requests fields to {} uses the default values for these resources.
- Leaving the limits or requests field empty puts no limitations on these resources.
8.7.2. Configuring resource requests by using the CLI
To configure resource requests for your Red Hat Quay registry components after deployment, you can edit the QuayRegistry custom resource using the CLI. You can set CPU and memory limits and requests for quay, clair, mirroring, and database pods.
Procedure
- Edit the QuayRegistry CR by entering the following command:

  ```shell
  $ oc edit quayregistry <registry_name> -n <namespace>
  ```

- Make any desired changes. For example:

  ```yaml
  - kind: quay
    managed: true
    overrides:
      resources:
        limits: {}
        requests:
          cpu: "0.7"      # Requesting 0.7 CPU (equivalent to 700m, or 700 millicpu)
          memory: "512Mi" # Requesting 512 mebibytes of memory
  ```

- Save the changes.