Chapter 4. Geo-replication
Geo-replication lets multiple geographically distributed Red Hat Quay deployments work as a single registry from the client perspective. This improves push and pull performance in globally distributed setups and provides transparent failover and redirect for clients.
Geo-replication is supported on standalone and Operator-based deployments.
4.1. Geo-replication features
Geo-replication features optimize image push and pull operations by routing pushes to the nearest storage backend and replicating data in the background to other locations. Pulls automatically use the closest available storage engine to maximize performance, with fallback to the source storage if replication is incomplete.
The following are the key features of geo-replication:
- When geo-replication is configured, container image pushes are written to the preferred storage engine for that Red Hat Quay instance. This is typically the nearest storage backend within the region.
- After the initial push, image data is replicated in the background to other storage engines.
- The list of replication locations is configurable, and those locations can be different storage backends (see the configuration sketch after this list).
- An image pull always uses the closest available storage engine to maximize pull performance.
- If replication has not been completed yet, the pull uses the source storage backend instead.
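The following is a hedged sketch of how this routing is expressed in the Red Hat Quay config.yaml file, using hypothetical storage engine names and omitting credentials; the full set of keys appears in the geo-replication example later in this chapter:

# Minimal sketch with hypothetical engine names; credentials omitted
DISTRIBUTED_STORAGE_CONFIG:
  usstorage:
    - GoogleCloudStorage
    - bucket_name: <us_region_bucket>
  eustorage:
    - GoogleCloudStorage
    - bucket_name: <eu_region_bucket>
# The preference list names the engine that receives pushes for this instance
DISTRIBUTED_STORAGE_PREFERENCE:
  - usstorage
  - eustorage
# Pushed blobs are replicated in the background to all default locations
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
  - usstorage
  - eustorage
FEATURE_STORAGE_REPLICATION: true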
4.2. Geo-replication requirements and constraints
To run geo-replication reliably in Red Hat Quay, you must meet strict requirements for shared object storage, networking, load balancing, and external health monitoring. Geo-replication does not provide automatic failover, database replication, or storage awareness, so you must design and manage these behaviors outside of Red Hat Quay.
The following are the requirements and constraints for geo-replication:
- In geo-replicated setups, Red Hat Quay requires that all regions can read from and write to every other region's object storage. Object storage must be geographically accessible to all other regions.
- If the object storage system of one geo-replicating site fails, that site's Red Hat Quay deployment must be shut down so that a global load balancer redirects clients to the remaining site with intact storage systems. Otherwise, clients will experience pull and push failures.
- Red Hat Quay has no internal awareness of the health or availability of the connected object storage system. You must configure a global load balancer (LB) to monitor the health of your distributed system and to route traffic to different sites based on their storage status.
- To check the status of your geo-replication deployment, you must use the /health/endtoend endpoint, which is used for global health monitoring. You must configure the redirect manually using the /health/endtoend endpoint; the /health/instance endpoint only checks the health of the local instance. See the health-check example following these requirements.
- If the object storage system of one site becomes unavailable, there is no automatic redirect to the remaining storage system, or systems, of the remaining site, or sites.
- Geo-replication is asynchronous. The permanent loss of a site results in the loss of any data that was saved in that site's object storage system but had not yet been replicated to the remaining sites at the time of failure.
- A single database, and therefore all metadata and Red Hat Quay configuration, is shared across all regions. Geo-replication does not replicate the database. In the event of an outage, Red Hat Quay with geo-replication enabled will not fail over to another database.
- A single Redis cache is shared across the entire Red Hat Quay setup and needs to be accessible by all Red Hat Quay pods.
- The exact same configuration should be used across all regions, with the exception of the storage backend, which can be configured explicitly using the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable.
- Geo-replication requires object storage in each region. It does not work with local storage.
- Each region must be able to access every storage engine in each region, which requires a network path.
- Alternatively, the storage proxy option can be used.
- The entire storage backend, for example, all blobs, is replicated. Repository mirroring, by contrast, can be limited to a repository, or an image.
- All Red Hat Quay instances must share the same entrypoint, typically through a load balancer.
- All Red Hat Quay instances must have the same set of superusers, as they are defined inside the common configuration file.
- In geo-replication environments, your Clair configuration can be set to unmanaged. An unmanaged Clair database allows the Red Hat Quay Operator to work in a geo-replicated environment where multiple instances of the Operator must communicate with the same database. For more information, see Advanced Clair configuration.
- If you keep your Clair configuration managed, you must retrieve the configuration file for the Clair instance that is deployed by the Operator. For more information, see Retrieving and decoding the Clair configuration secret for Clair deployments on OpenShift Container Platform.
- Geo-replication requires SSL/TLS certificates and keys. For more information, see Proof of concept deployment using SSL/TLS certificates.
If the above requirements cannot be met, you should instead use two or more distinct Red Hat Quay deployments and take advantage of repository mirroring functions.
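The following is a minimal sketch of how a global load balancer health probe, or an administrator, might poll the health endpoints described above; the hostname is a hypothetical placeholder:

# Global end-to-end health, suitable for load balancer health monitoring (hypothetical hostname)
$ curl -k https://quay-site-a.example.com/health/endtoend

# Local instance health only; does not reflect geo-replication status
$ curl -k https://quay-site-a.example.com/health/instance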
4.2.1. Preparing your OpenShift Container Platform environment for geo-replication
To prepare your OpenShift Container Platform environment for geo-replication, you must deploy shared PostgreSQL and Redis instances, create object storage backends for each cluster, and configure a load balancer. This sets up the infrastructure needed for multiple Red Hat Quay deployments to work as a single registry.
Procedure
- Deploy a PostgreSQL instance for Red Hat Quay.
Log in to the database by entering the following command:

psql -U <username> -h <hostname> -p <port> -d <database_name>

Create a database for Red Hat Quay named quay. For example:

CREATE DATABASE quay;

Enable the pg_trgm extension inside the database:

\c quay;
CREATE EXTENSION IF NOT EXISTS pg_trgm;

Deploy a Redis instance:
Note
- Deploying a Redis instance might be unnecessary if your cloud provider has its own service.
- Deploying a Redis instance is required if you are leveraging Builders.
- Deploy a VM for Redis
- Verify that it is accessible from the clusters where Red Hat Quay is running
- Port 6379/TCP must be open
Run Redis inside the instance (see the verification sketch after this procedure):

sudo dnf install -y podman
podman run -d --name redis -p 6379:6379 redis
- Create two object storage backends, one for each cluster. Ideally, one object storage bucket will be close to the first, or primary, cluster, and the other will be located closer to the second, or secondary, cluster.
- Deploy the clusters with the same config bundle, using environment variable overrides to select the appropriate storage backend for an individual cluster.
- Configure a load balancer to provide a single entry point to the clusters.
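Before deploying the clusters, you can sanity-check the shared infrastructure. The following is a hedged sketch, reusing the hostnames and credentials from the previous steps, that verifies the pg_trgm extension and Redis connectivity; redis-cli may need to be installed separately:

# Confirm the pg_trgm extension is enabled in the quay database
psql -U <username> -h <hostname> -p <port> -d quay -c "SELECT extname FROM pg_extension WHERE extname = 'pg_trgm';"

# Confirm Redis answers on port 6379
redis-cli -h <redis_host> -p 6379 ping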
4.2.2. Configuring geo-replication for Red Hat Quay on OpenShift Container Platform
To configure geo-replication for Red Hat Quay on OpenShift Container Platform, you can create a shared config.yaml file with PostgreSQL, Redis, and storage backend details, create a configBundleSecret, and deploy QuayRegistry resources in each cluster with storage preference overrides. This enables multiple Red Hat Quay deployments to work as a single registry with improved performance across geographic regions.
Prerequisites
- You have prepared your OpenShift Container Platform environment for geo-replication by following the "Preparing your OpenShift Container Platform environment for geo-replication" procedure.
Procedure
Create a config.yaml file that is shared between clusters. This config.yaml file contains the details for the common PostgreSQL, Redis and storage backends:

Geo-replication config.yaml file

SERVER_HOSTNAME: <georep.quayteam.org or any other name> 1
DB_CONNECTION_ARGS:
  autorollback: true
  threadlocals: true
DB_URI: postgresql://postgres:password@10.19.0.1:5432/quay
BUILDLOGS_REDIS:
  host: 10.19.0.2
  port: 6379
USER_EVENTS_REDIS:
  host: 10.19.0.2
  port: 6379
DATABASE_SECRET_KEY: 0ce4f796-c295-415b-bf9d-b315114704b8
DISTRIBUTED_STORAGE_CONFIG:
  usstorage:
    - GoogleCloudStorage
    - access_key: GOOGQGPGVMASAAMQABCDEFG
      bucket_name: georep-test-bucket-0
      secret_key: AYWfEaxX/u84XRA2vUX5C987654321
      storage_path: /quaygcp
  eustorage:
    - GoogleCloudStorage
    - access_key: GOOGQGPGVMASAAMQWERTYUIOP
      bucket_name: georep-test-bucket-1
      secret_key: AYWfEaxX/u84XRA2vUX5Cuj12345678
      storage_path: /quaygcp
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
  - usstorage
  - eustorage
DISTRIBUTED_STORAGE_PREFERENCE:
  - usstorage
  - eustorage
FEATURE_STORAGE_REPLICATION: true

where:

SERVER_HOSTNAME:: Specifies the hostname of the global load balancer. Must match the hostname of the global load balancer.
Create the configBundleSecret custom resource (CR) by entering the following command:

$ oc create secret generic --from-file config.yaml=./config.yaml georep-config-bundle

In each of the clusters, set the configBundleSecret and use the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable override to configure the appropriate storage for that cluster, as shown in the following examples. A verification sketch follows the cluster examples.

Note
The config.yaml file between both deployments must match. If making a change to one cluster, it must also be changed in the other.

US cluster QuayRegistry example
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: georep-config-bundle
  components:
    - kind: objectstorage
      managed: false
    - kind: route
      managed: true
    - kind: tls
      managed: false
    - kind: postgres
      managed: false
    - kind: clairpostgres
      managed: false
    - kind: redis
      managed: false
    - kind: quay
      managed: true
      overrides:
        env:
          - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE
            value: usstorage
    - kind: mirror
      managed: true
      overrides:
        env:
          - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE
            value: usstorage

Note
Because SSL/TLS is unmanaged, and the route is managed, you must supply the certificates directly in the config bundle. For more information, see Configuring SSL/TLS and Routes.
European cluster QuayRegistry example
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: georep-config-bundle
  components:
    - kind: objectstorage
      managed: false
    - kind: route
      managed: true
    - kind: tls
      managed: false
    - kind: postgres
      managed: false
    - kind: clairpostgres
      managed: false
    - kind: redis
      managed: false
    - kind: quay
      managed: true
      overrides:
        env:
          - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE
            value: eustorage
    - kind: mirror
      managed: true
      overrides:
        env:
          - name: QUAY_DISTRIBUTED_STORAGE_PREFERENCE
            value: eustorage

Note
Because SSL/TLS is unmanaged, and the route is managed, you must supply the certificates directly in the config bundle. For more information, see Configuring SSL and TLS for Red Hat Quay.
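After both QuayRegistry resources are deployed, you can confirm that each cluster picked up its storage preference. The following is a minimal sketch, assuming the example-registry name from the examples above and the Operator's usual <registry_name>-quay-app deployment naming convention:

# Check the QuayRegistry status in each cluster
$ oc get quayregistry example-registry -n quay-enterprise

# Confirm the storage preference override reached the Quay application pods
$ oc get deployment example-registry-quay-app -n quay-enterprise -o jsonpath='{.spec.template.spec.containers[0].env}'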
4.2.3. Mixed storage for geo-replication
Geo-replication supports using different storage backends, such as AWS S3 in the public cloud and Ceph on premise, as replication targets. All storage backends must be accessible from all Red Hat Quay pods and cluster nodes.
Because geo-replication supports multiple replication targets, it is recommended to use a VPN or token pair with bucket-specific access to meet security requirements. This results in the public cloud instance of Red Hat Quay having access to on-premise storage, but the network is encrypted, protected, and uses ACLs, thereby meeting security requirements. If you cannot implement these security measures, it might be preferable to deploy two distinct Red Hat Quay registries and to use repository mirroring as an alternative to geo-replication.
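The following is a hedged sketch of what a mixed-backend DISTRIBUTED_STORAGE_CONFIG might look like, assuming the S3Storage and RadosGWStorage driver names and hypothetical endpoints and credentials; consult the Red Hat Quay storage configuration documentation for the authoritative field names:

# Hypothetical mixed setup: AWS S3 in the public cloud, Ceph RadosGW on premise
DISTRIBUTED_STORAGE_CONFIG:
  awsstorage:
    - S3Storage
    - s3_bucket: <public_cloud_bucket>
      s3_access_key: <aws_access_key>
      s3_secret_key: <aws_secret_key>
      storage_path: /quay
  cephstorage:
    - RadosGWStorage
    - hostname: <on_premise_radosgw_host>
      is_secure: true
      access_key: <ceph_access_key>
      secret_key: <ceph_secret_key>
      bucket_name: <on_premise_bucket>
      storage_path: /quay
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS:
  - awsstorage
  - cephstorage
DISTRIBUTED_STORAGE_PREFERENCE:
  - awsstorage
FEATURE_STORAGE_REPLICATION: true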
4.2.4. Upgrading a geo-replication deployment of Red Hat Quay on OpenShift Container Platform
To upgrade your geo-replicated Red Hat Quay on OpenShift Container Platform deployment, you must stop operations, scale down secondary systems, upgrade the primary system, then upgrade secondary systems. This ensures a safe upgrade process with minimal downtime across your geo-replicated registry.
- When upgrading a geo-replicated Red Hat Quay on OpenShift Container Platform deployment to the next y-stream release (for example, Red Hat Quay 3.16 → Red Hat Quay 3.17), you must stop operations before upgrading.
- There is intermittent downtime when upgrading from one y-stream release to the next.
- It is highly recommended to back up your Red Hat Quay on OpenShift Container Platform deployment before upgrading.
The following procedure assumes that you are running the Red Hat Quay registry on three or more systems. For this procedure, three systems named System A, System B, and System C are used. System A serves as the primary system in which the Red Hat Quay Operator is deployed.
Procedure
On System B and System C, scale down your Red Hat Quay registry. This is done by disabling auto scaling and overriding the replica count for Red Hat Quay, mirror workers, and Clair if it is managed. Use the following quayregistry.yaml file as a reference:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: registry
  namespace: ns
spec:
  components:
    …
    - kind: horizontalpodautoscaler
      managed: false
    - kind: quay
      managed: true
      overrides:
        replicas: 0
    - kind: clair
      managed: true
      overrides:
        replicas: 0
    - kind: mirror
      managed: true
      overrides:
        replicas: 0
    …

where:

managed: false:: Disables auto scaling of Quay, Clair, and Mirroring workers.
overrides:: Sets the replica count to 0 for components accessing the database and object storage.

Note
You must keep the Red Hat Quay registry running on System A. Do not update the quayregistry.yaml file on System A.
Wait for the registry-quay-app, registry-quay-mirror, and registry-clair-app pods to disappear. Enter the following command to check their status:

oc get pods -n <quay-namespace>

Example output

quay-operator.v3.7.1-6f9d859bd-p5ftc             1/1   Running     0             12m
quayregistry-clair-postgres-7487f5bd86-xnxpr     1/1   Running     1 (12m ago)   12m
quayregistry-quay-app-upgrade-xq2v6              0/1   Completed   0             12m
quayregistry-quay-redis-84f888776f-hhgms         1/1   Running     0             12m

- On System A, initiate a Red Hat Quay upgrade to the latest y-stream version. This is a manual process. For more information about upgrading installed Operators, see Upgrading installed Operators. For more information about Red Hat Quay upgrade paths, see Upgrading the Red Hat Quay Operator.
After the new Red Hat Quay registry is installed, the necessary upgrades on the cluster are automatically completed. Afterwards, new Red Hat Quay pods are started with the latest y-stream version. Additionally, new Quay pods are scheduled and started.

Confirm that the update has properly worked by navigating to the Red Hat Quay UI:

In the OpenShift console, navigate to Operators → Installed Operators, and click the Registry Endpoint link.

Important
Do not execute the following step until the Red Hat Quay UI is available. Do not upgrade the Red Hat Quay registry on System B and on System C until the UI is available on System A.
After you have confirmed that the update has properly worked on System A, initiate the Red Hat Quay upgrade on System B and on System C. The Operator upgrade results in an upgraded Red Hat Quay installation, and the pods are restarted.
Note
Because the database schema is correct for the new y-stream installation, the new pods on System B and on System C should quickly start.
After updating, revert the changes made in step 1 of this procedure by removing overrides for the components. For example (a verification sketch follows this procedure):

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: registry
  namespace: ns
spec:
  components:
    …
    - kind: horizontalpodautoscaler
      managed: true
    - kind: quay
      managed: true
    - kind: clair
      managed: true
    - kind: mirror
      managed: true
    …

where:

kind: horizontalpodautoscaler:: Set this resource to true if the horizontalpodautoscaler resource was set to true before the upgrade procedure, or if you want Red Hat Quay to scale in case of a resource shortage.
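The following is a minimal sketch of commands you might run to confirm the upgrade and the reverted overrides on each system; the namespace and file name are hypothetical placeholders:

# Check the installed Operator version on each system
$ oc get csv -n <quay_operator_namespace>

# Apply the reverted quayregistry.yaml and watch the pods scale back up
$ oc apply -f quayregistry.yaml -n ns
$ oc get pods -n ns -w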
4.2.5. Removing a geo-replicated site from your Red Hat Quay on OpenShift Container Platform deployment
To remove a geo-replicated site from your Red Hat Quay on OpenShift Container Platform deployment, you must sync all blobs between sites, remove the storage configuration entry, and use the removelocation utility to permanently delete the site. This action cannot be undone, so ensure that all data is synchronized before proceeding.
Prerequisites
- You are logged into OpenShift Container Platform.
- You have configured Red Hat Quay geo-replication with at least two sites, for example, usstorage and eustorage.
- Each site has its own Organization, Repository, and image tags.
Procedure
Sync the blobs between all of your defined sites by running the following command (see the sketch after this step for running it inside a Quay application pod):

$ python -m util.backfillreplication

Warning
Prior to removing storage engines from your Red Hat Quay config.yaml file, you must ensure that all blobs are synced between all defined sites.

When you run this command, replication jobs are created and are picked up by the replication worker. If there are blobs that need to be replicated, the script returns the UUIDs of the blobs that will be replicated. If you run this command multiple times and the output from the script is empty, it does not mean that the replication process is done; it means that there are no more blobs to be queued for replication. Use appropriate judgment before proceeding, because the time replication takes depends on the number of blobs detected.

Alternatively, you can use a third-party cloud tool, such as Microsoft Azure, to check the synchronization status.
This step must be completed before proceeding.
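The backfill script runs in the Red Hat Quay application context. The following is a hedged sketch, reusing the oc rsh pattern and pod name shown in the later steps of this procedure, of running it from inside a Quay application pod:

# Open a shell in a Quay application pod (pod name from the example output below)
$ oc rsh quay390usstorage-quay-app-5779ddc886-2drh2

# Queue replication jobs for any blobs that are not yet replicated
sh-4.4$ python -m util.backfillreplication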
- In your Red Hat Quay config.yaml file for site usstorage, remove the DISTRIBUTED_STORAGE_CONFIG entry for the eustorage site.
- Identify your Red Hat Quay application pods by entering the following command:
$ oc get pod -n <quay_namespace>

Example output

quay390usstorage-quay-app-5779ddc886-2drh2
quay390eustorage-quay-app-66969cd859-n2ssm

Open an interactive shell session in the usstorage pod by entering the following command:

$ oc rsh quay390usstorage-quay-app-5779ddc886-2drh2

Permanently remove the eustorage site by entering the following command:

Important
The following action cannot be undone. Use with caution.
sh-4.4$ python -m util.removelocation eustorage

Example output

WARNING: This is a destructive operation. Are you sure you want to remove eustorage from your storage locations? [y/n] y
Deleted placement 30
Deleted placement 31
Deleted placement 32
Deleted placement 33
Deleted location eustorage
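After the removal completes, you can verify that the remaining site still serves traffic end to end. The following is a minimal sketch, assuming a hypothetical global load balancer hostname, an existing organization, and the quay.io/quay/busybox sample image:

# Confirm overall registry health through the global entry point (hypothetical hostname)
$ curl -k https://<global_load_balancer_hostname>/health/endtoend

# Push and pull a test image against the remaining site
$ podman pull quay.io/quay/busybox
$ podman tag quay.io/quay/busybox <global_load_balancer_hostname>/<organization>/busybox:test
$ podman push <global_load_balancer_hostname>/<organization>/busybox:test
$ podman pull <global_load_balancer_hostname>/<organization>/busybox:test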