Chapter 2. Upgrading the Quay Operator Overview
The Quay Operator follows a synchronized versioning scheme, which means that each version of the Operator is tied to the version of Quay and the components that it manages. There is no field on the QuayRegistry custom resource which sets the version of Quay to deploy; the Operator only knows how to deploy a single version of all components. This scheme was chosen to ensure that all components work well together and to reduce the complexity of the Operator needing to know how to manage the lifecycles of many different versions of Quay on Kubernetes.
2.1. Operator Lifecycle Manager
The Quay Operator should be installed and upgraded using the Operator Lifecycle Manager (OLM). When creating a Subscription with the default approvalStrategy: Automatic, OLM will automatically upgrade the Quay Operator whenever a new version becomes available.
When the Quay Operator is installed via Operator Lifecycle Manager, it may be configured to support automatic or manual upgrades. This option is shown on the Operator Hub page for the Quay Operator during installation. It can also be found in the Quay Operator Subscription object via the approvalStrategy field. Choosing Automatic means that your Quay Operator will automatically be upgraded whenever a new Operator version is released. If this is not desirable, then the Manual approval strategy should be selected.
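As a minimal sketch, a Subscription for the Quay Operator might look like the following. The namespace and channel name are illustrative and depend on your cluster; note that in the Subscription spec itself, the approval strategy appears as the installPlanApproval field:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay-operator
  namespace: openshift-operators    # illustrative; use the namespace you install into
spec:
  channel: stable-3.6               # illustrative channel name
  name: quay-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic    # or Manual to require approval of each upgrade
```

Changing installPlanApproval from Automatic to Manual in this object has the same effect as selecting the Manual approval strategy in the Operator Hub UI.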
2.2. Upgrading the Quay Operator
The standard approach for upgrading installed Operators on OpenShift is documented at Upgrading installed Operators.
In general, Red Hat Quay only supports upgrading from one minor version to the next, for example, 3.4 → 3.5. For Red Hat Quay 3.6, the following upgrade paths are supported:

- 3.3.z → 3.6
- 3.4.z → 3.6
- 3.5.z → 3.6
For users on standalone deployments of Quay wanting to upgrade to 3.6, see the Standalone upgrade guide.
2.2.1. Upgrading Quay
To update Quay from one minor version to the next, for example, 3.4 → 3.5, you must change the update channel for the Quay Operator.
The procedure for z stream upgrades, for example, 3.4.2 → 3.4.3, depends on the approvalStrategy as outlined above. If the approval strategy is set to Automatic, the Quay Operator upgrades automatically to the newest z stream. This results in automatic, rolling Quay updates to newer z streams with little to no downtime. Otherwise, the update must be manually approved before installation can begin.
2.2.2. Notes on upgrading directly from 3.3.z or 3.4.z to 3.6
2.2.2.1. Upgrading with edge routing enabled
Previously, when running a 3.3.z version of Red Hat Quay with edge routing enabled, users were unable to upgrade to 3.4.z versions of Red Hat Quay. This has been resolved with the release of Red Hat Quay 3.6.

When upgrading from 3.3.z to 3.6, if tls.termination is set to none in your Red Hat Quay 3.3.z deployment, it will change to HTTPS with TLS edge termination and use the default cluster wildcard certificate.
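For example, a 3.3.z QuayEcosystem with TLS disabled might contain the following fragment (the resource name and hostname are illustrative; other spec fields are omitted):

```yaml
apiVersion: redhatcop.redhat.io/v1alpha1
kind: QuayEcosystem
metadata:
  name: quay33
spec:
  quay:
    externalAccess:
      hostname: example-registry.apps.example.com   # illustrative hostname
      tls:
        termination: none    # becomes edge termination after the upgrade to 3.6
```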
2.2.2.2. Upgrading with custom TLS certificate/key pairs without Subject Alternative Names
There is an issue for customers using their own TLS certificate/key pairs without Subject Alternative Names (SANs) when upgrading from Red Hat Quay 3.3.4 to Red Hat Quay 3.6 directly. During the upgrade to Red Hat Quay 3.6, the deployment is blocked, with the error message from the Quay Operator pod logs indicating that the Quay TLS certificate must have SANs.
If possible, you should regenerate your TLS certificates with the correct hostname in the SANs. If that is not possible, a workaround is to define the following environment variable in the quay-app, quay-upgrade, and quay-config-editor pods after the upgrade to enable CommonName matching:

GODEBUG=x509ignoreCN=0
The GODEBUG=x509ignoreCN=0 flag enables the legacy behavior of treating the CommonName field on X.509 certificates as a host name when no SANs are present. However, this workaround is not recommended, as it will not persist across a redeployment.
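As a sketch, the variable could be added to the pod template of each of those Deployments (the container name shown is illustrative); keep in mind that, as noted above, the Operator can overwrite this change on a subsequent redeployment:

```yaml
# Fragment of a Deployment pod template; repeat for quay-upgrade and quay-config-editor
spec:
  template:
    spec:
      containers:
      - name: quay-app           # illustrative container name
        env:
        - name: GODEBUG
          value: x509ignoreCN=0  # legacy CommonName matching; not a persistent fix
```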
2.2.2.3. Configuring Clair v4 when upgrading from 3.3.z or 3.4.z to 3.6 using the Quay Operator
To set up Clair v4 on a new Red Hat Quay deployment on OpenShift, it is highly recommended to use the Quay Operator. By default, the Quay Operator will install or upgrade a Clair deployment along with your Red Hat Quay deployment and configure Clair security scanning automatically.
For instructions on setting up Clair v4 on OpenShift, see Setting Up Clair on a Red Hat Quay OpenShift deployment.
2.2.3. Changing the update channel for an Operator
The subscription of an installed Operator specifies an update channel, which is used to track and receive updates for the Operator. To upgrade the Quay Operator to start tracking and receiving updates from a newer channel, change the update channel in the Subscription tab for the installed Quay Operator. For subscriptions with an Automatic approval strategy, the upgrade begins automatically and can be monitored on the page that lists the Installed Operators.
2.2.4. Manually approving a pending Operator upgrade
If an installed Operator has the approval strategy in its subscription set to Manual, when new updates are released in its current update channel, the update must be manually approved before installation can begin. If the Quay Operator has a pending upgrade, this status will be displayed in the list of Installed Operators. In the Subscription tab for the Quay Operator, you can preview the install plan and review the resources that are listed as available for upgrade. If satisfied, click Approve and return to the page that lists Installed Operators to monitor the progress of the upgrade.
The following image shows the Subscription tab in the UI, including the update Channel, the Approval strategy, the Upgrade status and the InstallPlan:
The list of Installed Operators provides a high-level summary of the current Quay installation:
2.3. Upgrading a QuayRegistry
When the Quay Operator starts, it immediately looks for any QuayRegistries it can find in the namespace(s) it is configured to watch. When it finds one, the following logic is used:
- If status.currentVersion is unset, reconcile as normal.
- If status.currentVersion equals the Operator version, reconcile as normal.
- If status.currentVersion does not equal the Operator version, check if it can be upgraded. If it can, perform upgrade tasks and set status.currentVersion to the Operator's version once complete. If it cannot be upgraded, return an error and leave the QuayRegistry and its deployed Kubernetes objects alone.
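For instance, after a successful upgrade the QuayRegistry status would carry the Operator's version (version string illustrative):

```yaml
# Fragment of a QuayRegistry after reconciliation
status:
  currentVersion: 3.6.0   # illustrative; matches the installed Operator version
```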
2.4. Enabling features in Quay 3.6
2.4.1. Console monitoring and alerting
Support for monitoring Quay 3.6 in the OpenShift console requires that the Operator is installed in all namespaces. If you previously installed the Operator in a specific namespace, delete the Operator and reinstall it for all namespaces once the upgrade has taken place.
2.4.2. OCI and Helm support
Support for Helm and some OCI artifacts is now enabled by default in Red Hat Quay 3.6. If you want to explicitly enable the feature, for example, if you are upgrading from a version where it is not enabled by default, you need to reconfigure your Quay deployment to enable the use of OCI artifacts using the following properties:
FEATURE_GENERAL_OCI_SUPPORT: true
2.5. Upgrading a QuayEcosystem
Upgrades are supported from previous versions of the Operator which used the QuayEcosystem API for a limited set of configurations. To ensure that migrations do not happen unexpectedly, a special label needs to be applied to the QuayEcosystem for it to be migrated. A new QuayRegistry will be created for the Operator to manage, but the old QuayEcosystem will remain until manually deleted to ensure that you can roll back and still access Quay in case anything goes wrong. To migrate an existing QuayEcosystem to a new QuayRegistry, follow these steps:
1. Add "quay-operator/migrate": "true" to the metadata.labels of the QuayEcosystem:

   $ oc edit quayecosystem <quayecosystemname>

   metadata:
     labels:
       quay-operator/migrate: "true"

2. Wait for a QuayRegistry to be created with the same metadata.name as your QuayEcosystem. The QuayEcosystem will be marked with the label "quay-operator/migration-complete": "true".

3. Once the status.registryEndpoint of the new QuayRegistry is set, access Quay and confirm that all data and settings were migrated successfully.

4. When you are confident everything worked correctly, you may delete the QuayEcosystem and Kubernetes garbage collection will clean up all old resources.
2.5.1. Reverting QuayEcosystem Upgrade
If something goes wrong during the automatic upgrade from QuayEcosystem to QuayRegistry, follow these steps to revert to using the QuayEcosystem:

1. Delete the QuayRegistry using either the UI or kubectl:

   $ kubectl delete -n <namespace> quayregistry <quayecosystem-name>

2. If external access was provided using a Route, change the Route to point back to the original Service using the UI or kubectl.
If your QuayEcosystem was managing the Postgres database, the upgrade process will migrate your data to a new Postgres database managed by the upgraded Operator. Your old database will not be changed or removed but Quay will no longer use it once the migration is complete. If there are issues during the data migration, the upgrade process will exit and it is recommended that you continue with your database as an unmanaged component.
2.5.2. Supported QuayEcosystem Configurations for Upgrades
The Quay Operator will report errors in its logs and in status.conditions if migrating a QuayEcosystem component fails or is unsupported. All unmanaged components should migrate successfully because no Kubernetes resources need to be adopted and all the necessary values are already provided in Quay’s config.yaml.
Database
Ephemeral database not supported (volumeSize field must be set).
Redis
Nothing special needed.
External Access
Only passthrough Route access is supported for automatic migration. Manual migration required for other methods.
- LoadBalancer without custom hostname: After the QuayEcosystem is marked with the label "quay-operator/migration-complete": "true", delete the metadata.ownerReferences field from the existing Service before deleting the QuayEcosystem to prevent Kubernetes from garbage collecting the Service and removing the load balancer. A new Service will be created with metadata.name format <QuayEcosystem-name>-quay-app. Edit the spec.selector of the existing Service to match the spec.selector of the new Service so traffic to the old load balancer endpoint will now be directed to the new pods. You are now responsible for the old Service; the Quay Operator will not manage it.
- LoadBalancer/NodePort/Ingress with custom hostname: A new Service of type LoadBalancer will be created with metadata.name format <QuayEcosystem-name>-quay-app. Change your DNS settings to point to the status.loadBalancer endpoint provided by the new Service.
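For the selector edit described above, a sketch of the old Service after editing might look like the following. The label keys are illustrative; copy the actual spec.selector from the new <QuayEcosystem-name>-quay-app Service in your cluster:

```yaml
# The old Service, after removing metadata.ownerReferences and
# copying the selector from the new <QuayEcosystem-name>-quay-app Service
apiVersion: v1
kind: Service
metadata:
  name: <old-service-name>        # the Service backing the existing load balancer
spec:
  selector:
    quay-component: quay-app      # illustrative; must match the new Service's selector
```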
Clair
Nothing special needed.
Object Storage
QuayEcosystem did not have a managed object storage component, so object storage will always be marked as unmanaged. Local storage is not supported.
Repository Mirroring
Nothing special needed.