Chapter 3. Configuring Quay before deployment
The Operator can manage all the Red Hat Quay components when deploying on OpenShift Container Platform, and this is the default configuration. Alternatively, you can manage one or more components externally yourself when you want more control over the setup, and then allow the Operator to manage the remaining components.
The standard pattern for configuring unmanaged components is:
1. Create a config.yaml configuration file with the appropriate settings.
2. Create a Secret using the configuration file:

   $ oc create secret generic --from-file config.yaml=./config.yaml config-bundle-secret

3. Create a QuayRegistry YAML file, quayregistry.yaml, identifying the unmanaged components and referencing the created Secret, for example:

   quayregistry.yaml

   apiVersion: quay.redhat.com/v1
   kind: QuayRegistry
   metadata:
     name: example-registry
     namespace: quay-enterprise
   spec:
     configBundleSecret: config-bundle-secret
     components:
       - kind: objectstorage
         managed: false

4. Deploy the registry using the YAML file:

   $ oc create -n quay-enterprise -f quayregistry.yaml
3.1. Pre-configuring Red Hat Quay for automation
Red Hat Quay has several configuration options that support automation. These options can be set before deployment to minimize the need to interact with the user interface.
3.1.1. Allowing the API to create the first user
To create the first user using the /api/v1/user/initialize API, set the FEATURE_USER_INITIALIZE parameter to true. Unlike all other registry API calls, which require an OAuth token generated by an OAuth application in an existing organization, this API endpoint does not require authentication.

After you have deployed Red Hat Quay, you can use the API to create a user, for example, quayadmin, assuming that no other users have already been created. For more information, see Using the API to create the first user.
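For example, after deployment you can initialize the first user with a single API call. The following is a minimal sketch, assuming the registry is reachable at quay-server.example.com; the username, password, and email values are placeholders:

$ curl -X POST -k https://quay-server.example.com/api/v1/user/initialize \
    --header 'Content-Type: application/json' \
    --data '{ "username": "quayadmin", "password": "quaypass12345", "email": "quayadmin@example.com", "access_token": true }'

Because access_token is set to true, a successful call should return an OAuth access token for the new user that can be used for subsequent API calls.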
3.1.2. Enabling general API access
Set the config option BROWSER_API_CALLS_XHR_ONLY to false to allow general access to the Red Hat Quay registry API.
3.1.3. Adding a superuser
After deploying Red Hat Quay, you can create a user. It is suggested that the first user be given administrator privileges with full permissions. Full permissions can be configured in advance by using the SUPER_USERS configuration object. For example:
...
SERVER_HOSTNAME: quay-server.example.com
SETUP_COMPLETE: true
SUPER_USERS:
- quayadmin
...
3.1.4. Restricting user creation
After you have configured a superuser, you can restrict the ability to create new users to the superuser group. Set the FEATURE_USER_CREATION parameter to false to restrict user creation. For example:
...
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
- quayadmin
FEATURE_USER_CREATION: false
...
3.1.5. Enabling new functionality in Red Hat Quay 3.8
To use new Red Hat Quay 3.8 functionality, enable some or all of the following features:
...
FEATURE_UI_V2: true
FEATURE_LISTEN_IP_VERSION:
FEATURE_SUPERUSERS_FULL_ACCESS: true
GLOBAL_READONLY_SUPER_USERS:
-
FEATURE_RESTRICTED_USERS: true
RESTRICTED_USERS_WHITELIST:
-
...
3.1.6. Enabling new functionality in Red Hat Quay 3.7
To use new Red Hat Quay 3.7 functionality, enable some or all of the following features:
...
FEATURE_QUOTA_MANAGEMENT: true
FEATURE_BUILD_SUPPORT: true
FEATURE_PROXY_CACHE: true
FEATURE_STORAGE_REPLICATION: true
DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: 102400000
...
3.1.7. Suggested configuration for automation
The following config.yaml parameters are suggested for automation:
...
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
- quayadmin
FEATURE_USER_CREATION: false
...
3.2. Configuring object storage
You need to configure object storage before installing Red Hat Quay, regardless of whether you are allowing the Operator to manage the storage or managing it yourself.

If you want the Operator to be responsible for managing storage, see the section on Managed storage for information on installing and configuring the NooBaa / RHOCS Operator.

If you are using a separate storage solution, set objectstorage as unmanaged when configuring the Operator. See the following section, "Unmanaged storage", for details of configuring existing storage.
3.2.1. Unmanaged storage
Some configuration examples for unmanaged storage are provided in this section for convenience. See the Red Hat Quay configuration guide for full details on setting up object storage.
3.2.1.1. AWS S3 storage
DISTRIBUTED_STORAGE_CONFIG:
  s3Storage:
    - S3Storage
    - host: s3.us-east-2.amazonaws.com
      s3_access_key: ABCDEFGHIJKLMN
      s3_secret_key: OL3ABCDEFGHIJKLMN
      s3_bucket: quay_bucket
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - s3Storage
3.2.1.2. Google Cloud storage
DISTRIBUTED_STORAGE_CONFIG:
  googleCloudStorage:
    - GoogleCloudStorage
    - access_key: GOOGQIMFB3ABCDEFGHIJKLMN
      bucket_name: quay-bucket
      secret_key: FhDAYe2HeuAKfvZCAGyOioNaaRABCDEFGHIJKLMN
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - googleCloudStorage
3.2.1.3. Azure storage
DISTRIBUTED_STORAGE_CONFIG:
  azureStorage:
    - AzureStorage
    - azure_account_name: azure_account_name_here
      azure_container: azure_container_here
      storage_path: /datastorage/registry
      azure_account_key: azure_account_key_here
      sas_token: some/path/
      endpoint_url: https://[account-name].blob.core.usgovcloudapi.net 1
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - azureStorage
1 The endpoint_url parameter for Azure storage is optional and can be used with Microsoft Azure Government (MAG) endpoints. If left blank, the endpoint_url connects to the normal Azure region.

As of Red Hat Quay 3.7, you must use the Primary endpoint of your MAG Blob service. Using the Secondary endpoint of your MAG Blob service results in the following error: AuthenticationErrorDetail:Cannot find the claimed account when trying to GetProperties for the account whusc8-secondary.
3.2.1.4. Ceph/RadosGW storage
DISTRIBUTED_STORAGE_CONFIG:
  radosGWStorage: #storage config name
    - RadosGWStorage #actual driver
    - access_key: access_key_here #parameters
      secret_key: secret_key_here
      bucket_name: bucket_name_here
      hostname: hostname_here
      is_secure: 'true'
      port: '443'
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE: #must contain name of the storage config
  - radosGWStorage
3.2.1.5. Swift storage
DISTRIBUTED_STORAGE_CONFIG:
  swiftStorage:
    - SwiftStorage
    - swift_user: swift_user_here
      swift_password: swift_password_here
      swift_container: swift_container_here
      auth_url: https://example.org/swift/v1/quay
      auth_version: 1
      ca_cert_path: /conf/stack/swift.cert
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - swiftStorage
3.2.1.6. NooBaa unmanaged storage
Use the following procedure to deploy NooBaa as your unmanaged storage configuration.
Procedure
1. Create a NooBaa Object Bucket Claim in the OpenShift Container Platform console by navigating to Storage → Object Bucket Claims.
2. Retrieve the Object Bucket Claim Data details, including the Access Key, Bucket Name, Endpoint (hostname), and Secret Key, as shown in the sketch after this procedure.
3. Create a config.yaml configuration file using the information for the Object Bucket Claim:

   DISTRIBUTED_STORAGE_CONFIG:
     default:
       - RHOCSStorage
       - access_key: WmrXtSGk8B3nABCDEFGH
         bucket_name: my-noobaa-bucket-claim-8b844191-dc6c-444e-9ea4-87ece0abcdef
         hostname: s3.openshift-storage.svc.cluster.local
         is_secure: true
         port: "443"
         secret_key: X9P5SDGJtmSuHFCMSLMbdNCMfUABCDEFGH+C5QD
         storage_path: /datastorage/registry
   DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
   DISTRIBUTED_STORAGE_PREFERENCE:
     - default

For more information about configuring an Object Bucket Claim, see Object Bucket Claim.
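If you prefer the CLI, the same Object Bucket Claim details can be read from the ConfigMap and Secret that the claim generates. The following is a hedged sketch, assuming a claim named my-noobaa-bucket-claim in the openshift-storage namespace; the key names follow the ObjectBucketClaim convention and are worth verifying in your cluster:

# Bucket name, S3 endpoint hostname, and port from the generated ConfigMap
$ oc get configmap my-noobaa-bucket-claim -n openshift-storage \
    -o jsonpath='{.data.BUCKET_NAME}{"\n"}{.data.BUCKET_HOST}{"\n"}{.data.BUCKET_PORT}{"\n"}'

# Access key and secret key from the generated Secret
$ oc extract secret/my-noobaa-bucket-claim -n openshift-storage --to=-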
3.2.2. Managed storage
If you want the Operator to manage object storage for Quay, your cluster needs to be capable of providing object storage through the ObjectBucketClaim API. Using the Red Hat OpenShift Data Foundation (ODF) Operator, there are two supported options:

- A standalone instance of the Multi-Cloud Object Gateway backed by local Kubernetes PersistentVolume storage:
  - Not highly available
  - Included in the Quay subscription
  - Does not require a separate subscription for ODF
- A production deployment of ODF with scale-out Object Service and Ceph:
  - Highly available
  - Requires a separate subscription for ODF

To use the standalone instance option, continue reading below. For a production deployment of ODF, refer to the official documentation.
Object storage disk space is allocated automatically by the Operator with 50 GiB. This amount is a usable quantity of storage for most small to medium Red Hat Quay installations but might not be sufficient for your use cases. Resizing the RHOCS volume is currently not handled by the Operator. See the section below on resizing managed storage for more details.
3.2.2.1. Leveraging the Multicloud Object Gateway Component in the Red Hat OpenShift Data Foundation Operator for Red Hat Quay
As part of a Red Hat Quay subscription, users are entitled to use the Multicloud Object Gateway component of the Red Hat OpenShift Data Foundation Operator (formerly known as OpenShift Container Storage Operator). This gateway component allows you to provide an S3-compatible object storage interface to Red Hat Quay backed by Kubernetes PersistentVolume-based block storage. The usage is limited to a Red Hat Quay deployment managed by the Operator and to the exact specifications of the Multicloud Object Gateway instance as documented below.
Because Red Hat Quay does not support local filesystem storage, users can instead leverage the gateway in combination with Kubernetes PersistentVolume storage to provide a supported deployment. A PersistentVolume is directly mounted on the gateway instance as a backing store for object storage, and any block-based StorageClass is supported.
By the nature of PersistentVolume, this is not a scale-out, highly available solution and does not replace a scale-out storage system like Red Hat OpenShift Data Foundation. Only a single instance of the gateway is running. If the pod running the gateway becomes unavailable due to rescheduling, updates, or unplanned downtime, connected Red Hat Quay instances experience temporary degradation.
Using the following procedures, you will install the Local Storage Operator, Red Hat OpenShift Data Foundation, and create a standalone Multicloud Object Gateway to deploy Red Hat Quay on OpenShift Container Platform.
The following documentation shares commonality with the official Red Hat OpenShift Data Foundation documentation.
3.2.2.1.1. Installing the Local Storage Operator on OpenShift Container Platform
Use the following procedure to install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices.
1. Log in to the OpenShift Web Console.
2. Click Operators → OperatorHub.
3. Type local storage into the search box to find the Local Storage Operator from the list of Operators. Click Local Storage.
4. Click Install.
5. Set the following options on the Install Operator page:
   - For Update channel, select stable.
   - For Installation mode, select A specific namespace on the cluster.
   - For Installed Namespace, select Operator recommended namespace openshift-local-storage.
   - For Update approval, select Automatic.
6. Click Install.
3.2.2.1.2. Installing Red Hat OpenShift Data Foundation on OpenShift Container Platform
Use the following procedure to install Red Hat OpenShift Data Foundation on OpenShift Container Platform.
Prerequisites

- Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions.
- You must have at least three worker nodes in the OpenShift Container Platform cluster.
- For additional resource requirements, see the Planning your deployment guide.
Procedure
1. Log in to the OpenShift Web Console.
2. Click Operators → OperatorHub.
3. Type OpenShift Data Foundation in the search box. Click OpenShift Data Foundation.
4. Click Install.
5. Set the following options on the Install Operator page:
   - For Update channel, select the most recent stable version.
   - For Installation mode, select A specific namespace on the cluster.
   - For Installed Namespace, select Operator recommended Namespace: openshift-storage.
   - For Update approval, select Automatic or Manual.
     If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
     If you select Manual updates, the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
   - For Console plugin, select Enable.
6. Click Install.
   After the Operator is installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect.
7. Continue to the following section, "Creating a standalone Multicloud Object Gateway", to leverage the Multicloud Object Gateway component for Red Hat Quay.
3.2.2.1.3. Creating a standalone Multicloud Object Gateway using the OpenShift Container Platform UI
Use the following procedure to create a standalone Multicloud Object Gateway.
Prerequisites
- You have installed the Local Storage Operator.
- You have installed the Red Hat OpenShift Data Foundation Operator.
Procedure
1. In the OpenShift Web Console, click Operators → Installed Operators to view all installed Operators. Ensure that the namespace is openshift-storage.
2. Click Create StorageSystem.
3. On the Backing storage page, select the following:
   - Select Multicloud Object Gateway for Deployment type.
   - Select the Create a new StorageClass using the local storage devices option.
   - Click Next.

   Note: You are prompted to install the Local Storage Operator if it is not already installed. Click Install, and follow the procedure as described in "Installing the Local Storage Operator on OpenShift Container Platform".
4. On the Create local volume set page, provide the following information:
   - Enter a name for the LocalVolumeSet and the StorageClass. By default, the local volume set name appears for the storage class name. You can change the name.
   - Choose one of the following:
     - Disks on all nodes: uses the available disks that match the selected filters on all the nodes.
     - Disks on selected nodes: uses the available disks that match the selected filters only on the selected nodes.
   - From the available list of Disk Type, select SSD/NVMe.
   - Expand the Advanced section and set the following options:
     - Volume Mode: Filesystem is selected by default. Always ensure that Filesystem is selected for Volume Mode.
     - Device Type: select one or more device types from the drop-down list.
     - Disk Size: set a minimum size of 100GB for the device and the maximum available size of the device that needs to be included.
     - Maximum Disks Limit: this indicates the maximum number of persistent volumes (PVs) that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes.
   - Click Next.
     A pop-up to confirm the creation of the LocalVolumeSet is displayed.
   - Click Yes to continue.
5. On the Capacity and nodes page, configure the following:
   - Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class.
   - Click Next to continue.
6. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.
   i. From the Key Management Service Provider drop-down list, select either Vault or Thales CipherTrust Manager (using KMIP). If you selected Vault, go to the next step. If you selected Thales CipherTrust Manager (using KMIP), go to step iii.
   ii. Select an Authentication Method.
      Using Token authentication method:
      - Enter a unique Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number, and Token.
      - Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
        - Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
        - Optional: Enter TLS Server Name and Vault Enterprise Namespace.
        - Upload the respective PEM-encoded certificate files to provide the CA Certificate, Client Certificate, and Client Private Key.
        - Click Save and skip to step iv.
      Using Kubernetes authentication method:
      - Enter a unique Vault Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number, and Role name.
      - Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
        - Enter the Key Value secret path in Backend Path that is dedicated and unique to Red Hat OpenShift Data Foundation.
        - Optional: Enter TLS Server Name and Authentication Path if applicable.
        - Upload the respective PEM-encoded certificate files to provide the CA Certificate, Client Certificate, and Client Private Key.
        - Click Save and skip to step iv.
   iii. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow these steps:
      - Enter a unique Connection Name for the Key Management service within the project.
      - In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example:
        - Address: 123.34.3.2
        - Port: 5696
      - Upload the Client Certificate, CA certificate, and Client Private Key.
      - If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above.
      - The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local.
   iv. Select a Network.
7. Click Next.
8. On the Review and create page, review the configuration details. To modify any configuration settings, click Back.
9. Click Create StorageSystem.
3.2.2.1.4. Creating a standalone Multicloud Object Gateway using the CLI
Use the following procedure to install the Red Hat OpenShift Data Foundation (formerly known as OpenShift Container Storage) Operator and configure a single-instance Multi-Cloud Object Gateway service.
The following configuration cannot be run in parallel on a cluster with Red Hat OpenShift Data Foundation installed.
Procedure
1. In the OpenShift Web Console, select Operators → OperatorHub.
2. Search for Red Hat OpenShift Data Foundation, and then select Install.
3. Accept all default options, and then select Install.
4. Confirm that the Operator has installed by viewing the Status column, which should be marked Succeeded.

   Warning: When the installation of the Red Hat OpenShift Data Foundation Operator is finished, you are prompted to create a storage system. Do not follow this instruction. Instead, create NooBaa object storage as outlined in the following steps.
5. On your machine, create a file named noobaa.yaml with the following information:

   apiVersion: noobaa.io/v1alpha1
   kind: NooBaa
   metadata:
     name: noobaa
     namespace: openshift-storage
   spec:
     dbResources:
       requests:
         cpu: '0.1'
         memory: 1Gi
     dbType: postgres
     coreResources:
       requests:
         cpu: '0.1'
         memory: 1Gi

   This creates a single-instance deployment of the Multi-cloud Object Gateway.
6. Apply the configuration with the following command:

   $ oc create -n openshift-storage -f noobaa.yaml

   Example output:

   noobaa.noobaa.io/noobaa created

7. After a few minutes, the Multi-cloud Object Gateway should finish provisioning. You can enter the following command to check its status:

   $ oc get -n openshift-storage noobaas noobaa -w

   Example output:

   NAME     MGMT-ENDPOINTS              S3-ENDPOINTS                IMAGE                                                                                                            PHASE   AGE
   noobaa   [https://10.0.32.3:30318]   [https://10.0.32.3:31958]   registry.redhat.io/ocs4/mcg-core-rhel8@sha256:56624aa7dd4ca178c1887343c7445a9425a841600b1309f6deace37ce6b8678d   Ready   3d18h
Copy to Clipboard Copied! Configure a backing store for the gateway by creating the following YAML file, named
noobaa-pv-backing-store.yaml
:apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: noobaa-pv-backing-store namespace: openshift-storage spec: pvPool: numVolumes: 1 resources: requests: storage: 50Gi storageClass: STORAGE-CLASS-NAME type: pv-pool
apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: noobaa-pv-backing-store namespace: openshift-storage spec: pvPool: numVolumes: 1 resources: requests: storage: 50Gi
1 storageClass: STORAGE-CLASS-NAME
2 type: pv-pool
Copy to Clipboard Copied! Enter the following command to apply the configuration:
oc create -f noobaa-pv-backing-store.yaml
$ oc create -f noobaa-pv-backing-store.yaml
Copy to Clipboard Copied! Example output
backingstore.noobaa.io/noobaa-pv-backing-store created
backingstore.noobaa.io/noobaa-pv-backing-store created
Copy to Clipboard Copied! This creates the backing store configuration for the gateway. All images in Red Hat Quay will be stored as objects through the gateway in a
PersistentVolume
created by the above configuration.Run the following command to make the
PersistentVolume
backing store the default for allObjectBucketClaims
issued by the Red Hat Quay Operator:oc patch bucketclass noobaa-default-bucket-class --patch '{"spec":{"placementPolicy":{"tiers":[{"backingStores":["noobaa-pv-backing-store"]}]}}}' --type merge -n openshift-storage
$ oc patch bucketclass noobaa-default-bucket-class --patch '{"spec":{"placementPolicy":{"tiers":[{"backingStores":["noobaa-pv-backing-store"]}]}}}' --type merge -n openshift-storage
Copy to Clipboard Copied!
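As a quick check before deploying Red Hat Quay, you can confirm that the backing store and bucket class are ready. This is a hedged sketch using standard queries against the NooBaa custom resources; column output may vary by ODF version:

$ oc get backingstore noobaa-pv-backing-store -n openshift-storage
$ oc get bucketclass noobaa-default-bucket-class -n openshift-storage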
3.3. Configuring the database
3.3.1. Using an existing Postgres database
Requirements:

- If you are using an externally managed PostgreSQL database, you must manually enable the pg_trgm extension for a successful deployment, as shown in the sketch below.
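The following is a hedged example of enabling the extension with psql, assuming the example database and host names used in this section and a role with sufficient privileges; substitute your own connection details:

$ psql -h test-quay-database -U postgres -d test-quay-database \
    -c 'CREATE EXTENSION IF NOT EXISTS pg_trgm;'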
Procedure

1. Create a configuration file, config.yaml, with the necessary database fields:

   config.yaml:

   DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database

2. Create a Secret using the configuration file:

   $ kubectl create secret generic --from-file config.yaml=./config.yaml config-bundle-secret

3. Create a QuayRegistry YAML file, quayregistry.yaml, which marks the postgres component as unmanaged and references the created Secret:

   quayregistry.yaml

   apiVersion: quay.redhat.com/v1
   kind: QuayRegistry
   metadata:
     name: example-registry
     namespace: quay-enterprise
   spec:
     configBundleSecret: config-bundle-secret
     components:
       - kind: postgres
         managed: false

4. Deploy the registry as detailed in the following sections.
3.3.2. Database configuration
This section describes the database configuration fields available for Red Hat Quay deployments.
3.3.2.1. Database URI
With Red Hat Quay, connection to the database is configured by using the required DB_URI field.

The following table describes the DB_URI configuration field:

Field | Type | Description
---|---|---
DB_URI | String | The URI for accessing the database, including any credentials. Example: postgresql://quayuser:quaypass@quay-server.example.com:5432/quay
3.3.2.2. Database connection arguments
Optional connection arguments are configured by the DB_CONNECTION_ARGS parameter. Some of the key-value pairs defined under DB_CONNECTION_ARGS are generic, while others are database specific.
The following table describes database connection arguments:
Field | Type | Description
---|---|---
DB_CONNECTION_ARGS | Object | Optional connection arguments for the database, such as timeouts and SSL.
.autorollback | Boolean | Whether to use auto-rollback connections.
.threadlocals | Boolean | Whether to use thread-local connections.
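For example, a minimal sketch of the generic arguments in config.yaml; the values shown are illustrative rather than required defaults:

DB_CONNECTION_ARGS:
  autorollback: true
  threadlocals: true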
3.3.2.2.1. PostgreSQL SSL connection arguments
With SSL, configuration depends on the database you are deploying. The following example shows a PostgreSQL SSL configuration:
DB_CONNECTION_ARGS:
  sslmode: verify-ca
  sslrootcert: /path/to/cacert
The sslmode option determines whether, or with what priority, a secure SSL TCP/IP connection is negotiated with the server. There are six modes:
Mode | Description |
---|---|
disable | Your configuration only tries non-SSL connections. |
allow | Your configuration first tries a non-SSL connection. Upon failure, tries an SSL connection. |
prefer | Your configuration first tries an SSL connection. Upon failure, tries a non-SSL connection. |
require | Your configuration only tries an SSL connection. If a root CA file is present, it verifies the certificate in the same way as if verify-ca was specified. |
verify-ca | Your configuration only tries an SSL connection, and verifies that the server certificate is issued by a trusted certificate authority (CA). |
verify-full | Your configuration only tries an SSL connection, and verifies that the server certificate is issued by a trusted CA and that the requested server host name matches that in the certificate. |
For more information on the valid arguments for PostgreSQL, see Database Connection Control Functions.
3.3.2.2.2. MySQL SSL connection arguments
The following example shows a sample MySQL SSL configuration:
DB_CONNECTION_ARGS:
  ssl:
    ca: /path/to/cacert
Information on the valid connection arguments for MySQL is available at Connecting to the Server Using URI-Like Strings or Key-Value Pairs.
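Putting the pieces together for MySQL, the following hedged sketch combines DB_URI with the SSL connection arguments above. The mysql+pymysql URI scheme is an assumption based on Quay's Python database driver, and all hosts and credentials are placeholders:

DB_URI: mysql+pymysql://quayuser:quaypass@quay-server.example.com:3306/quay
DB_CONNECTION_ARGS:
  ssl:
    ca: /path/to/cacert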
3.3.3. Using the managed PostgreSQL
Recommendations:

- Database backups should be performed regularly using either the supplied tools on the Postgres image or your own backup infrastructure. The Operator does not currently ensure that the Postgres database is backed up.
- Restoring the Postgres database from a backup must be done using Postgres tools and procedures. Be aware that your Quay pods should not be running while the database restore is in progress.
- Database disk space is allocated automatically by the Operator with 50 GiB. This number represents a usable amount of storage for most small to medium Red Hat Quay installations but might not be sufficient for your use cases. Resizing the database volume is currently not handled by the Operator.
3.4. Configuring SSL/TLS and Routes
Support for OpenShift Container Platform Edge-Termination Routes has been added by way of a new managed component, tls. This separates the route component from SSL/TLS and allows users to configure both separately. EXTERNAL_TLS_TERMINATION: true is the opinionated setting.
- Managed tls means that the default cluster wildcard certificate is used.
- Unmanaged tls means that the user-provided key and certificate pair is injected into the Route.
The ssl.cert and ssl.key are now moved to a separate, persistent secret, which ensures that the key and certificate pair are not regenerated upon every reconcile. The key and certificate pair are now formatted as edge routes and mounted to the same directory in the Quay container.
Multiple permutations are possible when configuring SSL/TLS and Routes, but the following rules apply:

- If SSL/TLS is managed, then your route must also be managed.
- If SSL/TLS is unmanaged, then you must supply certificates, either with the config tool or directly in the config bundle.
The following table describes the valid options:
Option | Route | TLS | Certs provided | Result |
---|---|---|---|---|
My own load balancer handles TLS | Managed | Managed | No | Edge Route with default wildcard cert |
Red Hat Quay handles TLS | Managed | Unmanaged | Yes | Passthrough route with certs mounted inside the pod |
Red Hat Quay handles TLS | Unmanaged | Unmanaged | Yes | Certificates are set inside the quay pod but route must be created manually |
Red Hat Quay 3.7 does not support builders when TLS is managed by the Operator.
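For example, to have Red Hat Quay handle TLS behind a managed route (the second row of the table), the components can be declared as in the following hedged sketch; the SSL/TLS certificate and key pair must already be present in the referenced config bundle secret, as described in the next section:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: route
      managed: true
    - kind: tls
      managed: false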
3.4.1. Creating the config bundle secret with the SSL/TLS cert and key pair
Use the following procedure to create a config bundle secret that includes your own SSL/TLS certificate and key pair.
Procedure
- Enter the following command to create a config bundle secret that includes your own SSL/TLS certificate and key pair:

  $ oc create secret generic --from-file config.yaml=./config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret
3.5. Configuring external Redis
Use the content in this section to configure an external Redis deployment.
3.5.1. Using external Redis
If you want to use an external Redis database, set the redis component as unmanaged in the QuayRegistry instance by using the following procedure.
Procedure
1. Create a config.yaml file using the following Redis fields:

   BUILDLOGS_REDIS:
     host: quay-server.example.com
     port: 6379
     ssl: false

   USER_EVENTS_REDIS:
     host: quay-server.example.com
     port: 6379
     ssl: false

2. Enter the following command to create a secret using the configuration file:

   $ oc create secret generic --from-file config.yaml=./config.yaml config-bundle-secret

3. Create a quayregistry.yaml file that sets the redis component to unmanaged and references the created secret:

   apiVersion: quay.redhat.com/v1
   kind: QuayRegistry
   metadata:
     name: example-registry
     namespace: quay-enterprise
   spec:
     configBundleSecret: config-bundle-secret
     components:
       - kind: redis
         managed: false

4. Deploy the Red Hat Quay registry.
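Before deploying, it can be useful to verify that the external Redis instance is reachable. A hedged sketch using the standard redis-cli client and the example host and port from the configuration above:

$ redis-cli -h quay-server.example.com -p 6379 ping
PONG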
3.5.2. Horizontal Pod Autoscaler
Horizontal Pod Autoscalers (HPAs) have been added to the Clair, Quay, and Mirror pods, so that they now automatically scale during load spikes.
Because the HPA component is configured as managed by default, the number of Clair, Quay, and Mirror pods is set to two. This helps avoid downtime when the Operator updates or reconfigures Red Hat Quay, or during rescheduling events.
3.5.2.1. Disabling the Horizontal Pod Autoscaler
To disable autoscaling or create your own HorizontalPodAutoscaler, specify the component as unmanaged in the QuayRegistry instance. For example:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: horizontalpodautoscaler
      managed: false
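If you disable the managed component but still want autoscaling, you can supply your own HorizontalPodAutoscaler targeting the Quay application Deployment. The following is a hedged sketch: the Deployment name example-registry-quay-app follows the Operator's usual <registry-name>-quay-app naming convention and should be verified in your cluster, and the replica and CPU thresholds are illustrative:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-registry-quay-app
  namespace: quay-enterprise
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-registry-quay-app
  minReplicas: 2
  maxReplicas: 8
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 90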
3.5.3. Disabling the Route component
Use the following procedure to prevent the Red Hat Quay Operator from creating a route.
Procedure
1. Set the route component as unmanaged in the quayregistry.yaml file:

   apiVersion: quay.redhat.com/v1
   kind: QuayRegistry
   metadata:
     name: example-registry
     namespace: quay-enterprise
   spec:
     components:
       - kind: route
         managed: false

2. Edit the config.yaml file to specify that Red Hat Quay handles SSL/TLS. For example:

   ...
   EXTERNAL_TLS_TERMINATION: false
   ...
   SERVER_HOSTNAME: example-registry-quay-quay-enterprise.apps.user1.example.com
   ...
   PREFERRED_URL_SCHEME: https
   ...

   If you do not configure the unmanaged route correctly, the following error is returned:

   {
     {
       "kind":"QuayRegistry",
       "namespace":"quay-enterprise",
       "name":"example-registry",
       "uid":"d5879ba5-cc92-406c-ba62-8b19cf56d4aa",
       "apiVersion":"quay.redhat.com/v1",
       "resourceVersion":"2418527"
     },
     "reason":"ConfigInvalid",
     "message":"required component `route` marked as unmanaged, but `configBundleSecret` is missing necessary fields"
   }
Note: Disabling the default route means that you are now responsible for creating a Route, Service, or Ingress to access the Red Hat Quay instance. Additionally, whatever DNS you use must match the SERVER_HOSTNAME in the Red Hat Quay config.
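As an illustration, a manually created passthrough Route might look like the following hedged sketch. The Service name example-registry-quay-app and the https target port follow the Operator's usual naming and are assumptions to verify against the Services in your namespace:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example-registry-quay
  namespace: quay-enterprise
spec:
  host: example-registry-quay-quay-enterprise.apps.user1.example.com
  to:
    kind: Service
    name: example-registry-quay-app
  port:
    targetPort: https
  tls:
    termination: passthrough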
3.5.4. Unmanaged monitoring
If you install the Red Hat Quay Operator in a single namespace, the monitoring component is automatically set to unmanaged. Use the following reference to explicitly disable monitoring.
Unmanaged monitoring
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: monitoring
      managed: false
To enable monitoring in this scenario, see the section Enabling monitoring when the Red Hat Quay Operator is installed in a single namespace.
3.5.5. Unmanaged mirroring
To disable mirroring explicitly:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  components:
    - kind: mirroring
      managed: false