Chapter 3. Configuring Red Hat Quay before deployment
The Red Hat Quay Operator can manage all of the Red Hat Quay components when deployed on OpenShift Container Platform. This is the default configuration; however, you can manage one or more components externally when you want more control over the setup.
Use the following pattern to configure unmanaged Red Hat Quay components.
Procedure
- Create a config.yaml configuration file with the appropriate settings. Use the following reference for a minimal configuration:

$ touch config.yaml

AUTHENTICATION_TYPE: Database
BUILDLOGS_REDIS:
  host: <quay-server.example.com>
  password: <strongpassword>
  port: 6379
  ssl: false
DATABASE_SECRET_KEY: <0ce4f796-c295-415b-bf9d-b315114704b8>
DB_URI: <postgresql://quayuser:quaypass@quay-server.example.com:5432/quay>
DEFAULT_TAG_EXPIRATION: 2w
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - LocalStorage
    - storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
PREFERRED_URL_SCHEME: http
SECRET_KEY: <e8f9fe68-1f84-48a8-a05f-02d72e6eccba>
SERVER_HOSTNAME: <quay-server.example.com>
SETUP_COMPLETE: true
TAG_EXPIRATION_OPTIONS:
  - 0s
  - 1d
  - 1w
  - 2w
  - 4w
  - 3y
USER_EVENTS_REDIS:
  host: <quay-server.example.com>
  port: 6379
  ssl: false
- Create a Secret using the configuration file by entering the following command:

$ oc create secret generic --from-file config.yaml=./config.yaml config-bundle-secret
- Create a quayregistry.yaml file, identifying the unmanaged components and also referencing the created Secret, for example:

Example QuayRegistry YAML file

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: <config_bundle_secret>
  components:
    - kind: objectstorage
      managed: false
# ...
- Enter the following command to deploy the registry by using the quayregistry.yaml file:

$ oc create -n quay-enterprise -f quayregistry.yaml
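
After you apply the QuayRegistry resource, the Operator begins reconciling the managed components. As a quick sanity check, you can watch the registry and its pods come up; the following commands assume the example-registry name and quay-enterprise namespace used above:

$ oc get quayregistry -n quay-enterprise example-registry
$ oc get pods -n quay-enterprise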
3.1. Pre-configuring Red Hat Quay for automation
Red Hat Quay supports several configuration options that enable automation. Users can configure these options before deployment to reduce the need for interaction with the user interface.
3.1.1. Allowing the API to create the first user
To create the first user, set the FEATURE_USER_INITIALIZE parameter to true and call the /api/v1/user/initialize API. Unlike all other registry API calls, which require an OAuth token generated by an OAuth application in an existing organization, this API endpoint does not require authentication.
You can use the API to create a user such as quayadmin after deploying Red Hat Quay, provided no other users have been created. For more information, see Using the API to create the first user.
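
A minimal sketch of such a call with curl is shown below. The hostname matches the earlier examples, and the username, password, and email values are placeholders to replace with your own:

$ curl -X POST -k https://quay-server.example.com/api/v1/user/initialize \
    --header 'Content-Type: application/json' \
    --data '{"username": "quayadmin", "password": "quaypass12345", "email": "quayadmin@example.com", "access_token": true}'

If the call succeeds, the response includes an access token that can be used for subsequent API requests.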
3.1.2. Enabling general API access
Users should set the BROWSER_API_CALLS_XHR_ONLY configuration option to false to allow general access to the Red Hat Quay registry API.
3.1.3. Adding a superuser
After deploying Red Hat Quay, users can create a user and give the first user administrator privileges with full permissions. Users can configure full permissions in advance by using the SUPER_USERS configuration object. For example:
# ...
SERVER_HOSTNAME: quay-server.example.com
SETUP_COMPLETE: true
SUPER_USERS:
  - quayadmin
# ...
3.1.4. Restricting user creation
After you have configured a superuser, you can restrict the ability to create new users to the superuser group by setting the FEATURE_USER_CREATION parameter to false. For example:
# ...
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
  - quayadmin
FEATURE_USER_CREATION: false
# ...
3.1.5. Enabling new functionality in Red Hat Quay 3.13
To use new Red Hat Quay 3.13 functions, enable some or all of the following features:
# ...
FEATURE_UI_V2: true
FEATURE_UI_V2_REPO_SETTINGS: true
FEATURE_AUTO_PRUNE: true
ROBOTS_DISALLOW: false
# ...
3.1.6. Suggested configuration for automation
The following config.yaml parameters are suggested for automation:
# ...
FEATURE_USER_INITIALIZE: true
BROWSER_API_CALLS_XHR_ONLY: false
SUPER_USERS:
  - quayadmin
FEATURE_USER_CREATION: false
# ...
3.2. Configuring object storage
You need to configure object storage before installing Red Hat Quay, irrespective of whether you are allowing the Red Hat Quay Operator to manage the storage or managing it yourself.
If you want the Red Hat Quay Operator to be responsible for managing storage, see the section on Managed storage for information on installing and configuring NooBaa and the Red Hat OpenShift Data Foundations Operator.
If you are using a separate storage solution, set objectstorage as unmanaged when configuring the Operator. See the following section, "Using unmanaged storage", for details about configuring existing storage.
3.2.1. Using unmanaged storage
This section provides configuration examples for unmanaged storage for your convenience. Refer to the Red Hat Quay configuration guide for complete instructions on how to set up object storage.
3.2.1.1. AWS S3 storage
Use the following example when configuring AWS S3 storage for your Red Hat Quay deployment.
DISTRIBUTED_STORAGE_CONFIG:
  s3Storage:
    - S3Storage
    - host: s3.us-east-2.amazonaws.com
      s3_access_key: ABCDEFGHIJKLMN
      s3_secret_key: OL3ABCDEFGHIJKLMN
      s3_bucket: quay_bucket
      s3_region: <region>
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - s3Storage
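
The bucket referenced by s3_bucket must typically be created in advance. As a sketch, assuming the AWS CLI is installed and configured for the same account, a hypothetical bucket named quay-bucket could be created in us-east-2 with:

$ aws s3api create-bucket --bucket quay-bucket --region us-east-2 \
    --create-bucket-configuration LocationConstraint=us-east-2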
3.2.1.2. Google Cloud storage
Use the following example when configuring Google Cloud storage for your Red Hat Quay deployment.
DISTRIBUTED_STORAGE_CONFIG:
googleCloudStorage:
- GoogleCloudStorage
- access_key: GOOGQIMFB3ABCDEFGHIJKLMN
bucket_name: quay-bucket
secret_key: FhDAYe2HeuAKfvZCAGyOioNaaRABCDEFGHIJKLMN
storage_path: /datastorage/registry
boto_timeout: 120 1
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- googleCloudStorage
1 Optional. The time, in seconds, until a timeout exception is thrown when attempting to read from a connection. This value also encompasses the time, in seconds, until a timeout exception is thrown when attempting to make a connection. The default for both is 60 seconds.
3.2.1.3. Microsoft Azure storage
Use the following example when configuring Microsoft Azure storage for your Red Hat Quay deployment.
DISTRIBUTED_STORAGE_CONFIG:
azureStorage:
- AzureStorage
- azure_account_name: azure_account_name_here
azure_container: azure_container_here
storage_path: /datastorage/registry
azure_account_key: azure_account_key_here
sas_token: some/path/
endpoint_url: https://[account-name].blob.core.usgovcloudapi.net 1
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
- azureStorage
1 The endpoint_url parameter for Microsoft Azure storage is optional and can be used with Microsoft Azure Government (MAG) endpoints. If left blank, the endpoint_url connects to the normal Microsoft Azure region.

As of Red Hat Quay 3.7, you must use the Primary endpoint of your MAG Blob service. Using the Secondary endpoint of your MAG Blob service results in the following error: AuthenticationErrorDetail:Cannot find the claimed account when trying to GetProperties for the account whusc8-secondary.
3.2.1.4. Ceph/RadosGW storage
Use the following example when configuring Ceph/RadosGW storage for your Red Hat Quay deployment.
DISTRIBUTED_STORAGE_CONFIG:
  radosGWStorage: # storage config name
    - RadosGWStorage # actual driver
    - access_key: access_key_here # parameters
      secret_key: secret_key_here
      bucket_name: bucket_name_here
      hostname: hostname_here
      is_secure: 'true'
      port: '443'
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE: # must contain name of the storage config
  - radosGWStorage
3.2.1.5. Swift storage
Use the following example when configuring Swift storage for your Red Hat Quay deployment.
DISTRIBUTED_STORAGE_CONFIG:
  swiftStorage:
    - SwiftStorage
    - swift_user: swift_user_here
      swift_password: swift_password_here
      swift_container: swift_container_here
      auth_url: https://example.org/swift/v1/quay
      auth_version: 3
      os_options:
        tenant_id: <osp_tenant_id_here>
        user_domain_name: <osp_domain_name_here>
      ca_cert_path: /conf/stack/swift.cert
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - swiftStorage
3.2.1.6. NooBaa unmanaged storage
Use the following procedure to deploy NooBaa as your unmanaged storage configuration.
Procedure
- Create a NooBaa Object Bucket Claim in the OpenShift Container Platform console by navigating to Storage → Object Bucket Claims.
- Retrieve the Object Bucket Claim Data details, including the Access Key, Bucket Name, Endpoint (hostname), and Secret Key. If you prefer the command line, see the sketch at the end of this section.
- Create a config.yaml configuration file that uses the information for the Object Bucket Claim:

DISTRIBUTED_STORAGE_CONFIG:
  default:
    - RHOCSStorage
    - access_key: WmrXtSGk8B3nABCDEFGH
      bucket_name: my-noobaa-bucket-claim-8b844191-dc6c-444e-9ea4-87ece0abcdef
      hostname: s3.openshift-storage.svc.cluster.local
      is_secure: true
      port: "443"
      secret_key: X9P5SDGJtmSuHFCMSLMbdNCMfUABCDEFGH+C5QD
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
For more information about configuring an Object Bucket Claim, see Object Bucket Claim.
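
If you prefer the command line to the console, the same details can usually be read from the Secret and ConfigMap that the Object Bucket Claim generates, both named after the claim. A sketch, assuming a hypothetical claim named my-noobaa-bucket-claim in the openshift-storage namespace:

$ oc get secret my-noobaa-bucket-claim -n openshift-storage -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
$ oc get secret my-noobaa-bucket-claim -n openshift-storage -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d
$ oc get configmap my-noobaa-bucket-claim -n openshift-storage -o jsonpath='{.data.BUCKET_NAME}'
$ oc get configmap my-noobaa-bucket-claim -n openshift-storage -o jsonpath='{.data.BUCKET_HOST}'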
3.2.2. Using an unmanaged NooBaa instance
Use the following procedure to use an unmanaged NooBaa instance for your Red Hat Quay deployment.
Procedure
- Create a NooBaa Object Bucket Claim in the console at Storage → Object Bucket Claims.
- Retrieve the Object Bucket Claim Data details, including the Access Key, Bucket Name, Endpoint (hostname), and Secret Key.
- Create a config.yaml configuration file using the information for the Object Bucket Claim. For example:

DISTRIBUTED_STORAGE_CONFIG:
  default:
    - RHOCSStorage
    - access_key: WmrXtSGk8B3nABCDEFGH
      bucket_name: my-noobaa-bucket-claim-8b844191-dc6c-444e-9ea4-87ece0abcdef
      hostname: s3.openshift-storage.svc.cluster.local
      is_secure: true
      port: "443"
      secret_key: X9P5SDGJtmSuHFCMSLMbdNCMfUABCDEFGH+C5QD
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - default
3.2.3. Managed storage
If you want the Red Hat Quay Operator to manage object storage for Red Hat Quay, your cluster needs to be capable of providing object storage through the ObjectBucketClaim API. Using the Red Hat OpenShift Data Foundation Operator, there are two supported options available:
A standalone instance of the Multicloud Object Gateway backed by local Kubernetes PersistentVolume storage
- Not highly available
- Included in the Red Hat Quay subscription
- Does not require a separate subscription for Red Hat OpenShift Data Foundation
A production deployment of Red Hat OpenShift Data Foundation with scale-out Object Service and Ceph
- Highly available
- Requires a separate subscription for Red Hat OpenShift Data Foundation
To use the standalone instance option, continue reading below. For a production deployment of Red Hat OpenShift Data Foundation, refer to the official documentation.
The Red Hat Quay Operator automatically allocates 50 GiB of object storage disk space. This amount of usable storage is sufficient for most small to medium Red Hat Quay installations, but might not be sufficient for your use cases. Resizing the Red Hat OpenShift Data Foundation volume is currently not handled by the Red Hat Quay Operator. See the section below about resizing managed storage for more details.
3.2.3.1. Leveraging the Multicloud Object Gateway Component in the Red Hat OpenShift Data Foundation Operator for Red Hat Quay
As part of a Red Hat Quay subscription, users are entitled to use the Multicloud Object Gateway component of the Red Hat OpenShift Data Foundation Operator (formerly known as OpenShift Container Storage Operator). This gateway component allows you to provide an S3-compatible object storage interface to Red Hat Quay backed by Kubernetes PersistentVolume-based block storage. The usage is limited to a Red Hat Quay deployment managed by the Operator and to the exact specifications of the Multicloud Object Gateway instance as documented below.
Because Red Hat Quay does not support local filesystem storage, users can leverage the gateway in combination with Kubernetes PersistentVolume storage instead, to provide a supported deployment. A PersistentVolume is directly mounted on the gateway instance as a backing store for object storage, and any block-based StorageClass is supported.
By the nature of PersistentVolume storage, this is not a scale-out, highly available solution and does not replace a scale-out storage system like Red Hat OpenShift Data Foundation. Only a single instance of the gateway is running. If the pod running the gateway becomes unavailable due to rescheduling, updates, or unplanned downtime, connected Red Hat Quay instances experience temporary degradation.
Using the following procedures, you install the Local Storage Operator and Red Hat OpenShift Data Foundation, and then create a standalone Multicloud Object Gateway to deploy Red Hat Quay on OpenShift Container Platform.
The following documentation shares commonality with the official Red Hat OpenShift Data Foundation documentation.
3.2.3.1.1. Installing the Local Storage Operator on OpenShift Container Platform
Use the following procedure to install the Local Storage Operator from the OperatorHub before creating Red Hat OpenShift Data Foundation clusters on local storage devices.
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Type local storage into the search box to find the Local Storage Operator from the list of Operators. Click Local Storage.
- Click Install.
Set the following options on the Install Operator page:
- For Update channel, select stable.
- For Installation mode, select A specific namespace on the cluster.
- For Installed Namespace, select Operator recommended namespace openshift-local-storage.
- For Update approval, select Automatic.
- Click Install.
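
If you prefer to script the installation, an OLM Subscription can achieve the same result. The following is a sketch assuming the stable channel and the redhat-operators catalog source; verify both against your cluster before applying:

$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-local-storage
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: local-operator-group
  namespace: openshift-local-storage
spec:
  targetNamespaces:
    - openshift-local-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: openshift-local-storage
spec:
  channel: stable
  installPlanApproval: Automatic
  name: local-storage-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF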
3.2.3.1.2. Installing Red Hat OpenShift Data Foundation on OpenShift Container Platform
Use the following procedure to install Red Hat OpenShift Data Foundation on OpenShift Container Platform.
Prerequisites
- Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions.
- You must have at least three worker nodes in the OpenShift Container Platform cluster.
- For additional resource requirements, see the Planning your deployment guide.
Procedure
- Log in to the OpenShift Web Console.
- Click Operators → OperatorHub.
- Type OpenShift Data Foundation in the search box. Click OpenShift Data Foundation.
- Click Install.
Set the following options on the Install Operator page:
- For Update channel, select the most recent stable version.
- For Installation mode, select A specific namespace on the cluster.
- For Installed Namespace, select Operator recommended Namespace: openshift-storage.
For Update approval, select Automatic or Manual.
If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.
If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.
- For Console plugin, select Enable.
Click Install.
After the Operator is installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console in this pop-up for the console changes to take effect.
- Continue to the following section, "Creating a standalone Multicloud Object Gateway", to leverage the Multicloud Object Gateway component for Red Hat Quay.
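
You can also confirm the installation from the command line. The ClusterServiceVersion for the Operator should eventually report a Succeeded phase:

$ oc get csv -n openshift-storage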
3.2.3.1.3. Creating a standalone Multicloud Object Gateway using the OpenShift Container Platform UI
Use the following procedure to create a standalone Multicloud Object Gateway.
Prerequisites
- You have installed the Local Storage Operator.
- You have installed the Red Hat OpenShift Data Foundation Operator.
Procedure
- In the OpenShift Web Console, click Operators → Installed Operators to view all installed Operators. Ensure that the namespace is openshift-storage.
- Click Create StorageSystem.
On the Backing storage page, select the following:
- Select Multicloud Object Gateway for Deployment type.
- Select the Create a new StorageClass using the local storage devices option.
Click Next.

Note: You are prompted to install the Local Storage Operator if it is not already installed. Click Install, and follow the procedure as described in "Installing the Local Storage Operator on OpenShift Container Platform".
On the Create local volume set page, provide the following information:
- Enter a name for the LocalVolumeSet and the StorageClass. By default, the local volume set name appears for the storage class name. You can change the name.
Choose one of the following:
Disk on all nodes
Uses the available disks that match the selected filters on all the nodes.
Disk on selected nodes
Uses the available disks that match the selected filters only on the selected nodes.
- From the available list of Disk Type, select SSD/NVMe.
Expand the Advanced section and set the following options:
Volume Mode
Filesystem is selected by default. Always ensure that Filesystem is selected for Volume Mode.
Device Type
Select one or more device type from the dropdown list.
Disk Size
Set a minimum size of 100 GB for the device, and the maximum available size of the device to be included.
Maximum Disks Limit
This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes.
Click Next.

A pop-up to confirm the creation of the LocalVolumeSet is displayed.
- Click Yes to continue.
In the Capacity and nodes page, configure the following:
- Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class.
- Click Next to continue.
Optional. Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption.
- From the Key Management Service Provider drop-down list, select either Vault or Thales CipherTrust Manager (using KMIP). If you selected Vault, go to the next step. If you selected Thales CipherTrust Manager (using KMIP), go to step iii.
Select an Authentication Method.
Using Token Authentication method
- Enter a unique Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Token.
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
- Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation.
- Optional: Enter TLS Server Name and Vault Enterprise Namespace.
- Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
Click Save and skip to step iv.
Using Kubernetes authentication method
- Enter a unique Vault Connection Name, host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name.
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration:
- Enter the Key Value secret path in Backend Path that is dedicated and unique to Red Hat OpenShift Data Foundation.
- Optional: Enter TLS Server Name and Authentication Path if applicable.
- Upload the respective PEM encoded certificate file to provide the CA Certificate, Client Certificate, and Client Private Key.
- Click Save and skip to step iv.
To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below:
- Enter a unique Connection Name for the Key Management service within the project.
In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example:
- Address: 123.34.3.2
- Port: 5696
- Upload the Client Certificate, CA certificate, and Client Private Key.
- If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above.
- The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local.
- Select a Network.
- Click Next.
- In the Review and create page, review the configuration details. To modify any configuration settings, click Back.
- Click Create StorageSystem.
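
After the StorageSystem is created, the NooBaa instance backing the Multicloud Object Gateway is provisioned. You can watch it reach the Ready phase from the command line:

$ oc get noobaas -n openshift-storage -w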
3.2.3.1.4. Creating a standalone Multicloud Object Gateway using the CLI
Use the following procedure to install the Red Hat OpenShift Data Foundation (formerly known as OpenShift Container Storage) Operator and configure a single-instance Multicloud Object Gateway service.
The following configuration cannot be run in parallel on a cluster with Red Hat OpenShift Data Foundation installed.
Procedure
- In the OpenShift Web Console, select Operators → OperatorHub.
- Search for Red Hat OpenShift Data Foundation, and then select Install.
- Accept all default options, and then select Install.
Confirm that the Operator has installed by viewing the Status column, which should be marked as Succeeded.

Warning: When the installation of the Red Hat OpenShift Data Foundation Operator is finished, you are prompted to create a storage system. Do not follow this instruction. Instead, create NooBaa object storage as outlined in the following steps.
On your machine, create a file named noobaa.yaml with the following information:

apiVersion: noobaa.io/v1alpha1
kind: NooBaa
metadata:
  name: noobaa
  namespace: openshift-storage
spec:
  dbResources:
    requests:
      cpu: '0.1'
      memory: 1Gi
  dbType: postgres
  coreResources:
    requests:
      cpu: '0.1'
      memory: 1Gi
This creates a single-instance deployment of the Multicloud Object Gateway.
Apply the configuration with the following command:
$ oc create -n openshift-storage -f noobaa.yaml
Example output
noobaa.noobaa.io/noobaa created
After a few minutes, the Multicloud Object Gateway should finish provisioning. You can enter the following command to check its status:
$ oc get -n openshift-storage noobaas noobaa -w
Example output
NAME MGMT-ENDPOINTS S3-ENDPOINTS IMAGE PHASE AGE noobaa [https://10.0.32.3:30318] [https://10.0.32.3:31958] registry.redhat.io/ocs4/mcg-core-rhel8@sha256:56624aa7dd4ca178c1887343c7445a9425a841600b1309f6deace37ce6b8678d Ready 3d18h
Configure a backing store for the gateway by creating the following YAML file, named noobaa-pv-backing-store.yaml:

apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  finalizers:
    - noobaa.io/finalizer
  labels:
    app: noobaa
  name: noobaa-pv-backing-store
  namespace: openshift-storage
spec:
  pvPool:
    numVolumes: 1
    resources:
      requests:
        storage: 50Gi 1
    storageClass: STORAGE-CLASS-NAME 2
  type: pv-pool

1 The overall capacity of the object storage service. Adjust as needed.
2 The StorageClass to use for the PersistentVolumes requested. Adjust as needed.
Enter the following command to apply the configuration:
$ oc create -f noobaa-pv-backing-store.yaml
Example output
backingstore.noobaa.io/noobaa-pv-backing-store created
This creates the backing store configuration for the gateway. All images in Red Hat Quay will be stored as objects through the gateway in a PersistentVolume created by the above configuration.

Run the following command to make the PersistentVolume backing store the default for all ObjectBucketClaims issued by the Red Hat Quay Operator:

$ oc patch bucketclass noobaa-default-bucket-class --patch '{"spec":{"placementPolicy":{"tiers":[{"backingStores":["noobaa-pv-backing-store"]}]}}}' --type merge -n openshift-storage
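
To confirm that the patch took effect and that the backing store is healthy, you can inspect both resources; the backing store phase should become Ready:

$ oc get backingstore -n openshift-storage noobaa-pv-backing-store
$ oc get bucketclass -n openshift-storage noobaa-default-bucket-class -o yaml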