Chapter 8. Advanced Concepts
8.1. Deploying Quay on infrastructure nodes
By default, Quay-related pods are placed on arbitrary worker nodes when using the Operator to deploy the registry. The OpenShift Container Platform documentation shows how to use machine sets to configure nodes to only host infrastructure components (see https://docs.openshift.com/container-platform/4.7/machine_management/creating-infrastructure-machinesets.html).
If you are not using OCP MachineSet resources to deploy infra nodes, this section shows you how to manually label and taint nodes for infrastructure purposes.
Once you have configured your infrastructure nodes, either manually or using machine sets, you can then control the placement of Quay pods on these nodes using node selectors and tolerations.
8.1.1. Label and taint nodes for infrastructure use
In the cluster used in this example, there are three master nodes and six worker nodes:
$ oc get nodes
NAME STATUS ROLES AGE VERSION
user1-jcnp6-master-0.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583
user1-jcnp6-master-1.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583
user1-jcnp6-master-2.c.quay-devel.internal Ready master 3h30m v1.20.0+ba45583
user1-jcnp6-worker-b-65plj.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583
user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583
user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583
user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583
user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal Ready worker 3h22m v1.20.0+ba45583
user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal Ready worker 3h21m v1.20.0+ba45583
Label the final three worker nodes for infrastructure use:
$ oc label node --overwrite user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra=
$ oc label node --overwrite user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra=
$ oc label node --overwrite user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra=
Now, when you list the nodes in the cluster, the last three worker nodes have an added role of infra:
$ oc get nodes
NAME STATUS ROLES AGE VERSION
user1-jcnp6-master-0.c.quay-devel.internal Ready master 4h14m v1.20.0+ba45583
user1-jcnp6-master-1.c.quay-devel.internal Ready master 4h15m v1.20.0+ba45583
user1-jcnp6-master-2.c.quay-devel.internal Ready master 4h14m v1.20.0+ba45583
user1-jcnp6-worker-b-65plj.c.quay-devel.internal Ready worker 4h6m v1.20.0+ba45583
user1-jcnp6-worker-b-jr7hc.c.quay-devel.internal Ready worker 4h5m v1.20.0+ba45583
user1-jcnp6-worker-c-jrq4v.c.quay-devel.internal Ready worker 4h5m v1.20.0+ba45583
user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba45583
user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba45583
user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal Ready infra,worker 4h6m v1.20.0+ba45583
Because an infra node is still assigned the worker role, user workloads could inadvertently be scheduled on it. To avoid this, apply a taint to each infra node and then add tolerations for the pods you want to control.
$ oc adm taint nodes user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule
$ oc adm taint nodes user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule
$ oc adm taint nodes user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal node-role.kubernetes.io/infra:NoSchedule
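To confirm that the labels and taints took effect, you can list the infra nodes together with their taints. This is an optional check, sketched with standard oc output formatting and the label applied earlier:
$ oc get nodes -l node-role.kubernetes.io/infra \
    -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints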
8.1.2. Create a Project with node selector and toleration
If you have already deployed Quay using the Quay Operator, remove the installed operator and any specific namespace(s) you created for the deployment.
Create a Project resource, specifying a node selector and toleration as shown in the following example:
quay-registry.yaml
kind: Project
apiVersion: project.openshift.io/v1
metadata:
  name: quay-registry
  annotations:
    openshift.io/node-selector: 'node-role.kubernetes.io/infra='
    scheduler.alpha.kubernetes.io/defaultTolerations: >-
      [{"operator": "Exists", "effect": "NoSchedule", "key":
      "node-role.kubernetes.io/infra"}
      ]
Use the oc apply command to create the project:
$ oc apply -f quay-registry.yaml
project.project.openshift.io/quay-registry created
Any subsequent resources created in the quay-registry namespace should now be scheduled on the dedicated infrastructure nodes.
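Optionally, you can verify that the node selector and toleration annotations were applied to the underlying namespace. This quick check is not part of the official procedure:
$ oc get namespace quay-registry -o jsonpath='{.metadata.annotations}'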
8.1.3. Install the Quay Operator in the namespace
When installing the Quay Operator, specify the appropriate project namespace explicitly, in this case quay-registry. This will result in the operator pod itself landing on one of the three infrastructure nodes:
$ oc get pods -n quay-registry -o wide
NAME READY STATUS RESTARTS AGE IP NODE
quay-operator.v3.4.1-6f6597d8d8-bd4dp 1/1 Running 0 30s 10.131.0.16 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal
8.1.4. Create the registry
Create the registry as explained earlier, and then wait for the deployment to be ready. When you list the Quay pods, you should now see that they have only been scheduled on the three nodes that you have labelled for infrastructure purposes:
$ oc get pods -n quay-registry -o wide
NAME READY STATUS RESTARTS AGE IP NODE
example-registry-clair-app-789d6d984d-gpbwd 1/1 Running 1 5m57s 10.130.2.80 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal
example-registry-clair-postgres-7c8697f5-zkzht 1/1 Running 0 4m53s 10.129.2.19 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal
example-registry-quay-app-56dd755b6d-glbf7 1/1 Running 1 5m57s 10.129.2.17 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal
example-registry-quay-config-editor-7bf9bccc7b-dpc6d 1/1 Running 0 5m57s 10.131.0.23 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal
example-registry-quay-database-8dc7cfd69-dr2cc 1/1 Running 0 5m43s 10.129.2.18 user1-jcnp6-worker-c-pwxfp.c.quay-devel.internal
example-registry-quay-mirror-78df886bcc-v75p9 1/1 Running 0 5m16s 10.131.0.24 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal
example-registry-quay-postgres-init-8s8g9 0/1 Completed 0 5m54s 10.130.2.79 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal
example-registry-quay-redis-5688ddcdb6-ndp4t 1/1 Running 0 5m56s 10.130.2.78 user1-jcnp6-worker-d-m9gg4.c.quay-devel.internal
quay-operator.v3.4.1-6f6597d8d8-bd4dp 1/1 Running 0 22m 10.131.0.16 user1-jcnp6-worker-d-h5tv2.c.quay-devel.internal
8.2. Enabling monitoring when the Red Hat Quay Operator is installed in a single namespace
When the Red Hat Quay Operator is installed in a single namespace, the monitoring component is set to unmanaged. To configure monitoring, you need to enable it for user-defined namespaces in OpenShift Container Platform.
For more information, see the OpenShift Container Platform documentation for Configuring the monitoring stack and Enabling monitoring for user-defined projects.
The following sections show you how to enable monitoring for Red Hat Quay based on the OpenShift Container Platform documentation.
8.2.1. Creating a cluster monitoring config map
Use the following procedure to check whether the cluster-monitoring-config ConfigMap object exists.
Procedure
Enter the following command to check whether the cluster-monitoring-config ConfigMap object exists:
$ oc -n openshift-monitoring get configmap cluster-monitoring-config
Example output
Error from server (NotFound): configmaps "cluster-monitoring-config" not found
Optional: If the ConfigMap object does not exist, create a YAML manifest. In the following example, the file is called cluster-monitoring-config.yaml:
$ cat <<EOF > cluster-monitoring-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
EOF
Optional: If the ConfigMap object does not exist, create the ConfigMap object:
$ oc apply -f cluster-monitoring-config.yaml
Example output
configmap/cluster-monitoring-config created
Ensure that the ConfigMap object exists by running the following command:
$ oc -n openshift-monitoring get configmap cluster-monitoring-config
Example output
NAME                        DATA   AGE
cluster-monitoring-config   1      12s
8.2.2. Creating a user-defined workload monitoring ConfigMap object
Use the following procedure to check whether the user-workload-monitoring-config ConfigMap object exists.
Procedure
Enter the following command to check whether the user-workload-monitoring-config ConfigMap object exists:
$ oc -n openshift-user-workload-monitoring get configmap user-workload-monitoring-config
Example output
Error from server (NotFound): configmaps "user-workload-monitoring-config" not found
If the ConfigMap object does not exist, create a YAML manifest. In the following example, the file is called user-workload-monitoring-config.yaml:
$ cat <<EOF > user-workload-monitoring-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
EOF
Optional: Create the ConfigMap object by entering the following command:
$ oc apply -f user-workload-monitoring-config.yaml
Example output
configmap/user-workload-monitoring-config created
8.2.3. Enable monitoring for user-defined projects
Use the following procedure to enable monitoring for user-defined projects.
Procedure
Enter the following command to check if monitoring for user-defined projects is running:
$ oc get pods -n openshift-user-workload-monitoring
Example output
No resources found in openshift-user-workload-monitoring namespace.
Edit the cluster-monitoring-config ConfigMap by entering the following command:
$ oc -n openshift-monitoring edit configmap cluster-monitoring-config
Set enableUserWorkload: true in your config.yaml file to enable monitoring for user-defined projects on the cluster:
apiVersion: v1
data:
  config.yaml: |
    enableUserWorkload: true
kind: ConfigMap
metadata:
  annotations:
Save the file to apply the changes, and then enter the following command to ensure that the appropriate pods are running:
$ oc get pods -n openshift-user-workload-monitoring
Example output
NAME                                   READY   STATUS    RESTARTS   AGE
prometheus-operator-6f96b4b8f8-gq6rl   2/2     Running   0          15s
prometheus-user-workload-0             5/5     Running   1          12s
prometheus-user-workload-1             5/5     Running   1          12s
thanos-ruler-user-workload-0           3/3     Running   0          8s
thanos-ruler-user-workload-1           3/3     Running   0          8s
8.2.4. Creating a Service object to expose Red Hat Quay metrics
Use the following procedure to create a Service object to expose Red Hat Quay metrics.
Procedure
Create a YAML file for the Service object:
$ cat <<EOF > quay-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    quay-component: monitoring
    quay-operator/quayregistry: example-registry
  name: example-registry-quay-metrics
  namespace: quay-enterprise
spec:
  ports:
  - name: quay-metrics
    port: 9091
    protocol: TCP
    targetPort: 9091
  selector:
    quay-component: quay-app
    quay-operator/quayregistry: example-registry
  type: ClusterIP
EOF
Create the Service object by entering the following command:
$ oc apply -f quay-service.yaml
Example output
service/example-registry-quay-metrics created
8.2.5. Creating a ServiceMonitor object
Use the following procedure to configure OpenShift Monitoring to scrape the metrics by creating a ServiceMonitor resource.
Procedure
Create a YAML file for the ServiceMonitor resource:
$ cat <<EOF > quay-service-monitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    quay-operator/quayregistry: example-registry
  name: example-registry-quay-metrics-monitor
  namespace: quay-enterprise
spec:
  endpoints:
  - port: quay-metrics
  namespaceSelector:
    any: true
  selector:
    matchLabels:
      quay-component: monitoring
EOF
Create the ServiceMonitor resource by entering the following command:
$ oc apply -f quay-service-monitor.yaml
Example output
servicemonitor.monitoring.coreos.com/example-registry-quay-metrics-monitor created
8.2.6. Viewing metrics in OpenShift Container Platform
You can access the metrics in the OpenShift Container Platform console under Monitoring. For example, if you have added users to your registry, select the quay-users_rows metric.
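If you prefer to verify from the command line, you can port-forward the metrics Service created earlier and inspect the raw Prometheus output. This is an optional sketch; the service name and namespace follow the earlier example and may differ in your deployment, and it assumes the Quay metrics use the quay_ prefix:
$ oc -n quay-enterprise port-forward service/example-registry-quay-metrics 9091:9091 &
$ curl -s http://localhost:9091/metrics | grep '^quay_'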
8.3. Resizing Managed Storage
The Red Hat Quay Operator creates default object storage using the defaults provided by Red Hat OpenShift Data Foundation (ODF) when creating a NooBaa object (50 GiB).
There are two ways to extend NooBaa object storage:
- You can resize an existing persistent volume claim (PVC).
- You can add more PVCs to a new storage pool.
Expanding CSI volumes is a Technology Preview feature only. For more information, see https://access.redhat.com/documentation/en-us/openshift_container_platform/4.6/html/storage/expanding-persistent-volumes.
8.3.1. Resizing the NooBaa PVC
Use the following procedure to resize the NooBaa PVC.
Procedure
- Log into the OpenShift Container Platform console and select Storage → Persistent Volume Claims.
- Select the PersistentVolumeClaim named like noobaa-default-backing-store-noobaa-pvc-*.
- From the Action menu, select Expand PVC.
- Enter the new size of the Persistent Volume Claim and select Expand.
After a few minutes (depending on the size of the PVC), the expanded size should reflect in the PVC’s Capacity field.
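The same expansion can also be performed from the CLI by patching the PVC. The following is a sketch; the PVC name is a placeholder, the openshift-storage namespace assumes a default ODF installation, and the storage class must allow volume expansion:
$ oc -n openshift-storage patch pvc <noobaa-default-backing-store-noobaa-pvc-name> \
    -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'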
8.4. Customizing Default Operator Images
In certain circumstances, it might be useful to override the default images used by the Red Hat Quay Operator. This can be done by setting one or more environment variables in the Red Hat Quay Operator ClusterServiceVersion.
Using this mechanism is not supported for production Red Hat Quay environments and is strongly encouraged only for development/testing purposes. There is no guarantee your deployment will work correctly when using non-default images with the Red Hat Quay Operator.
8.4.1. Environment Variables
The following environment variables are used in the Red Hat Quay Operator to override component images:
Environment Variable | Component
RELATED_IMAGE_COMPONENT_QUAY | Quay (base)
RELATED_IMAGE_COMPONENT_CLAIR | Clair
RELATED_IMAGE_COMPONENT_POSTGRES | PostgreSQL database
RELATED_IMAGE_COMPONENT_REDIS | Redis
Overridden images must be referenced by manifest (@sha256:) and not by tag (:latest).
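If you only know an image by tag, one way to look up the corresponding digest is to inspect the image remotely. This is a sketch assuming skopeo and jq are installed; substitute the image and tag you intend to pin:
$ skopeo inspect docker://quay.io/projectquay/quay:<tag> | jq -r '.Digest'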
8.4.2. Applying overrides to a running Operator
When the Red Hat Quay Operator is installed in a cluster through the Operator Lifecycle Manager (OLM), the managed component container images can be easily overridden by modifying the ClusterServiceVersion object.
Use the following procedure to apply overrides to a running Operator.
Procedure
The ClusterServiceVersion object is OLM’s representation of a running Operator in the cluster. Find the Red Hat Quay Operator’s ClusterServiceVersion by using a Kubernetes UI or the kubectl/oc CLI tool. For example:
$ oc get clusterserviceversions -n <your-namespace>
Using the UI, oc edit, or another method, modify the Red Hat Quay ClusterServiceVersion to include the environment variables outlined above to point to the override images:
JSONPath: spec.install.spec.deployments[0].spec.template.spec.containers[0].env
- name: RELATED_IMAGE_COMPONENT_QUAY
  value: quay.io/projectquay/quay@sha256:c35f5af964431673f4ff5c9e90bdf45f19e38b8742b5903d41c10cc7f6339a6d
- name: RELATED_IMAGE_COMPONENT_CLAIR
  value: quay.io/projectquay/clair@sha256:70c99feceb4c0973540d22e740659cd8d616775d3ad1c1698ddf71d0221f3ce6
- name: RELATED_IMAGE_COMPONENT_POSTGRES
  value: centos/postgresql-10-centos7@sha256:de1560cb35e5ec643e7b3a772ebaac8e3a7a2a8e8271d9e91ff023539b4dfb33
- name: RELATED_IMAGE_COMPONENT_REDIS
  value: centos/redis-32-centos7@sha256:06dbb609484330ec6be6090109f1fa16e936afcf975d1cbc5fff3e6c7cae7542
Note that this is done at the Operator level, so every QuayRegistry will be deployed using these same overrides.
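If you prefer to script the change rather than open an editor, the same env entry can be appended with a JSON patch against the path shown above. This is a sketch; the CSV name and namespace are placeholders, and it assumes the env array already exists in the container spec:
$ oc -n <your-namespace> patch clusterserviceversion <quay-operator-csv-name> --type=json \
    -p '[{"op": "add", "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/env/-", "value": {"name": "RELATED_IMAGE_COMPONENT_CLAIR", "value": "quay.io/projectquay/clair@sha256:70c99feceb4c0973540d22e740659cd8d616775d3ad1c1698ddf71d0221f3ce6"}}]'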
8.5. AWS S3 CloudFront
Use the following procedure if you are using AWS S3 CloudFront for your backend registry storage.
Procedure
Enter the following command to specify the registry key:
$ oc create secret generic --from-file config.yaml=./config_awss3cloudfront.yaml --from-file default-cloudfront-signing-key.pem=./default-cloudfront-signing-key.pem test-config-bundle
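The config_awss3cloudfront.yaml file referenced above is a Quay config.yaml whose storage section points to an S3 bucket fronted by CloudFront. A minimal sketch of that storage stanza, using the CloudFrontedS3Storage driver with placeholder values, might look like the following; the cloudfront_privatekey_filename should match the signing key file added to the secret:
DISTRIBUTED_STORAGE_CONFIG:
  default:
    - CloudFrontedS3Storage
    - cloudfront_distribution_domain: <distribution>.cloudfront.net
      cloudfront_key_id: <cloudfront_key_id>
      cloudfront_privatekey_filename: default-cloudfront-signing-key.pem
      host: <s3_host>
      s3_access_key: <s3_access_key>
      s3_secret_key: <s3_secret_key>
      s3_bucket: <s3_bucket_name>
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_PREFERENCE:
  - default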
8.6. Advanced Clair configuration
Use the procedures in the following sections to configure advanced Clair settings.
8.6.1. Unmanaged Clair configuration
Red Hat Quay users can run an unmanaged Clair configuration with the Red Hat Quay OpenShift Container Platform Operator. This feature allows users to create an unmanaged Clair database, or run their custom Clair configuration without an unmanaged database.
An unmanaged Clair database allows the Red Hat Quay Operator to work in a geo-replicated environment, where multiple instances of the Operator must communicate with the same database. An unmanaged Clair database can also be used when a user requires a highly-available (HA) Clair database that exists outside of a cluster.
8.6.1.1. Running a custom Clair configuration with an unmanaged Clair database
Use the following procedure to set your Clair database to unmanaged.
Procedure
In the Quay Operator, set the clairpostgres component of the QuayRegistry custom resource to managed: false:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: quay370
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: objectstorage
      managed: false
    - kind: route
      managed: true
    - kind: tls
      managed: false
    - kind: clairpostgres
      managed: false
8.6.1.2. Configuring a custom Clair database with an unmanaged Clair database
The Red Hat Quay Operator for OpenShift Container Platform allows users to provide their own Clair database.
Use the following procedure to create a custom Clair database.
The following procedure sets up Clair with SSL/TLS certifications. To view a similar procedure that does not set up Clair with SSL/TLS certifications, see "Configuring a custom Clair database with a managed Clair configuration".
Procedure
Create a Quay configuration bundle secret that includes the clair-config.yaml by entering the following command:
$ oc create secret generic \
    --from-file config.yaml=./config.yaml \
    --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem \
    --from-file clair-config.yaml=./clair-config.yaml \
    --from-file ssl.cert=./ssl.cert \
    --from-file ssl.key=./ssl.key \
    config-bundle-secret
Example Clair config.yaml file:
indexer:
  connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca
  layer_scan_concurrency: 6
  migrations: true
  scanlock_retry: 11
log_level: debug
matcher:
  connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca
  migrations: true
metrics:
  name: prometheus
notifier:
  connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca
  migrations: true
Note:
- The database certificate is mounted under /run/certs/rds-ca-2019-root.pem on the Clair application pod in the clair-config.yaml. It must be specified when configuring your clair-config.yaml.
- An example clair-config.yaml can be found at Clair on OpenShift config.
Add the clair-config.yaml file to your bundle secret, for example:
apiVersion: v1
kind: Secret
metadata:
  name: config-bundle-secret
  namespace: quay-enterprise
data:
  config.yaml: <base64 encoded Quay config>
  clair-config.yaml: <base64 encoded Clair config>
  extra_ca_cert_<name>: <base64 encoded ca cert>
  ssl.crt: <base64 encoded SSL certificate>
  ssl.key: <base64 encoded SSL private key>
Note: When updated, the provided clair-config.yaml file is mounted into the Clair pod. Any fields not provided are automatically populated with defaults using the Clair configuration module.
You can check the status of your Clair pod by clicking the commit in the Build History page, or by running oc get pods -n <namespace>. For example:
$ oc get pods -n <namespace>
Example output
NAME                                               READY   STATUS    RESTARTS   AGE
f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2   1/1     Running   0          7s
8.6.2. Running a custom Clair configuration with a managed Clair database
In some cases, users might want to run a custom Clair configuration with a managed Clair database. This is useful in the following scenarios:
- When a user wants to disable specific updater resources.
- When a user is running Red Hat Quay in a disconnected environment. For more information about running Clair in a disconnected environment, see Configuring access to the Clair database in the air-gapped OpenShift cluster.
Note:
- If you are running Red Hat Quay in a disconnected environment, the airgap parameter of your clair-config.yaml must be set to true.
- If you are running Red Hat Quay in a disconnected environment, you should disable all updater components.
8.6.2.1. Setting a Clair database to managed
Use the following procedure to set your Clair database to managed.
Procedure
In the Quay Operator, set the clairpostgres component of the QuayRegistry custom resource to managed: true:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: quay370
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: objectstorage
      managed: false
    - kind: route
      managed: true
    - kind: tls
      managed: false
    - kind: clairpostgres
      managed: true
8.6.2.2. Configuring a custom Clair database with a managed Clair configuration
The Red Hat Quay Operator for OpenShift Container Platform allows users to provide their own Clair database.
Use the following procedure to create a custom Clair database.
Procedure
Create a Quay configuration bundle secret that includes the clair-config.yaml by entering the following command:
$ oc create secret generic \
    --from-file config.yaml=./config.yaml \
    --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem \
    --from-file clair-config.yaml=./clair-config.yaml \
    config-bundle-secret
Example Clair config.yaml file:
indexer:
  connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable
  layer_scan_concurrency: 6
  migrations: true
  scanlock_retry: 11
log_level: debug
matcher:
  connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable
  migrations: true
metrics:
  name: prometheus
notifier:
  connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable
  migrations: true
Note:
- The database certificate is mounted under /run/certs/rds-ca-2019-root.pem on the Clair application pod in the clair-config.yaml. It must be specified when configuring your clair-config.yaml.
- An example clair-config.yaml can be found at Clair on OpenShift config.
Add the clair-config.yaml file to your bundle secret, for example:
apiVersion: v1
kind: Secret
metadata:
  name: config-bundle-secret
  namespace: quay-enterprise
data:
  config.yaml: <base64 encoded Quay config>
  clair-config.yaml: <base64 encoded Clair config>
Note: When updated, the provided clair-config.yaml file is mounted into the Clair pod. Any fields not provided are automatically populated with defaults using the Clair configuration module.
You can check the status of your Clair pod by clicking the commit in the Build History page, or by running oc get pods -n <namespace>. For example:
$ oc get pods -n <namespace>
Example output
NAME                                               READY   STATUS    RESTARTS   AGE
f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2   1/1     Running   0          7s
8.6.3. Clair in disconnected environments
Clair uses a set of components called updaters to handle the fetching and parsing of data from various vulnerability databases. Updaters are set up by default to pull vulnerability data directly from the internet and work for immediate use. However, some users might require Red Hat Quay to run in a disconnected environment, or an environment without direct access to the internet. Clair supports disconnected environments by working with different types of update workflows that take network isolation into consideration. This works by using the clairctl command line interface tool, which obtains updater data from the internet by using an open host, securely transfers the data to an isolated host, and then imports the updater data on the isolated host into Clair.
Use this guide to deploy Clair in a disconnected environment.
Currently, Clair enrichment data is CVSS data. Enrichment data is unsupported in disconnected environments.
For more information about Clair updaters, see "Clair updaters".
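At a high level, the workflow described in the following procedures looks like this; both commands appear again in context below, and the file names match the ones used throughout this section:
# On a host with internet access: export the updater data
$ ./clairctl --config ./config.yaml export-updaters updates.gz

# Transfer updates.gz into the isolated environment by an approved mechanism,
# then import it into the Clair database
$ ./clairctl --config ./clair-config.yaml import-updaters updates.gz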
8.6.3.1. Setting up Clair in a disconnected OpenShift Container Platform cluster
Use the following procedures to set up an OpenShift Container Platform provisioned Clair pod in a disconnected OpenShift Container Platform cluster.
8.6.3.1.1. Installing the clairctl command line utility tool for OpenShift Container Platform deployments
Use the following procedure to install the clairctl CLI tool for OpenShift Container Platform deployments.
Procedure
Install the clairctl program for a Clair deployment in an OpenShift Container Platform cluster by entering the following command:
$ oc -n quay-enterprise exec example-registry-clair-app-64dd48f866-6ptgw -- cat /usr/bin/clairctl > clairctl
Note: Unofficially, the clairctl tool can be downloaded.
Set the permissions of the clairctl file so that it can be executed and run by the user, for example:
$ chmod u+x ./clairctl
8.6.3.1.2. Retrieving and decoding the Clair configuration secret for Clair deployments on OpenShift Container Platform
Use the following procedure to retrieve and decode the configuration secret for an OpenShift Container Platform provisioned Clair instance on OpenShift Container Platform.
Prerequisites
- You have installed the clairctl command line utility tool.
Procedure
Enter the following command to retrieve and decode the configuration secret, and then save it to a Clair configuration YAML:
$ oc get secret -n quay-enterprise example-registry-clair-config-secret -o "jsonpath={$.data['config\.yaml']}" | base64 -d > clair-config.yaml
Update the clair-config.yaml file so that the disable_updaters and airgap parameters are set to true, for example:
---
indexer:
  airgap: true
---
matcher:
  disable_updaters: true
---
8.6.3.1.3. Exporting the updaters bundle from a connected Clair instance
Use the following procedure to export the updaters bundle from a Clair instance that has access to the internet.
Prerequisites
- You have installed the clairctl command line utility tool.
- You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file.
- The disable_updaters and airgap parameters are set to true in your Clair config.yaml file.
Procedure
From a Clair instance that has access to the internet, use the clairctl CLI tool with your configuration file to export the updaters bundle. For example:
$ ./clairctl --config ./config.yaml export-updaters updates.gz
8.6.3.1.4. Configuring access to the Clair database in the disconnected OpenShift Container Platform cluster
Use the following procedure to configure access to the Clair database in your disconnected OpenShift Container Platform cluster.
Prerequisites
- You have installed the clairctl command line utility tool.
- You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file.
- The disable_updaters and airgap parameters are set to true in your Clair config.yaml file.
- You have exported the updaters bundle from a Clair instance that has access to the internet.
Procedure
Determine your Clair database service by using the oc CLI tool, for example:
$ oc get svc -n quay-enterprise
Example output
NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
example-registry-clair-app        ClusterIP   172.30.224.93   <none>        80/TCP,8089/TCP   4d21h
example-registry-clair-postgres   ClusterIP   172.30.246.88   <none>        5432/TCP          4d21h
...
Forward the Clair database port so that it is accessible from the local machine. For example:
$ oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432
Update your Clair config.yaml file, for example:
indexer:
  connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable  # (1)
  scanlock_retry: 10
  layer_scan_concurrency: 5
  migrations: true
  scanner:
    repo:
      rhel-repository-scanner:  # (2)
        repo2cpe_mapping_file: /data/cpe-map.json
    package:
      rhel_containerscanner:  # (3)
        name2repos_mapping_file: /data/repo-map.json
(1) Replace the value of host in the multiple connstring fields with localhost.
(2) For more information about the rhel-repository-scanner parameter, see "Mapping repositories to Common Product Enumeration information".
(3) For more information about the rhel_containerscanner parameter, see "Mapping repositories to Common Product Enumeration information".
8.6.3.1.5. Importing the updaters bundle into the disconnected OpenShift Container Platform cluster
Use the following procedure to import the updaters bundle into your disconnected OpenShift Container Platform cluster.
Prerequisites
- You have installed the clairctl command line utility tool.
- You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file.
- The disable_updaters and airgap parameters are set to true in your Clair config.yaml file.
- You have exported the updaters bundle from a Clair instance that has access to the internet.
- You have transferred the updaters bundle into your disconnected environment.
Procedure
Use the clairctl CLI tool to import the updaters bundle into the Clair database that is deployed by OpenShift Container Platform. For example:
$ ./clairctl --config ./clair-config.yaml import-updaters updates.gz
8.6.3.2. Setting up a self-managed deployment of Clair for a disconnected OpenShift Container Platform cluster
Use the following procedures to set up a self-managed deployment of Clair for a disconnected OpenShift Container Platform cluster.
8.6.3.2.1. Installing the clairctl command line utility tool for a self-managed Clair deployment on OpenShift Container Platform
Use the following procedure to install the clairctl CLI tool for self-managed Clair deployments on OpenShift Container Platform.
Procedure
Install the clairctl program for a self-managed Clair deployment by using the podman cp command, for example:
$ sudo podman cp clairv4:/usr/bin/clairctl ./clairctl
Set the permissions of the clairctl file so that it can be executed and run by the user, for example:
$ chmod u+x ./clairctl
8.6.3.2.2. Deploying a self-managed Clair container for disconnected OpenShift Container Platform clusters
Use the following procedure to deploy a self-managed Clair container for disconnected OpenShift Container Platform clusters.
Prerequisites
- You have installed the clairctl command line utility tool.
Procedure
Create a folder for your Clair configuration file, for example:
$ mkdir /etc/clairv4/config/
Create a Clair configuration file with the disable_updaters parameter set to true, for example:
---
indexer:
  airgap: true
---
matcher:
  disable_updaters: true
---
Start Clair by using the container image, mounting in the configuration from the file you created:
$ sudo podman run -it --rm --name clairv4 \
    -p 8081:8081 -p 8088:8088 \
    -e CLAIR_CONF=/clair/config.yaml \
    -e CLAIR_MODE=combo \
    -v /etc/clairv4/config:/clair:Z \
    registry.redhat.io/quay/clair-rhel8:v3.8.15
8.6.3.2.3. Exporting the updaters bundle from a connected Clair instance
Use the following procedure to export the updaters bundle from a Clair instance that has access to the internet.
Prerequisites
- You have installed the clairctl command line utility tool.
- You have deployed Clair.
- The disable_updaters and airgap parameters are set to true in your Clair config.yaml file.
Procedure
From a Clair instance that has access to the internet, use the clairctl CLI tool with your configuration file to export the updaters bundle. For example:
$ ./clairctl --config ./config.yaml export-updaters updates.gz
8.6.3.2.4. Configuring access to the Clair database in the disconnected OpenShift Container Platform cluster
Use the following procedure to configure access to the Clair database in your disconnected OpenShift Container Platform cluster.
Prerequisites
- You have installed the clairctl command line utility tool.
- You have deployed Clair.
- The disable_updaters and airgap parameters are set to true in your Clair config.yaml file.
- You have exported the updaters bundle from a Clair instance that has access to the internet.
Procedure
Determine your Clair database service by using the oc CLI tool, for example:
$ oc get svc -n quay-enterprise
Example output
NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
example-registry-clair-app        ClusterIP   172.30.224.93   <none>        80/TCP,8089/TCP   4d21h
example-registry-clair-postgres   ClusterIP   172.30.246.88   <none>        5432/TCP          4d21h
...
Forward the Clair database port so that it is accessible from the local machine. For example:
$ oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432
Update your Clair config.yaml file, for example:
indexer:
  connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable  # (1)
  scanlock_retry: 10
  layer_scan_concurrency: 5
  migrations: true
  scanner:
    repo:
      rhel-repository-scanner:  # (2)
        repo2cpe_mapping_file: /data/cpe-map.json
    package:
      rhel_containerscanner:  # (3)
        name2repos_mapping_file: /data/repo-map.json
(1) Replace the value of host in the multiple connstring fields with localhost.
(2) For more information about the rhel-repository-scanner parameter, see "Mapping repositories to Common Product Enumeration information".
(3) For more information about the rhel_containerscanner parameter, see "Mapping repositories to Common Product Enumeration information".
8.6.3.2.5. Importing the updaters bundle into the disconnected OpenShift Container Platform cluster
Use the following procedure to import the updaters bundle into your disconnected OpenShift Container Platform cluster.
Prerequisites
- You have installed the clairctl command line utility tool.
- You have deployed Clair.
- The disable_updaters and airgap parameters are set to true in your Clair config.yaml file.
- You have exported the updaters bundle from a Clair instance that has access to the internet.
- You have transferred the updaters bundle into your disconnected environment.
Procedure
Use the clairctl CLI tool to import the updaters bundle into the Clair database that is deployed by OpenShift Container Platform:
$ ./clairctl --config ./clair-config.yaml import-updaters updates.gz
8.6.4. Enabling Clair CRDA
Java scanning depends on a public, Red Hat provided API service called Code Ready Dependency Analytics (CRDA). CRDA is only available with internet access and is not enabled by default.
Use the following procedure to integrate the CRDA service with a custom API key and enable CRDA for Java and Python scanning.
Prerequisites
- Red Hat Quay 3.7 or greater
Procedure
- Submit the API key request form to obtain the Quay-specific CRDA remote matcher.
Set the CRDA configuration in your clair-config.yaml file:
matchers:
  config:
    crda:
      url: https://gw.api.openshift.io/api/v2/
      key: <CRDA_API_KEY>  # (1)
      source: <QUAY_SERVER_HOSTNAME>  # (2)
(1) Insert the Quay-specific CRDA remote matcher from the API key request form here.
(2) The hostname of your Quay server.
8.6.5. Mapping repositories to Common Product Enumeration information
Clair’s Red Hat Enterprise Linux (RHEL) scanner relies on a Common Product Enumeration (CPE) file to map RPM packages to the corresponding security data to produce matching results. These files are owned by product security and updated daily.
The CPE file must be present, or access to the file must be allowed, for the scanner to properly process RPM packages. If the file is not present, RPM packages installed in the container image will not be scanned.
CPE | Link to JSON mapping file
In addition to uploading CVE information to the database for disconnected Clair installations, you must also make the mapping file available locally:
- For standalone Red Hat Quay and Clair deployments, the mapping file must be loaded into the Clair pod.
- For Red Hat Quay Operator deployments on OpenShift Container Platform and Clair deployments, you must set the Clair component to unmanaged. Then, Clair must be deployed manually, setting the configuration to load a local copy of the mapping file.
8.6.5.1. Mapping repositories to Common Product Enumeration example configuration
Use the repo2cpe_mapping_file and name2repos_mapping_file fields in your Clair configuration to include the CPE JSON mapping files. For example:
indexer:
  scanner:
    repo:
      rhel-repository-scanner:
        repo2cpe_mapping_file: /data/cpe-map.json
    package:
      rhel_containerscanner:
        name2repos_mapping_file: /data/repo-map.json
For more information, see How to accurately match OVAL security data to installed RPMs.
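For a self-managed Clair container, one way to satisfy the /data paths shown above is to place local copies of the mapping files in a host directory and mount it into the container, extending the podman command used earlier in this chapter. This is a sketch; the /etc/clairv4/mapping-files directory is an assumed location for your downloaded copies of the JSON files:
$ sudo podman run -it --rm --name clairv4 \
    -p 8081:8081 -p 8088:8088 \
    -e CLAIR_CONF=/clair/config.yaml \
    -e CLAIR_MODE=combo \
    -v /etc/clairv4/config:/clair:Z \
    -v /etc/clairv4/mapping-files:/data:Z \
    registry.redhat.io/quay/clair-rhel8:v3.8.15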