Chapter 6. Troubleshooting Red Hat Quay components
This document focuses on troubleshooting specific components within Red Hat Quay, providing targeted guidance for resolving issues that might arise. Designed for system administrators, operators, and developers, this resource aims to help diagnose and troubleshoot problems related to individual components of Red Hat Quay.
In addition to the following procedures, you can also troubleshoot Red Hat Quay components by running Red Hat Quay in debug mode, obtaining log information, obtaining configuration information, and performing health checks on endpoints.
By using the following procedures, you are able to troubleshoot common component issues. Afterwards, you can search for solutions on the Red Hat Knowledgebase, or file a support ticket with the Red Hat Support team.
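For example, a quick health probe is often the fastest first check. The following is a minimal sketch, assuming your registry is reachable at quay-server.example.com and exposes the standard instance health endpoint; a healthy instance should return an HTTP 200 response with per-service status information:
$ curl -k https://quay-server.example.com/health/instance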
6.1. Troubleshooting the Red Hat Quay database
The PostgreSQL database used by Red Hat Quay stores various types of information related to container images and their management. Some of the key pieces of information that the PostgreSQL database stores include:
- Image Metadata. The database stores metadata associated with container images, such as image names, versions, creation timestamps, and the user or organization that owns the image. This information allows for easy identification and organization of container images within the registry.
- Image Tags. Red Hat Quay allows users to assign tags to container images, enabling convenient labeling and versioning. The PostgreSQL database maintains the mapping between image tags and their corresponding image manifests, allowing users to retrieve specific versions of container images based on the provided tags.
- Image Layers. Container images are composed of multiple layers, which are stored as individual objects. The database records information about these layers, including their order, checksums, and sizes. This data is crucial for efficient storage and retrieval of container images.
- User and Organization Data. Red Hat Quay supports user and organization management, allowing users to authenticate and manage access to container images. The PostgreSQL database stores user and organization information, including usernames, email addresses, authentication tokens, and access permissions.
- Repository Information. Red Hat Quay organizes container images into repositories, which act as logical units for grouping related images. The database maintains repository data, including names, descriptions, visibility settings, and access control information, enabling users to manage and share their repositories effectively.
- Event Logs. Red Hat Quay tracks various events and activities related to image management and repository operations. These event logs, including image pushes, pulls, deletions, and repository modifications, are stored in the PostgreSQL database, providing an audit trail and allowing administrators to monitor and analyze system activities.
The content in this section covers the following procedures:
- Checking the type of deployment: Determine if the database is deployed as a container on a virtual machine or as a pod on OpenShift Container Platform.
- Checking the container or pod status: Verify the status of the database pod or container using specific commands based on the deployment type (see the sketch after this list).
- Examining the database container or pod logs: Access and examine the logs of the database pod or container, including commands for different deployment types.
- Checking the connectivity between Red Hat Quay and the database pod: Check the connectivity between Red Hat Quay and the database pod using relevant commands.
- Checking the database configuration: Check the database configuration at various levels (OpenShift Container Platform or PostgreSQL level) based on the deployment type.
- Checking resource allocation: Monitor resource allocation for the Red Hat Quay deployment, including disk usage and other resource usage.
- Interacting with the Red Hat Quay database: Learn how to interact with the PostgreSQL database, including commands to access and query databases.
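For example, the deployment-type and status checks listed above usually start with one of the following commands; the namespace and container name shown here are placeholders for your environment:
$ oc get pods -n <quay_namespace>
$ sudo podman ps --filter name=<quay_database_container_name>
On OpenShift Container Platform, the database runs as a pod managed by the Red Hat Quay Operator; on a standalone deployment, it runs as a Podman container.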
6.1.1. Troubleshooting Red Hat Quay database issues
Use the following procedures to troubleshoot the PostgreSQL database.
6.1.1.1. Interacting with the Red Hat Quay database
Use the following procedure to interact with the PostgreSQL database.
Interacting with the PostgreSQL database is potentially destructive. It is highly recommended that you perform the following procedure with the help of a Red Hat Quay Support Specialist.
You can also interact with the PostgreSQL database to troubleshoot authorization and authentication issues.
Procedure
Exec into the Red Hat Quay database.
Enter the following command to exec into the Red Hat Quay database pod on OpenShift Container Platform:
$ oc exec -it <quay_database_pod> -- psql
Enter the following command to exec into the Red Hat Quay database on a standalone deployment:
$ sudo podman exec -it <quay_container_name> /bin/bash
Enter the PostgreSQL shell.
Warning: Interacting with the PostgreSQL database is potentially destructive. It is highly recommended that you perform the following procedure with the help of a Red Hat Quay Support Specialist.
If you are using the Red Hat Quay Operator, enter the following command to enter the PostgreSQL shell:
$ oc rsh <quay_pod_name> psql -U your_username -d your_database_name
If you are on a standalone Red Hat Quay deployment, enter the following command to enter the PostgreSQL shell:
bash-4.4$ psql -U your_username -d your_database_name
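Once you are inside the PostgreSQL shell, a read-only query is a safe way to confirm the connection before attempting anything destructive. A minimal sketch, assuming the default quay database schema used elsewhere in this chapter:
quay=> \dt
quay=> select username from public.user limit 10;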
6.1.1.2. Troubleshooting CrashLoopBackOff states
Use the following procedure to troubleshoot CrashLoopBackOff states.
Procedure
If your container or pod is in a CrashLoopBackOff state, complete the following steps.
Enter the following command to scale down the Red Hat Quay Operator:
$ oc scale deployment/quay-operator.v3.8.z --replicas=0
Example output
deployment.apps/quay-operator.v3.8.z scaled
Enter the following command to scale down the Red Hat Quay database:
$ oc scale deployment/<quay_database> --replicas=0
Example output
deployment.apps/<quay_database> scaled
Enter the following command to edit the Red Hat Quay database:
Warning: Interacting with the PostgreSQL database is potentially destructive. It is highly recommended that you perform the following procedure with the help of a Red Hat Quay Support Specialist.
$ oc edit deployment <quay_database>
...
  template:
    metadata:
      creationTimestamp: null
      labels:
        quay-component: <quay_database>
        quay-operator/quayregistry: quay-operator.v3.8.z
    spec:
      containers:
        - env:
            - name: POSTGRESQL_USER
              value: postgres
            - name: POSTGRESQL_DATABASE
              value: postgres
            - name: POSTGRESQL_PASSWORD
              value: postgres
            - name: POSTGRESQL_ADMIN_PASSWORD
              value: postgres
            - name: POSTGRESQL_MAX_CONNECTIONS
              value: "1000"
          image: registry.redhat.io/rhel8/postgresql-10@sha256:a52ad402458ec8ef3f275972c6ebed05ad64398f884404b9bb8e3010c5c95291
          imagePullPolicy: IfNotPresent
          name: postgres
          command: ["/bin/bash", "-c", "sleep 86400"] 1
...
1 Add this line at the same indentation level as the other container fields.
Example output
deployment.apps/<quay_database> edited
Execute the following command inside your <quay_database> pod:
$ oc exec -it <quay_database> -- cat /var/lib/pgsql/data/userdata/postgresql/logs/* /path/to/desired_directory_on_host
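When you are finished debugging, revert the sleep 86400 command added earlier, then scale the database and the Red Hat Quay Operator back up so that the registry resumes normal operation. A sketch, assuming the same deployment names used in the previous steps:
$ oc scale deployment/<quay_database> --replicas=1
$ oc scale deployment/quay-operator.v3.8.z --replicas=1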
6.1.1.3. Checking the connectivity between Red Hat Quay and the database pod
Use the following procedure to check the connectivity between Red Hat Quay and the database pod.
Procedure
Check the connectivity between Red Hat Quay and the database pod.
If you are using the Red Hat Quay Operator on OpenShift Container Platform, enter the following command:
$ oc exec -it <quay_pod_name> -- curl -v telnet://<database_pod_name>:5432
If you are using a standalone deployment of Red Hat Quay, enter the following command:
$ podman exec -it <quay_container_name> curl -v telnet://<database_container_name>:5432
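If curl is not available inside the Quay container, a small Python socket check is an alternative connectivity probe. This is a sketch, assuming a python3 binary is on the container path (Quay itself is a Python application):
$ oc exec -it <quay_pod_name> -- python3 -c "import socket; socket.create_connection(('<database_pod_name>', 5432), timeout=5); print('connection ok')"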
6.1.1.4. Checking resource allocation
Use the following procedure to check resource allocation.
Procedure
- Obtain a list of running containers.
Monitor disk usage of your Red Hat Quay deployment.
If you are using the Red Hat Quay Operator on OpenShift Container Platform, enter the following command:
$ oc exec -it <quay_database_pod_name> -- df -ah
If you are using a standalone deployment of Red Hat Quay, enter the following command:
$ podman exec -it <quay_database_container_name> df -ah
Monitor other resource usage.
Enter the following command to check resource allocation on a Red Hat Quay Operator deployment:
$ oc adm top pods
Enter the following command to check the status of a specific pod on a standalone deployment of Red Hat Quay:
$ podman pod stats <pod_name>
Enter the following command to check the status of a specific container on a standalone deployment of Red Hat Quay:
$ podman stats <container_name>
The following information is returned:
- CPU %. The percentage of CPU usage by the container since the last measurement. This value represents the container’s share of the available CPU resources.
- MEM USAGE / LIMIT. The current memory usage of the container followed by its memory limit. The values are displayed in the format current_usage / memory_limit. For example, 300.4MiB / 7.795GiB indicates that the container is currently using 300.4 megabytes of memory out of a limit of 7.795 gigabytes.
- MEM %. The percentage of memory usage by the container in relation to its memory limit.
- NET I/O. The network I/O (input/output) statistics of the container. It displays the amount of data transmitted and received by the container over the network. The values are displayed in the format transmitted_bytes / received_bytes.
- BLOCK I/O. The block I/O (input/output) statistics of the container. It represents the amount of data read from and written to the block devices (for example, disks) used by the container. The values are displayed in the format read_bytes / written_bytes.
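To see the resource requests and limits that are actually configured for the database pod, and the overall node capacity, the standard OpenShift Container Platform commands are usually sufficient. A short sketch with a placeholder pod name:
$ oc describe pod <quay_database_pod_name>
$ oc adm top nodes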
6.1.2. Resetting superuser passwords on Red Hat Quay standalone deployments
Use the following procedure to reset a superuser’s password.
Prerequisites
- You have created a Red Hat Quay superuser.
- You have installed Python 3.9.
- You have installed the pip package manager for Python.
- You have installed the bcrypt package for pip.
Procedure
Generate a secure, hashed password using the bcrypt package in Python 3.9 by entering the following command:
$ python3.9 -c 'import bcrypt; print(bcrypt.hashpw(b"newpass1234", bcrypt.gensalt(12)).decode("utf-8"))'
Example output
$2b$12$T8pkgtOoys3G5ut7FV1She6vXlYgU.6TeoGmbbAVQtN8X8ch4knKm
Enter the following command to show the container ID of your Red Hat Quay container registry:
$ sudo podman ps -a
Example output
CONTAINER ID  IMAGE                                      COMMAND         CREATED         STATUS             PORTS                                        NAMES
70560beda7aa  registry.redhat.io/rhel8/redis-5:1         run-redis       2 hours ago     Up 2 hours ago     0.0.0.0:6379->6379/tcp                       redis
8012f4491d10  registry.redhat.io/quay/quay-rhel8:v3.8.2  registry        3 minutes ago   Up 8 seconds ago   0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp  quay
8b35b493ac05  registry.redhat.io/rhel8/postgresql-10:1   run-postgresql  39 seconds ago  Up 39 seconds ago  0.0.0.0:5432->5432/tcp                       postgresql-quay
Execute an interactive shell for the postgresql container image by entering the following command:
$ sudo podman exec -it 8b35b493ac05 /bin/bash
Enter the quay PostgreSQL database server, specifying the database, username, and host address:
bash-4.4$ psql -d quay -U quayuser -h 192.168.1.28 -W
Update the password_hash of the superuser admin who lost their password:
quay=> UPDATE public.user SET password_hash = '$2b$12$T8pkgtOoys3G5ut7FV1She6vXlYgU.6TeoGmbbAVQtN8X8ch4knKm' where username = 'quayadmin';
Example output
UPDATE 1
Enter the following command to ensure that the password_hash has been updated:
quay=> select * from public.user;
Example output
id | uuid | username | password_hash | email | verified | stripe_id | organization | robot | invoice_email | invalid_login_attempts | last_invalid_login | removed_tag_expiration_s | enabled | invoice_email_address | company | family_name | given_name | location | maximum_queued_builds_count | creation_date | last_accessed
----+--------------------------------------+-----------+--------------------------------------------------------------+-----------------------+----------+-----------+--------------+-------+---------------+------------------------+----------------------------+--------------------------+---------+-----------------------+---------+-------------+------------+----------+-----------------------------+---------------+---------------
 1 | 73f04ef6-19ba-41d3-b14d-f2f1eed94a4a | quayadmin | $2b$12$T8pkgtOoys3G5ut7FV1She6vXlYgU.6TeoGmbbAVQtN8X8ch4knKm | quayadmin@example.com | t | | f | f | f | 0 | 2023-02-23 07:54:39.116485 | 1209600 | t | | | | | | | 2023-02-23 07:54:39.116492
Log in to your Red Hat Quay deployment using the new password:
$ sudo podman login -u quayadmin -p newpass1234 http://quay-server.example.com --tls-verify=false
Example output
Login Succeeded!
Additional resources
For more information, see Resetting Superuser Password for Quay.
6.1.3. Resetting superuser passwords on the Red Hat Quay Operator
Prerequisites
- You have created a Red Hat Quay superuser.
- You have installed Python 3.9.
- You have installed the pip package manager for Python.
- You have installed the bcrypt package for pip.
Procedure
- Log in to your Red Hat Quay deployment.
- On the OpenShift Container Platform UI, navigate to Workloads → Secrets.
- Select the namespace for your Red Hat Quay deployment, for example, Project quay.
- Locate and store the PostgreSQL database credentials (a CLI alternative is sketched below).
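If you prefer the CLI for the credential-lookup step, the database secret can be inspected directly. This is a sketch, where the namespace and secret name are placeholders that vary per deployment:
$ oc get secrets -n <quay_namespace> | grep database
$ oc get secret <quay_database_secret_name> -n <quay_namespace> -o yaml
The values stored in the secret are base64-encoded and can be decoded with base64 -d.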
Generate a secure, hashed password using the bcrypt package in Python 3.9 by entering the following command:
$ python3.9 -c 'import bcrypt; print(bcrypt.hashpw(b"newpass1234", bcrypt.gensalt(12)).decode("utf-8"))'
Example output
$2b$12$zoilcTG6XQeAoVuDuIZH0..UpvQEZcKh3V6puksQJaUQupHgJ4.4y
On the CLI, log in to the database, for example:
$ oc rsh quayuser-quay-quay-database-669c8998f-v9qsl
Enter the following command to open a connection to the quay PostgreSQL database server, specifying the database, username, and host address:
sh-4.4$ psql -U quayuser-quay-quay-database -d quayuser-quay-quay-database -W
Enter the following command to connect to the default database for the current user:
quay=> \c
Update the password_hash of the superuser admin who lost their password:
quay=> UPDATE public.user SET password_hash = '$2b$12$zoilcTG6XQeAoVuDuIZH0..UpvQEZcKh3V6puksQJaUQupHgJ4.4y' where username = 'quayadmin';
Enter the following command to ensure that the password_hash has been updated:
quay=> select * from public.user;
Example output
id | uuid | username | password_hash | email | verified | stripe_id | organization | robot | invoice_email | invalid_login_attempts | last_invalid_login | removed_tag_expiration_s | enabled | invoice_email_address | company | family_name | given_name | location | maximum_queued_builds_count | creation_date | last_accessed
----+--------------------------------------+-----------+--------------------------------------------------------------+-----------------------+----------+-----------+--------------+-------+---------------+------------------------+----------------------------+--------------------------+---------+-----------------------+---------+-------------+------------+----------+-----------------------------+---------------+---------------
 1 | 73f04ef6-19ba-41d3-b14d-f2f1eed94a4a | quayadmin | $2b$12$zoilcTG6XQeAoVuDuIZH0..UpvQEZcKh3V6puksQJaUQupHgJ4.4y | quayadmin@example.com | t | | f | f | f | 0 | 2023-02-23 07:54:39.116485 | 1209600 | t | | | | | | | 2023-02-23 07:54:39.116492
- Navigate to your Red Hat Quay UI on OpenShift Container Platform and log in using the new credentials.
6.2. Troubleshooting Red Hat Quay authentication
Authentication and authorization are crucial for secure access to Red Hat Quay. Together, they safeguard sensitive container images, verify user identities, enforce access controls, facilitate auditing and accountability, and enable seamless integration with external identity providers. By prioritizing authentication, organizations can bolster the overall security and integrity of their container registry environment.
The following authentication methods are supported by Red Hat Quay:
- Username and password. Users can authenticate by providing their username and password, which are validated against the user database configured in Red Hat Quay. This traditional method requires users to enter their credentials to gain access.
- OAuth. Red Hat Quay supports OAuth authentication, which allows users to authenticate using their credentials from third party services like Google, GitHub, or Keycloak. OAuth enables a seamless and federated login experience, eliminating the need for separate account creation and simplifying user management.
- OIDC. OpenID Connect enables single sign-on (SSO) capabilities and integration with enterprise identity providers. With OpenID Connect, users can authenticate using their existing organizational credentials, providing a unified authentication experience across various systems and applications.
- Token-based authentication. Users can obtain unique tokens that grant access to specific resources within Red Hat Quay. Tokens can be obtained through various means, such as OAuth or by generating API tokens within the Red Hat Quay user interface. Token-based authentication is often used for automated or programmatic access to the registry.
- External identity provider. Red Hat Quay can integrate with external identity providers, such as LDAP or AzureAD, for authentication purposes. This integration allows organizations to use their existing identity management infrastructure, enabling centralized user authentication and reducing the need for separate user databases.
6.2.1. Troubleshooting Red Hat Quay authentication and authorization issues for specific users
Use the following procedure to troubleshoot authentication and authorization issues for specific users.
Procedure
- Exec into the Red Hat Quay pod or container. For more information, see "Interacting with the Red Hat Quay database".
Enter the following command to show all users for external authentication:
quay=# select * from federatedlogin;
Example output
 id | user_id | service_id |                service_ident                |               metadata_json
----+---------+------------+---------------------------------------------+--------------------------------------------
  1 |       1 |          3 | testuser0                                   | {}
  2 |       1 |          8 | PK7Zpg2Yu2AnfUKG15hKNXqOXirqUog6G-oE7OgzSWc | {"service_username": "live.com#testuser0"}
  3 |       2 |          3 | testuser1                                   | {}
  4 |       2 |          4 | 110875797246250333431                       | {"service_username": "testuser1"}
  5 |       3 |          3 | testuser2                                   | {}
  6 |       3 |          1 | 26310880                                    | {"service_username": "testuser2"}
(6 rows)
Verify that the users are inserted into the user table:
quay=# select username, email from "user";
Example output
 username  |         email
-----------+----------------------
 testuser0 | testuser0@outlook.com
 testuser1 | testuser1@gmail.com
 testuser2 | testuser2@redhat.com
(3 rows)
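To narrow the check to a single account, you can join the two tables. The following sketch reuses the example username testuser0 from the output above:
quay=# select u.username, f.service_id, f.service_ident from federatedlogin f join "user" u on u.id = f.user_id where u.username = 'testuser0';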
6.3. Troubleshooting Red Hat Quay object storage
Object storage is a type of data storage architecture that manages data as discrete units called objects. Unlike traditional file systems that organize data into hierarchical directories and files, object storage treats data as independent entities with unique identifiers. Each object contains the data itself, along with metadata that describes the object and enables efficient retrieval.
Red Hat Quay uses object storage as the underlying storage mechanism for storing and managing container images. It stores container images as individual objects. Each container image is treated as an object, with its own unique identifier and associated metadata.
6.3.1. Troubleshooting Red Hat Quay object storage issues
Use the following options to troubleshoot Red Hat Quay object storage issues.
Procedure
Enter the following command to see what object storage is used:
$ oc get quayregistry quay-registry-name -o yaml
- Ensure that the object storage you are using is officially supported by Red Hat Quay by checking the tested integrations page.
- Enable debug mode. For more information, see "Running Red Hat Quay in debug mode".
- Check your object storage configuration in your config.yaml file. Ensure that it is accurate and matches the settings provided by your object storage provider. You can check information like access credentials, endpoint URLs, bucket and container names, and other relevant configuration parameters.
- Ensure that Red Hat Quay has network connectivity to the object storage endpoint. Check the network configurations to ensure that there are no restrictions blocking the communication between Red Hat Quay and the object storage endpoint.
If FEATURE_STORAGE_PROXY is enabled in your config.yaml file, check to see if its download URL is accessible. This can be found in the Red Hat Quay debug logs. For example:
$ curl -vvv "https://QUAY_HOSTNAME/_storage_proxy/dhaWZKRjlyO......Kuhc=/https/quay.hostname.com/quay-test/datastorage/registry/sha256/0e/0e1d17a1687fa270ba4f52a85c0f0e7958e13d3ded5123c3851a8031a9e55681?AWSAccessKeyId=xxxx&Signature=xxxxxx4%3D&Expires=1676066703"
- Try accessing the object storage service outside of Red Hat Quay to determine whether the issue is specific to your deployment or to the underlying object storage. You can use command line tools like aws, gsutil, or s3cmd provided by the object storage provider to perform basic operations like listing buckets, containers, or uploading and downloading objects. This might help you isolate the problem; see the sketch below.
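For example, a direct check against an S3-compatible backend with the aws CLI might look like the following; the profile, endpoint, and bucket names are placeholders for your environment:
$ aws --profile <storage_profile> --endpoint-url https://<s3_endpoint> s3 ls s3://<quay_bucket>
$ aws --profile <storage_profile> --endpoint-url https://<s3_endpoint> s3 cp ./test.txt s3://<quay_bucket>/test.txt
If the listing or upload fails with the same error that Red Hat Quay reports, the problem is likely in the object storage service or its credentials rather than in Red Hat Quay.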
6.4. Geo-replication
Currently, the geo-replication feature is not supported on IBM Power.
Geo-replication allows multiple, geographically distributed Red Hat Quay deployments to work as a single registry from the perspective of a client or user. It significantly improves push and pull performance in a globally-distributed Red Hat Quay setup. Image data is asynchronously replicated in the background with transparent failover and redirect for clients.
Geo-replication is supported on both standalone and Operator deployments of Red Hat Quay.
6.4.1. Troubleshooting geo-replication for Red Hat Quay
Use the following sections to troubleshoot geo-replication for Red Hat Quay.
6.4.1.1. Checking data replication in backend buckets
Use the following procedure to ensure that your data is properly replicated in all backend buckets.
Prerequisites
- You have installed the aws CLI.
Procedure
Enter the following command to ensure that your data is replicated in all backend buckets:
$ aws --profile quay_prod_s3 --endpoint=http://10.0.x.x:port s3 ls ocp-quay --recursive --human-readable --summarize
Example output
Total Objects: 17996
   Total Size: 514.4 GiB
6.4.1.2. Checking the status of your backend storage
Use the following resources to check the status of your backend storage.
- Amazon Web Service Storage (AWS). Check the AWS S3 service health status on the AWS Service Health Dashboard. Validate your access to S3 by listing objects in a known bucket using the aws CLI or SDKs.
- Google Cloud Storage (GCS). Check the Google Cloud Status Dashboard for the status of the GCS service. Verify your access to GCS by listing objects in a known bucket using the Google Cloud SDK or GCS client libraries.
- NooBaa. Check the NooBaa management console or administrative interface for any health or status indicators. Ensure that the NooBaa services and related components are running and accessible. Verify access to NooBaa by listing objects in a known bucket using the NooBaa CLI or SDK.
- Red Hat OpenShift Data Foundation. Check the OpenShift Container Platform Console or management interface for the status of the Red Hat OpenShift Data Foundation components. Verify the availability of Red Hat OpenShift Data Foundation S3 interface and services. Ensure that the Red Hat OpenShift Data Foundation services are running and accessible. Validate access to Red Hat OpenShift Data Foundation S3 by listing objects in a known bucket using the appropriate S3-compatible SDK or CLI.
- Ceph. Check the status of Ceph services, including Ceph monitors, OSDs, and RGWs. Validate that the Ceph cluster is healthy and operational. Verify access to Ceph object storage by listing objects in a known bucket using the appropriate Ceph object storage API or CLI.
- Azure Blob Storage. Check the Azure Status Dashboard to see the health status of the Azure Blob Storage service. Validate your access to Azure Blob Storage by listing containers or objects using the Azure CLI or Azure SDKs.
- OpenStack Swift. Check the OpenStack Status page to verify the status of the OpenStack Swift service. Ensure that the Swift services, like the proxy server, container servers, object servers, are running and accessible. Validate your access to Swift by listing containers or objects using the appropriate Swift CLI or SDK.
After checking the status of your backend storage, ensure that all Red Hat Quay instances have access to all S3 storage backends.
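As one concrete example, on a Ceph backend the cluster health and the RADOS Gateway daemons can be checked from an admin node; this sketch assumes the ceph CLI is installed and configured with access to the cluster:
$ ceph health detail
$ ceph -s
Any health warnings or down OSDs reported here are worth resolving before investigating Red Hat Quay itself.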
6.5. Repository mirroring
Red Hat Quay repository mirroring lets you mirror images from external container registries, or another local registry, into your Red Hat Quay cluster. Using repository mirroring, you can synchronize images to Red Hat Quay based on repository names and tags.
From your Red Hat Quay cluster with repository mirroring enabled, you can perform the following:
- Choose a repository from an external registry to mirror
- Add credentials to access the external registry
- Identify specific container image repository names and tags to sync
- Set intervals at which a repository is synced
- Check the current state of synchronization
To use the mirroring functionality, you need to perform the following actions:
- Enable repository mirroring in the Red Hat Quay configuration file
- Run a repository mirroring worker
- Create mirrored repositories
All repository mirroring configurations can be performed using the configuration tool UI or through the Red Hat Quay API.
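In a standalone deployment, enabling the feature and running the worker typically amounts to one config.yaml entry plus one additional container. The following is a sketch, where the image tag is a placeholder and $QUAY points at your Quay configuration directory:
FEATURE_REPO_MIRROR: true
$ sudo podman run -d --name mirroring-worker -v $QUAY/config:/conf/stack:Z registry.redhat.io/quay/quay-rhel8:<version> repomirror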
6.5.1. Troubleshooting repository mirroring
Use the following sections to troubleshoot repository mirroring for Red Hat Quay.
6.5.1.1. Verifying authentication and permissions
Ensure that the authentication credentials used for mirroring have the necessary permissions and access rights on both the source and destination Red Hat Quay instances.
On the Red Hat Quay UI, check the following settings:
- The access control settings. Ensure that the user or service account performing the mirroring operation has the required privileges.
- The permissions of your robot account on the Red Hat Quay registry.
6.6. Clair security scanner
6.6.1. Troubleshooting Clair issues
Use the following procedures to troubleshoot Clair.
6.6.1.1. Verifying image compatibility
If you are using Clair, ensure that the images you are trying to scan are supported by Clair. Clair has certain requirements and does not support all image formats or configurations.
For more information, see Clair vulnerability databases.
6.6.1.2. Allowlisting Clair updaters
If you are using Clair behind a proxy configuration, you must allowlist the updaters in your proxy or firewall configuration. For more information about updater URLs, see Clair updater URLs.
6.6.1.3. Updating Clair scanner and its dependencies
Ensure that you are using the latest version of Clair security scanner. Outdated versions might lack support for newer image formats, or might have known issues.
Use the following procedure to check your version of Clair.
You can also check your Clair logs for errors from the updaters microservice; a quick way to do this is sketched below. By default, Clair updates the vulnerability database every 30 minutes.
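A minimal sketch for filtering updater messages on a standalone deployment; the container name is a placeholder:
$ podman logs <clair_container_name> 2>&1 | grep -i updater
On Red Hat Quay on OpenShift Container Platform, the equivalent is oc logs <clair_pod_name> piped through the same filter.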
Procedure
Check your version of Clair.
If you are running Clair on Red Hat Quay on OpenShift Container Platform, enter the following command:
$ oc logs clair-pod
If you are running a standalone deployment of Red Hat Quay and using a Clair container, enter the following command:
$ podman logs clair-container
Example output
"level":"info", "component":"main", "version":"v4.5.1",
6.6.1.4. Enabling debug mode for Clair
By default, debug mode for Clair is disabled. You can enable debug mode for Clair by updating your clair-config.yaml file.
Prerequisites
- For Clair on Red Hat Quay on OpenShift Container Platform deployments, you must be running a custom Clair configuration with a managed Clair database.
Use the following procedure to enable debug mode for Clair.
Procedure
Update your clair-config.yaml file to include the debug option.
On standalone Red Hat Quay deployments:
Add the following configuration field to your clair-config.yaml file:
log_level: debug
Restart your Clair deployment by entering the following command:
$ podman restart <clair_container_name>
On Red Hat Quay on OpenShift Container Platform deployments:
- On the OpenShift Container Platform web console, click Operators → Installed Operators → Quay Registry.
- Click the name of your registry, for example, Example Registry. You are redirected to the Details page of your registry.
- Click the Config Bundle Secret, for example, example-registry-config-bundle-xncls.
- Confirm that you are running a custom Clair configuration by looking for the clair-config.yaml file under the Data section of the Details page of your secret.
- If you have a clair-config.yaml file, click Actions → Edit Secret. If you do not, see "Running a custom Clair configuration with a managed Clair database".
Update your clair-config.yaml file to include the log_level: debug configuration variable. For example:
log_level: debug
- Click Save.
- You can check the status of your Clair deployment by clicking Workloads → Pods. The clair-app pod should report 1/1 under the Ready category.
- You can confirm that Clair is returning debugging information by clicking the clair-app pod that is ready → Logs.
6.6.1.5. Checking Clair configuration
Check your Clair config.yaml file to ensure that there are no misconfigurations or inconsistencies that could lead to issues. For more information, see Clair configuration overview.
6.6.1.6. Inspecting image metadata
In some cases, you might receive an Unsupported message. This might indicate that the scanner is unable to extract the necessary metadata from the image. Check if the image metadata is properly formatted and accessible.
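One way to inspect an image's manifest and configuration without pulling it is skopeo, which can query the registry directly. The following is a sketch with placeholder credentials and image reference:
$ skopeo inspect --creds <username>:<password> docker://<quay-server.example.com>/<namespace>/<image>:<tag>
Adding --raw prints the unmodified manifest, which is useful for confirming the media type that Clair is being asked to scan.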
Additional resources
For more information, see Troubleshooting Clair.