Chapter 7. Clair Security Scanning


Clair is a set of microservices that can be used with Red Hat Quay to perform vulnerability scanning of container images associated with a set of Linux operating systems. The microservices design of Clair makes it appropriate to run in a highly scalable configuration, where components can be scaled separately as appropriate for enterprise environments.

Clair uses the following vulnerability databases to scan for issues in your images:

  • Alpine SecDB database
  • AWS UpdateInfo
  • Debian OVAL database
  • Oracle OVAL database
  • RHEL OVAL database
  • SUSE OVAL database
  • Ubuntu OVAL database
  • Pyup.io (Python) database

For information on how Clair does security mapping with the different databases, see ClairCore Severity Mapping.

Note

With the release of Red Hat Quay 3.4, the new Clair V4 (image registry.redhat.io/quay/clair-rhel8) fully replaces the prior Clair V2 (image quay.io/redhat/clair-jwt). See below for how to run V2 in read-only mode while V4 is updating.

7.1. Setting Up Clair on a Red Hat Quay OpenShift deployment

7.1.1. Deploying Via the Quay Operator

To set up Clair V4 on a new Red Hat Quay deployment on OpenShift, it is highly recommended to use the Quay Operator. By default, the Quay Operator will install or upgrade a Clair deployment along with your Red Hat Quay deployment and configure Clair security scanning automatically.
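
For reference, a minimal QuayRegistry resource in which the Operator manages Clair might look like the following sketch; the registry name is illustrative, and the clair component is managed by default even when omitted:

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
spec:
  components:
    - kind: clair
      managed: true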

7.1.2. Manually Deploying Clair

To configure Clair V4 on an existing Red Hat Quay OpenShift deployment running Clair V2, first ensure Red Hat Quay has been upgraded to at least version 3.4.0. Then use the following steps to manually set up Clair V4 alongside Clair V2.

  1. Set your current project to the name of the project in which Red Hat Quay is running. For example:

    $ oc project quay-enterprise
  2. Create a Postgres deployment file for Clair v4 (for example, clairv4-postgres.yaml) as follows.

    clairv4-postgres.yaml

    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: clairv4-postgres
      namespace: quay-enterprise
      labels:
        quay-component: clairv4-postgres
    spec:
      replicas: 1
      selector:
        matchLabels:
          quay-component: clairv4-postgres
      template:
        metadata:
          labels:
            quay-component: clairv4-postgres
        spec:
          volumes:
            - name: postgres-data
              persistentVolumeClaim:
                claimName: clairv4-postgres
          containers:
            - name: postgres
              image: postgres:11.5
              imagePullPolicy: "IfNotPresent"
              ports:
                - containerPort: 5432
              env:
                - name: POSTGRES_USER
                  value: "postgres"
                - name: POSTGRES_DB
                  value: "clair"
                - name: POSTGRES_PASSWORD
                  value: "postgres"
                - name: PGDATA
                  value: "/etc/postgres/data"
              volumeMounts:
                - name: postgres-data
                  mountPath: "/etc/postgres"
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: clairv4-postgres
      labels:
        quay-component: clairv4-postgres
    spec:
      accessModes:
        - "ReadWriteOnce"
      resources:
        requests:
          storage: "5Gi"
      volumeName: "clairv4-postgres"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: clairv4-postgres
      labels:
        quay-component: clairv4-postgres
    spec:
      type: ClusterIP
      ports:
        - port: 5432
          protocol: TCP
          name: postgres
          targetPort: 5432
      selector:
        quay-component: clairv4-postgres

  3. Deploy the postgres database as follows:

    $ oc create -f ./clairv4-postgres.yaml
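
    Before continuing, you can verify that the database pod is up; the label below matches the one set in the deployment file:

    $ oc get pods -l quay-component=clairv4-postgres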
  4. Create a Clair config.yaml file to use for Clair v4. For example:

    config.yaml

    introspection_addr: :8089
    http_listen_addr: :8080
    log_level: debug
    indexer:
      connstring: host=clairv4-postgres port=5432 dbname=clair user=postgres password=postgres sslmode=disable
      scanlock_retry: 10
      layer_scan_concurrency: 5
      migrations: true
    matcher:
      connstring: host=clairv4-postgres port=5432 dbname=clair user=postgres password=postgres sslmode=disable
      max_conn_pool: 100
      run: ""
      migrations: true
      indexer_addr: clair-indexer
    notifier:
      connstring: host=clairv4-postgres port=5432 dbname=clair user=postgres password=postgres sslmode=disable
      delivery_interval: 1m
      poll_interval: 5m
      migrations: true
    auth:
      psk:
        key: MTU5YzA4Y2ZkNzJoMQ== 1
        iss: ["quay"]
    # tracing and metrics
    trace:
      name: "jaeger"
      probability: 1
      jaeger:
        agent_endpoint: "localhost:6831"
        service_name: "clair"
    metrics:
      name: "prometheus"

    1
    To generate a Clair pre-shared key (PSK), enable scanning in the Security Scanner section of the User Interface and click Generate PSK.

More information about Clair’s configuration format can be found in upstream Clair documentation.
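
If you are configuring Clair without access to the Quay UI, you can generate a suitable base64-encoded PSK yourself; the same value must also be set as SECURITY_SCANNER_V4_PSK in the Red Hat Quay config.yaml so that both sides share the key. A sketch:

$ openssl rand -base64 32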

  5. Create a secret from the Clair config.yaml:

    $ oc create secret generic clairv4-config-secret --from-file=./config.yaml
  6. Create the Clair v4 deployment file (for example, clair-combo.yaml) and modify it as necessary:

    clair-combo.yaml

    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        quay-component: clair-combo
      name: clair-combo
    spec:
      replicas: 1
      selector:
        matchLabels:
          quay-component: clair-combo
      template:
        metadata:
          labels:
            quay-component: clair-combo
        spec:
          containers:
            - image: registry.redhat.io/quay/clair-rhel8:v3.7.13  1
              imagePullPolicy: IfNotPresent
              name: clair-combo
              env:
                - name: CLAIR_CONF
                  value: /clair/config.yaml
                - name: CLAIR_MODE
                  value: combo
              ports:
                - containerPort: 8080
                  name: clair-http
                  protocol: TCP
                - containerPort: 8089
                  name: clair-intro
                  protocol: TCP
              volumeMounts:
                - mountPath: /clair/
                  name: config
          imagePullSecrets:
            - name: redhat-pull-secret
          restartPolicy: Always
          volumes:
            - name: config
              secret:
                secretName: clairv4-config-secret
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: clairv4 2
      labels:
        quay-component: clair-combo
    spec:
      ports:
        - name: clair-http
          port: 80
          protocol: TCP
          targetPort: 8080
        - name: clair-introspection
          port: 8089
          protocol: TCP
          targetPort: 8089
      selector:
        quay-component: clair-combo
      type: ClusterIP

    1
    Change image to the latest Clair image name and version.
    2
    With the Service set to clairv4, the scanner endpoint for Clair v4 is entered later into the Red Hat Quay config.yaml as the SECURITY_SCANNER_V4_ENDPOINT value, http://clairv4.
  7. Create the Clair v4 deployment as follows:

    $ oc create -f ./clair-combo.yaml
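
    Before wiring Clair into Red Hat Quay, you can confirm that the deployment came up and that Clair started cleanly, for example:

    $ oc get pods -l quay-component=clair-combo
    $ oc logs deployment/clair-combo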
  8. Modify the config.yaml file for your Red Hat Quay deployment to add the following entries at the end:

    FEATURE_SECURITY_NOTIFICATIONS: true
    FEATURE_SECURITY_SCANNER: true
    SECURITY_SCANNER_V4_ENDPOINT: http://clairv4 1
    1
    Identifies the Clair v4 service endpoint.
  9. Redeploy the modified config.yaml to the secret containing that file (for example, quay-enterprise-config-secret):

    $ oc delete secret quay-enterprise-config-secret
    $ oc create secret generic quay-enterprise-config-secret --from-file=./config.yaml
  10. For the new config.yaml to take effect, you need to restart the Red Hat Quay pods. Deleting the quay-app pods causes pods with the updated configuration to be deployed, as shown below.
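
    For example, assuming the pod names contain quay-app (the exact names depend on your deployment):

    $ oc get pods | grep quay-app
    $ oc delete pod <quay-app-pod-name>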

At this point, images in any of the organizations identified in the namespace whitelist will be scanned by Clair v4.

7.2. Setting up Clair on a non-OpenShift Red Hat Quay deployment

For Red Hat Quay deployments not running on OpenShift, it is possible to configure Clair security scanning manually. Red Hat Quay deployments already running Clair V2 can use the instructions below to add Clair V4 to their deployment.

  1. Deploy a (preferably fault-tolerant) Postgres database server. Note that Clair requires the uuid-ossp extension to be added to its Postgres database. If the user supplied in Clair’s config.yaml has the necessary privileges to create the extension then it will be added automatically by Clair itself. If not, then the extension must be added before starting Clair. If the extension is not present, the following error will be displayed when Clair attempts to start.

    ERROR: Please load the "uuid-ossp" extension. (SQLSTATE 42501)
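
    As a minimal, non-fault-tolerant sketch, the database can be run with Podman using the same credentials as the earlier OpenShift example, and the extension can be created manually with psql if the Clair database user lacks the privilege to do so:

    $ podman run -d --name clairv4-postgres \
        -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=clair \
        -p 5432:5432 postgres:11.5
    $ psql -h localhost -U postgres -d clair \
        -c 'CREATE EXTENSION IF NOT EXISTS "uuid-ossp";'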
  2. Create a Clair config file in a specific folder (for example, /etc/clairv4/config/config.yaml).

    config.yaml

    introspection_addr: :8089
    http_listen_addr: :8080
    log_level: debug
    indexer:
      connstring: host=clairv4-postgres port=5432 dbname=clair user=postgres password=postgres sslmode=disable
      scanlock_retry: 10
      layer_scan_concurrency: 5
      migrations: true
    matcher:
      connstring: host=clairv4-postgres port=5432 dbname=clair user=postgres password=postgres sslmode=disable
      max_conn_pool: 100
      run: ""
      migrations: true
      indexer_addr: clair-indexer
    notifier:
      connstring: host=clairv4-postgres port=5432 dbname=clair user=postgres password=postgres sslmode=disable
      delivery_interval: 1m
      poll_interval: 5m
      migrations: true
    
    # tracing and metrics
    trace:
      name: "jaeger"
      probability: 1
      jaeger:
        agent_endpoint: "localhost:6831"
        service_name: "clair"
    metrics:
      name: "prometheus"

More information about Clair’s configuration format can be found in upstream Clair documentation.

  3. Run Clair via the container image, mounting in the configuration from the file you created.

    $ podman run -p 8080:8080 -p 8089:8089 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/clairv4/config:/clair -d registry.redhat.io/quay/clair-rhel8:v3.7.13
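
    To check that Clair is up, you can query the introspection port; assuming the default Prometheus metrics configuration shown above, the metrics endpoint should respond:

    $ curl -s http://localhost:8089/metrics | head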
  4. Follow the remaining instructions from the previous section for configuring Red Hat Quay to use the new Clair V4 endpoint.

Running multiple Clair containers in this fashion is also possible, but for deployment scenarios beyond a single container the use of a container orchestrator like Kubernetes or OpenShift is strongly recommended.

7.3. Advanced Clair configuration

7.3.1. Unmanaged Clair configuration

With Red Hat Quay 3.7, users can run an unmanaged Clair configuration on the Red Hat Quay OpenShift Container Platform Operator. This feature allows users to create an unmanaged Clair database, or to run a custom Clair configuration with a managed database.

7.3.1.1. Unmanaging a Clair database

An unmanaged Clair database allows the Red Hat Quay Operator to work in a geo-replicated environment, where multiple instances of the Operator must communicate with the same database. An unmanaged Clair database can also be used when a user requires a highly available (HA) Clair database that exists outside of a cluster.

Procedure

  • In the Quay Operator, set the clairpostgres component of the QuayRegistry custom resource to unmanaged:

    apiVersion: quay.redhat.com/v1
    kind: QuayRegistry
    metadata:
      name: quay370
    spec:
      configBundleSecret: config-bundle-secret
      components:
        - kind: objectstorage
          managed: false
        - kind: route
          managed: true
        - kind: tls
          managed: false
        - kind: clairpostgres
          managed: false

7.3.1.2. Configuring a custom Clair database

The Red Hat Quay Operator for OpenShift Container Platform allows users to provide their own Clair configuration by editing the configBundleSecret parameter.

Procedure

  1. Create a Quay config bundle secret that includes the clair-config.yaml:

    $ oc create secret generic config-bundle-secret \
        --from-file config.yaml=./config.yaml \
        --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem \
        --from-file clair-config.yaml=./clair-config.yaml \
        --from-file ssl.cert=./ssl.cert \
        --from-file ssl.key=./ssl.key

    Example clair-config.yaml configuration:

    indexer:
        connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca
        layer_scan_concurrency: 6
        migrations: true
        scanlock_retry: 11
    log_level: debug
    matcher:
        connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca
        migrations: true
    metrics:
        name: prometheus
    notifier:
        connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca
        migrations: true
    Note
    • The database certificate is mounted under /run/certs/rds-ca-2019-root.pem on the Clair application pod in the clair-config.yaml. It must be specified when configuring your clair-config.yaml.
    • An example clair-config.yaml can be found at Clair on OpenShift config.
  2. Add the clair-config.yaml to your bundle secret, named configBundleSecret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: config-bundle-secret
      namespace: quay-enterprise
    data:
      config.yaml: <base64 encoded Quay config>
      clair-config.yaml: <base64 encoded Clair config>
      extra_ca_cert_<name>: <base64 encoded ca cert>
      clair-ssl.crt: >-
      clair-ssl.key: >-
    Note

    When updated, the provided clair-config.yaml is mounted into the Clair pod. Any fields not provided are automatically populated with defaults using the Clair configuration module.

After proper configuration, the Clair application pod should return to a Ready state.
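
You can watch for this with oc, assuming the quay-enterprise project used earlier in this chapter:

$ oc get pods -n quay-enterprise | grep clair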

7.3.2. Running a custom Clair configuration with a managed database

In some cases, users might want to run a custom Clair configuration with a managed database. This is useful in the following scenarios:

  • When a user wants to disable an updater.
  • When a user is running in an air-gapped environment.

    Note
    • If you are running Quay in an air-gapped environment, the airgap parameter of your clair-config.yaml must be set to true.
    • If you are running Quay in an air-gapped environment, you should disable all updaters.

Use the steps in "Configuring a custom Clair database" to configure your database when clairpostgres is set to managed.
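
A minimal clair-config.yaml fragment for the air-gapped case might combine both settings from the note above; the field names assume Clair v4's configuration format, with airgap living under the indexer section:

indexer:
  airgap: true
matcher:
  disable_updaters: true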

For more information about running Clair in an air-gapped environment, see Configuring access to the Clair database in the air-gapped OpenShift cluster.

7.4. Clair CRDA configuration

7.4.1. Enabling Clair CRDA

Java scanning depends on a public, Red Hat provided API service called CodeReady Dependency Analytics (CRDA). CRDA is only available with internet access and is not enabled by default. Use the following procedure to integrate the CRDA service with a custom API key and enable CRDA for Java and Python scanning.

Prerequisites

  • Red Hat Quay 3.7 or greater

Procedure

  1. Submit the API key request form to obtain the Quay-specific CRDA remote matcher.
  2. Set the CRDA configuration in your clair-config.yaml file:

    matchers:
      config:
        crda:
          url: https://gw.api.openshift.io/api/v2/
          key: <CRDA_API_KEY> 1
          source: <QUAY_SERVER_HOSTNAME> 2
    1
    Insert the Quay-specific CRDA remote matcher from the API key request form here.
    2
    The hostname of your Quay server.

7.5. Using Clair

  1. Log in to your Red Hat Quay cluster and select an organization for which you have configured Clair scanning.
  2. Select a repository from that organization that holds some images and select Tags from the left navigation. The following figure shows an example of a repository with two images that have been scanned:

    Security scan information appears for scanned repository images

  3. If vulnerabilities are found, select the link under the Security Scan column for the image to see either all vulnerabilities or those that are fixable. The following figure shows information on all vulnerabilities found:

    See all vulnerabilities or only those that are fixable

7.6. CVE ratings from the National Vulnerability Database

With Clair v4.2, enrichment data is now viewable in the Quay UI. Additionally, Clair v4.2 adds CVSS scores from the National Vulnerability Database for detected vulnerabilities.

With this change, if the vulnerability has a CVSS score that is within 2 levels of the distro’s score, the Quay UI presents the distro’s score by default. For example:

Clair v4.2 data display

This differs from the previous interface, which would only display the following information:

Clair v4 data display

7.7. Configuring Clair for Disconnected Environments

Clair utilizes a set of components called Updaters to handle the fetching and parsing of data from various vulnerability databases. These Updaters are set up by default to pull vulnerability data directly from the internet and work out of the box. For customers in disconnected environments without direct access to the internet, this poses a problem. Clair supports these environments through the ability to work with different types of update workflows that take network isolation into account. Using the clairctl command line utility, you can fetch Updater data from the internet on a connected host, securely transfer the data to an isolated host, and then import the Updater data into Clair on the isolated host.

The steps are as follows.

  1. First, ensure that automated Updaters are disabled in your Clair configuration:

    config.yaml

    matcher:
      disable_updaters: true

  2. Export the latest Updater data to a local archive. This requires the clairctl tool, which can be run directly as a binary or via the Clair container image. Assuming your Clair configuration is in /etc/clairv4/config/config.yaml, to run via the container image:

    $ podman run -it --rm -v /etc/clairv4/config:/cfg:Z -v /path/to/output/directory:/updaters:Z --entrypoint /bin/clairctl registry.redhat.io/quay/clair-rhel8:v3.7.13 --config /cfg/config.yaml export-updaters  /updaters/updaters.gz

    Note that you need to explicitly reference the Clair configuration. This creates the Updater archive as updaters.gz in the host directory you mounted at /updaters (here, /path/to/output/directory). If you want to ensure the archive was created without any errors from the source databases, you can supply the --strict flag to clairctl. The archive file should be copied over to a volume that is accessible from the disconnected host running Clair. From the disconnected host, use the same procedure to import the archive into Clair.

    $ podman run -it --rm -v /etc/clairv4/config:/cfg:Z -v /path/to/output/directory:/updaters:Z --entrypoint /bin/clairctl registry.redhat.io/quay/clair-rhel8:v3.7.13 --config /cfg/config.yaml import-updaters /updaters/updaters.gz

7.7.1. Mapping repositories to Common Product Enumeration (CPE) information

Clair’s RHEL scanner relies on a Common Product Enumeration (CPE) file to properly map RPM packages to the corresponding security data, in order to produce matching results. This file must be present, or access to the file must be allowed, for the scanner to properly process RPMs. If the file is not present, RPMs installed in the container images will not be scanned.

Red Hat publishes the JSON mapping file at https://www.redhat.com/security/data/metrics/repository-to-cpe.json.

In addition to uploading CVE information to the database for disconnected Clair, you must also make the mapping file available locally:

  • For standalone Quay and Clair deployments, the mapping file must be loaded into the Clair pod.
  • For Operator-based deployments, you must set the Clair component to unmanaged. Then deploy Clair manually, setting the configuration to load a local copy of the mapping file.

Use the repo2cpe_mapping_file field in the Clair configuration to specify the file:

indexer:
  scanner:
    repo:
      rhel-repository-scanner:
        repo2cpe_mapping_file: /path/to/repository-to-cpe.json
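
On a connected host, the mapping file can be fetched with curl from the URL above and then copied to the disconnected host running Clair; for example:

$ curl -o /path/to/repository-to-cpe.json \
    https://www.redhat.com/security/data/metrics/repository-to-cpe.json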

Further information is available from Red Hat at How to accurately match OVAL security data to installed RPMs.

7.8. Clair updater URLs

The following are the HTTP hosts and paths that Clair will attempt to talk to in a default configuration. This list is non-exhaustive, as some servers will issue redirects and some request URLs are constructed dynamically.

  • https://secdb.alpinelinux.org/
  • http://repo.us-west-2.amazonaws.com/2018.03/updates/x86_64/mirror.list
  • https://cdn.amazonlinux.com/2/core/latest/x86_64/mirror.list
  • https://www.debian.org/security/oval/
  • https://linux.oracle.com/security/oval/
  • https://packages.vmware.com/photon/photon_oval_definitions/
  • https://github.com/pyupio/safety-db/archive/
  • https://catalog.redhat.com/api/containers/
  • https://www.redhat.com/security/data/
  • https://support.novell.com/security/oval/
  • https://people.canonical.com/~ubuntu-security/oval/

7.9. Additional Information

For detailed documentation on the internals of Clair, including how the microservices are structured, please see the Upstream Clair and ClairCore documentation.
