Chapter 2. Setting up the Registry

2.1. Internal Registry Overview

2.1.1. About the Registry

OpenShift Container Platform can build container images from your source code, deploy them, and manage their lifecycle. To enable this, OpenShift Container Platform provides an internal, integrated container image registry that can be deployed in your OpenShift Container Platform environment to locally manage images.

2.1.2. Integrated or Stand-alone Registries

During an initial installation of a full OpenShift Container Platform cluster, it is likely that the registry was deployed automatically during the installation process. If it was not, or if you want to further customize the configuration of your registry, see Deploying a Registry on Existing Clusters.

While it can be deployed to run as an integrated part of your full OpenShift Container Platform cluster, the OpenShift Container Platform registry can alternatively be installed separately as a stand-alone container image registry.

To install a stand-alone registry, follow Installing a Stand-alone Registry. This installation path deploys an all-in-one cluster running a registry and specialized web console.

2.2. Deploying a Registry on Existing Clusters

2.2.1. Overview

If the integrated registry was not previously deployed automatically during the initial installation of your OpenShift Container Platform cluster, or if it is no longer running successfully and you need to redeploy it on your existing cluster, see the following sections for options on deploying a new registry.

Note

This topic is not required if you installed a stand-alone registry.

2.2.2. Setting the Registry Host Name

You can configure the host name and port the registry is known by for both internal and external references. By doing this, image streams will provide hostname based push and pull specifications for images, allowing consumers of the images to be isolated from changes to the registry service IP and potentially allowing image streams and their references to be portable between clusters.

To set the hostname used to reference the registry from within the cluster, set the internalRegistryHostname in the imagePolicyConfig section of the master configuration file. The external host name is controlled by setting the externalRegistryHostname value in the same location.

Image Policy Configuration

imagePolicyConfig:
  internalRegistryHostname: docker-registry.default.svc.cluster.local:5000
  externalRegistryHostname: docker-registry.mycompany.com

The registry itself must be configured with the same internal hostname value. This can be accomplished by setting the REGISTRY_OPENSHIFT_SERVER_ADDR environment variable on the registry deployment configuration, or by setting the value in the OpenShift section of the registry configuration.
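For example, a minimal sketch of setting the environment variable on the registry deployment configuration (the hostname value mirrors the internal hostname shown above; adjust it for your cluster):

$ oc set env dc/docker-registry \
    REGISTRY_OPENSHIFT_SERVER_ADDR=docker-registry.default.svc.cluster.local:5000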

Note

If you have enabled TLS for your registry the server certificate must include the hostnames by which you expect the registry to be referenced. See securing the registry for instructions on adding hostnames to the server certificate.

2.2.3. Deploying the Registry

To deploy the integrated container image registry, use the oc adm registry command as a user with cluster administrator privileges. For example:

$ oc adm registry --config=/etc/origin/master/admin.kubeconfig \ 1
    --service-account=registry \ 2
    --images='registry.redhat.io/openshift3/ose-${component}:${version}' 3
1
--config is the path to the CLI configuration file for the cluster administrator.
2
--service-account is the service account used to run the registry’s pod.
3
Required to pull the correct image for OpenShift Container Platform. ${component} and ${version} are dynamically replaced during installation.

This creates a service and a deployment configuration, both called docker-registry. Once deployed successfully, a pod is created with a name similar to docker-registry-1-cpty9.

To see a full list of options that you can specify when creating the registry:

$ oc adm registry --help

The value for --fs-group must be permitted by the SCC used by the registry (typically, the restricted SCC).

2.2.4. Deploying the Registry as a DaemonSet

Use the oc adm registry command to deploy the registry as a DaemonSet with the --daemonset option.
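For example, a deployment sketch that reuses the options from the previous section (the kubeconfig path and service account are the same assumptions as above; adjust them for your cluster):

$ oc adm registry --daemonset \
    --config=/etc/origin/master/admin.kubeconfig \
    --service-account=registry \
    --images='registry.redhat.io/openshift3/ose-${component}:${version}'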

Daemonsets ensure that when nodes are created, they contain copies of a specified pod. When the nodes are removed, the pods are garbage collected.

For more information on DaemonSets, see Using Daemonsets.

2.2.5. Registry Compute Resources

By default, the registry is created with no settings for compute resource requests or limits. For production, it is highly recommended that the deployment configuration for the registry be updated to set resource requests and limits for the registry pod. Otherwise, the registry pod will be considered a BestEffort pod.
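For example, a minimal sketch using oc set resources (the request and limit values are assumptions; size them for your environment):

$ oc set resources dc/docker-registry \
    --requests=cpu=100m,memory=256Mi \
    --limits=cpu=500m,memory=512Mi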

See Compute Resources for more information on configuring requests and limits.

2.2.6. Storage for the Registry

The registry stores container images and metadata. If you simply deploy a pod with the registry, it uses an ephemeral volume that is destroyed if the pod exits. Any images anyone has built or pushed into the registry would disappear.

This section lists the supported registry storage drivers. See the container image registry documentation for more information.

The following list includes storage drivers that need to be configured in the registry’s configuration file:

General registry storage configuration options are supported. See the container image registry documentation for more information.

The following storage options need to be configured through the filesystem driver:

Note

For more information on supported persistent storage drivers, see Configuring Persistent Storage and Persistent Storage Examples.

2.2.6.1. Production Use

For production use, attach a remote volume or define and use the persistent storage method of your choice.

For example, to use an existing persistent volume claim:

$ oc set volume deploymentconfigs/docker-registry --add --name=registry-storage -t pvc \
     --claim-name=<pvc_name> --overwrite
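If you do not already have a claim, a minimal PersistentVolumeClaim sketch might look like the following (the claim name, size, and access mode are assumptions; use ReadWriteMany if you plan to scale the registry beyond one replica):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-storage-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi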
Important

Testing shows issues with using the RHEL NFS server as a storage backend for the container image registry. This includes the OpenShift Container Registry and Quay. Therefore, using the RHEL NFS server to back PVs used by core services is not recommended.

Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift core components.

2.2.6.1.1. Use Amazon S3 as a Storage Back-end

You can also use Amazon Simple Storage Service (S3) storage with the internal container image registry. It is secure cloud storage that is manageable through the AWS Management Console. To use it, the registry's configuration file must be manually edited and mounted to the registry pod. However, before you start with the configuration, look at upstream's recommended steps.

Take a default YAML configuration file as a base and replace the filesystem entry in the storage section with an s3 entry such as the one below. The resulting storage section may look like this:

storage:
  cache:
    layerinfo: inmemory
  delete:
    enabled: true
  s3:
    accesskey: awsaccesskey 1
    secretkey: awssecretkey 2
    region: us-west-1
    regionendpoint: http://myobjects.local
    bucket: bucketname
    encrypt: true
    keyid: mykeyid
    secure: true
    v4auth: false
    chunksize: 5242880
    rootdirectory: /s3/object/name/prefix
1
Replace with your Amazon access key.
2
Replace with your Amazon secret key.

All of the s3 configuration options are documented in upstream’s driver reference documentation.

Overriding the registry configuration will take you through the additional steps on mounting the configuration file into pod.

Warning

There are reported issues with running the registry on the S3 storage back-end.

If you want to use a S3 region that is not supported by the integrated registry you are using, see S3 Driver Configuration.

2.2.6.2. Non-Production Use

For non-production use, you can use the --mount-host=<path> option to specify a directory for the registry to use for persistent storage. The registry volume is then created as a host-mount at the specified <path>.

Important

The --mount-host option mounts a directory from the node on which the registry container lives. If you scale up the docker-registry deployment configuration, it is possible that your registry pods and containers will run on different nodes, which can result in two or more registry containers, each with its own local storage. This will lead to unpredictable behavior, as subsequent requests to pull the same image repeatedly may not always succeed, depending on which container the request ultimately goes to.

The --mount-host option requires that the registry container run in privileged mode. This is automatically enabled when you specify --mount-host. However, not all pods are allowed to run privileged containers by default. If you still want to use this option, create the registry and specify that it use the registry service account that was created during installation:

$ oc adm registry --service-account=registry \
    --config=/etc/origin/master/admin.kubeconfig \
    --images='registry.redhat.io/openshift3/ose-${component}:${version}' \ 1
    --mount-host=<path>
1
Required to pull the correct image for OpenShift Container Platform. ${component} and ${version} are dynamically replaced during installation.
Important

The container image registry pod runs as user 1001. This user must be able to write to the host directory. You may need to change directory ownership to user ID 1001 with this command:

$ sudo chown 1001:root <path>

2.2.7. Enabling the Registry Console

OpenShift Container Platform provides a web-based interface to the integrated registry. This registry console is an optional component for browsing and managing images. It is deployed as a stateless service running as a pod.

Note

If you installed OpenShift Container Platform as a stand-alone registry, the registry console is already deployed and secured automatically during installation.

Important

If Cockpit is already running, you’ll need to shut it down before proceeding in order to avoid a port conflict (9090 by default) with the registry console.

2.2.7.1. Deploying the Registry Console

Important

You must first have exposed the registry.

  1. Create a passthrough route in the default project. You will need this when creating the registry console application in the next step.

    $ oc create route passthrough --service registry-console \
        --port registry-console \
        -n default
  2. Deploy the registry console application. Replace <openshift_oauth_url> with the URL of the OpenShift Container Platform OAuth provider, which is typically the master.

    $ oc new-app -n default --template=registry-console \
        -p OPENSHIFT_OAUTH_PROVIDER_URL="https://<openshift_oauth_url>:8443" \
        -p REGISTRY_HOST=$(oc get route docker-registry -n default --template='{{ .spec.host }}') \
        -p COCKPIT_KUBE_URL=$(oc get route registry-console -n default --template='https://{{ .spec.host }}')
    Note

    If the redirection URL is wrong when you are trying to log in to the registry console, check your OAuth client with oc get oauthclients.

  3. Finally, use a web browser to view the console using the route URI.

2.2.7.2. Securing the Registry Console

By default, the registry console generates self-signed TLS certificates if deployed manually per the steps in Deploying the Registry Console. See Troubleshooting the Registry Console for more information.

Use the following steps to add your organization’s signed certificates as a secret volume. This assumes your certificates are available on the oc client host.

  1. Create a .cert file containing the certificate and key. Format the file with:

    • One or more BEGIN CERTIFICATE blocks for the server certificate and the intermediate certificate authorities
    • A block containing a BEGIN PRIVATE KEY or similar for the key. The key must not be encrypted.

      For example:

      -----BEGIN CERTIFICATE-----
      MIIDUzCCAjugAwIBAgIJAPXW+CuNYS6QMA0GCSqGSIb3DQEBCwUAMD8xKTAnBgNV
      BAoMIGI0OGE2NGNkNmMwNTQ1YThhZTgxOTEzZDE5YmJjMmRjMRIwEAYDVQQDDAls
      ...
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      MIIDUzCCAjugAwIBAgIJAPXW+CuNYS6QMA0GCSqGSIb3DQEBCwUAMD8xKTAnBgNV
      BAoMIGI0OGE2NGNkNmMwNTQ1YThhZTgxOTEzZDE5YmJjMmRjMRIwEAYDVQQDDAls
      ...
      -----END CERTIFICATE-----
      -----BEGIN PRIVATE KEY-----
      MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCyOJ5garOYw0sm
      8TBCDSqQ/H1awGMzDYdB11xuHHsxYS2VepPMzMzryHR137I4dGFLhvdTvJUH8lUS
      ...
      -----END PRIVATE KEY-----
    • The certificate for the secured registry should contain the following Subject Alternative Name (SAN) entries:

      • Two service hostnames.

        For example:

        docker-registry.default.svc.cluster.local
        docker-registry.default.svc
      • Service IP address.

        For example:

        172.30.124.220

        Use the following command to get the container image registry service IP address:

        oc get service docker-registry --template='{{.spec.clusterIP}}'
      • Public hostname.

        For example:

        docker-registry-default.apps.example.com

        Use the following command to get the container image registry public hostname:

        oc get route docker-registry --template '{{.spec.host}}'

        For example, the server certificate should contain SAN details similar to the following:

        X509v3 Subject Alternative Name:
                       DNS:docker-registry-public.openshift.com, DNS:docker-registry.default.svc, DNS:docker-registry.default.svc.cluster.local, DNS:172.30.2.98, IP Address:172.30.2.98

        The registry console loads a certificate from the /etc/cockpit/ws-certs.d directory. It uses the last file with a .cert extension in alphabetical order. Therefore, the .cert file should contain at least two PEM blocks formatted in the OpenSSL style.

        If no certificate is found, a self-signed certificate is created using the openssl command and stored in the 0-self-signed.cert file.

  2. Create the secret:

    $ oc create secret generic console-secret \
        --from-file=/path/to/console.cert
  3. Add the secrets to the registry-console deployment configuration:

    $ oc set volume dc/registry-console --add --type=secret \
        --secret-name=console-secret -m /etc/cockpit/ws-certs.d

    This triggers a new deployment of the registry console to include your signed certificates.

2.2.7.3. Troubleshooting the Registry Console

2.2.7.3.1. Debug Mode

The registry console debug mode is enabled using an environment variable. The following command redeploys the registry console in debug mode:

$ oc set env dc registry-console G_MESSAGES_DEBUG=cockpit-ws,cockpit-wrapper

Enabling debug mode allows more verbose logging to appear in the registry console’s pod logs.
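For example, a minimal way to follow that more verbose output, assuming the console runs under the default registry-console deployment configuration:

$ oc logs -f dc/registry-console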

2.2.7.3.2. Display SSL Certificate Path

To check which certificate the registry console is using, a command can be run from inside the console pod.

  1. List the pods in the default project and find the registry console’s pod name:

    $ oc get pods -n default
    NAME                       READY     STATUS    RESTARTS   AGE
    registry-console-1-rssrw   1/1       Running   0          1d
  2. Using the pod name from the previous command, get the certificate path that the cockpit-ws process is using. This example shows the console using the auto-generated certificate:

    $ oc exec registry-console-1-rssrw remotectl certificate
    certificate: /etc/cockpit/ws-certs.d/0-self-signed.cert

2.3. Accessing the Registry

2.3.1. Viewing Logs

To view the logs for the container image registry, use the oc logs command with the deployment configuration:

$ oc logs dc/docker-registry
2015-05-01T19:48:36.300593110Z time="2015-05-01T19:48:36Z" level=info msg="version=v2.0.0+unknown"
2015-05-01T19:48:36.303294724Z time="2015-05-01T19:48:36Z" level=info msg="redis not configured" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002
2015-05-01T19:48:36.303422845Z time="2015-05-01T19:48:36Z" level=info msg="using inmemory layerinfo cache" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002
2015-05-01T19:48:36.303433991Z time="2015-05-01T19:48:36Z" level=info msg="Using OpenShift Auth handler"
2015-05-01T19:48:36.303439084Z time="2015-05-01T19:48:36Z" level=info msg="listening on :5000" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002

2.3.2. File Storage

Tag and image metadata is stored in OpenShift Container Platform, but the registry stores layer and signature data in a volume that is mounted into the registry container at /registry. As oc exec does not work on privileged containers, to view a registry’s contents you must manually SSH into the node housing the registry pod’s container, then run docker exec on the container itself:

  1. List the current pods to find the pod name of your container image registry:

    # oc get pods

    Then, use oc describe to find the host name for the node running the container:

    # oc describe pod <pod_name>
  2. Log in to the desired node:

    # ssh node.example.com
  3. List the running containers from the default project on the node host and identify the container ID for the container image registry:

    # docker ps --filter=name=registry_docker-registry.*_default_
  4. List the registry contents using the oc rsh command:

    # oc rsh dc/docker-registry find /registry
    /registry/docker
    /registry/docker/registry
    /registry/docker/registry/v2
    /registry/docker/registry/v2/blobs 1
    /registry/docker/registry/v2/blobs/sha256
    /registry/docker/registry/v2/blobs/sha256/ed
    /registry/docker/registry/v2/blobs/sha256/ed/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810
    /registry/docker/registry/v2/blobs/sha256/ed/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810/data 2
    /registry/docker/registry/v2/blobs/sha256/a3
    /registry/docker/registry/v2/blobs/sha256/a3/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
    /registry/docker/registry/v2/blobs/sha256/a3/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4/data
    /registry/docker/registry/v2/blobs/sha256/f7
    /registry/docker/registry/v2/blobs/sha256/f7/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845
    /registry/docker/registry/v2/blobs/sha256/f7/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845/data
    /registry/docker/registry/v2/repositories 3
    /registry/docker/registry/v2/repositories/p1
    /registry/docker/registry/v2/repositories/p1/pause 4
    /registry/docker/registry/v2/repositories/p1/pause/_manifests
    /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions
    /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256
    /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf
    /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures 5
    /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures/sha256
    /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures/sha256/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810
    /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures/sha256/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810/link 6
    /registry/docker/registry/v2/repositories/p1/pause/_uploads 7
    /registry/docker/registry/v2/repositories/p1/pause/_layers 8
    /registry/docker/registry/v2/repositories/p1/pause/_layers/sha256
    /registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
    /registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4/link 9
    /registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845
    /registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845/link
    1
    This directory stores all layers and signatures as blobs.
    2
    This file contains the blob’s contents.
    3
    This directory stores all the image repositories.
    4
    This directory is for a single image repository p1/pause.
    5
    This directory contains signatures for a particular image manifest revision.
    6
    This file contains a reference back to a blob (which contains the signature data).
    7
    This directory contains any layers that are currently being uploaded and staged for the given repository.
    8
    This directory contains links to all the layers this repository references.
    9
    This file contains a reference to a specific layer that has been linked into this repository via an image.

2.3.3. Accessing the Registry Directly

For advanced usage, you can access the registry directly to invoke docker commands. This allows you to push images to or pull them from the integrated registry directly using operations like docker push or docker pull. To do so, you must be logged in to the registry using the docker login command. The operations you can perform depend on your user permissions, as described in the following sections.

2.3.3.1. User Prerequisites

To access the registry directly, the user that you use must satisfy the following, depending on your intended usage:

  • For any direct access, you must have a regular user for your preferred identity provider. A regular user can generate an access token required for logging in to the registry. System users, such as system:admin, cannot obtain access tokens and, therefore, cannot access the registry directly.

    For example, if you are using HTPASSWD authentication, you can create one using the following command:

    # htpasswd /etc/origin/master/htpasswd <user_name>
  • For pulling images, for example when using the docker pull command, the user must have the registry-viewer role. To add this role:

    $ oc policy add-role-to-user registry-viewer <user_name>
  • For writing or pushing images, for example when using the docker push command, the user must have the registry-editor role. To add this role:

    $ oc policy add-role-to-user registry-editor <user_name>

For more information on user permissions, see Managing Role Bindings.

2.3.3.2. Logging in to the Registry

Note

Ensure your user satisfies the prerequisites for accessing the registry directly.

To log in to the registry directly:

  1. Ensure you are logged in to OpenShift Container Platform as a regular user:

    $ oc login
  2. Log in to the container image registry by using your access token:

    docker login -u openshift -p $(oc whoami -t) <registry_ip>:<port>
Note

You can pass any value for the username; the token contains all of the necessary information. Passing a username that contains colons results in a login failure.

2.3.3.3. Pushing and Pulling Images

After logging in to the registry, you can perform docker pull and docker push operations against your registry.

Important

You can pull arbitrary images, but if you have the system:registry role added, you can only push images to the registry in your project.

In the following examples, we use:

Component        Value
<registry_ip>    172.30.124.220
<port>           5000
<project>        openshift
<image>          busybox
<tag>            omitted (defaults to latest)

  1. Pull an arbitrary image:

    $ docker pull docker.io/busybox
  2. Tag the new image with the form <registry_ip>:<port>/<project>/<image>. The project name must appear in this pull specification for OpenShift Container Platform to correctly place and later access the image in the registry.

    $ docker tag docker.io/busybox 172.30.124.220:5000/openshift/busybox
    Note

    Your regular user must have the system:image-builder role for the specified project, which allows the user to write or push an image. Otherwise, the docker push in the next step will fail. To test, you can create a new project to push the busybox image.

  3. Push the newly-tagged image to your registry:

    $ docker push 172.30.124.220:5000/openshift/busybox
    ...
    cf2616975b4a: Image successfully pushed
    Digest: sha256:3662dd821983bc4326bee12caec61367e7fb6f6a3ee547cbaff98f77403cab55

2.3.4. Accessing Registry Metrics

The OpenShift Container Registry provides an endpoint for Prometheus metrics. Prometheus is a stand-alone, open source systems monitoring and alerting toolkit.

The metrics are exposed at the /extensions/v2/metrics path of the registry endpoint. However, this route must first be enabled; see Extended Registry Configuration for instructions.

The following is a simple example of a metrics query:

$ curl -s -u <user>:<secret> \ 1
    http://172.30.30.30:5000/extensions/v2/metrics | grep openshift | head -n 10

# HELP openshift_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which OpenShift was built.
# TYPE openshift_build_info gauge
openshift_build_info{gitCommit="67275e1",gitVersion="v3.6.0-alpha.1+67275e1-803",major="3",minor="6+"} 1
# HELP openshift_registry_request_duration_seconds Request latency summary in microseconds for each operation
# TYPE openshift_registry_request_duration_seconds summary
openshift_registry_request_duration_seconds{name="test/origin-pod",operation="blobstore.create",quantile="0.5"} 0
openshift_registry_request_duration_seconds{name="test/origin-pod",operation="blobstore.create",quantile="0.9"} 0
openshift_registry_request_duration_seconds{name="test/origin-pod",operation="blobstore.create",quantile="0.99"} 0
openshift_registry_request_duration_seconds_sum{name="test/origin-pod",operation="blobstore.create"} 0
openshift_registry_request_duration_seconds_count{name="test/origin-pod",operation="blobstore.create"} 5
1
<user> can be arbitrary, but <secret> must match the value specified in the registry configuration.

Another method to access the metrics is to use a cluster role. You still need to enable the endpoint, but you do not need to specify a <secret>. The part of the configuration file responsible for metrics should look like this:

openshift:
  version: 1.0
  metrics:
    enabled: true
...

You must create a cluster role if you do not already have one to access the metrics:

$ cat <<EOF |
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-scraper
rules:
- apiGroups:
  - image.openshift.io
  resources:
  - registry/metrics
  verbs:
  - get
EOF
oc create -f -

To add this role to a user, run the following command:

$ oc adm policy add-cluster-role-to-user prometheus-scraper <username>
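Once the role is bound, the metrics endpoint can be queried with the user's token instead of the shared secret. The following is a sketch under the assumption that your registry accepts bearer-token authentication for the metrics resource; verify it against your registry configuration:

$ curl -s -H "Authorization: Bearer $(oc whoami -t)" \
    http://172.30.30.30:5000/extensions/v2/metrics | head -n 10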

See the upstream Prometheus documentation for more advanced queries and recommended visualizers.

2.4. Securing and Exposing the Registry

2.4.1. Overview

By default, the OpenShift Container Platform registry is secured during cluster installation so that it serves traffic via TLS. A passthrough route is also created by default to expose the service externally.

If for any reason your registry has not been secured or exposed, see the following sections for steps on how to manually do so.

2.4.2. Manually Securing the Registry

To manually secure the registry to serve traffic via TLS:

  1. Deploy the registry.
  2. Fetch the service IP and port of the registry:

    $ oc get svc/docker-registry
    NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    docker-registry   ClusterIP   172.30.82.152   <none>        5000/TCP   1d
  3. You can use an existing server certificate, or create a key and server certificate valid for specified IPs and host names, signed by a specified CA. To create a server certificate for the registry service IP and the docker-registry.default.svc.cluster.local host name, run the following command from the first master listed in the Ansible host inventory file, by default /etc/ansible/hosts:

    $ oc adm ca create-server-cert \
        --signer-cert=/etc/origin/master/ca.crt \
        --signer-key=/etc/origin/master/ca.key \
        --signer-serial=/etc/origin/master/ca.serial.txt \
        --hostnames='docker-registry.default.svc.cluster.local,docker-registry.default.svc,172.30.124.220' \
        --cert=/etc/secrets/registry.crt \
        --key=/etc/secrets/registry.key

    If the registry will be exposed externally, add the public route host name in the --hostnames flag:

    --hostnames='mydocker-registry.example.com,docker-registry.default.svc.cluster.local,172.30.124.220' \

    See Redeploying Registry and Router Certificates for additional details on updating the default certificate so that the route is externally accessible.

    Note

    The oc adm ca create-server-cert command generates a certificate that is valid for two years. This can be altered with the --expire-days option, but for security reasons, it is recommended that you not exceed this value.

  4. Create the secret for the registry certificates:

    $ oc create secret generic registry-certificates \
        --from-file=/etc/secrets/registry.crt \
        --from-file=/etc/secrets/registry.key
  5. Add the secret to the registry pod’s service accounts (including the default service account):

    $ oc secrets link registry registry-certificates
    $ oc secrets link default  registry-certificates
    Note

    Limiting secrets to only the service accounts that reference them is disabled by default. This means that if serviceAccountConfig.limitSecretReferences is set to false (the default setting) in the master configuration file, linking secrets to a service account is not required.

  6. Pause the docker-registry service:

    $ oc rollout pause dc/docker-registry
  7. Add the secret volume to the registry deployment configuration:

    $ oc set volume dc/docker-registry --add --type=secret \
        --secret-name=registry-certificates -m /etc/secrets
  8. Enable TLS by adding the following environment variables to the registry deployment configuration:

    $ oc set env dc/docker-registry \
        REGISTRY_HTTP_TLS_CERTIFICATE=/etc/secrets/registry.crt \
        REGISTRY_HTTP_TLS_KEY=/etc/secrets/registry.key

    See the Configuring a registry section of the Docker documentation for more information.

  9. Update the scheme used for the registry’s liveness probe from HTTP to HTTPS:

    $ oc patch dc/docker-registry -p '{"spec": {"template": {"spec": {"containers":[{
        "name":"registry",
        "livenessProbe":  {"httpGet": {"scheme":"HTTPS"}}
      }]}}}}'
  10. If your registry was initially deployed on OpenShift Container Platform 3.2 or later, update the scheme used for the registry’s readiness probe from HTTP to HTTPS:

    $ oc patch dc/docker-registry -p '{"spec": {"template": {"spec": {"containers":[{
        "name":"registry",
        "readinessProbe":  {"httpGet": {"scheme":"HTTPS"}}
      }]}}}}'
  11. Resume the docker-registry service:

    $ oc rollout resume dc/docker-registry
  12. Validate the registry is running in TLS mode. Wait until the latest docker-registry deployment completes and verify the Docker logs for the registry container. You should find an entry for listening on :5000, tls.

    $ oc logs dc/docker-registry | grep tls
    time="2015-05-27T05:05:53Z" level=info msg="listening on :5000, tls" instance.id=deeba528-c478-41f5-b751-dc48e4935fc2
  13. Copy the CA certificate to the Docker certificates directory. This must be done on all nodes in the cluster:

    $ dcertsdir=/etc/docker/certs.d
    $ destdir_addr=$dcertsdir/172.30.124.220:5000
    $ destdir_name=$dcertsdir/docker-registry.default.svc.cluster.local:5000
    
    $ sudo mkdir -p $destdir_addr $destdir_name
    $ sudo cp ca.crt $destdir_addr    1
    $ sudo cp ca.crt $destdir_name
    1
    The ca.crt file is a copy of /etc/origin/master/ca.crt on the master.
  14. When using authentication, some versions of docker also require you to configure your cluster to trust the certificate at the OS level.

    1. Copy the certificate:

      $ cp /etc/origin/master/ca.crt /etc/pki/ca-trust/source/anchors/myregistrydomain.com.crt
    2. Run:

      $ update-ca-trust enable
  15. Remove the --insecure-registry option only for this particular registry in the /etc/sysconfig/docker file. Then, reload the daemon and restart the docker service to reflect this configuration change:

    $ sudo systemctl daemon-reload
    $ sudo systemctl restart docker
  16. Validate the docker client connection. Running docker push to the registry or docker pull from the registry should succeed. Make sure you have logged into the registry.

    $ docker tag|push <registry/image> <internal_registry/project/image>

    For example:

    $ docker pull busybox
    $ docker tag docker.io/busybox 172.30.124.220:5000/openshift/busybox
    $ docker push 172.30.124.220:5000/openshift/busybox
    ...
    cf2616975b4a: Image successfully pushed
    Digest: sha256:3662dd821983bc4326bee12caec61367e7fb6f6a3ee547cbaff98f77403cab55

2.4.3. Manually Exposing a Secure Registry

Instead of logging in to the OpenShift Container Platform registry from within the OpenShift Container Platform cluster, you can gain external access to it by first securing the registry and then exposing it with a route. This allows you to log in to the registry from outside the cluster using the route address, and to tag and push images using the route host.

  1. Each of the following prerequisite steps is performed by default during a typical cluster installation. If they have not been, perform them manually:

  2. A passthrough route should have been created by default for the registry during the initial cluster installation:

    1. Verify whether the route exists:

      $ oc get route/docker-registry -o yaml
      apiVersion: v1
      kind: Route
      metadata:
        name: docker-registry
      spec:
        host: <host> 1
        to:
          kind: Service
          name: docker-registry 2
        tls:
          termination: passthrough 3
      1
      The host for your route. You must be able to resolve this name externally via DNS to the router’s IP address.
      2
      The service name for your registry.
      3
      Specifies this route as a passthrough route.
      Note

      Re-encrypt routes are also supported for exposing the secure registry.

    2. If it does not exist, create the route via the oc create route passthrough command, specifying the registry as the route’s service. By default, the name of the created route is the same as the service name:

      1. Get the docker-registry service details:

        $ oc get svc
        NAME              CLUSTER_IP       EXTERNAL_IP   PORT(S)                 SELECTOR                  AGE
        docker-registry   172.30.69.167    <none>        5000/TCP                docker-registry=default   4h
        kubernetes        172.30.0.1       <none>        443/TCP,53/UDP,53/TCP   <none>                    4h
        router            172.30.172.132   <none>        80/TCP                  router=router             4h
      2. Create the route:

        $ oc create route passthrough    \
            --service=docker-registry    \ 1
            --hostname=<host>
        route "docker-registry" created     2
        1
        Specifies the registry as the route’s service.
        2
        The route name is identical to the service name.
  3. Next, you must trust the certificates being used for the registry on your host system to allow the host to push and pull images. The certificates referenced were created when you secured your registry.

    $ sudo mkdir -p /etc/docker/certs.d/<host>
    $ sudo cp <ca_certificate_file> /etc/docker/certs.d/<host>
    $ sudo systemctl restart docker
  4. Log in to the registry using the information from securing the registry. However, this time point to the host name used in the route rather than your service IP. When logging in to a secured and exposed registry, make sure you specify the registry in the docker login command:

    # docker login -e user@company.com \
        -u f83j5h6 \
        -p Ju1PeM47R0B92Lk3AZp-bWJSck2F7aGCiZ66aFGZrs2 \
        <host>
  5. You can now tag and push images using the route host. For example, to tag and push a busybox image in a project called test:

    $ oc get imagestreams -n test
    NAME      DOCKER REPO   TAGS      UPDATED
    
    $ docker pull busybox
    $ docker tag busybox <host>/test/busybox
    $ docker push <host>/test/busybox
    The push refers to a repository [<host>/test/busybox] (len: 1)
    8c2e06607696: Image already exists
    6ce2e90b0bc7: Image successfully pushed
    cf2616975b4a: Image successfully pushed
    Digest: sha256:6c7e676d76921031532d7d9c0394d0da7c2906f4cb4c049904c4031147d8ca31
    
    $ docker pull <host>/test/busybox
    latest: Pulling from <host>/test/busybox
    cf2616975b4a: Already exists
    6ce2e90b0bc7: Already exists
    8c2e06607696: Already exists
    Digest: sha256:6c7e676d76921031532d7d9c0394d0da7c2906f4cb4c049904c4031147d8ca31
    Status: Image is up to date for <host>/test/busybox:latest
    
    $ oc get imagestreams -n test
    NAME      DOCKER REPO                       TAGS      UPDATED
    busybox   172.30.11.215:5000/test/busybox   latest    2 seconds ago
    Note

    Your image streams will have the IP address and port of the registry service, not the route name and port. See oc get imagestreams for details.

2.4.4. Manually Exposing a Non-Secure Registry

Instead of securing the registry in order to expose the registry, you can simply expose a non-secure registry for non-production OpenShift Container Platform environments. This allows you to have an external route to the registry without using SSL certificates.

Warning

Only non-production environments should expose a non-secure registry to external access.

To expose a non-secure registry:

  1. Expose the registry:

    # oc expose service docker-registry --hostname=<hostname> -n default

    This creates a route similar to the following:

    apiVersion: v1
    kind: Route
    metadata:
      creationTimestamp: null
      labels:
        docker-registry: default
      name: docker-registry
    spec:
      host: registry.example.com
      port:
        targetPort: "5000"
      to:
        kind: Service
        name: docker-registry
    status: {}
  2. Verify that the route has been created successfully:

    # oc get route
    NAME              HOST/PORT                    PATH      SERVICE           LABELS                    INSECURE POLICY   TLS TERMINATION
    docker-registry   registry.example.com            docker-registry   docker-registry=default
  3. Check the health of the registry:

    $ curl -v http://registry.example.com/healthz

    Expect an HTTP 200/OK message.

    After exposing the registry, update your /etc/sysconfig/docker file by adding the port number to the OPTIONS entry. For example:

    OPTIONS='--selinux-enabled --insecure-registry=172.30.0.0/16 --insecure-registry registry.example.com:80'
    Important

    The above options should be added on the client from which you are trying to log in.

    Also, ensure that Docker is running on the client.

When logging in to the non-secured and exposed registry, make sure you specify the registry in the docker login command. For example:

# docker login -e user@company.com \
    -u f83j5h6 \
    -p Ju1PeM47R0B92Lk3AZp-bWJSck2F7aGCiZ66aFGZrs2 \
    <host>

2.5. Extended Registry Configuration

2.5.1. Maintaining the Registry IP Address

OpenShift Container Platform refers to the integrated registry by its service IP address, so if you decide to delete and recreate the docker-registry service, you can ensure a completely transparent transition by arranging to re-use the old IP address in the new service. If a new IP address cannot be avoided, you can minimize cluster disruption by rebooting only the masters.

Re-using the Address
To re-use the IP address, you must save the IP address of the old docker-registry service prior to deleting it, and arrange to replace the newly assigned IP address with the saved one in the new docker-registry service.
  1. Make a note of the clusterIP for the service:

    $ oc get svc/docker-registry -o yaml | grep clusterIP:
  2. Delete the service:

    $ oc delete svc/docker-registry dc/docker-registry
  3. Create the registry definition in registry.yaml, replacing <options> with, for example, those used in step 3 of the instructions in the Non-Production Use section:

    $ oc adm registry <options> -o yaml > registry.yaml
  4. Edit registry.yaml, find the Service there, and change its clusterIP to the address noted in step 1.
  5. Create the registry using the modified registry.yaml:

    $ oc create -f registry.yaml
Rebooting the Masters
If you are unable to re-use the IP address, any operation that uses a pull specification that includes the old IP address will fail. To minimize cluster disruption, you must reboot the masters:
# master-restart api
# master-restart controllers

This ensures that the old registry URL, which includes the old IP address, is cleared from the cache.

Note

We recommend against rebooting the entire cluster because that incurs unnecessary downtime for pods and does not actually clear the cache.

2.5.2. Configuring an External Registry Search List

You can use the /etc/containers/registries.conf file to create a list of Docker registries to search for container images.

The /etc/containers/registries.conf file is a list of registry servers that OpenShift Container Platform should search against when a user pulls an image using the image short name, such as: myimage:latest. You can customize the order of the search, specify secure and insecure registries, and define a blocked registry list. OpenShift Container Platform does not search or allow pulls from registries on the blocked list.

For example, if a user wants to pull the myimage:latest image, OpenShift Container Platform searches the registries in the order they appear in the list until it finds the myimage:latest image.

The registry search list allows you to curate a set of images and templates that are available for download by OpenShift Container Platform users. You can place these images in one or more Docker registries, add the registry to the list, and pull those images into your cluster.

Note

When using the registry search list, OpenShift Container Platform will not pull images from a registry that is not in the search list.

To configure a registry search list:

  1. Edit the /etc/containers/registries.conf file to add or edit the following parameters as needed:

    [registries.search] 1
    registries = ["reg1.example.com", "reg2.example.com"]
    
    [registries.insecure] 2
    registries = ["reg3.example.com"]
    
    [registries.block] 3
    registries = ['docker.io']
    1
    Specify the secure registries from which users can download images using SSL/TLS.
    2
    Specify the insecure registries from which users can download images without TLS.
    3
    Specify the registries from which users cannot download images.

2.5.3. Setting the Registry Host Name

You can configure the host name and port the registry is known by for both internal and external references. By doing this, image streams will provide hostname based push and pull specifications for images, allowing consumers of the images to be isolated from changes to the registry service IP and potentially allowing image streams and their references to be portable between clusters.

To set the hostname used to reference the registry from within the cluster, set the internalRegistryHostname in the imagePolicyConfig section of the master configuration file. The external host name is controlled by setting the externalRegistryHostname value in the same location.

Image Policy Configuration

imagePolicyConfig:
  internalRegistryHostname: docker-registry.default.svc.cluster.local:5000
  externalRegistryHostname: docker-registry.mycompany.com

The registry itself must be configured with the same internal hostname value. This can be accomplished by setting the REGISTRY_OPENSHIFT_SERVER_ADDR environment variable on the registry deployment configuration, or by setting the value in the OpenShift section of the registry configuration.

Note

If you have enabled TLS for your registry the server certificate must include the hostnames by which you expect the registry to be referenced. See securing the registry for instructions on adding hostnames to the server certificate.

2.5.4. Overriding the Registry Configuration

You can override the integrated registry’s default configuration, found by default at /config.yml in a running registry’s container, with your own custom configuration.

Note

Upstream configuration options in this file may also be overridden using environment variables. The middleware section is an exception as there are just a few options that can be overridden using environment variables. Learn how to override specific configuration options.

To enable management of the registry configuration file directly and deploy an updated configuration using a ConfigMap:

  1. Deploy the registry.
  2. Edit the registry configuration file locally as needed. The initial YAML file deployed on the registry is provided below. Review supported options.

    Registry Configuration File

    version: 0.1
    log:
      level: debug
    http:
      addr: :5000
    storage:
      cache:
        blobdescriptor: inmemory
      filesystem:
        rootdirectory: /registry
      delete:
        enabled: true
    auth:
      openshift:
        realm: openshift
    middleware:
      registry:
        - name: openshift
      repository:
        - name: openshift
          options:
            acceptschema2: true
            pullthrough: true
            enforcequota: false
            projectcachettl: 1m
            blobrepositorycachettl: 10m
      storage:
        - name: openshift
    openshift:
      version: 1.0
      metrics:
        enabled: false
        secret: <secret>

  3. Create a ConfigMap holding the content of each file in this directory:

    $ oc create configmap registry-config \
        --from-file=</path/to/custom/registry/config.yml>/
  4. Add the registry-config ConfigMap as a volume to the registry’s deployment configuration to mount the custom configuration file at /etc/docker/registry/:

    $ oc set volume dc/docker-registry --add --type=configmap \
        --configmap-name=registry-config -m /etc/docker/registry/
  5. Update the registry to reference the configuration path from the previous step by adding the following environment variable to the registry’s deployment configuration:

    $ oc set env dc/docker-registry \
        REGISTRY_CONFIGURATION_PATH=/etc/docker/registry/config.yml

This may be performed as an iterative process to achieve the desired configuration. For example, during troubleshooting, the configuration may be temporarily updated to put it in debug mode.

To update an existing configuration:

Warning

This procedure will overwrite the currently deployed registry configuration.

  1. Edit the local registry configuration file, config.yml.
  2. Delete the registry-config configmap:

    $ oc delete configmap registry-config
  3. Recreate the configmap to reference the updated configuration file:

    $ oc create configmap registry-config \
        --from-file=</path/to/custom/registry/config.yml>/
  4. Redeploy the registry to read the updated configuration:

    $ oc rollout latest docker-registry
Tip

Maintain configuration files in a source control repository.

2.5.5. Registry Configuration Reference

There are many configuration options available in the upstream docker distribution library. Not all configuration options are supported or enabled. Use this section as a reference when overriding the registry configuration.

Note

Upstream configuration options in this file may also be overridden using environment variables. However, the middleware section may not be overridden using environment variables. Learn how to override specific configuration options.

2.5.5.1. Log

Upstream options are supported.

Example:

log:
  level: debug
  formatter: text
  fields:
    service: registry
    environment: staging

2.5.5.2. Hooks

Mail hooks are not supported.

2.5.5.3. Storage

This section lists the supported registry storage drivers. See the container image registry documentation for more information.

The following list includes storage drivers that need to be configured in the registry’s configuration file:

General registry storage configuration options are supported. See the container image registry documentation for more information.

The following storage options need to be configured through the filesystem driver:

Note

For more information on supported persistent storage drivers, see Configuring Persistent Storage and Persistent Storage Examples.

General Storage Configuration Options

storage:
  delete:
    enabled: true 1
  redirect:
    disable: false
  cache:
    blobdescriptor: inmemory
  maintenance:
    uploadpurging:
      enabled: true
      age: 168h
      interval: 24h
      dryrun: false
    readonly:
      enabled: false

1
This entry is mandatory for image pruning to work properly.

2.5.5.4. Auth

Auth options should not be altered. The openshift extension is the only supported option.

auth:
  openshift:
    realm: openshift

2.5.5.5. Middleware

The repository middleware extension allows you to configure the OpenShift Container Platform middleware responsible for interaction with OpenShift Container Platform and image proxying.

middleware:
  registry:
    - name: openshift 1
  repository:
    - name: openshift 2
      options:
        acceptschema2: true 3
        pullthrough: true 4
        mirrorpullthrough: true 5
        enforcequota: false 6
        projectcachettl: 1m 7
        blobrepositorycachettl: 10m 8
  storage:
    - name: openshift 9
1 2 9
These entries are mandatory. Their presence ensures required components are loaded. These values should not be changed.
3
Allows you to store manifest schema v2 during a push to the registry. See below for more details.
4
Allows the registry to act as a proxy for remote blobs. See below for more details.
5
Allows the registry to cache blobs served from remote registries for fast access later. The mirroring starts when the blob is accessed for the first time. The option has no effect if pullthrough is disabled.
6
Prevents blob uploads that exceed the size limit defined in the targeted project.
7
An expiration timeout for limits cached in the registry. The lower the value, the less time it takes for the limit changes to propagate to the registry. However, the registry will query limits from the server more frequently and, as a consequence, pushes will be slower.
8
An expiration timeout for remembered associations between blob and repository. The higher the value, the higher the probability of a fast lookup and more efficient registry operation. On the other hand, memory usage will rise, as will the risk of serving an image layer to a user who is no longer authorized to access it.
2.5.5.5.1. S3 Driver Configuration

If you want to use a S3 region that is not supported by the integrated registry you are using, then you can specify a regionendpoint to avoid the region validation error.

For more information about using Amazon Simple Storage Service storage, see Amazon S3 as a Storage Back-end.

For example:

version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    blobdescriptor: inmemory
  delete:
    enabled: true
  s3:
    accesskey: BJKMSZBRESWJQXRWMAEQ
    secretkey: 5ah5I91SNXbeoUXXDasFtadRqOdy62JzlnOW1goS
    bucket: docker.myregistry.com
    region: eu-west-3
    regionendpoint: https://s3.eu-west-3.amazonaws.com
auth:
  openshift:
    realm: openshift
middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
  storage:
    - name: openshift
Note

Verify that the region and regionendpoint fields are consistent with each other. Otherwise the integrated registry will start, but it will not be able to read or write anything to the S3 storage.

The regionendpoint parameter can also be useful if you use S3-compatible storage other than Amazon S3.

2.5.5.5.2. CloudFront Middleware

The CloudFront middleware extension can be added to support the AWS CloudFront CDN storage provider. CloudFront middleware speeds up distribution of image content internationally. The blobs are distributed to several edge locations around the world. The client is always directed to the edge with the lowest latency.

Note

The CloudFront middleware extension can only be used with S3 storage. It is utilized only during blob serving. Therefore, only blob downloads can be sped up, not uploads.

The following is an example of minimal configuration of S3 storage driver with a CloudFront middleware:

version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    blobdescriptor: inmemory
  delete:
    enabled: true
  s3: 1
    accesskey: BJKMSZBRESWJQXRWMAEQ
    secretkey: 5ah5I91SNXbeoUXXDasFtadRqOdy62JzlnOW1goS
    region: us-east-1
    bucket: docker.myregistry.com
auth:
  openshift:
    realm: openshift
middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
  storage:
    - name: cloudfront 2
      options:
        baseurl: https://jrpbyn0k5k88bi.cloudfront.net/ 3
        privatekey: /etc/docker/cloudfront-ABCEDFGHIJKLMNOPQRST.pem 4
        keypairid: ABCEDFGHIJKLMNOPQRST 5
    - name: openshift
1
The S3 storage must be configured the same way regardless of CloudFront middleware.
2
The CloudFront storage middleware needs to be listed before OpenShift middleware.
3
The CloudFront base URL. In the AWS management console, this is listed as Domain Name of CloudFront distribution.
4
The location of your AWS private key on the filesystem. This must not be confused with your Amazon EC2 key pair. See the AWS documentation on creating CloudFront key pairs for your trusted signers. The file needs to be mounted as a secret into the registry pod.
5
The ID of your Cloudfront key pair.
2.5.5.5.3. Overriding Middleware Configuration Options

The middleware section cannot be overridden using environment variables. There are a few exceptions, however. For example:

middleware:
  repository:
    - name: openshift
      options:
        acceptschema2: true 1
        pullthrough: true 2
        mirrorpullthrough: true 3
        enforcequota: false 4
        projectcachettl: 1m 5
        blobrepositorycachettl: 10m 6
1
A configuration option that can be overridden by the boolean environment variable REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ACCEPTSCHEMA2, which allows for the ability to accept manifest schema v2 on manifest put requests. Recognized values are true and false (which applies to all the other boolean variables below).
2
A configuration option that can be overridden by the boolean environment variable REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_PULLTHROUGH, which enables a proxy mode for remote repositories.
3
A configuration option that can be overridden by the boolean environment variable REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_MIRRORPULLTHROUGH, which instructs registry to mirror blobs locally if serving remote blobs.
4
A configuration option that can be overridden by the boolean environment variable REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA, which allows the ability to turn quota enforcement on or off. By default, quota enforcement is off.
5
A configuration option that can be overridden by the environment variable REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_PROJECTCACHETTL, specifying an eviction timeout for project quota objects. It takes a valid time duration string (for example, 2m). If empty, you get the default timeout. If zero (0m), caching is disabled.
6
A configuration option that can be overridden by the environment variable REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_BLOBREPOSITORYCACHETTL, specifying an eviction timeout for associations between blob and containing repository. The format of the value is the same as in projectcachettl case.
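For example, a minimal sketch of turning quota enforcement on through the environment variable named above, applied to the default docker-registry deployment configuration:

$ oc set env dc/docker-registry \
    REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA=true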
2.5.5.5.4. Image Pullthrough

If enabled, the registry will attempt to fetch a requested blob from a remote registry unless the blob exists locally. The remote candidates are calculated from the DockerImage entries stored in the status of the image stream that a client pulls from. All the unique remote registry references in such entries are tried in turn until the blob is found.

Pullthrough will only occur if an image stream tag exists for the image being pulled. For example, if the image being pulled is docker-registry.default.svc:5000/yourproject/yourimage:prod then the registry will look for an image stream tag named yourimage:prod in the project yourproject. If it finds one, it will attempt to pull the image using the dockerImageReference associated with that image stream tag.

When performing pullthrough, the registry uses pull credentials found in the project associated with the image stream tag that is being referenced. This capability also makes it possible for you to pull images that reside on a registry you do not have credentials to access, as long as you have access to the image stream tag that references the image.

You must ensure that your registry has appropriate certificates to trust any external registries you perform pullthrough against. The certificates need to be placed in the /etc/pki/tls/certs directory in the pod. You can mount the certificates using a configuration map or secret. Note that the mount replaces the entire /etc/pki/tls/certs directory, so your secret or configuration map must include the new certificates as well as the system certificates they replace, as in the sketch below.
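A minimal sketch of mounting such a bundle with a configuration map, assuming the default docker-registry deployment configuration name and illustrative file names; remember to include the system CA bundle because the mount replaces the entire directory:

$ oc create configmap registry-pullthrough-ca \
    --from-file=ca-bundle.crt=/etc/pki/tls/certs/ca-bundle.crt \
    --from-file=external-registry-ca.crt=/path/to/external-registry-ca.crt
$ oc set volume dc/docker-registry --add --name=pullthrough-ca \
    --type=configmap --configmap-name=registry-pullthrough-ca \
    --mount-path=/etc/pki/tls/certs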

Note that by default image stream tags use a reference policy type of Source, which means that when the image stream reference is resolved to an image pull specification, the specification used points to the source of the image. For images hosted on external registries this is the external registry, so the resource references and pulls the image from the external registry (for example, registry.redhat.io/openshift3/jenkins-2-rhel7), and pullthrough does not apply. To ensure that resources referencing image streams use a pull specification that points to the internal registry, the image stream tag should use a reference policy type of Local. More information is available on Reference Policy.
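For example, a tag with a local reference policy can be created with oc tag; this is a sketch in which the project and tag names are illustrative:

$ oc tag --source=docker registry.redhat.io/openshift3/jenkins-2-rhel7:latest \
    myproject/jenkins:prod --reference-policy=local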

This feature is on by default. However, it can be disabled using a configuration option.

By default, all the remote blobs served this way are stored locally for subsequent faster access unless mirrorpullthrough is disabled. The downside of this mirroring feature is an increased storage usage.

Note

The mirroring starts when a client tries to fetch at least a single byte of the blob. To pre-fetch a particular image into the integrated registry before it is actually needed, you can run the following command, which requests the first byte of every layer of the given image stream tag (${PROJECT} is the project containing the image stream):

$ oc get imagestreamtag/${IS}:${TAG} -o jsonpath='{ .image.dockerImageLayers[*].name }' | \
  xargs -n1 -I {} curl -H "Range: bytes=0-1" -u user:${TOKEN} \
  http://${REGISTRY_IP}:${PORT}/v2/${PROJECT}/${IS}/blobs/{}
Note

This OpenShift Container Platform mirroring feature should not be confused with the upstream registry pull through cache feature, which is a similar but distinct capability.

2.5.5.5.5. Manifest Schema v2 Support

Each image has a manifest describing its blobs, instructions for running it, and additional metadata. The manifest is versioned, with each version having a different structure and fields as it evolves over time. The same image can be represented by multiple manifest versions, and each version has a different digest.

The registry currently supports manifest v2 schema 1 (schema1) and manifest v2 schema 2 (schema2). The former is being obsoleted but will be supported for an extended amount of time.

The default configuration is to store schema2.

You should be wary of compatibility issues with various Docker clients:

  • Docker clients of version 1.9 or older support only schema1. Any manifest these clients pull or push will be of this legacy schema.
  • Docker clients of version 1.10 or newer support both schema1 and schema2. By default, they push the latter to the registry if it supports the newer schema.

For an image stored with schema1, the registry always returns it unchanged to the client. Schema2 is transferred unchanged only to newer Docker clients; for older clients, it is converted on the fly to schema1.

This has significant consequences. For example, an image pushed to the registry by a newer Docker client cannot be pulled by digest with an older Docker client, because the stored image's manifest is of schema2 and its digest can be used to pull only that version of the manifest.

Once you are confident that all your registry clients support schema2, it is safe to enable its support in the registry. See the middleware configuration reference above for the relevant option.
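For example, the option can be enabled on a deployed registry by means of the environment variable described earlier, assuming the default docker-registry deployment configuration name:

$ oc set env dc/docker-registry REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ACCEPTSCHEMA2=true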

2.5.5.6. OpenShift

This section reviews the configuration of global settings for features specific to OpenShift Container Platform. In a future release, openshift-related settings in the Middleware section will be obsoleted.

Currently, this section allows you to configure registry metrics collection:

openshift:
  version: 1.0 1
  server:
    addr: docker-registry.default.svc 2
  metrics:
    enabled: false 3
    secret: <secret> 4
  requests:
    read:
      maxrunning: 10 5
      maxinqueue: 10 6
      maxwaitinqueue: 2m 7
    write:
      maxrunning: 10 8
      maxinqueue: 10 9
      maxwaitinqueue: 2m 10
1
A mandatory entry specifying the configuration version of this section. The only supported value is 1.0.
2
The hostname of the registry. Should be set to the same value configured on the master. It can be overridden by the environment variable REGISTRY_OPENSHIFT_SERVER_ADDR.
3
Can be set to true to enable metrics collection. It can be overridden by the boolean environment variable REGISTRY_OPENSHIFT_METRICS_ENABLED.
4
A secret used to authorize client requests. Metrics clients must use it as a bearer token in the Authorization header. It can be overridden by the environment variable REGISTRY_OPENSHIFT_METRICS_SECRET.
5
Maximum number of simultaneous pull requests. It can be overridden by the environment variable REGISTRY_OPENSHIFT_REQUESTS_READ_MAXRUNNING. Zero indicates no limit.
6
Maximum number of queued pull requests. It can be overridden by the environment variable REGISTRY_OPENSHIFT_REQUESTS_READ_MAXINQUEUE. Zero indicates no limit.
7
Maximum time a pull request can wait in the queue before being rejected. It can be overridden by the environment variable REGISTRY_OPENSHIFT_REQUESTS_READ_MAXWAITINQUEUE. Zero indicates no limit.
8
Maximum number of simultaneous push requests. It can be overridden by the environment variable REGISTRY_OPENSHIFT_REQUESTS_WRITE_MAXRUNNING. Zero indicates no limit.
9
Maximum number of queued push requests. It can be overridden by the environment variable REGISTRY_OPENSHIFT_REQUESTS_WRITE_MAXINQUEUE. Zero indicates no limit.
10
Maximum time a push request can wait in the queue before being rejected. It can be overridden by the environment variable REGISTRY_OPENSHIFT_REQUESTS_WRITE_MAXWAITINQUEUE. Zero indicates no limit.

See Accessing Registry Metrics for usage information.
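For example, metrics collection can be enabled on a deployed registry via the environment variables described above, assuming the default docker-registry deployment configuration name; the secret value is illustrative:

$ oc set env dc/docker-registry \
    REGISTRY_OPENSHIFT_METRICS_ENABLED=true \
    REGISTRY_OPENSHIFT_METRICS_SECRET=<secret>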

2.5.5.7. Reporting

Reporting is unsupported.

2.5.5.8. HTTP

Upstream options are supported. Learn how to alter these settings via environment variables. Only the tls section should be altered. For example:

http:
  addr: :5000
  tls:
    certificate: /etc/secrets/registry.crt
    key: /etc/secrets/registry.key
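For example, assuming the upstream convention of mapping configuration paths to REGISTRY_-prefixed environment variables, the same tls settings could be applied to the default docker-registry deployment configuration as:

$ oc set env dc/docker-registry \
    REGISTRY_HTTP_TLS_CERTIFICATE=/etc/secrets/registry.crt \
    REGISTRY_HTTP_TLS_KEY=/etc/secrets/registry.key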

2.5.5.9. Notifications

Upstream options are supported. The REST API Reference provides more comprehensive integration options.

Example:

notifications:
  endpoints:
    - name: registry
      disabled: false
      url: https://url:port/path
      headers:
        Accept:
          - text/plain
      timeout: 500
      threshold: 5
      backoff: 1000

2.5.5.10. Redis

Redis is not supported.

2.5.5.11. Health

Upstream options are supported. The registry deployment configuration provides an integrated health check at /healthz.

2.5.5.12. Proxy

Proxy configuration should not be enabled. This functionality is provided by the OpenShift Container Platform repository middleware extension, pullthrough: true.

2.5.5.13. Cache

The integrated registry actively caches data to reduce the number of calls to slow external resources. There are two caches:

  1. The storage cache, which is used to cache blob metadata. This cache does not have an expiration time; the data remains until it is explicitly deleted.
  2. The application cache, which contains associations between blobs and repositories. The data in this cache has an expiration time.

In order to completely turn off the cache, you need to change the configuration:

version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    blobdescriptor: "" 1
openshift:
  version: 1.0
  cache:
    disabled: true 2
    blobrepositoryttl: 10m
1
Disables the cache of metadata accessed in the storage backend. Without this cache, the registry server constantly accesses the backend for metadata.
2
Disables the cache that contains the blob and repository associations. Without this cache, the registry server continually re-queries the data from the master API and recomputes the associations.

2.6. Known Issues

2.6.1. Overview

The following are the known issues when deploying or using the integrated registry.

2.6.2. Concurrent Build with Registry Pull-through

The local docker-registry deployment takes on additional load. By default, it now caches content from registry.redhat.io. The images from registry.redhat.io for STI builds are now stored in the local registry, and attempts to pull them result in pulls from the local docker-registry. As a result, extreme numbers of concurrent builds can cause pull timeouts, and the builds can fail. To alleviate the issue, scale the docker-registry deployment to more than one replica. Check for timeouts in the builder pods' logs.
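For example, to run additional registry replicas (the replica count here is illustrative):

$ oc scale dc/docker-registry --replicas=3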

2.6.3. Image Push Errors with Scaled Registry Using Shared NFS Volume

When using a scaled registry with a shared NFS volume, you may see one of the following errors during the push of an image:

  • digest invalid: provided digest did not match uploaded content
  • blob upload unknown
  • blob upload invalid

These errors are returned by an internal registry service when Docker attempts to push the image. The cause originates in the synchronization of file attributes across nodes. Factors such as NFS client-side caching, network latency, and layer size can all contribute to errors when pushing an image using the default round-robin load balancing configuration.

You can perform the following steps to minimize the probability of such a failure:

  1. Ensure that the sessionAffinity of your docker-registry service is set to ClientIP:

    $ oc get svc/docker-registry --template='{{.spec.sessionAffinity}}'

    This should return ClientIP, which is the default in recent OpenShift Container Platform versions. If not, change it:

    $ oc patch svc/docker-registry -p '{"spec":{"sessionAffinity": "ClientIP"}}'
  2. Ensure that the NFS export line of your registry volume on your NFS server has the no_wdelay option listed, as in the example export entry below. The no_wdelay option prevents the server from delaying writes, which greatly improves read-after-write consistency, a requirement of the registry.
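An illustrative /etc/exports entry on the NFS server; the export path and the other options are examples, and only no_wdelay is the point here:

/exports/registry  *(rw,sync,no_wdelay,root_squash)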
Important

Testing shows issues with using the RHEL NFS server as a storage backend for the container image registry. This includes the OpenShift Container Registry and Quay. Therefore, using the RHEL NFS server to back PVs used by core services is not recommended.

Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift core components.

2.6.4. Pull of Internally Managed Image Fails with "not found" Error

This error occurs when the image being pulled was pushed to an image stream different from the one it is being pulled from. This is caused by re-tagging a built image into an arbitrary image stream:

$ oc tag srcimagestream:latest anyproject/pullimagestream:latest

And subsequently pulling from it, using an image reference such as:

internal.registry.url:5000/anyproject/pullimagestream:latest

During a manual Docker pull, this will produce a similar error:

Error: image anyproject/pullimagestream:latest not found

To prevent this, avoid re-tagging internally managed images into other image streams entirely, or re-push the built image to the desired namespace manually.

2.6.5. Image Push Fails with "500 Internal Server Error" on S3 Storage

Problems have been reported when the registry runs on an S3 storage back end. Pushing to a container image registry occasionally fails with the following error:

Received unexpected HTTP status: 500 Internal Server Error

To debug this, view the registry logs and look for error messages like the following, occurring at the time of the failed push:

time="2016-03-30T15:01:21.22287816-04:00" level=error msg="unknown error completing upload: driver.Error{DriverName:\"s3\", Enclosed:(*url.Error)(0xc20901cea0)}" http.request.method=PUT
...
time="2016-03-30T15:01:21.493067808-04:00" level=error msg="response completed with error" err.code=UNKNOWN err.detail="s3: Put https://s3.amazonaws.com/oso-tsi-docker/registry/docker/registry/v2/blobs/sha256/ab/abe5af443833d60cf672e2ac57589410dddec060ed725d3e676f1865af63d2e2/data: EOF" err.message="unknown error" http.request.method=PUT
...
time="2016-04-02T07:01:46.056520049-04:00" level=error msg="error putting into main store: s3: The request signature we calculated does not match the signature you provided. Check your key and signing method." http.request.method=PUT

If you see such errors, contact your Amazon S3 support. There may be a problem in your region or with your particular bucket.

2.6.6. Image Pruning Fails

If you encounter the following error when pruning images:

BLOB sha256:49638d540b2b62f3b01c388e9d8134c55493b1fa659ed84e97cb59b87a6b8e6c error deleting blob

And your registry log contains the following information:

error deleting blob \"sha256:49638d540b2b62f3b01c388e9d8134c55493b1fa659ed84e97cb59b87a6b8e6c\": operation unsupported

It means that your custom configuration file lacks mandatory entries in the storage section, namely storage:delete:enabled set to true. Add them, re-deploy the registry, and repeat your image pruning operation.
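The required entries match the sample configuration shown earlier in this chapter:

storage:
  delete:
    enabled: true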
