Configuring Clusters


OpenShift Container Platform 3.10

OpenShift Container Platform 3.10 Installation and Configuration

Red Hat OpenShift Documentation Team

Abstract

OpenShift Installation and Configuration topics cover the basics of installing and configuring OpenShift in your environment. Use these topics for the one-time tasks required to get OpenShift up and running.

Chapter 1. Overview

This guide covers further configuration options available for your OpenShift Container Platform cluster post-installation.

Chapter 2. Setting up the Registry

2.1. Registry Overview

2.1.1. About the Registry

OpenShift Container Platform can build container images from your source code, deploy them, and manage their lifecycle. To enable this, OpenShift Container Platform provides an internal, integrated Docker registry that can be deployed in your OpenShift Container Platform environment to locally manage images.

2.1.2. Integrated or Stand-alone Registries

During an initial installation of a full OpenShift Container Platform cluster, it is likely that the registry was deployed automatically during the installation process. If it was not, or if you want to further customize the configuration of your registry, see Deploying a Registry on Existing Clusters.

While it can be deployed to run as an integrated part of your full OpenShift Container Platform cluster, the OpenShift Container Platform registry can alternatively be installed separately as a stand-alone container image registry.

To install a stand-alone registry, follow Installing a Stand-alone Registry. This installation path deploys an all-in-one cluster running a registry and specialized web console.

2.1.3. Red Hat Quay Registries

If you need an enterprise-quality container image registry, Red Hat Quay is available both as a hosted service and as software you can install in your own data center or cloud environment. Advanced registry features in Red Hat Quay include geo-replication, image scanning, and the ability to roll back images.

Visit the Quay.io site to set up your own hosted Quay registry account. After that, the Quay Tutorial helps you log in to the Quay registry and start managing your images. Alternatively, refer to Getting Started with Red Hat Quay for information on setting up your own Red Hat Quay registry.

At the moment, you access your Red Hat Quay registry from OpenShift as you would any remote container image registry. To learn how to set up credentials to access Red Hat Quay as a secured registry, refer to Allowing Pods to Reference Images from Other Secured Registries.

2.2. Deploying a Registry on Existing Clusters

2.2.1. Overview

If the integrated registry was not previously deployed automatically during the initial installation of your OpenShift Container Platform cluster, or if it is no longer running successfully and you need to redeploy it on your existing cluster, see the following sections for options on deploying a new registry.

Note

This topic is not required if you installed a stand-alone registry.

2.2.2. Deploying the Registry

To deploy the integrated Docker registry, use the oc adm registry command as a user with cluster administrator privileges. For example:

$ oc adm registry --config=/etc/origin/master/admin.kubeconfig \1
    --service-account=registry \2
    --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' 3
1
--config is the path to the CLI configuration file for the cluster administrator.
2
--service-account is the service account used to run the registry’s pod.
3
Required to pull the correct image for OpenShift Container Platform.

This creates a service and a deployment configuration, both called docker-registry. Once deployed successfully, a pod is created with a name similar to docker-registry-1-cpty9.

To see a full list of options that you can specify when creating the registry:

$ oc adm registry --help

If you specify the --fs-group option, its value must be permitted by the SCC used by the registry (typically, the restricted SCC).

2.2.3. Deploying the Registry as a DaemonSet

Use the oc adm registry command to deploy the registry as a DaemonSet with the --daemonset option.

DaemonSets ensure that when nodes are created, they contain copies of a specified pod. When the nodes are removed, the pods are garbage collected.

For more information on DaemonSets, see Using Daemonsets.
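
For example, the registry can be deployed as a DaemonSet with a command similar to the following. This is a sketch; the --config, --service-account, and --images values mirror the earlier example and may differ in your environment:

$ oc adm registry --daemonset \
    --config=/etc/origin/master/admin.kubeconfig \
    --service-account=registry \
    --images='registry.access.redhat.com/openshift3/ose-${component}:${version}'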

2.2.4. Registry Compute Resources

By default, the registry is created with no settings for compute resource requests or limits. For production, it is highly recommended that the deployment configuration for the registry be updated to set resource requests and limits for the registry pod. Otherwise, the registry pod will be considered a BestEffort pod.

See Compute Resources for more information on configuring requests and limits.
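
For example, requests and limits can be set on the registry's deployment configuration with the oc set resources command. The values below are placeholders for illustration, not sizing recommendations:

$ oc set resources dc/docker-registry \
    --requests=cpu=100m,memory=256Mi \
    --limits=cpu=500m,memory=512Mi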

2.2.5. Storage for the Registry

The registry stores container images and metadata. If you simply deploy a pod with the registry, it uses an ephemeral volume that is destroyed when the pod exits, and any images built or pushed into the registry disappear with it.

This section lists the supported registry storage drivers. See the Docker registry documentation for more information.

Storage drivers other than the local filesystem, such as cloud object storage back-ends, are configured in the registry’s configuration file. General registry storage configuration options are supported; see the Docker registry documentation for more information.

Storage backed by persistent volumes is configured through the filesystem driver.

Note

For more information on supported persistent storage drivers, see Configuring Persistent Storage and Persistent Storage Examples.

2.2.5.1. Production Use

For production use, attach a remote volume or define and use the persistent storage method of your choice.

For example, to use an existing persistent volume claim:

$ oc volume deploymentconfigs/docker-registry --add --name=registry-storage -t pvc \
     --claim-name=<pvc_name> --overwrite
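
If a suitable claim does not already exist, the following is a minimal sketch of one that could be referenced as <pvc_name>. The claim name, size, access mode, and namespace are assumptions; adjust them to match your storage back-end:

$ oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-claim        # hypothetical name; use it as <pvc_name> above
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce             # assumption; shared storage may support ReadWriteMany
  resources:
    requests:
      storage: 100Gi          # assumption; size according to your image volume
EOF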
Important

Testing shows issues with using the RHEL NFS server as a storage backend for the container image registry. This includes the OpenShift Container Registry and Quay. Therefore, using the RHEL NFS server to back PVs used by core services is not recommended.

Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift core components.

2.2.5.1.1. Use Amazon S3 as a Storage Back-end

There is also an option to use Amazon Simple Storage Service (S3) storage with the internal Docker registry. It is secure cloud storage that you can manage through the AWS Management Console. To use it, the registry’s configuration file must be manually edited and mounted to the registry pod. However, before you start with the configuration, look at upstream’s recommended steps.

Take a default YAML configuration file as a base and replace the filesystem entry in the storage section with an s3 entry such as the one below. The resulting storage section may look like this:

storage:
  cache:
    layerinfo: inmemory
  delete:
    enabled: true
  s3:
    accesskey: awsaccesskey 1
    secretkey: awssecretkey 2
    region: us-west-1
    regionendpoint: http://myobjects.local
    bucket: bucketname
    encrypt: true
    keyid: mykeyid
    secure: true
    v4auth: false
    chunksize: 5242880
    rootdirectory: /s3/object/name/prefix
1
Replace with your Amazon access key.
2
Replace with your Amazon secret key.

All of the s3 configuration options are documented in upstream’s driver reference documentation.

Overriding the Registry Configuration takes you through the additional steps of mounting the configuration file into the pod.

Warning

There are reported issues when the registry runs on the S3 storage back-end.

If you want to use an S3 region that is not supported by the integrated registry you are using, see S3 Driver Configuration.

2.2.5.2. Non-Production Use

For non-production use, you can use the --mount-host=<path> option to specify a directory for the registry to use for persistent storage. The registry volume is then created as a host-mount at the specified <path>.

Important

The --mount-host option mounts a directory from the node on which the registry container lives. If you scale up the docker-registry deployment configuration, it is possible that your registry pods and containers will run on different nodes, which can result in two or more registry containers, each with its own local storage. This will lead to unpredictable behavior, as subsequent requests to pull the same image repeatedly may not always succeed, depending on which container the request ultimately goes to.

The --mount-host option requires that the registry container run in privileged mode. This is automatically enabled when you specify --mount-host. However, not all pods are allowed to run privileged containers by default. If you still want to use this option, create the registry and specify that it use the registry service account that was created during installation:

$ oc adm registry --service-account=registry \
    --config=/etc/origin/master/admin.kubeconfig \
    --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' \
    --mount-host=<path>
Important

The Docker registry pod runs as user 1001. This user must be able to write to the host directory. You may need to change directory ownership to user ID 1001 with this command:

$ sudo chown 1001:root <path>

2.2.6. Enabling the Registry Console

OpenShift Container Platform provides a web-based interface to the integrated registry. This registry console is an optional component for browsing and managing images. It is deployed as a stateless service running as a pod.

Note

If you installed OpenShift Container Platform as a stand-alone registry, the registry console is already deployed and secured automatically during installation.

Important

If Cockpit is already running, you’ll need to shut it down before proceeding in order to avoid a port conflict (9090 by default) with the registry console.
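
For example, on a host where Cockpit was installed as a system service, it can usually be stopped and disabled with the following commands. This is a sketch; the unit names assume a default Cockpit installation:

$ sudo systemctl stop cockpit.socket cockpit.service
$ sudo systemctl disable cockpit.socket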

2.2.6.1. Deploying the Registry Console
Important

You must first have exposed the registry.

  1. Create a passthrough route in the default project. You will need this when creating the registry console application in the next step.

    $ oc create route passthrough --service registry-console \
        --port registry-console \
        -n default
  2. Deploy the registry console application. Replace <openshift_oauth_url> with the URL of the OpenShift Container Platform OAuth provider, which is typically the master.

    $ oc new-app -n default --template=registry-console \
        -p OPENSHIFT_OAUTH_PROVIDER_URL="https://<openshift_oauth_url>:8443" \
        -p REGISTRY_HOST=$(oc get route docker-registry -n default --template='{{ .spec.host }}') \
        -p COCKPIT_KUBE_URL=$(oc get route registry-console -n default --template='https://{{ .spec.host }}')
    Note

    If the redirection URL is wrong when you are trying to log in to the registry console, check your OAuth client with oc get oauthclients.

  3. Finally, use a web browser to view the console using the route URI.
2.2.6.2. Securing the Registry Console

By default, the registry console generates self-signed TLS certificates if deployed manually per the steps in Deploying the Registry Console. See Troubleshooting the Registry Console for more information.

Use the following steps to add your organization’s signed certificates as a secret volume. This assumes your certificates are available on the oc client host.

  1. Create a .cert file containing the certificate and key. Format the file with:

    • One or more BEGIN CERTIFICATE blocks for the server certificate and the intermediate certificate authorities
    • A block containing a BEGIN PRIVATE KEY or similar for the key. The key must not be encrypted.

      For example:

      -----BEGIN CERTIFICATE-----
      MIIDUzCCAjugAwIBAgIJAPXW+CuNYS6QMA0GCSqGSIb3DQEBCwUAMD8xKTAnBgNV
      BAoMIGI0OGE2NGNkNmMwNTQ1YThhZTgxOTEzZDE5YmJjMmRjMRIwEAYDVQQDDAls
      ...
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      MIIDUzCCAjugAwIBAgIJAPXW+CuNYS6QMA0GCSqGSIb3DQEBCwUAMD8xKTAnBgNV
      BAoMIGI0OGE2NGNkNmMwNTQ1YThhZTgxOTEzZDE5YmJjMmRjMRIwEAYDVQQDDAls
      ...
      -----END CERTIFICATE-----
      -----BEGIN PRIVATE KEY-----
      MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCyOJ5garOYw0sm
      8TBCDSqQ/H1awGMzDYdB11xuHHsxYS2VepPMzMzryHR137I4dGFLhvdTvJUH8lUS
      ...
      -----END PRIVATE KEY-----
    • The certificate for the secured registry should contain the following Subject Alternative Name (SAN) entries:

      • Two service hostnames.

        For example:

        docker-registry.default.svc.cluster.local
        docker-registry.default.svc
      • Service IP address.

        For example:

        172.30.124.220

        Use the following command to get the Docker registry service IP address:

        oc get service docker-registry --template='{{.spec.clusterIP}}'
      • Public hostname.

        For example:

        docker-registry-default.apps.example.com

        Use the following command to get the Docker registry public hostname:

        oc get route docker-registry --template '{{.spec.host}}'

        For example, the server certificate should contain SAN details similar to the following:

        X509v3 Subject Alternative Name:
                       DNS:docker-registry-public.openshift.com, DNS:docker-registry.default.svc, DNS:docker-registry.default.svc.cluster.local, DNS:172.30.2.98, IP Address:172.30.2.98

        The registry console loads a certificate from the /etc/cockpit/ws-certs.d directory. It uses the last file with a .cert extension in alphabetical order. Therefore, the .cert file should contain at least two PEM blocks formatted in the OpenSSL style.

        If no certificate is found, a self-signed certificate is created using the openssl command and stored in the 0-self-signed.cert file.

  2. Create the secret:

    $ oc create secret generic console-secret \
        --from-file=/path/to/console.cert
  3. Add the secrets to the registry-console deployment configuration:

    $ oc volume dc/registry-console --add --type=secret \
        --secret-name=console-secret -m /etc/cockpit/ws-certs.d

    This triggers a new deployment of the registry console to include your signed certificates.

2.2.6.3. Troubleshooting the Registry Console
2.2.6.3.1. Debug Mode

The registry console debug mode is enabled using an environment variable. The following command redeploys the registry console in debug mode:

$ oc set env dc registry-console G_MESSAGES_DEBUG=cockpit-ws,cockpit-wrapper

Enabling debug mode allows more verbose logging to appear in the registry console’s pod logs.
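
To view that output, check the pod logs through the deployment configuration. This assumes the default registry-console name in the default project:

$ oc logs dc/registry-console -n default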

2.2.6.3.2. Display SSL Certificate Path

To check which certificate the registry console is using, a command can be run from inside the console pod.

  1. List the pods in the default project and find the registry console’s pod name:

    $ oc get pods -n default
    NAME                       READY     STATUS    RESTARTS   AGE
    registry-console-1-rssrw   1/1       Running   0          1d
  2. Using the pod name from the previous command, get the certificate path that the cockpit-ws process is using. This example shows the console using the auto-generated certificate:

    $ oc exec registry-console-1-rssrw remotectl certificate
    certificate: /etc/cockpit/ws-certs.d/0-self-signed.cert

2.3. Accessing the Registry

2.3.1. Viewing Logs

To view the logs for the Docker registry, use the oc logs command with the deployment configuration:

$ oc logs dc/docker-registry
2015-05-01T19:48:36.300593110Z time="2015-05-01T19:48:36Z" level=info msg="version=v2.0.0+unknown"
2015-05-01T19:48:36.303294724Z time="2015-05-01T19:48:36Z" level=info msg="redis not configured" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002
2015-05-01T19:48:36.303422845Z time="2015-05-01T19:48:36Z" level=info msg="using inmemory layerinfo cache" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002
2015-05-01T19:48:36.303433991Z time="2015-05-01T19:48:36Z" level=info msg="Using OpenShift Auth handler"
2015-05-01T19:48:36.303439084Z time="2015-05-01T19:48:36Z" level=info msg="listening on :5000" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002

2.3.2. File Storage

Tag and image metadata is stored in OpenShift Container Platform, but the registry stores layer and signature data in a volume that is mounted into the registry container at /registry. As oc exec does not work on privileged containers, to view a registry’s contents you must manually SSH into the node housing the registry pod’s container, then run docker exec on the container itself:

  1. List the current pods to find the pod name of your Docker registry:

    # oc get pods

    Then, use oc describe to find the host name for the node running the container:

    # oc describe pod <pod_name>
  2. Log into the desired node:

    # ssh node.example.com
  3. List the running containers from the default project on the node host and identify the container ID for the Docker registry:

    # docker ps --filter=name=registry_docker-registry.*_default_
  4. List the registry contents using the oc rsh command:

    # oc rsh dc/docker-registry find /registry
    /registry/docker
    /registry/docker/registry
    /registry/docker/registry/v2
    /registry/docker/registry/v2/blobs 1
    /registry/docker/registry/v2/blobs/sha256
    /registry/docker/registry/v2/blobs/sha256/ed
    /registry/docker/registry/v2/blobs/sha256/ed/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810
    /registry/docker/registry/v2/blobs/sha256/ed/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810/data 2
    /registry/docker/registry/v2/blobs/sha256/a3
    /registry/docker/registry/v2/blobs/sha256/a3/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
    /registry/docker/registry/v2/blobs/sha256/a3/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4/data
    /registry/docker/registry/v2/blobs/sha256/f7
    /registry/docker/registry/v2/blobs/sha256/f7/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845
    /registry/docker/registry/v2/blobs/sha256/f7/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845/data
    /registry/docker/registry/v2/repositories 3
    /registry/docker/registry/v2/repositories/p1
    /registry/docker/registry/v2/repositories/p1/pause 4
    /registry/docker/registry/v2/repositories/p1/pause/_manifests
    /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions
    /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256
    /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf
    /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures 5
    /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures/sha256
    /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures/sha256/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810
    /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures/sha256/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810/link 6
    /registry/docker/registry/v2/repositories/p1/pause/_uploads 7
    /registry/docker/registry/v2/repositories/p1/pause/_layers 8
    /registry/docker/registry/v2/repositories/p1/pause/_layers/sha256
    /registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
    /registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4/link 9
    /registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845
    /registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845/link
    1
    This directory stores all layers and signatures as blobs.
    2
    This file contains the blob’s contents.
    3
    This directory stores all the image repositories.
    4
    This directory is for a single image repository p1/pause.
    5
    This directory contains signatures for a particular image manifest revision.
    6
    This file contains a reference back to a blob (which contains the signature data).
    7
    This directory contains any layers that are currently being uploaded and staged for the given repository.
    8
    This directory contains links to all the layers this repository references.
    9
    This file contains a reference to a specific layer that has been linked into this repository via an image.

2.3.3. Accessing the Registry Directly

For advanced usage, you can access the registry directly to invoke docker commands. This allows you to push images to or pull them from the integrated registry directly using operations like docker push or docker pull. To do so, you must be logged in to the registry using the docker login command. The operations you can perform depend on your user permissions, as described in the following sections.

2.3.3.1. User Prerequisites

To access the registry directly, the user must satisfy the following requirements, depending on your intended usage:

  • For any direct access, you must have a regular user for your preferred identity provider. A regular user can generate an access token required for logging in to the registry. System users, such as system:admin, cannot obtain access tokens and, therefore, cannot access the registry directly.

    For example, if you are using HTPASSWD authentication, you can create one using the following command:

    # htpasswd /etc/origin/master/htpasswd <user_name>
  • For pulling images, for example when using the docker pull command, the user must have the registry-viewer role. To add this role:

    $ oc policy add-role-to-user registry-viewer <user_name>
  • For writing or pushing images, for example when using the docker push command, the user must have the registry-editor role. To add this role:

    $ oc policy add-role-to-user registry-editor <user_name>

For more information on user permissions, see Managing Role Bindings.

2.3.3.2. Logging in to the Registry
Note

Ensure your user satisfies the prerequisites for accessing the registry directly.

To log in to the registry directly:

  1. Ensure you are logged in to OpenShift Container Platform as a regular user:

    $ oc login
  2. Log in to the Docker registry by using your access token:

    docker login -u openshift -p $(oc whoami -t) <registry_ip>:<port>
Note

You can pass any value for the username; the token contains all of the necessary information. However, passing a username that contains colons results in a login failure.
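
For example, <registry_ip> and <port> can be derived from the docker-registry service before logging in. This sketch assumes the registry runs in the default project:

$ REGISTRY_IP=$(oc get svc docker-registry -n default -o jsonpath='{.spec.clusterIP}')
$ REGISTRY_PORT=$(oc get svc docker-registry -n default -o jsonpath='{.spec.ports[0].port}')
$ docker login -u openshift -p $(oc whoami -t) ${REGISTRY_IP}:${REGISTRY_PORT}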

2.3.3.3. Pushing and Pulling Images

After logging in to the registry, you can perform docker pull and docker push operations against your registry.

Important

You can pull arbitrary images, but if you have the system:registry role added, you can only push images to the registry in your project.

In the following examples, we use:

Component         Value
<registry_ip>     172.30.124.220
<port>            5000
<project>         openshift
<image>           busybox
<tag>             omitted (defaults to latest)

  1. Pull an arbitrary image:

    $ docker pull docker.io/busybox
  2. Tag the new image with the form <registry_ip>:<port>/<project>/<image>. The project name must appear in this pull specification for OpenShift Container Platform to correctly place and later access the image in the registry.

    $ docker tag docker.io/busybox 172.30.124.220:5000/openshift/busybox
    Note

    Your regular user must have the system:image-builder role for the specified project, which allows the user to write or push an image. Otherwise, the docker push in the next step will fail. To test, you can create a new project to push the busybox image.

  3. Push the newly-tagged image to your registry:

    $ docker push 172.30.124.220:5000/openshift/busybox
    ...
    cf2616975b4a: Image successfully pushed
    Digest: sha256:3662dd821983bc4326bee12caec61367e7fb6f6a3ee547cbaff98f77403cab55

2.3.4. Accessing Registry Metrics

The OpenShift Container Registry provides an endpoint for Prometheus metrics. Prometheus is a stand-alone, open source systems monitoring and alerting toolkit.

The metrics are exposed at the /extensions/v2/metrics path of the registry endpoint. However, this route must first be enabled; see Extended Registry Configuration for instructions.

The following is a simple example of a metrics query:

$ curl -s -u <user>:<secret> \ 1
    http://172.30.30.30:5000/extensions/v2/metrics | grep openshift | head -n 10

# HELP openshift_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which OpenShift was built.
# TYPE openshift_build_info gauge
openshift_build_info{gitCommit="67275e1",gitVersion="v3.6.0-alpha.1+67275e1-803",major="3",minor="6+"} 1
# HELP openshift_registry_request_duration_seconds Request latency summary in microseconds for each operation
# TYPE openshift_registry_request_duration_seconds summary
openshift_registry_request_duration_seconds{name="test/origin-pod",operation="blobstore.create",quantile="0.5"} 0
openshift_registry_request_duration_seconds{name="test/origin-pod",operation="blobstore.create",quantile="0.9"} 0
openshift_registry_request_duration_seconds{name="test/origin-pod",operation="blobstore.create",quantile="0.99"} 0
openshift_registry_request_duration_seconds_sum{name="test/origin-pod",operation="blobstore.create"} 0
openshift_registry_request_duration_seconds_count{name="test/origin-pod",operation="blobstore.create"} 5
1
<user> can be arbitrary, but <secret> must match the value specified in the registry configuration.

Another method to access the metrics is to use a cluster role. You still need to enable the endpoint, but you do not need to specify a <secret>. The part of the configuration file responsible for metrics should look like this:

openshift:
  version: 1.0
  metrics:
    enabled: true
...

You must create a cluster role if you do not already have one to access the metrics:

$ cat <<EOF |
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-scraper
rules:
- apiGroups:
  - image.openshift.io
  resources:
  - registry/metrics
  verbs:
  - get
EOF
oc create -f -

To add this role to a user, run the following command:

$ oc adm policy add-cluster-role-to-user prometheus-scraper <username>
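
With the role bound, the user's OAuth token can be presented instead of the shared <secret>. The following is a hedged sketch that assumes the registry accepts the token as a bearer credential and uses the example registry address from above:

$ curl -s -H "Authorization: Bearer $(oc whoami -t)" \
    http://172.30.30.30:5000/extensions/v2/metrics | head -n 10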

See the upstream Prometheus documentation for more advanced queries and recommended visualizers.

2.4. Securing and Exposing the Registry

2.4.1. Overview

By default, the OpenShift Container Platform registry is secured during cluster installation so that it serves traffic via TLS. A passthrough route is also created by default to expose the service externally.

If for any reason your registry has not been secured or exposed, see the following sections for steps on how to manually do so.

2.4.2. Manually Securing the Registry

To manually secure the registry to serve traffic via TLS:

  1. Deploy the registry.
  2. Fetch the service IP and port of the registry:

    $ oc get svc/docker-registry
    NAME              LABELS                                    SELECTOR                  IP(S)            PORT(S)
    docker-registry   docker-registry=default                   docker-registry=default   172.30.124.220   5000/TCP
  3. You can use an existing server certificate, or create a key and server certificate valid for specified IPs and host names, signed by a specified CA. To create a server certificate for the registry service IP and the docker-registry.default.svc.cluster.local host name, run the following command from the first master listed in the Ansible host inventory file, by default /etc/ansible/hosts:

    $ oc adm ca create-server-cert \
        --signer-cert=/etc/origin/master/ca.crt \
        --signer-key=/etc/origin/master/ca.key \
        --signer-serial=/etc/origin/master/ca.serial.txt \
        --hostnames='docker-registry.default.svc.cluster.local,docker-registry.default.svc,172.30.124.220' \
        --cert=/etc/secrets/registry.crt \
        --key=/etc/secrets/registry.key

    If the router will be exposed externally, add the public route host name in the --hostnames flag:

    --hostnames='mydocker-registry.example.com,docker-registry.default.svc.cluster.local,172.30.124.220' \

    See Redeploying Registry and Router Certificates for additional details on updating the default certificate so that the route is externally accessible.

    Note

    The oc adm ca create-server-cert command generates a certificate that is valid for two years. This can be altered with the --expire-days option, but for security reasons, it is recommended to not make it greater than this value.

  4. Create the secret for the registry certificates:

    $ oc create secret generic registry-certificates \
        --from-file=/etc/secrets/registry.crt \
        --from-file=/etc/secrets/registry.key
  5. Add the secret to the registry pod’s service accounts (including the default service account):

    $ oc secrets link registry registry-certificates
    $ oc secrets link default  registry-certificates
    Note

    Limiting secrets to only the service accounts that reference them is disabled by default. This means that if serviceAccountConfig.limitSecretReferences is set to false (the default setting) in the master configuration file, linking secrets to a service account is not required.

  6. Pause the docker-registry service:

    $ oc rollout pause dc/docker-registry
  7. Add the secret volume to the registry deployment configuration:

    $ oc volume dc/docker-registry --add --type=secret \
        --secret-name=registry-certificates -m /etc/secrets
  8. Enable TLS by adding the following environment variables to the registry deployment configuration:

    $ oc set env dc/docker-registry \
        REGISTRY_HTTP_TLS_CERTIFICATE=/etc/secrets/registry.crt \
        REGISTRY_HTTP_TLS_KEY=/etc/secrets/registry.key

    See the Configuring a registry section of the Docker documentation for more information.

  9. Update the scheme used for the registry’s liveness probe from HTTP to HTTPS:

    $ oc patch dc/docker-registry -p '{"spec": {"template": {"spec": {"containers":[{
        "name":"registry",
        "livenessProbe":  {"httpGet": {"scheme":"HTTPS"}}
      }]}}}}'
  10. If your registry was initially deployed on OpenShift Container Platform 3.2 or later, update the scheme used for the registry’s readiness probe from HTTP to HTTPS:

    $ oc patch dc/docker-registry -p '{"spec": {"template": {"spec": {"containers":[{
        "name":"registry",
        "readinessProbe":  {"httpGet": {"scheme":"HTTPS"}}
      }]}}}}'
  11. Resume the docker-registry service:

    $ oc rollout resume dc/docker-registry
  12. Validate the registry is running in TLS mode. Wait until the latest docker-registry deployment completes and verify the Docker logs for the registry container. You should find an entry for listening on :5000, tls.

    $ oc logs dc/docker-registry | grep tls
    time="2015-05-27T05:05:53Z" level=info msg="listening on :5000, tls" instance.id=deeba528-c478-41f5-b751-dc48e4935fc2
  13. Copy the CA certificate to the Docker certificates directory. This must be done on all nodes in the cluster:

    $ dcertsdir=/etc/docker/certs.d
    $ destdir_addr=$dcertsdir/172.30.124.220:5000
    $ destdir_name=$dcertsdir/docker-registry.default.svc.cluster.local:5000
    
    $ sudo mkdir -p $destdir_addr $destdir_name
    $ sudo cp ca.crt $destdir_addr    1
    $ sudo cp ca.crt $destdir_name
    1
    The ca.crt file is a copy of /etc/origin/master/ca.crt on the master.
  14. When using authentication, some versions of docker also require you to configure your cluster to trust the certificate at the OS level.

    1. Copy the certificate:

      $ cp /etc/origin/master/ca.crt /etc/pki/ca-trust/source/anchors/myregistrydomain.com.crt
    2. Run:

      $ update-ca-trust enable
  15. Remove the --insecure-registry option only for this particular registry in the /etc/sysconfig/docker file. Then, reload the daemon and restart the docker service to reflect this configuration change:

    $ sudo systemctl daemon-reload
    $ sudo systemctl restart docker
  16. Validate the docker client connection. Running docker push to the registry or docker pull from the registry should succeed. Make sure you have logged into the registry.

    $ docker tag|push <registry/image> <internal_registry/project/image>

    For example:

    $ docker pull busybox
    $ docker tag docker.io/busybox 172.30.124.220:5000/openshift/busybox
    $ docker push 172.30.124.220:5000/openshift/busybox
    ...
    cf2616975b4a: Image successfully pushed
    Digest: sha256:3662dd821983bc4326bee12caec61367e7fb6f6a3ee547cbaff98f77403cab55

2.4.3. Manually Exposing a Secure Registry

Instead of logging in to the OpenShift Container Platform registry from within the OpenShift Container Platform cluster, you can gain external access to it by first securing the registry and then exposing it with a route. This allows you to log in to the registry from outside the cluster using the route address, and to tag and push images using the route host.

  1. Each of the following prerequisite steps is performed by default during a typical cluster installation. If they have not been, perform them manually: deploy the registry, secure the registry, and deploy a router.

  2. A passthrough route should have been created by default for the registry during the initial cluster installation:

    1. Verify whether the route exists:

      $ oc get route/docker-registry -o yaml
      apiVersion: v1
      kind: Route
      metadata:
        name: docker-registry
      spec:
        host: <host> 1
        to:
          kind: Service
          name: docker-registry 2
        tls:
          termination: passthrough 3
      1
      The host for your route. You must be able to resolve this name externally via DNS to the router’s IP address.
      2
      The service name for your registry.
      3
      Specifies this route as a passthrough route.
      Note

      Re-encrypt routes are also supported for exposing the secure registry.

    2. If it does not exist, create the route via the oc create route passthrough command, specifying the registry as the route’s service. By default, the name of the created route is the same as the service name:

      1. Get the docker-registry service details:

        $ oc get svc
        NAME              CLUSTER_IP       EXTERNAL_IP   PORT(S)                 SELECTOR                  AGE
        docker-registry   172.30.69.167    <none>        5000/TCP                docker-registry=default   4h
        kubernetes        172.30.0.1       <none>        443/TCP,53/UDP,53/TCP   <none>                    4h
        router            172.30.172.132   <none>        80/TCP                  router=router             4h
      2. Create the route:

        $ oc create route passthrough    \
            --service=docker-registry    \1
            --hostname=<host>
        route "docker-registry" created     2
        1
        Specifies the registry as the route’s service.
        2
        The route name is identical to the service name.
  3. Next, you must trust the certificates being used for the registry on your host system to allow the host to push and pull images. The certificates referenced were created when you secured your registry.

    $ sudo mkdir -p /etc/docker/certs.d/<host>
    $ sudo cp <ca_certificate_file> /etc/docker/certs.d/<host>
    $ sudo systemctl restart docker
  4. Log in to the registry using the information from securing the registry. However, this time point to the host name used in the route rather than your service IP. When logging in to a secured and exposed registry, make sure you specify the registry in the docker login command:

    # docker login -e user@company.com \
        -u f83j5h6 \
        -p Ju1PeM47R0B92Lk3AZp-bWJSck2F7aGCiZ66aFGZrs2 \
        <host>
  5. You can now tag and push images using the route host. For example, to tag and push a busybox image in a project called test:

    $ oc get imagestreams -n test
    NAME      DOCKER REPO   TAGS      UPDATED
    
    $ docker pull busybox
    $ docker tag busybox <host>/test/busybox
    $ docker push <host>/test/busybox
    The push refers to a repository [<host>/test/busybox] (len: 1)
    8c2e06607696: Image already exists
    6ce2e90b0bc7: Image successfully pushed
    cf2616975b4a: Image successfully pushed
    Digest: sha256:6c7e676d76921031532d7d9c0394d0da7c2906f4cb4c049904c4031147d8ca31
    
    $ docker pull <host>/test/busybox
    latest: Pulling from <host>/test/busybox
    cf2616975b4a: Already exists
    6ce2e90b0bc7: Already exists
    8c2e06607696: Already exists
    Digest: sha256:6c7e676d76921031532d7d9c0394d0da7c2906f4cb4c049904c4031147d8ca31
    Status: Image is up to date for <host>/test/busybox:latest
    
    $ oc get imagestreams -n test
    NAME      DOCKER REPO                       TAGS      UPDATED
    busybox   172.30.11.215:5000/test/busybox   latest    2 seconds ago
    Note

    Your image streams will have the IP address and port of the registry service, not the route name and port. See oc get imagestreams for details.

2.4.4. Manually Exposing a Non-Secure Registry

Instead of securing the registry in order to expose it, you can simply expose a non-secure registry for non-production OpenShift Container Platform environments. This allows you to have an external route to the registry without using SSL certificates.

Warning

Only non-production environments should expose a non-secure registry to external access.

To expose a non-secure registry:

  1. Expose the registry:

    # oc expose service docker-registry --hostname=<hostname> -n default

    This creates a route with the following definition:

    apiVersion: v1
    kind: Route
    metadata:
      creationTimestamp: null
      labels:
        docker-registry: default
      name: docker-registry
    spec:
      host: registry.example.com
      port:
        targetPort: "5000"
      to:
        kind: Service
        name: docker-registry
    status: {}
  2. Verify that the route has been created successfully:

    # oc get route
    NAME              HOST/PORT                    PATH      SERVICE           LABELS                    INSECURE POLICY   TLS TERMINATION
    docker-registry   registry.example.com            docker-registry   docker-registry=default
  3. Check the health of the registry:

    $ curl -v http://registry.example.com/healthz

    Expect an HTTP 200/OK message.

    After exposing the registry, update your /etc/sysconfig/docker file by adding the port number to the OPTIONS entry. For example:

    OPTIONS='--selinux-enabled --insecure-registry=172.30.0.0/16 --insecure-registry registry.example.com:80'
    Important

    The above options should be added on the client from which you are trying to log in.

    Also, ensure that Docker is running on the client.

When logging in to the non-secured and exposed registry, make sure you specify the registry in the docker login command. For example:

# docker login -e user@company.com \
    -u f83j5h6 \
    -p Ju1PeM47R0B92Lk3AZp-bWJSck2F7aGCiZ66aFGZrs2 \
    <host>

2.5. Extended Registry Configuration

2.5.1. Maintaining the Registry IP Address

OpenShift Container Platform refers to the integrated registry by its service IP address, so if you decide to delete and recreate the docker-registry service, you can ensure a completely transparent transition by arranging to re-use the old IP address in the new service. If a new IP address cannot be avoided, you can minimize cluster disruption by rebooting only the masters.

Re-using the Address
To re-use the IP address, you must save the IP address of the old docker-registry service prior to deleting it, and arrange to replace the newly assigned IP address with the saved one in the new docker-registry service.
  1. Make a note of the clusterIP for the service:

    $ oc get svc/docker-registry -o yaml | grep clusterIP:
  2. Delete the service:

    $ oc delete svc/docker-registry dc/docker-registry
  3. Create the registry definition in registry.yaml, replacing <options> with, for example, those used in step 3 of the instructions in the Non-Production Use section:

    $ oc adm registry <options> -o yaml > registry.yaml
  4. Edit registry.yaml, find the Service there, and change its clusterIP to the address noted in step 1.
  5. Create the registry using the modified registry.yaml:

    $ oc create -f registry.yaml
Rebooting the Masters
If you are unable to re-use the IP address, any operation that uses a pull specification that includes the old IP address will fail. To minimize cluster disruption, you must reboot the masters:
# master-restart api
# master-restart controllers

This ensures that the old registry URL, which includes the old IP address, is cleared from the cache.

Note

We recommend against rebooting the entire cluster because that incurs unnecessary downtime for pods and does not actually clear the cache.

2.5.2. Whitelisting Docker Registries

You can specify a whitelist of docker registries, allowing you to curate a set of images and templates that are available for download by OpenShift Container Platform users. This curated set can be placed in one or more docker registries, and then added to the whitelist. When using a whitelist, only the specified registries are accessible within OpenShift Container Platform, and all other registries are denied access by default.

To configure a whitelist:

  1. Edit the /etc/sysconfig/docker file to block all registries:

    BLOCK_REGISTRY='--block-registry=all'

    You may need to uncomment the BLOCK_REGISTRY line.

  2. In the same file, add registries to which you want to allow access:

    ADD_REGISTRY='--add-registry=<registry1> --add-registry=<registry2>'

    Allowing Access to Registries

    ADD_REGISTRY='--add-registry=registry.access.redhat.com'

    This example would restrict access to images available on the Red Hat Customer Portal.

Once the whitelist is configured, if a user tries to pull from a docker registry that is not on the whitelist, they will receive an error message stating that this registry is not allowed.
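
Changes to /etc/sysconfig/docker take effect only after the docker daemon is restarted on each affected node, for example:

$ sudo systemctl restart docker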

2.5.3. Setting the Registry Hostname

You can configure the hostname and port the registry is known by for both internal and external references. By doing this, image streams provide hostname-based push and pull specifications for images, isolating consumers of the images from changes to the registry service IP and potentially making image streams and their references portable between clusters.

To set the hostname used to reference the registry from within the cluster, set the internalRegistryHostname in the imagePolicyConfig section of the master configuration file. The external hostname is controlled by setting the externalRegistryHostname value in the same location.

Image Policy Configuration

imagePolicyConfig:
  internalRegistryHostname: docker-registry.default.svc.cluster.local:5000
  externalRegistryHostname: docker-registry.mycompany.com

The registry itself must be configured with the same internal hostname value. This can be accomplished by setting the REGISTRY_OPENSHIFT_SERVER_ADDR environment variable on the registry deployment configuration, or by setting the value in the OpenShift section of the registry configuration.
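
For example, the internal hostname can be set on the registry with the environment variable. This sketch assumes the default docker-registry deployment configuration and the default service port 5000:

$ oc set env dc/docker-registry \
    REGISTRY_OPENSHIFT_SERVER_ADDR=docker-registry.default.svc.cluster.local:5000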

Note

If you have enabled TLS for your registry, the server certificate must include the hostnames by which you expect the registry to be referenced. See securing the registry for instructions on adding hostnames to the server certificate.

2.5.4. Overriding the Registry Configuration

You can override the integrated registry’s default configuration, found by default at /config.yml in a running registry’s container, with your own custom configuration.

Note

Upstream configuration options in this file may also be overridden using environment variables. The middleware section is an exception as there are just a few options that can be overridden using environment variables. Learn how to override specific configuration options.

To enable management of the registry configuration file directly and deploy an updated configuration using a ConfigMap:

  1. Deploy the registry.
  2. Edit the registry configuration file locally as needed. The initial YAML file deployed on the registry is provided below. Review supported options.

    Registry Configuration File

    version: 0.1
    log:
      level: debug
    http:
      addr: :5000
    storage:
      cache:
        blobdescriptor: inmemory
      filesystem:
        rootdirectory: /registry
      delete:
        enabled: true
    auth:
      openshift:
        realm: openshift
    middleware:
      registry:
        - name: openshift
      repository:
        - name: openshift
          options:
            acceptschema2: true
            pullthrough: true
            enforcequota: false
            projectcachettl: 1m
            blobrepositorycachettl: 10m
      storage:
        - name: openshift
    openshift:
      version: 1.0
      metrics:
        enabled: false
        secret: <secret>

  3. Create a ConfigMap holding the content of each file in this directory:

    $ oc create configmap registry-config \
        --from-file=</path/to/custom/registry/config.yml>/
  4. Add the registry-config ConfigMap as a volume to the registry’s deployment configuration to mount the custom configuration file at /etc/docker/registry/:

    $ oc volume dc/docker-registry --add --type=configmap \
        --configmap-name=registry-config -m /etc/docker/registry/
  5. Update the registry to reference the configuration path from the previous step by adding the following environment variable to the registry’s deployment configuration:

    $ oc set env dc/docker-registry \
        REGISTRY_CONFIGURATION_PATH=/etc/docker/registry/config.yml

This may be performed as an iterative process to achieve the desired configuration. For example, during troubleshooting, the configuration may be temporarily updated to put it in debug mode.

To update an existing configuration:

Warning

This procedure will overwrite the currently deployed registry configuration.

  1. Edit the local registry configuration file, config.yml.
  2. Delete the registry-config configmap:

    $ oc delete configmap registry-config
  3. Recreate the configmap to reference the updated configuration file:

    $ oc create configmap registry-config \
        --from-file=</path/to/custom/registry/config.yml>/
  4. Redeploy the registry to read the updated configuration:

    $ oc rollout latest docker-registry
Tip

Maintain configuration files in a source control repository.

2.5.5. Registry Configuration Reference

There are many configuration options available in the upstream docker distribution library. Not all configuration options are supported or enabled. Use this section as a reference when overriding the registry configuration.

Note

Upstream configuration options in this file may also be overridden using environment variables. However, the middleware section may not be overridden using environment variables. Learn how to override specific configuration options.

2.5.5.1. Log

Upstream options are supported.

Example:

log:
  level: debug
  formatter: text
  fields:
    service: registry
    environment: staging
2.5.5.2. Hooks

Mail hooks are not supported.

2.5.5.3. Storage

This section lists the supported registry storage drivers. See the Docker registry documentation for more information.

Storage drivers other than the local filesystem, such as cloud object storage back-ends, are configured in the registry’s configuration file. General registry storage configuration options are supported; see the Docker registry documentation for more information.

Storage backed by persistent volumes is configured through the filesystem driver.

Note

For more information on supported persistent storage drivers, see Configuring Persistent Storage and Persistent Storage Examples.

General Storage Configuration Options

storage:
  delete:
    enabled: true 1
  redirect:
    disable: false
  cache:
    blobdescriptor: inmemory
  maintenance:
    uploadpurging:
      enabled: true
      age: 168h
      interval: 24h
      dryrun: false
    readonly:
      enabled: false

1
This entry is mandatory for image pruning to work properly.
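
The same setting can also be applied without editing the configuration file by using the corresponding upstream environment variable on the deployment configuration, for example:

$ oc set env dc/docker-registry REGISTRY_STORAGE_DELETE_ENABLED=true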
2.5.5.4. Auth

Auth options should not be altered. The openshift extension is the only supported option.

auth:
  openshift:
    realm: openshift
2.5.5.5. Middleware

The repository middleware extension allows you to configure the OpenShift Container Platform middleware responsible for interaction with OpenShift Container Platform and for image proxying.

middleware:
  registry:
    - name: openshift 1
  repository:
    - name: openshift 2
      options:
        acceptschema2: true 3
        pullthrough: true 4
        mirrorpullthrough: true 5
        enforcequota: false 6
        projectcachettl: 1m 7
        blobrepositorycachettl: 10m 8
  storage:
    - name: openshift 9
1 2 9
These entries are mandatory. Their presence ensures required components are loaded. These values should not be changed.
3
Allows you to store manifest schema v2 during a push to the registry. See below for more details.
4
Allows the registry to act as a proxy for remote blobs. See below for more details.
5
Allows the registry to cache blobs served from remote registries for fast access later. Mirroring starts when the blob is accessed for the first time. The option has no effect if pullthrough is disabled.
6
Prevents blob uploads from exceeding the size limits defined in the targeted project.
7
An expiration timeout for limits cached in the registry. The lower the value, the less time it takes for the limit changes to propagate to the registry. However, the registry will query limits from the server more frequently and, as a consequence, pushes will be slower.
8
An expiration timeout for remembered associations between a blob and a repository. The higher the value, the higher the probability of a fast lookup and more efficient registry operation. On the other hand, memory usage rises, as does the risk of serving an image layer to a user who is no longer authorized to access it.
2.5.5.5.1. S3 Driver Configuration

If you want to use an S3 region that is not supported by the integrated registry you are using, you can specify a regionendpoint to avoid the region validation error.

For more information about using Amazon Simple Storage Service storage, see Amazon S3 as a Storage Back-end.

For example:

version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    blobdescriptor: inmemory
  delete:
    enabled: true
  s3:
    accesskey: BJKMSZBRESWJQXRWMAEQ
    secretkey: 5ah5I91SNXbeoUXXDasFtadRqOdy62JzlnOW1goS
    bucket: docker.myregistry.com
    region: eu-west-3
    regionendpoint: https://s3.eu-west-3.amazonaws.com
auth:
  openshift:
    realm: openshift
middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
  storage:
    - name: openshift
Note

Verify that the region and regionendpoint fields are consistent with each other. Otherwise, the integrated registry starts, but it cannot read or write anything to the S3 storage.

The regionendpoint field can also be useful if you use S3-compatible storage other than Amazon S3.

2.5.5.5.2. CloudFront Middleware

The CloudFront middleware extension can be added to support the AWS CloudFront CDN storage provider. CloudFront middleware speeds up distribution of image content internationally. The blobs are distributed to several edge locations around the world, and the client is always directed to the edge with the lowest latency.

Note

The CloudFront middleware extension can only be used with S3 storage. It is utilized only during blob serving. Therefore, only blob downloads can be sped up, not uploads.

The following is an example of minimal configuration of S3 storage driver with a CloudFront middleware:

version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    blobdescriptor: inmemory
  delete:
    enabled: true
  s3: 1
    accesskey: BJKMSZBRESWJQXRWMAEQ
    secretkey: 5ah5I91SNXbeoUXXDasFtadRqOdy62JzlnOW1goS
    region: us-east-1
    bucket: docker.myregistry.com
auth:
  openshift:
    realm: openshift
middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
  storage:
    - name: cloudfront 2
      options:
        baseurl: https://jrpbyn0k5k88bi.cloudfront.net/ 3
        privatekey: /etc/docker/cloudfront-ABCEDFGHIJKLMNOPQRST.pem 4
        keypairid: ABCEDFGHIJKLMNOPQRST 5
    - name: openshift
1
The S3 storage must be configured the same way regardless of whether the CloudFront middleware is used.
2
The CloudFront storage middleware needs to be listed before the OpenShift middleware.
3
The CloudFront base URL. In the AWS management console, this is listed as Domain Name of CloudFront distribution.
4
The location of your AWS private key on the filesystem. This must not be confused with your Amazon EC2 key pair. See the AWS documentation on creating CloudFront key pairs for your trusted signers. The file needs to be mounted as a secret into the registry pod.
5
The ID of your Cloudfront key pair.
2.5.5.5.3. Overriding Middleware Configuration Options

The middleware section cannot be overridden using environment variables. There are a few exceptions, however. For example:

middleware:
  repository:
    - name: openshift
      options:
        acceptschema2: true 1
        pullthrough: true 2
        mirrorpullthrough: true 3
        enforcequota: false 4
        projectcachettl: 1m 5
        blobrepositorycachettl: 10m 6
1
A configuration option that can be overridden by the boolean environment variable REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ACCEPTSCHEMA2, which allows for the ability to accept manifest schema v2 on manifest put requests. Recognized values are true and false (which applies to all the other boolean variables below).
2
A configuration option that can be overridden by the boolean environment variable REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_PULLTHROUGH, which enables a proxy mode for remote repositories.
3
A configuration option that can be overridden by the boolean environment variable REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_MIRRORPULLTHROUGH, which instructs registry to mirror blobs locally if serving remote blobs.
4
A configuration option that can be overridden by the boolean environment variable REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA, which allows the ability to turn quota enforcement on or off. By default, quota enforcement is off.
5
A configuration option that can be overridden by the environment variable REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_PROJECTCACHETTL, specifying an eviction timeout for project quota objects. It takes a valid time duration string (for example, 2m). If empty, you get the default timeout. If zero (0m), caching is disabled.
6
A configuration option that can be overridden by the environment variable REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_BLOBREPOSITORYCACHETTL, specifying an eviction timeout for associations between blob and containing repository. The format of the value is the same as in projectcachettl case.
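
For example, quota enforcement can be toggled with the environment variable described in callout 4. This triggers a new deployment of the registry:

$ oc set env dc/docker-registry \
    REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA=true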
2.5.5.5.4. Image Pullthrough

If enabled, the registry will attempt to fetch a requested blob from a remote registry unless the blob exists locally. The remote candidates are calculated from the DockerImage entries stored in the status of the image stream that the client pulls from. All the unique remote registry references in such entries are tried in turn until the blob is found.

Pullthrough will only occur if an image stream tag exists for the image being pulled. For example, if the image being pulled is docker-registry.default.svc:5000/yourproject/yourimage:prod then the registry will look for an image stream tag named yourimage:prod in the project yourproject. If it finds one, it will attempt to pull the image using the dockerImageReference associated with that image stream tag.

When performing pullthrough, the registry uses the pull credentials found in the project associated with the image stream tag that is being referenced. This capability also makes it possible for you to pull images that reside on a registry you do not have credentials to access, as long as you have access to the image stream tag that references the image.

You must ensure that your registry has appropriate certificates to trust any external registries you do a pullthrough against. The certificates need to be placed in the /etc/pki/tls/certs directory on the pod. You can mount the certificates using a configuration map or secret. Note that the entire /etc/pki/tls/certs directory must be replaced. You must include the new certificates and replace the system certificates in your secret or configuration map that you mount.
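
As a sketch only (the configuration map name registry-extra-ca and the bundle file name are hypothetical, and the bundle must also contain the system CA certificates because the whole directory is replaced), the certificates could be mounted like this:

$ oc create configmap registry-extra-ca -n default \
    --from-file=ca-bundle.crt=/path/to/ca-bundle.crt
$ oc set volume dc/docker-registry -n default --add --type=configmap \
    --configmap-name=registry-extra-ca \
    --mount-path=/etc/pki/tls/certs --name=registry-extra-ca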

Note that by default, image stream tags use a reference policy type of Source, which means that when the image stream reference is resolved to an image pull specification, the specification used points to the source of the image. For images hosted on external registries, the specification points to the external registry, and as a result the resource references and pulls the image from the external registry (for example, registry.access.redhat.com/openshift3/jenkins-2-rhel7), so pullthrough does not apply. To ensure that resources referencing image streams use a pull specification that points to the internal registry, the image stream tag should use a reference policy type of Local. More information is available on Reference Policy.
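
As an illustrative sketch (the image stream tag jenkins:latest and the project myproject are hypothetical), a tag can be created with the Local reference policy using the --reference-policy option of oc tag:

$ oc tag --source=docker registry.access.redhat.com/openshift3/jenkins-2-rhel7:latest \
    myproject/jenkins:latest --reference-policy=local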

Pullthrough is enabled by default. However, it can be disabled using a configuration option.

By default, all the remote blobs served this way are stored locally for subsequent faster access unless mirrorpullthrough is disabled. The downside of this mirroring feature is increased storage usage.

Note

The mirroring starts when a client tries to fetch at least a single byte of the blob. To pre-fetch a particular image into the integrated registry before it is actually needed, you can run the following command:

$ oc get imagestreamtag/${IS}:${TAG} -o jsonpath='{ .image.dockerImageLayers[*].name }' | \
  xargs -n1 -I {} curl -H "Range: bytes=0-1" -u user:${TOKEN} \
  http://${REGISTRY_IP}:${PORT}/v2/default/mysql/blobs/{}
Note

This OpenShift Container Platform mirroring feature should not be confused with the upstream registry pull through cache feature, which is a similar but distinct capability.

2.5.5.5.5. Manifest Schema v2 Support

Each image has a manifest describing its blobs, instructions for running it, and additional metadata. The manifest is versioned, with each version having a different structure and fields as it evolves over time. The same image can be represented by multiple manifest versions. Each version has a different digest, however.

The registry currently supports manifest v2 schema 1 (schema1) and manifest v2 schema 2 (schema2). The former is being obsoleted but will be supported for an extended amount of time.

You should be wary of compatibility issues with various Docker clients:

  • Docker clients of version 1.9 or older support only schema1. Any manifest this client pulls or pushes will be of this legacy schema.
  • Docker clients of version 1.10 support both schema1 and schema2. By default, they push the latter to the registry if it supports the newer schema.

An image stored in the registry with schema1 is always returned unchanged to the client. Schema2 is transferred unchanged only to newer Docker clients. For older clients, it is converted on-the-fly to schema1.

This has significant consequences. For example, an image pushed to the registry by a newer Docker client cannot be pulled by digest by an older Docker client, because the stored image’s manifest is of schema2 and its digest can be used to pull only this version of the manifest.

For this reason, the registry is configured by default not to store schema2. This ensures that any Docker client can pull any image pushed to the registry, regardless of the client’s version.

Once you are confident that all the registry clients support schema2, you can safely enable its support in the registry. See the middleware configuration reference above for the particular option.

2.5.5.6. OpenShift

This section reviews the configuration of global settings for features specific to OpenShift Container Platform. In a future release, openshift-related settings in the Middleware section will be obsoleted.

Currently, this section allows you to configure registry metrics collection:

openshift:
  version: 1.0 1
  server:
    addr: docker-registry.default.svc 2
  metrics:
    enabled: false 3
    secret: <secret> 4
  requests:
    read:
      maxrunning: 10 5
      maxinqueue: 10 6
      maxwaitinqueue: 2m 7
    write:
      maxrunning: 10 8
      maxinqueue: 10 9
      maxwaitinqueue: 2m 10
1
A mandatory entry specifying configuration version of this section. The only supported value is 1.0.
2
The hostname of the registry. Should be set to the same value configured on the master. It can be overridden by the environment variable REGISTRY_OPENSHIFT_SERVER_ADDR.
3
Can be set to true to enable metrics collection. It can be overridden by the boolean environment variable REGISTRY_OPENSHIFT_METRICS_ENABLED.
4
A secret used to authorize client requests. Metrics clients must use it as a bearer token in Authorization header. It can be overridden by the environment variable REGISTRY_OPENSHIFT_METRICS_SECRET.
5
Maximum number of simultaneous pull requests. It can be overridden by the environment variable REGISTRY_OPENSHIFT_REQUESTS_READ_MAXRUNNING. Zero indicates no limit.
6
Maximum number of queued pull requests. It can be overridden by the environment variable REGISTRY_OPENSHIFT_REQUESTS_READ_MAXINQUEUE. Zero indicates no limit.
7
Maximum time a pull request can wait in the queue before being rejected. It can be overridden by the environment variable REGISTRY_OPENSHIFT_REQUESTS_READ_MAXWAITINQUEUE. Zero indicates no limit.
8
Maximum number of simultaneous push requests. It can be overridden by the environment variable REGISTRY_OPENSHIFT_REQUESTS_WRITE_MAXRUNNING. Zero indicates no limit.
9
Maximum number of queued push requests. It can be overridden by the environment variable REGISTRY_OPENSHIFT_REQUESTS_WRITE_MAXINQUEUE. Zero indicates no limit.
10
Maximum time a push request can wait in the queue before being rejected. It can be overridden by the environment variable REGISTRY_OPENSHIFT_REQUESTS_WRITE_MAXWAITINQUEUE. Zero indicates no limit.
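
As a minimal sketch (again assuming the dc/docker-registry deployment configuration in the default project), metrics collection can be enabled with the environment variables listed above instead of editing the configuration file:

$ oc set env dc/docker-registry -n default \
    REGISTRY_OPENSHIFT_METRICS_ENABLED=true \
    REGISTRY_OPENSHIFT_METRICS_SECRET=<secret>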

See Accessing Registry Metrics for usage information.

2.5.5.7. Reporting

Reporting is unsupported.

2.5.5.8. HTTP

Upstream options are supported. Learn how to alter these settings via environment variables. Only the tls section should be altered. For example:

http:
  addr: :5000
  tls:
    certificate: /etc/secrets/registry.crt
    key: /etc/secrets/registry.key
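
As a sketch (relying on the upstream convention of mapping configuration keys to REGISTRY_-prefixed environment variables), the same tls settings can be supplied on the deployment configuration instead of in the configuration file:

$ oc set env dc/docker-registry -n default \
    REGISTRY_HTTP_TLS_CERTIFICATE=/etc/secrets/registry.crt \
    REGISTRY_HTTP_TLS_KEY=/etc/secrets/registry.key
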
2.5.5.9. Notifications

Upstream options are supported. The REST API Reference provides more comprehensive integration options.

Example:

notifications:
  endpoints:
    - name: registry
      disabled: false
      url: https://url:port/path
      headers:
        Accept:
          - text/plain
      timeout: 500
      threshold: 5
      backoff: 1000
2.5.5.10. Redis

Redis is not supported.

2.5.5.11. Health

Upstream options are supported. The registry deployment configuration provides an integrated health check at /healthz.
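
As a quick check, assuming the registry service is reachable at docker-registry.default.svc:5000 over TLS, the endpoint can be probed directly:

$ curl -k https://docker-registry.default.svc:5000/healthz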

2.5.5.12. Proxy

Proxy configuration should not be enabled. This functionality is provided by the OpenShift Container Platform repository middleware extension, pullthrough: true.

2.5.5.13. Cache

The integrated registry actively caches data to reduce the number of calls to slow external resources. There are two caches:

  1. The storage cache, which is used to cache blob metadata. This cache does not have an expiration time; the data remains until it is explicitly deleted.
  2. The application cache, which contains associations between blobs and repositories. The data in this cache has an expiration time.

In order to completely turn off the cache, you need to change the configuration:

version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache: {} 1
openshift:
  version: 1.0
  cache:
    disabled: true 2
    blobrepositoryttl: 10m
1
Disables cache of metadata accessed in the storage backend. Without this cache, the registry server will constantly access the backend for metadata.
2
Disables the cache that contains the blob and repository associations. Without this cache, the registry server continually re-queries the data from the master API and recomputes the associations.

2.6. Known Issues

2.6.1. Overview

The following are the known issues when deploying or using the integrated registry.

2.6.2. Concurrent Build with Registry Pull-through

The local docker-registry deployment takes on additional load. By default, it now caches content from registry.access.redhat.com. The images from registry.access.redhat.com for STI builds are now stored in the local registry. Attempts to pull them result in pulls from the local docker-registry. As a result, there are circumstances where extreme numbers of concurrent builds can result in timeouts for the pulls and the build can possibly fail. To alleviate the issue, scale the docker-registry deployment to more than one replica. Check for timeouts in the builder pod’s logs.
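
For example, assuming the integrated registry runs as the docker-registry deployment configuration in the default project, it can be scaled to two replicas as follows:

$ oc scale dc/docker-registry -n default --replicas=2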

2.6.3. Image Push Errors with Scaled Registry Using Shared NFS Volume

When using a scaled registry with a shared NFS volume, you may see one of the following errors during the push of an image:

  • digest invalid: provided digest did not match uploaded content
  • blob upload unknown
  • blob upload invalid

These errors are returned by an internal registry service when Docker attempts to push the image. The cause originates in the synchronization of file attributes across nodes. Factors such as NFS client-side caching, network latency, and layer size can all contribute to potential errors that might occur when pushing an image using the default round-robin load balancing configuration.

You can perform the following steps to minimize the probability of such a failure:

  1. Ensure that the sessionAffinity of your docker-registry service is set to ClientIP:

    $ oc get svc/docker-registry --template='{{.spec.sessionAffinity}}'

    This should return ClientIP, which is the default in recent OpenShift Container Platform versions. If not, change it:

    $ oc patch svc/docker-registry -p '{"spec":{"sessionAffinity": "ClientIP"}}'
  2. Ensure that the NFS export line of your registry volume on your NFS server has the no_wdelay option listed. The no_wdelay option prevents the server from delaying writes, which greatly improves read-after-write consistency, a requirement of the registry.
Important

Testing shows issues with using the RHEL NFS server as a storage backend for the container image registry. This includes the OpenShift Container Registry and Quay. Therefore, using the RHEL NFS server to back PVs used by core services is not recommended.

Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that might have been completed against these OpenShift core components.

2.6.4. Pull of Internally Managed Image Fails with "not found" Error

This error occurs when the pulled image is pushed to an image stream different from the one it is being pulled from. This is caused by re-tagging a built image into an arbitrary image stream:

$ oc tag srcimagestream:latest anyproject/pullimagestream:latest

And subsequently pulling from it, using an image reference such as:

internal.registry.url:5000/anyproject/pullimagestream:latest

During a manual Docker pull, this will produce a similar error:

Error: image anyproject/pullimagestream:latest not found

To prevent this, avoid the tagging of internally managed images completely, or re-push the built image to the desired namespace manually.

2.6.5. Image Push Fails with "500 Internal Server Error" on S3 Storage

Problems have been reported when the registry runs on an S3 storage back end. Pushing to a Docker registry occasionally fails with the following error:

Received unexpected HTTP status: 500 Internal Server Error

To debug this, view the registry logs and look for similar error messages occurring at the time of the failed push:

time="2016-03-30T15:01:21.22287816-04:00" level=error msg="unknown error completing upload: driver.Error{DriverName:\"s3\", Enclosed:(*url.Error)(0xc20901cea0)}" http.request.method=PUT
...
time="2016-03-30T15:01:21.493067808-04:00" level=error msg="response completed with error" err.code=UNKNOWN err.detail="s3: Put https://s3.amazonaws.com/oso-tsi-docker/registry/docker/registry/v2/blobs/sha256/ab/abe5af443833d60cf672e2ac57589410dddec060ed725d3e676f1865af63d2e2/data: EOF" err.message="unknown error" http.request.method=PUT
...
time="2016-04-02T07:01:46.056520049-04:00" level=error msg="error putting into main store: s3: The request signature we calculated does not match the signature you provided. Check your key and signing method." http.request.method=PUT

If you see such errors, contact your Amazon S3 support. There may be a problem in your region or with your particular bucket.

2.6.6. Image Pruning Fails

If you encounter the following error when pruning images:

BLOB sha256:49638d540b2b62f3b01c388e9d8134c55493b1fa659ed84e97cb59b87a6b8e6c error deleting blob

And your registry log contains the following information:

error deleting blob \"sha256:49638d540b2b62f3b01c388e9d8134c55493b1fa659ed84e97cb59b87a6b8e6c\": operation unsupported

It means that your custom configuration file lacks mandatory entries in the storage section, namely storage:delete:enabled set to true. Add them, re-deploy the registry, and repeat your image pruning operation.
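
A minimal sketch of the relevant stanza in the registry configuration file:

storage:
  delete:
    enabled: true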

Chapter 3. Setting up a Router

3.1. Router Overview

3.1.1. About Routers

There are many ways to get traffic into the cluster. The most common approach is to use the OpenShift Container Platform router as the ingress point for external traffic destined for services in your OpenShift Container Platform installation.

OpenShift Container Platform provides and supports the following router plug-ins:

  • The HAProxy template router is the default plug-in. It uses the openshift3/ose-haproxy-router image to run an HAProxy instance alongside the template router plug-in inside a container on OpenShift Container Platform. It currently supports HTTP(S) traffic and TLS-enabled traffic via SNI. The router’s container listens on the host network interface, unlike most containers that listen only on private IPs. The router proxies external requests for route names to the IPs of actual pods identified by the service associated with the route.
  • The F5 router integrates with an existing F5 BIG-IP® system in your environment to synchronize routes. F5 BIG-IP® version 11.4 or newer is required in order to have the F5 iControl REST API.
Note

The F5 router plug-in is available starting in OpenShift Container Platform 3.0.2.

3.1.2. Router Service Account

Before deploying an OpenShift Container Platform cluster, you must have a service account for the router, which is automatically created during cluster installation. This service account has permissions to a security context constraint (SCC) that allows it to specify host ports.

3.1.2.1. Permission to Access Labels

When namespace labels are used, for example in creating router shards, the service account for the router must have cluster-reader permission.

$ oc adm policy add-cluster-role-to-user \
    cluster-reader \
    system:serviceaccount:default:router

With a service account in place, you can proceed to installing a default HAProxy Router, a customized HAProxy Router or F5 Router.

3.2. Using the Default HAProxy Router

3.2.1. Overview

The oc adm router command is provided with the administrator CLI to simplify the tasks of setting up routers in a new installation. The oc adm router command creates the service and deployment configuration objects. Use the --service-account option to specify the service account the router will use to contact the master.

The router service account can be created in advance or created by the oc adm router --service-account command.

Every form of communication between OpenShift Container Platform components is secured by TLS and uses various certificates and authentication methods. A .pem format file can be supplied with the --default-certificate option, or one is created by the oc adm router command. When routes are created, the user can provide route certificates that the router will use when handling the route.

Important

When deleting a router, ensure the deployment configuration, service, and secret are deleted as well.

Routers are deployed on specific nodes. This makes it easier for the cluster administrator and external network manager to coordinate which IP address will run a router and which traffic the router will handle. The routers are deployed on specific nodes by using node selectors.

Important

Routers use host networking by default, and they directly attach to port 80 and 443 on all interfaces on a host. Restrict routers to hosts where ports 80/443 are available and not being consumed by another service, and set this using node selectors and the scheduler configuration. As an example, you can achieve this by dedicating infrastructure nodes to run services such as routers.

Important

It is recommended to use a separate, distinct openshift-router service account with your router. This can be provided using the --service-account flag to the oc adm router command.

$ oc adm router --dry-run --service-account=router 1
1
--service-account is the name of a service account for the openshift-router.
Important

Router pods created using oc adm router have default resource requests that a node must satisfy for the router pod to be deployed. In an effort to increase the reliability of infrastructure components, the default resource requests are used to increase the QoS tier of the router pods above pods without resource requests. The default values represent the observed minimum resources required for a basic router to be deployed and can be edited in the router’s deployment configuration; you may want to increase them based on the load of the router.

3.2.2. Creating a Router

If the router does not exist, run the following to create a router:

$ oc adm router <router_name> --replicas=<number> --service-account=router

--replicas is usually 1 unless a high availability configuration is being created.

To find the host IP address of the router:

$ oc get po <router-pod>  --template={{.status.hostIP}}

You can also use router shards to ensure that the router is filtered to specific namespaces or routes, or set any environment variables after router creation. In this case create a router for each shard.

3.2.3. Other Basic Router Commands

Checking the Default Router
The default router service account, named router, is automatically created during cluster installations. To verify that this account already exists:
$ oc adm router --dry-run --service-account=router
Viewing the Default Router
To see what the default router would look like if created:
$ oc adm router --dry-run -o yaml --service-account=router
Deploying the Router to a Labeled Node
To deploy the router to any node(s) that match a specified node label:
$ oc adm router <router_name> --replicas=<number> --selector=<label> \
    --service-account=router

For example, if you want to create a router named router and have it placed on a node labeled with node-role.kubernetes.io/infra=true:

$ oc adm router router --replicas=1 --selector='node-role.kubernetes.io/infra=true' \
  --service-account=router

During cluster installation, the openshift_router_selector and openshift_registry_selector Ansible settings are set to node-role.kubernetes.io/infra=true by default. The default router and registry will only be automatically deployed if a node exists that matches the node-role.kubernetes.io/infra=true label.

For information on updating labels, see Updating Labels on Nodes.

Multiple instances are created on different hosts according to the scheduler policy.

Using a Different Router Image
To use a different router image and view the router configuration that would be used:
$ oc adm router <router_name> -o <format> --images=<image> \
    --service-account=router

For example:

$ oc adm router region-west -o yaml --images=myrepo/somerouter:mytag \
    --service-account=router

3.2.4. Filtering Routes to Specific Routers

Using the ROUTE_LABELS environment variable, you can filter routes so that they are used only by specific routers.

For example, if you have multiple routers, and 100 routes, you can attach labels to the routes so that a portion of them are handled by one router, whereas the rest are handled by another.

  1. After creating a router, use the ROUTE_LABELS environment variable to tag the router:

    $ oc env dc/<router_name> ROUTE_LABELS="key=value"
  2. Add the label to the desired routes:

    $ oc label route <route_name> key=value
  3. To verify that the label has been attached to the route, check the route configuration:

    $ oc describe route/<route_name>
Setting the Maximum Number of Concurrent Connections
The router can handle a maximum of 20000 connections by default. You can change that limit depending on your needs. Having too few connections prevents the health check from working, which causes unnecessary restarts. You need to configure the system to support the maximum number of connections. The limits shown in 'sysctl fs.nr_open' and 'sysctl fs.file-max' must be large enough; otherwise, HAProxy will not start.

When the router is created, the --max-connections= option sets the desired limit:

$ oc adm router --max-connections=10000   ....

Edit the ROUTER_MAX_CONNECTIONS environment variable in the router’s deployment configuration to change the value. The router pods are restarted with the new value. If ROUTER_MAX_CONNECTIONS is not present, the default value of 20000 is used.

Note

A connection includes the frontend and the internal backend, so it counts as two connections. Be sure to set ROUTER_MAX_CONNECTIONS to double the number of connections you intend to create.
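
As a sketch, first confirm that the kernel limits on the router nodes are high enough, then raise the router limit on an existing deployment:

$ sysctl fs.nr_open fs.file-max
$ oc set env dc/router ROUTER_MAX_CONNECTIONS=40000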

3.2.5. HAProxy Strict SNI

The HAProxy strict-sni can be controlled through the ROUTER_STRICT_SNI environment variable in the router’s deployment configuration. It can also be set when the router is created by using the --strict-sni command line option.

$ oc adm router --strict-sni

3.2.6. TLS Cipher Suites

Set the router cipher suite using the --ciphers option when creating a router:

$ oc adm router --ciphers=modern   ....

The values are: modern, intermediate, or old, with intermediate as the default. Alternatively, a set of ":" separated ciphers can be provided. The ciphers must be from the set displayed by:

$ openssl ciphers

Alternatively, use the ROUTER_CIPHERS environment variable for an existing router.
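
For example, to switch an existing router to the modern cipher profile (the router pods are redeployed when the environment variable changes):

$ oc set env dc/router ROUTER_CIPHERS=modern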

3.2.7. Highly-Available Routers

You can set up a highly-available router on your OpenShift Container Platform cluster using IP failover. This setup has multiple replicas on different nodes so the failover software can switch to another replica if the current one fails.

3.2.8. Customizing the Router Service Ports

You can customize the service ports that a template router binds to by setting the environment variables ROUTER_SERVICE_HTTP_PORT and ROUTER_SERVICE_HTTPS_PORT. This can be done by creating a template router, then editing its deployment configuration.

The following example creates a router deployment with 0 replicas and customizes the router service HTTP and HTTPS ports, then scales it appropriately (to 1 replica).

$ oc adm router --replicas=0 --ports='10080:10080,10443:10443' 1
$ oc set env dc/router ROUTER_SERVICE_HTTP_PORT=10080  \
                   ROUTER_SERVICE_HTTPS_PORT=10443
$ oc scale dc/router --replicas=1
1
Ensures exposed ports are appropriately set for routers that use the container networking mode --host-network=false.
Important

If you do customize the template router service ports, you will also need to ensure that the nodes where the router pods run have those custom ports opened in the firewall (either via Ansible or iptables, or any other custom method that you use via firewall-cmd).

The following is an example using iptables to open the custom router service ports.

$ iptables -A INPUT -p tcp --dport 10080 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 10443 -j ACCEPT

3.2.9. Working With Multiple Routers

An administrator can create multiple routers with the same definition to serve the same set of routes. Each router will be on a different node and will have a different IP address. The network administrator will need to get the desired traffic to each node.

Multiple routers can be grouped to distribute routing load in the cluster and separate tenants to different routers or shards. Each router or shard in the group admits routes based on the selectors in the router. An administrator can create shards over the whole cluster using ROUTE_LABELS. A user can create shards over a namespace (project) by using NAMESPACE_LABELS.

3.2.10. Adding a Node Selector to a Deployment Configuration

Making specific routers deploy on specific nodes requires two steps:

  1. Add a label to the desired node:

    $ oc label node 10.254.254.28 "router=first"
  2. Add a node selector to the router deployment configuration:

    $ oc edit dc <deploymentConfigName>

    Add the template.spec.nodeSelector field with a key and value corresponding to the label:

    ...
      template:
        metadata:
          creationTimestamp: null
          labels:
            router: router1
        spec:
          nodeSelector:      1
            router: "first"
    ...
    1
    The key and value are router and first, respectively, corresponding to the router=first label.

3.2.11. Using Router Shards

Router sharding uses NAMESPACE_LABELS and ROUTE_LABELS to filter router namespaces and routes. This enables you to distribute subsets of routes over multiple router deployments. By using non-overlapping subsets, you can effectively partition the set of routes. Alternatively, you can define shards comprising overlapping subsets of routes.

By default, a router selects all routes from all projects (namespaces). Sharding involves adding labels to routes or namespaces and label selectors to routers. Each router shard comprises the routes that are selected by a specific set of label selectors or belong to the namespaces that are selected by a specific set of label selectors.

Note

The router service account must have the cluster-reader permission to allow access to labels in other namespaces.

Router Sharding and DNS

Because an external DNS server is needed to route requests to the desired shard, the administrator is responsible for making a separate DNS entry for each router in a project. A router will not forward unknown routes to another router.

Consider the following example:

  • Router A lives on host 192.168.0.5 and has routes with *.foo.com.
  • Router B lives on host 192.168.1.9 and has routes with *.example.com.

Separate DNS entries must resolve *.foo.com to the node hosting Router A and *.example.com to the node hosting Router B:

  • *.foo.com IN A 192.168.0.5
  • *.example.com IN A 192.168.1.9

Router Sharding Examples

This section describes router sharding using namespace and route labels.

Figure 3.1. Router Sharding Based on Namespace Labels

Router Sharding Based on Namespace Labels
  1. Configure a router with a namespace label selector:

    $ oc set env dc/router NAMESPACE_LABELS="router=r1"
  2. Because the router has a selector on the namespace, the router will handle routes only for matching namespaces. In order to make this selector match a namespace, label the namespace accordingly:

    $ oc label namespace default "router=r1"
  3. Now, if you create a route in the default namespace, the route is available in the default router:

    $ oc create -f route1.yaml
  4. Create a new project (namespace) and create a route, route2:

    $ oc new-project p1
    $ oc create -f route2.yaml

    Notice the route is not available in your router.

  5. Label namespace p1 with router=r1:

    $ oc label namespace p1 "router=r1"

Adding this label makes the route available in the router.

Example

A router deployment finops-router is configured with the label selector NAMESPACE_LABELS="name in (finance, ops)", and a router deployment dev-router is configured with the label selector NAMESPACE_LABELS="name=dev".

If all routes are in namespaces labeled name=finance, name=ops, and name=dev, then this configuration effectively distributes your routes between the two router deployments.

In the above scenario, sharding becomes a special case of partitioning, with no overlapping subsets. Routes are divided between router shards.

The criteria for route selection govern how the routes are distributed. It is possible to have overlapping subsets of routes across router deployments.

Example

In addition to finops-router and dev-router in the example above, you also have devops-router, which is configured with a label selector NAMESPACE_LABELS="name in (dev, ops)".

The routes in namespaces labeled name=dev or name=ops now are serviced by two different router deployments. This becomes a case in which you have defined overlapping subsets of routes, as illustrated in the procedure in Router Sharding Based on Namespace Labels.

In addition, this enables you to create more complex routing rules, allowing the diversion of higher priority traffic to the dedicated finops-router while sending lower priority traffic to devops-router.

Router Sharding Based on Route Labels

NAMESPACE_LABELS allows filtering of the projects to service and selecting all the routes from those projects, but you may want to partition routes based on other criteria associated with the routes themselves. The ROUTE_LABELS selector allows you to slice-and-dice the routes themselves.

Example

A router deployment prod-router is configured with the label selector ROUTE_LABELS="mydeployment=prod", and a router deployment devtest-router is configured with the label selector ROUTE_LABELS="mydeployment in (dev, test)".

This configuration partitions routes between the two router deployments according to the routes' labels, irrespective of their namespaces.

The example assumes you have all the routes you want to be serviced tagged with a label "mydeployment=<tag>".

3.2.11.1. Creating Router Shards

This section describes an advanced example of router sharding. Suppose there are 26 routes, named a — z, with various labels:

Possible labels on routes

sla=high       geo=east     hw=modest     dept=finance
sla=medium     geo=west     hw=strong     dept=dev
sla=low                                   dept=ops

These labels express concepts including service level agreement, geographical location, hardware requirements, and department. The routes can have at most one label from each column. Some routes may have other labels or no labels at all.

Name(s)    SLA       Geo     HW       Dept      Other Labels
a          high      east    modest   finance   type=static
b                    west    strong             type=dynamic
c, d, e    low               modest             type=static
g — k      medium            strong   dev
l — s      high              modest   ops
t — z                west                       type=dynamic

Here is a convenience script mkshard that illustrates how oc adm router, oc set env, and oc scale can be used together to make a router shard.

#!/bin/bash
# Usage: mkshard ID SELECTION-EXPRESSION
id=$1
sel="$2"
router=router-shard-$id           1
oc adm router $router --replicas=0  2
dc=dc/router-shard-$id            3
oc set env   $dc ROUTE_LABELS="$sel"  4
oc scale $dc --replicas=3         5
1
The created router has name router-shard-<id>.
2
Specify no scaling for now.
3
The deployment configuration for the router.
4
Set the selection expression using oc set env. The selection expression is the value of the ROUTE_LABELS environment variable.
5
Scale it up.

Running mkshard several times creates several routers:

Router            Selection Expression    Routes
router-shard-1    sla=high                a, l — s
router-shard-2    geo=west                b, t — z
router-shard-3    dept=dev                g — k

3.2.11.2. Modifying Router Shards

Because a router shard is a construct based on labels, you can modify either the labels (via oc label) or the selection expression (via oc set env).

This section extends the example started in the Creating Router Shards section, demonstrating how to change the selection expression.

Here is a convenience script modshard that modifies an existing router to use a new selection expression:

#!/bin/bash
# Usage: modshard ID SELECTION-EXPRESSION...
id=$1
shift
router=router-shard-$id       1
dc=dc/$router                 2
oc scale $dc --replicas=0     3
oc set env   $dc "$@"             4
oc scale $dc --replicas=3     5
1
The modified router has name router-shard-<id>.
2
The deployment configuration where the modifications occur.
3
Scale it down.
4
Set the new selection expression using oc set env. Unlike mkshard from the Creating Router Shards section, the selection expression specified as the non-ID arguments to modshard must include the environment variable name as well as its value.
5
Scale it back up.
Note

In modshard, the oc scale commands are not necessary if the deployment strategy for router-shard-<id> is Rolling.

For example, to expand the department for router-shard-3 to include ops as well as dev:

$ modshard 3 ROUTE_LABELS='dept in (dev, ops)'

The result is that router-shard-3 now selects routes g — s (the combined sets of g — k and l — s).

This example takes advantage of the fact that there are only three departments in this scenario, and specifies a department to leave out of the shard, thus achieving the same result as the preceding command:

$ modshard 3 ROUTE_LABELS='dept != finance'

This example specifies three comma-separated labels, all of which must match, and results in only route b being selected:

$ modshard 3 ROUTE_LABELS='hw=strong,type=dynamic,geo=west'

Similarly to ROUTE_LABELS, which involves a route’s labels, you can select routes based on the labels of the route’s namespace using the NAMESPACE_LABELS environment variable. This example modifies router-shard-3 to serve routes whose namespace has the label frequency=weekly:

$ modshard 3 NAMESPACE_LABELS='frequency=weekly'

The last example combines ROUTE_LABELS and NAMESPACE_LABELS to select routes with label sla=low and whose namespace has the label frequency=weekly:

$ modshard 3 \
    NAMESPACE_LABELS='frequency=weekly' \
    ROUTE_LABELS='sla=low'

3.2.12. Finding the Host Name of the Router

When exposing a service, a user can give the route the same host name that external users use to access the application. The network administrator of the external network must make sure the host name resolves to the name of a router that has admitted the route. The user can set up their DNS with a CNAME that points to this host name. However, the user may not know the host name of the router. When it is not known, the cluster administrator can provide it.

The cluster administrator can use the --router-canonical-hostname option with the router’s canonical host name when creating the router. For example:

# oc adm router myrouter --router-canonical-hostname="rtr.example.com"

This creates the ROUTER_CANONICAL_HOSTNAME environment variable in the router’s deployment configuration containing the host name of the router.

For routers that already exist, the cluster administrator can edit the router’s deployment configuration and add the ROUTER_CANONICAL_HOSTNAME environment variable:

spec:
  template:
    spec:
      containers:
        - env:
          - name: ROUTER_CANONICAL_HOSTNAME
            value: rtr.example.com

The ROUTER_CANONICAL_HOSTNAME value is displayed in the route status for all routers that have admitted the route. The route status is refreshed every time the router is reloaded.

When a user creates a route, all of the active routers evaluate the route and, if conditions are met, admit it. When a router that defines the ROUTER_CANONICAL_HOSTNAME environment variable admits the route, the router places the value in the routerCanonicalHostname field in the route status. The user can examine the route status to determine which, if any, routers have admitted the route, select a router from the list, and find the host name of the router to pass along to the network administrator.

status:
  ingress:
    conditions:
      lastTransitionTime: 2016-12-07T15:20:57Z
      status: "True"
      type: Admitted
      host: hello.in.mycloud.com
      routerCanonicalHostname: rtr.example.com
      routerName: myrouter
      wildcardPolicy: None

oc describe includes the host name when available:

$ oc describe route/hello-route3
...
Requested Host: hello.in.mycloud.com exposed on router myrouter (host rtr.example.com) 12 minutes ago

Using the above information, the user can ask the DNS administrator to set up a CNAME from the route’s host, hello.in.mycloud.com, to the router’s canonical hostname, rtr.example.com. This results in any traffic to hello.in.mycloud.com reaching the user’s application.

3.2.13. Customizing the Default Routing Subdomain

You can customize the suffix used as the default routing subdomain for your environment by modifying the master configuration file (the /etc/origin/master/master-config.yaml file by default). Routes that do not specify a host name would have one generated using this default routing subdomain.

The following example shows how you can set the configured suffix to v3.openshift.test:

routingConfig:
  subdomain: v3.openshift.test
Note

This change requires a restart of the master if it is running.

With the OpenShift Container Platform master(s) running the above configuration, the generated host name for a route named no-route-hostname, created without a host name in the namespace mynamespace, would be:

no-route-hostname-mynamespace.v3.openshift.test
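
As a sketch, assuming a service also named no-route-hostname exists in the mynamespace project, you could expose it without specifying a host and inspect the generated host name:

$ oc expose service/no-route-hostname -n mynamespace
$ oc get route/no-route-hostname -n mynamespace -o jsonpath='{.spec.host}'
no-route-hostname-mynamespace.v3.openshift.test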

3.2.14. Forcing Route Host Names to a Custom Routing Subdomain

If an administrator wants to restrict all routes to a specific routing subdomain, they can pass the --force-subdomain option to the oc adm router command. This forces the router to override any host names specified in a route and generate one based on the template provided to the --force-subdomain option.

The following example runs a router, which overrides the route host names using a custom subdomain template ${name}-${namespace}.apps.example.com.

$ oc adm router --force-subdomain='${name}-${namespace}.apps.example.com'

3.2.15. Using Wildcard Certificates

A TLS-enabled route that does not include a certificate uses the router’s default certificate instead. In most cases, this certificate should be provided by a trusted certificate authority, but for convenience you can use the OpenShift Container Platform CA to create the certificate. For example:

$ CA=/etc/origin/master
$ oc adm ca create-server-cert --signer-cert=$CA/ca.crt \
      --signer-key=$CA/ca.key --signer-serial=$CA/ca.serial.txt \
      --hostnames='*.cloudapps.example.com' \
      --cert=cloudapps.crt --key=cloudapps.key
Note

The oc adm ca create-server-cert command generates a certificate that is valid for two years. This can be altered with the --expire-days option, but for security reasons, it is recommended to not make it greater than this value.

Run oc adm commands only from the first master listed in the Ansible host inventory file, by default /etc/ansible/hosts.

The router expects the certificate and key to be in PEM format in a single file:

$ cat cloudapps.crt cloudapps.key $CA/ca.crt > cloudapps.router.pem

From there you can use the --default-cert flag:

$ oc adm router --default-cert=cloudapps.router.pem --service-account=router
Note

Browsers only consider wildcards valid for subdomains one level deep. So in this example, the certificate would be valid for a.cloudapps.example.com but not for a.b.cloudapps.example.com.

3.2.16. Manually Redeploy Certificates

To manually redeploy the router certificates:

  1. Check to see if a secret containing the default router certificate was added to the router:

    $ oc volumes dc/router
    
    deploymentconfigs/router
      secret/router-certs as server-certificate
        mounted at /etc/pki/tls/private

    If the certificate is added, skip the following step and overwrite the secret.

  2. Make sure that the default certificate directory is set in the DEFAULT_CERTIFICATE_DIR environment variable:

    $ oc env dc/router --list
    
    DEFAULT_CERTIFICATE_DIR=/etc/pki/tls/private

    If it is not set, set the variable using the following command:

    $ oc env dc/router DEFAULT_CERTIFICATE_DIR=/etc/pki/tls/private
  3. Combine the key, certificate, and CA certificate into a single PEM file. Write the output to a new file name so that the shell redirection does not truncate one of the input files:

    $ cat custom-router.key custom-router.crt custom-ca.crt > custom-router.pem
  4. Overwrite or create a router certificate secret:

    If the certificate secret was added to the router, overwrite the secret. If not, create a new secret.

    To overwrite the secret, run the following command:

    $ oc create secret generic router-certs --from-file=tls.crt=custom-router.pem --from-file=tls.key=custom-router.key --type=kubernetes.io/tls -o json --dry-run | oc replace -f -

    To create a new secret, run the following commands:

    $ oc create secret generic router-certs --from-file=tls.crt=custom-router.pem --from-file=tls.key=custom-router.key --type=kubernetes.io/tls
    
    $ oc volume dc/router --add --mount-path=/etc/pki/tls/private --secret-name='router-certs' --name router-certs
  5. Deploy the router.

    $ oc rollout latest dc/router

3.2.17. Using Secured Routes

Currently, password-protected key files are not supported. HAProxy prompts for a password upon starting and does not have a way to automate this process. To remove a passphrase from a key file, you can run:

# openssl rsa -in <passwordProtectedKey.key> -out <new.key>

Here is an example of how to use a secure edge terminated route with TLS termination occurring on the router before traffic is proxied to the destination. The secure edge terminated route specifies the TLS certificate and key information. The TLS certificate is served by the router front end.

First, start up a router instance:

# oc adm router --replicas=1 --service-account=router

Next, create a private key, certificate signing request (CSR), and certificate for the edge secured route. The instructions on how to do that are specific to your certificate authority and provider. For a simple self-signed certificate for a domain named www.example.test, see the example shown below:

# sudo openssl genrsa -out example-test.key 2048
#
# sudo openssl req -new -key example-test.key -out example-test.csr  \
  -subj "/C=US/ST=CA/L=Mountain View/O=OS3/OU=Eng/CN=www.example.test"
#
# sudo openssl x509 -req -days 366 -in example-test.csr  \
      -signkey example-test.key -out example-test.crt

Generate a route using the above certificate and key.

$ oc create route edge --service=my-service \
    --hostname=www.example.test \
    --key=example-test.key --cert=example-test.crt
route "my-service" created

Look at its definition.

$ oc get route/my-service -o yaml
apiVersion: v1
kind: Route
metadata:
  name:  my-service
spec:
  host: www.example.test
  to:
    kind: Service
    name: my-service
  tls:
    termination: edge
    key: |
      -----BEGIN PRIVATE KEY-----
      [...]
      -----END PRIVATE KEY-----
    certificate: |
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----

Make sure your DNS entry for www.example.test points to your router instance(s) and that the route to your domain is available. The example below uses curl along with a local resolver to simulate the DNS lookup:

# routerip="4.1.1.1"  #  replace with IP address of one of your router instances.
# curl -k --resolve www.example.test:443:$routerip https://www.example.test/

3.2.18. Using Wildcard Routes (for a Subdomain)

The HAProxy router has support for wildcard routes, which are enabled by setting the ROUTER_ALLOW_WILDCARD_ROUTES environment variable to true. Any routes with a wildcard policy of Subdomain that pass the router admission checks will be serviced by the HAProxy router. Then, the HAProxy router exposes the associated service (for the route) per the route’s wildcard policy.

Important

To change a route’s wildcard policy, you must remove the route and recreate it with the updated wildcard policy. Editing only the route’s wildcard policy in a route’s .yaml file does not work.

$ oc adm router --replicas=0 ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true
$ oc scale dc/router --replicas=1

Learn how to configure the web console for wildcard routes.

Using a Secure Wildcard Edge Terminated Route

This example reflects TLS termination occurring on the router before traffic is proxied to the destination. Traffic sent to any hosts in the subdomain example.org (*.example.org) is proxied to the exposed service.

The secure edge terminated route specifies the TLS certificate and key information. The TLS certificate is served by the router front end for all hosts that match the subdomain (*.example.org).

  1. Start up a router instance:

    $ oc adm router --replicas=0 --service-account=router
    $ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true
    $ oc scale dc/router --replicas=1
  2. Create a private key, certificate signing request (CSR), and certificate for the edge secured route.

    The instructions on how to do this are specific to your certificate authority and provider. For a simple self-signed certificate for a domain named *.example.test, see this example:

    # sudo openssl genrsa -out example-test.key 2048
    #
    # sudo openssl req -new -key example-test.key -out example-test.csr  \
      -subj "/C=US/ST=CA/L=Mountain View/O=OS3/OU=Eng/CN=*.example.test"
    #
    # sudo openssl x509 -req -days 366 -in example-test.csr  \
          -signkey example-test.key -out example-test.crt
  3. Generate a wildcard route using the above certificate and key:

    $ cat > route.yaml  <<REOF
    apiVersion: v1
    kind: Route
    metadata:
      name:  my-service
    spec:
      host: www.example.test
      wildcardPolicy: Subdomain
      to:
        kind: Service
        name: my-service
      tls:
        termination: edge
        key: "$(perl -pe 's/\n/\\n/' example-test.key)"
        certificate: "$(perl -pe 's/\n/\\n/' example-test.crt)"
    REOF
    $ oc create -f route.yaml

    Ensure your DNS entry for *.example.test points to your router instance(s) and the route to your domain is available.

    This example uses curl with a local resolver to simulate the DNS lookup:

    # routerip="4.1.1.1"  #  replace with IP address of one of your router instances.
    # curl -k --resolve www.example.test:443:$routerip https://www.example.test/
    # curl -k --resolve abc.example.test:443:$routerip https://abc.example.test/
    # curl -k --resolve anyname.example.test:443:$routerip https://anyname.example.test/

For routers that allow wildcard routes (ROUTER_ALLOW_WILDCARD_ROUTES set to true), there are some caveats to the ownership of a subdomain associated with a wildcard route.

Prior to wildcard routes, ownership was based on the claims made for a host name, with the namespace that had the oldest route winning against any other claimants. For example, route r1 in namespace ns1 with a claim for one.example.test would win over another route r2 in namespace ns2 for the same host name one.example.test if route r1 was older than route r2.

In addition, routes in other namespaces were allowed to claim non-overlapping hostnames. For example, route rone in namespace ns1 could claim www.example.test and another route rtwo in namespace d2 could claim c3po.example.test.

This is still the case if there are no wildcard routes claiming that same subdomain (example.test in the above example).

However, a wildcard route needs to claim all of the host names within a subdomain (host names of the form *.example.test). A wildcard route’s claim is allowed or denied based on whether or not the oldest route for that subdomain (example.test) is in the same namespace as the wildcard route. The oldest route can be either a regular route or a wildcard route.

For example, if a route eldest already exists in the ns1 namespace and claims a host named owner.example.test, and a new wildcard route wildthing requesting routes in that subdomain (example.test) is added at a later point in time, the claim by the wildcard route will only be allowed if it is in the same namespace (ns1) as the owning route.

The following examples illustrate various scenarios in which claims for wildcard routes will succeed or fail.

In the example below, a router that allows wildcard routes will allow non-overlapping claims for hosts in the subdomain example.test as long as a wildcard route has not claimed a subdomain.

$ oc adm router ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true

$ oc project ns1
$ oc expose service myservice --hostname=owner.example.test
$ oc expose service myservice --hostname=aname.example.test
$ oc expose service myservice --hostname=bname.example.test

$ oc project ns2
$ oc expose service anotherservice --hostname=second.example.test
$ oc expose service anotherservice --hostname=cname.example.test

$ oc project otherns
$ oc expose service thirdservice --hostname=emmy.example.test
$ oc expose service thirdservice --hostname=webby.example.test

In the example below, a router that allows wildcard routes will not allow the claim for owner.example.test or aname.example.test to succeed since the owning namespace is ns1.

$ oc adm router ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true

$ oc project ns1
$ oc expose service myservice --hostname=owner.example.test
$ oc expose service myservice --hostname=aname.example.test

$ oc project ns2
$ oc expose service secondservice --hostname=bname.example.test
$ oc expose service secondservice --hostname=cname.example.test

$ # Router will not allow this claim with a different path name `/p1` as
$ # namespace `ns1` has an older route claiming host `aname.example.test`.
$ oc expose service secondservice --hostname=aname.example.test --path="/p1"

$ # Router will not allow this claim as namespace `ns1` has an older route
$ # claiming host name `owner.example.test`.
$ oc expose service secondservice --hostname=owner.example.test

$ oc project otherns

$ # Router will not allow this claim as namespace `ns1` has an older route
$ # claiming host name `aname.example.test`.
$ oc expose service thirdservice --hostname=aname.example.test

In the example below, a router that allows wildcard routes will allow the claim for *.example.test to succeed since the owning namespace is ns1 and the wildcard route belongs to that same namespace.

$ oc adm router ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true

$ oc project ns1
$ oc expose service myservice --hostname=owner.example.test

$ # Reusing the route.yaml from the previous example.
$ # spec:
$ #   host: www.example.test
$ #   wildcardPolicy: Subdomain

$ oc create -f route.yaml   #  router will allow this claim.

In the example below, a router that allows wildcard routes will not allow the claim for *.example.test to succeed since the owning namespace is ns1 and the wildcard route belongs to another namespace, cyclone.

$ oc adm router ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true

$ oc project ns1
$ oc expose service myservice --hostname=owner.example.test

$ # Switch to a different namespace/project.
$ oc project cyclone

$ # Reusing the route.yaml from a prior example.
$ # spec:
$ #   host: www.example.test
$ #   wildcardPolicy: Subdomain

$ oc create -f route.yaml   #  router will deny (_NOT_ allow) this claim.

Similarly, once a namespace with a wildcard route claims a subdomain, only routes within that namespace can claim any hosts in that same subdomain.

In the example below, once a route in namespace ns1 with a wildcard route claims subdomain example.test, only routes in the namespace ns1 are allowed to claim any hosts in that same subdomain.

$ oc adm router ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true

$ oc project ns1
$ oc expose service myservice --hostname=owner.example.test

$ oc project otherns

$ # namespace `otherns` is allowed to claim for other.example.test
$ oc expose service otherservice --hostname=other.example.test

$ oc project ns1

$ # Reusing the route.yaml from the previous example.
$ # spec:
$ #   host: www.example.test
$ #   wildcardPolicy: Subdomain

$ oc create -f route.yaml   #  Router will allow this claim.

$ #  In addition, route in namespace otherns will lose its claim to host
$ #  `other.example.test` due to the wildcard route claiming the subdomain.

$ # namespace `ns1` is allowed to claim for deux.example.test
$ oc expose service mysecondservice --hostname=deux.example.test

$ # namespace `ns1` is allowed to claim for deux.example.test with path /p1
$ oc expose service mythirdservice --hostname=deux.example.test --path="/p1"

$ oc project otherns

$ # namespace `otherns` is not allowed to claim for deux.example.test
$ # with a different path '/otherpath'
$ oc expose service otherservice --hostname=deux.example.test --path="/otherpath"

$ # namespace `otherns` is not allowed to claim for owner.example.test
$ oc expose service yetanotherservice --hostname=owner.example.test

$ # namespace `otherns` is not allowed to claim for unclaimed.example.test
$ oc expose service yetanotherservice --hostname=unclaimed.example.test

In the example below, different scenarios are shown in which the owner routes are deleted and ownership is passed within and across namespaces. While a route claiming host eldest.example.test exists in the namespace ns1, wildcard routes in that namespace can claim subdomain example.test. When the route for host eldest.example.test is deleted, the next oldest route, senior.example.test, becomes the oldest route, which does not affect any other routes. Once the route for host senior.example.test is deleted, the next oldest route, junior.example.test, becomes the oldest route and blocks the wildcard route claimant.

$ oc adm router ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true

$ oc project ns1
$ oc expose service myservice --hostname=eldest.example.test
$ oc expose service seniorservice --hostname=senior.example.test

$ oc project otherns

$ # namespace `otherns` is allowed to claim for junior.example.test
$ oc expose service juniorservice --hostname=junior.example.test

$ oc project ns1

$ # Reusing the route.yaml from the previous example.
$ # spec:
$ #   host: www.example.test
$ #   wildcardPolicy: Subdomain

$ oc create -f route.yaml   #  Router will allow this claim.

$ #  In addition, route in namespace otherns will lose its claim to host
$ #  `junior.example.test` due to the wildcard route claiming the subdomain.

$ # namespace `ns1` is allowed to claim for dos.example.test
$ oc expose service mysecondservice --hostname=dos.example.test

$ # Delete route for host `eldest.example.test`, the next oldest route is
$ # the one claiming `senior.example.test`, so route claims are unaffected.
$ oc delete route myservice

$ # Delete route for host `senior.example.test`, the next oldest route is
$ # the one claiming `junior.example.test` in another namespace, so claims
$ # for a wildcard route would be affected. The route for the host
$ # `dos.example.test` would be unaffected as there are no other wildcard
$ # claimants blocking it.
$ oc delete route seniorservice

3.2.19. Using the Container Network Stack

The OpenShift Container Platform router runs inside a container and the default behavior is to use the network stack of the host (i.e., the node where the router container runs). This default behavior benefits performance because network traffic from remote clients does not need to take multiple hops through user space to reach the target service and container.

Additionally, this default behavior enables the router to get the actual source IP address of the remote connection rather than getting the node’s IP address. This is useful for defining ingress rules based on the originating IP, supporting sticky sessions, and monitoring traffic, among other uses.

This host network behavior is controlled by the --host-network router command line option, and the default behavior is the equivalent of using --host-network=true. If you wish to run the router with the container network stack, use the --host-network=false option when creating the router. For example:

$ oc adm router --service-account=router --host-network=false

Internally, this means the router container must publish ports 80 and 443 in order for the external network to communicate with the router.

Note

Running with the container network stack means that the router sees the source IP address of a connection to be the NATed IP address of the node, rather than the actual remote IP address.

Note

On OpenShift Container Platform clusters using multi-tenant network isolation, routers in a non-default namespace with the --host-network=false option load all routes in the cluster, but routes in other namespaces are not reachable due to network isolation. With the --host-network=true option, the router bypasses the container network and can access any pod in the cluster. If isolation is needed in this case, do not add routes across namespaces.

3.2.20. Exposing Router Metrics

The HAProxy router metrics are, by default, exposed or published in Prometheus format for consumption by external metrics collection and aggregation systems (e.g. Prometheus, statsd). Metrics are also available directly from the HAProxy router in its own HTML format for viewing in a browser or CSV download. These metrics include the HAProxy native metrics and some controller metrics.

When you create a router using the following command, OpenShift Container Platform makes metrics available in Prometheus format on the stats port, by default 1936.

$ oc adm router --service-account=router
  • To extract the raw statistics in Prometheus format run the following command:

    curl <user>:<password>@<router_IP>:<STATS_PORT>

    For example:

    $ curl admin:sLzdR6SgDJ@10.254.254.35:1936/metrics

    You can get the information you need to access the metrics from the router service annotations:

    $ oc edit service <router-name>
    
    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        prometheus.io/port: "1936"
        prometheus.io/scrape: "true"
        prometheus.openshift.io/password: IImoDqON02
        prometheus.openshift.io/username: admin

    The prometheus.io/port is the stats port, by default 1936. You might need to configure your firewall to permit access. Use the previous user name and password to access the metrics. The path is /metrics.

    $ curl <user>:<password>@<router_IP>:<STATS_PORT>
    for example:
    $ curl admin:sLzdR6SgDJ@10.254.254.35:1936/metrics
    ...
    # HELP haproxy_backend_connections_total Total number of connections.
    # TYPE haproxy_backend_connections_total gauge
    haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route"} 0
    haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route-alt"} 0
    haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route01"} 0
    ...
    # HELP haproxy_exporter_server_threshold Number of servers tracked and the current threshold value.
    # TYPE haproxy_exporter_server_threshold gauge
    haproxy_exporter_server_threshold{type="current"} 11
    haproxy_exporter_server_threshold{type="limit"} 500
    ...
    # HELP haproxy_frontend_bytes_in_total Current total of incoming bytes.
    # TYPE haproxy_frontend_bytes_in_total gauge
    haproxy_frontend_bytes_in_total{frontend="fe_no_sni"} 0
    haproxy_frontend_bytes_in_total{frontend="fe_sni"} 0
    haproxy_frontend_bytes_in_total{frontend="public"} 119070
    ...
    # HELP haproxy_server_bytes_in_total Current total of incoming bytes.
    # TYPE haproxy_server_bytes_in_total gauge
    haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_no_sni",service=""} 0
    haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_sni",service=""} 0
    haproxy_server_bytes_in_total{namespace="default",pod="docker-registry-5-nk5fz",route="docker-registry",server="10.130.0.89:5000",service="docker-registry"} 0
    haproxy_server_bytes_in_total{namespace="default",pod="hello-rc-vkjqx",route="hello-route",server="10.130.0.90:8080",service="hello-svc-1"} 0
    ...
  • To get metrics in a browser:

    1. Delete the following environment variables from the router deployment configuration file (a command-line alternative is sketched after this list):

      $ oc edit dc router
      
      - name: ROUTER_LISTEN_ADDR
        value: 0.0.0.0:1936
      - name: ROUTER_METRICS_TYPE
        value: haproxy
    2. Launch the stats window using the following URL in a browser, where the STATS_PORT value is 1936 by default:

      http://admin:<Password>@<router_IP>:<STATS_PORT>

      You can get the stats in CSV format by adding ;csv to the URL:

      For example:

      http://admin:<Password>@<router_IP>:1936;csv

      To get the router IP, admin name, and password:

      oc describe pod <router_pod>
  • To suppress metrics collection:

    $ oc adm router --service-account=router --stats-port=0
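
As a sketch of a command-line alternative to editing the deployment configuration in the browser-metrics step above, the same two environment variables can be removed with oc set env (a trailing - unsets a variable):

$ oc set env dc/router ROUTER_LISTEN_ADDR- ROUTER_METRICS_TYPE-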

3.2.21. ARP Cache Tuning for Large-scale Clusters

In OpenShift Container Platform clusters with large numbers of routes (greater than the value of net.ipv4.neigh.default.gc_thresh3, which is 65536 by default), you must increase the default values of sysctl variables on each node in the cluster running the router pod to allow more entries in the ARP cache.

When this problem occurs, the kernel messages are similar to the following:

[ 1738.811139] net_ratelimit: 1045 callbacks suppressed
[ 1743.823136] net_ratelimit: 293 callbacks suppressed

When this issue occurs, the oc commands might start to fail with the following error:

Unable to connect to the server: dial tcp: lookup <hostname> on <ip>:<port>: write udp <ip>:<port>-><ip>:<port>: write: invalid argument

To verify the actual amount of ARP entries for IPv4, run the following:

# ip -4 neigh show nud all | wc -l

If the number begins to approach the net.ipv4.neigh.default.gc_thresh3 threshold, increase the values. Get the current value by running:

# sysctl net.ipv4.neigh.default.gc_thresh1
net.ipv4.neigh.default.gc_thresh1 = 128
# sysctl net.ipv4.neigh.default.gc_thresh2
net.ipv4.neigh.default.gc_thresh2 = 512
# sysctl net.ipv4.neigh.default.gc_thresh3
net.ipv4.neigh.default.gc_thresh3 = 1024

The following sysctl commands set the variables to the current OpenShift Container Platform default values:

# sysctl net.ipv4.neigh.default.gc_thresh1=8192
# sysctl net.ipv4.neigh.default.gc_thresh2=32768
# sysctl net.ipv4.neigh.default.gc_thresh3=65536

To make these settings permanent, see this document.
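
As a minimal sketch of making the settings persistent, assuming a drop-in file such as /etc/sysctl.d/99-arp-cache.conf (the file name is only an example), you can write the values to a sysctl configuration file on each affected node and reload it:

# cat > /etc/sysctl.d/99-arp-cache.conf << EOF
net.ipv4.neigh.default.gc_thresh1 = 8192
net.ipv4.neigh.default.gc_thresh2 = 32768
net.ipv4.neigh.default.gc_thresh3 = 65536
EOF
# sysctl -p /etc/sysctl.d/99-arp-cache.conf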

3.2.22. Protecting Against DDoS Attacks

Add timeout http-request to the default HAProxy router image to protect the deployment against distributed denial-of-service (DDoS) attacks (for example, slowloris):

# and the haproxy stats socket is available at /var/run/haproxy.stats
global
  stats socket ./haproxy.stats level admin

defaults
  option http-server-close
  mode http
  timeout http-request 5s 1
  timeout connect 5s
  timeout server 10s
  timeout client 30s
1
timeout http-request is set to 5 seconds. HAProxy gives a client 5 seconds to send its whole HTTP request. Otherwise, HAProxy shuts down the connection with an error.

Also, when the environment variable ROUTER_SLOWLORIS_TIMEOUT is set, it limits the amount of time a client has to send the whole HTTP request. Otherwise, HAProxy will shut down the connection.

Setting the environment variable allows information to be captured as part of the router’s deployment configuration and does not require manual modification of the template, whereas manually adding the HAProxy setting requires you to rebuild the router pod and maintain your router template file.
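
For example, a minimal sketch of setting the variable on the router deployment configuration (the 10s value is only illustrative):

$ oc set env dc/router ROUTER_SLOWLORIS_TIMEOUT=10s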

Using annotations implements basic DDoS protections in the HAProxy template router, including the ability to limit the:

  • number of concurrent TCP connections
  • rate at which a client can request TCP connections
  • rate at which HTTP requests can be made

These are enabled on a per-route basis because applications can have extremely different traffic patterns. An example of applying these annotations to a route follows the table.

Table 3.1. HAProxy Template Router Settings
Setting | Description

haproxy.router.openshift.io/rate-limit-connections

Enables the settings to be configured (when set to true, for example).

haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp

The number of concurrent TCP connections that can be made by the same IP address on this route.

haproxy.router.openshift.io/rate-limit-connections.rate-tcp

The number of TCP connections that can be opened by a client IP.

haproxy.router.openshift.io/rate-limit-connections.rate-http

The number of HTTP requests that a client IP can make in a 3-second period.
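
As an example of the per-route usage referenced before the table, the following sketch applies all four annotations to a route; the route name and the numeric limits are placeholder values:

$ oc annotate route <route-name> \
    haproxy.router.openshift.io/rate-limit-connections=true \
    haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp=10 \
    haproxy.router.openshift.io/rate-limit-connections.rate-tcp=10 \
    haproxy.router.openshift.io/rate-limit-connections.rate-http=10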

3.3. Deploying a Customized HAProxy Router

3.3.1. Overview

The default HAProxy router is intended to satisfy the needs of most users. However, it does not expose all of the capability of HAProxy. Therefore, users may need to modify the router for their own needs.

You may need to implement new features within the application back-ends, or modify the current operation. The router plug-in provides all the facilities necessary to make this customization.

The router pod uses a template file to create the needed HAProxy configuration file. The template file is a golang template. When processing the template, the router has access to OpenShift Container Platform information, including the router’s deployment configuration, the set of admitted routes, and some helper functions.

When the router pod starts, and every time it reloads, it creates an HAProxy configuration file, and then it starts HAProxy. The HAProxy configuration manual describes all of the features of HAProxy and how to construct a valid configuration file.

A configMap can be used to add the new template to the router pod. With this approach, the router deployment configuration is modified to mount the configMap as a volume in the router pod. The TEMPLATE_FILE environment variable is set to the full path name of the template file in the router pod.

Alternatively, you can build a custom router image and use it when deploying some or all of your routers. There is no need for all routers to run the same image. To do this, modify the haproxy-config.template file, and rebuild the router image. The new image is pushed to the cluster’s Docker repository, and the router’s deployment configuration image: field is updated with the new name. When the cluster is updated, the image needs to be rebuilt and pushed.

In either case, the router pod starts with the template file.

3.3.2. Obtaining the Router Configuration Template

The HAProxy template file is fairly large and complex. For some changes, it may be easier to modify the existing template rather than writing a complete replacement. You can obtain a haproxy-config.template file from a running router by running this on master, referencing the router pod:

# oc get po
NAME                       READY     STATUS    RESTARTS   AGE
router-2-40fc3             1/1       Running   0          11d
# oc rsh router-2-40fc3 cat haproxy-config.template > haproxy-config.template
# oc rsh router-2-40fc3 cat haproxy.config > haproxy.config

Alternatively, you can log onto the node that is running the router:

# docker run --rm --interactive=true --tty --entrypoint=cat \
    registry.access.redhat.com/openshift3/ose-haproxy-router:v3.7 haproxy-config.template

The image name is from docker images.

Save this content to a file for use as the basis of your customized template. The saved haproxy.config shows what is actually running.

3.3.3. Modifying the Router Configuration Template

3.3.3.1. Background

The template is based on the golang template. It can reference any of the environment variables in the router’s deployment configuration, any configuration information that is described below, and router provided helper functions.

The structure of the template file mirrors the resulting HAProxy configuration file. As the template is processed, anything not surrounded by {{" something "}} is directly copied to the configuration file. Passages that are surrounded by {{" something "}} are evaluated. The resulting text, if any, is copied to the configuration file.

3.3.3.2. Go Template Actions

The define action names the file that will contain the processed template.

{{define "/var/lib/haproxy/conf/haproxy.config"}}pipeline{{end}}
Table 3.2. Template Router Functions
Function | Meaning

processEndpointsForAlias(alias ServiceAliasConfig, svc ServiceUnit, action string) []Endpoint

Returns the list of valid endpoints. When action is "shuffle", the order of endpoints is randomized.

env(variable, default …​string) string

Tries to get the named environment variable from the pod. If it is not defined or empty, the optional second argument (the default) is returned. Otherwise, the value of the environment variable is returned.

matchPattern(pattern, s string) bool

The first argument is a string that contains the regular expression, the second argument is the variable to test. Returns a Boolean value indicating whether the regular expression provided as the first argument matches the string provided as the second argument.

isInteger(s string) bool

Determines if a given variable is an integer.

firstMatch(s string, allowedValues …​string) bool

Compares a given string to a list of allowed strings. Returns first match scanning left to right through the list.

matchValues(s string, allowedValues …​string) bool

Compares a given string to a list of allowed strings. Returns "true" if the string is an allowed value, otherwise returns false.

generateRouteRegexp(hostname, path string, wildcard bool) string

Generates a regular expression matching the route hosts (and paths). The first argument is the host name, the second is the path, and the third is a wildcard Boolean.

genCertificateHostName(hostname string, wildcard bool) string

Generates host name to use for serving/matching certificates. First argument is the host name and the second is the wildcard Boolean.

isTrue(s string) bool

Determines if a given variable contains "true".

These functions are provided by the HAProxy template router plug-in.

3.3.3.3. Router Provided Information

This section reviews the OpenShift Container Platform information that the router makes available to the template. The router configuration parameters are the set of data that the HAProxy router plug-in is given. The fields are accessed by (dot) .Fieldname.

The tables below the Router Configuration Parameters expand on the definitions of the various fields. In particular, .State has the set of admitted routes.

Table 3.3. Router Configuration Parameters
Field | Type | Description

WorkingDir

string

The directory that files will be written to, defaults to /var/lib/containers/router

State

map[string](ServiceAliasConfig)

The routes.

ServiceUnits

map[string]ServiceUnit

The service lookup.

DefaultCertificate

string

Full path name to the default certificate in pem format.

PeerEndpoints

[]Endpoint

Peers.

StatsUser

string

User name to expose stats with (if the template supports it).

StatsPassword

string

Password to expose stats with (if the template supports it).

StatsPort

int

Port to expose stats with (if the template supports it).

BindPorts

bool

Whether the router should bind the default ports.

Table 3.4. Router ServiceAliasConfig (A Route)
Field | Type | Description

Name

string

The user-specified name of the route.

Namespace

string

The namespace of the route.

Host

string

The host name. For example, www.example.com.

Path

string

Optional path. For example, www.example.com/myservice where myservice is the path.

TLSTermination

routeapi.TLSTerminationType

The termination policy for this back-end; drives the mapping files and router configuration.

Certificates

map[string]Certificate

Certificates used for securing this back-end. Keyed by the certificate ID.

Status

ServiceAliasConfigStatus

Indicates the status of configuration that needs to be persisted.

PreferPort

string

Indicates the port the user wants to expose. If empty, a port will be selected for the service.

InsecureEdgeTerminationPolicy

routeapi.InsecureEdgeTerminationPolicyType

Indicates desired behavior for insecure connections to an edge-terminated route: none (or disable), allow, or redirect.

RoutingKeyName

string

Hash of the route + namespace name used to obscure the cookie ID.

IsWildcard

bool

Indicates that this service unit needs wildcard support.

Annotations

map[string]string

Annotations attached to this route.

ServiceUnitNames

map[string]int32

Collection of services that support this route, keyed by service name and valued on the weight attached to it with respect to other entries in the map.

ActiveServiceUnits

int

Count of the ServiceUnitNames with a non-zero weight.

The ServiceAliasConfig is a route for a service. Uniquely identified by host + path. The default template iterates over routes using {{range $cfgIdx, $cfg := .State }}. Within such a {{range}} block, the template can refer to any field of the current ServiceAliasConfig using $cfg.Field.

Table 3.5. Router ServiceUnit
Field | Type | Description

Name

string

Name corresponds to a service name + namespace. Uniquely identifies the ServiceUnit.

EndpointTable

[]Endpoint

Endpoints that back the service. This translates into a final back-end implementation for routers.

ServiceUnit is an encapsulation of a service, the endpoints that back that service, and the routes that point to the service. This is the data that drives the creation of the router configuration files.

Table 3.6. Router Endpoint
Field | Type

ID

string

IP

string

Port

string

TargetName

string

PortName

string

IdHash

string

NoHealthCheck

bool

Endpoint is an internal representation of a Kubernetes endpoint.

Table 3.7. Router Certificate, ServiceAliasConfigStatus
Field | Type | Description

Certificate

string

Represents a public/private key pair. It is identified by an ID, which will become the file name. A CA certificate will not have a PrivateKey set.

ServiceAliasConfigStatus

string

Indicates that the necessary files for this configuration have been persisted to disk. Valid values: "saved", "".

Table 3.8. Router Certificate Type
Field | Type | Description

ID

string

 

Contents

string

The certificate.

PrivateKey

string

The private key.

Table 3.9. Router TLSTerminationType
Field | Type | Description

TLSTerminationType

string

Dictates where the secure communication will stop.

InsecureEdgeTerminationPolicyType

string

Indicates the desired behavior for insecure connections to a route. While each router may make its own decisions on which ports to expose, this is normally port 80.

TLSTerminationType and InsecureEdgeTerminationPolicyType dictate where the secure communication will stop.

Table 3.10. Router TLSTerminationType Values
Constant | Value | Meaning

TLSTerminationEdge

edge

Terminate encryption at the edge router.

TLSTerminationPassthrough

passthrough

Terminate encryption at the destination, the destination is responsible for decrypting traffic.

TLSTerminationReencrypt

reencrypt

Terminate encryption at the edge router and re-encrypt it with a new certificate supplied by the destination.

Table 3.11. Router InsecureEdgeTerminationPolicyType Values
Type | Meaning

Allow

Traffic is sent to the server on the insecure port (default).

Disable

No traffic is allowed on the insecure port.

Redirect

Clients are redirected to the secure port.

None ("") is the same as Disable.

3.3.3.4. Annotations

Each route can have annotations attached. Each annotation is just a name and a value.

apiVersion: v1
kind: Route
metadata:
  annotations:
    haproxy.router.openshift.io/timeout: 5500ms
[...]

The name can be anything that does not conflict with existing Annotations. The value is any string. The string can have multiple tokens separated by a space. For example, aa bb cc. The template uses {{index}} to extract the value of an annotation. For example:

{{$balanceAlgo := index $cfg.Annotations "haproxy.router.openshift.io/balance"}}

This is an example of how this could be used for mutual client authorization.

{{ with $cnList := index $cfg.Annotations "whiteListCertCommonName" }}
  {{   if ne $cnList "" }}
    acl test ssl_c_s_dn(CN) -m str {{ $cnList }}
    http-request deny if !test
  {{   end }}
{{ end }}

Then, you can handle the white-listed CNs with this command.

$ oc annotate route <route-name> --overwrite whiteListCertCommonName="CN1 CN2 CN3"

See Route-specific Annotations for more information.

3.3.3.5. Environment Variables

The template can use any environment variables that exist in the router pod. The environment variables can be set in the deployment configuration. New environment variables can be added.

They are referenced by the env function:

{{env "ROUTER_MAX_CONNECTIONS" "20000"}}

The first string is the variable, and the second string is the default when the variable is missing or nil. When ROUTER_MAX_CONNECTIONS is not set or is nil, 20000 is used. Environment variables are a map where the key is the environment variable name and the content is the value of the variable.

See Route-specific Environment variables for more information.
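
For example, a minimal sketch of adding or changing a variable on the router deployment configuration (the value shown is only illustrative):

$ oc set env dc/router ROUTER_MAX_CONNECTIONS=40000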

3.3.3.6. Example Usage

Here is a simple template based on the HAProxy template file.

Start with a comment:

{{/*
  Here is a small example of how to work with templates
  taken from the HAProxy template file.
*/}}

The template can create any number of output files. Use a define construct to create an output file. The file name is specified as an argument to define, and everything inside the define block up to the matching end is written as the contents of that file.

{{ define "/var/lib/haproxy/conf/haproxy.config" }}
global
{{ end }}

The above will copy global to the /var/lib/haproxy/conf/haproxy.config file, and then close the file.

Set up logging based on environment variables.

{{ with (env "ROUTER_SYSLOG_ADDRESS" "") }}
  log {{.}} {{env "ROUTER_LOG_FACILITY" "local1"}} {{env "ROUTER_LOG_LEVEL" "warning"}}
{{ end }}

The env function extracts the value for the environment variable. If the environment variable is not defined or nil, the second argument is returned.

The with construct sets the value of "." (dot) within the with block to whatever value is provided as an argument to with. The with action tests Dot for nil. If not nil, the clause is processed up to the end. In the above, assume ROUTER_SYSLOG_ADDRESS contains /var/log/msg, ROUTER_LOG_FACILITY is not defined, and ROUTER_LOG_LEVEL contains info. The following will be copied to the output file:

  log /var/log/msg local1 info

Each admitted route ends up generating lines in the configuration file. Use range to go through the admitted routes:

{{ range $cfgIdx, $cfg := .State }}
  backend be_http_{{$cfgIdx}}
{{end}}

.State is a map of ServiceAliasConfig, where the key is the route name. range steps through the map and, for each pass, it sets $cfgIdx with the key, and sets $cfg to point to the ServiceAliasConfig that describes the route. If there are two routes named myroute and hisroute, the above will copy the following to the output file:

  backend be_http_myroute
  backend be_http_hisroute

Route Annotations, $cfg.Annotations, is also a map with the annotation name as the key and the content string as the value. The route can have as many annotations as desired and their use is defined by the template author. The user codes the annotation into the route and the template author customizes the HAProxy template to handle the annotation.

The common usage is to index the annotation to get the value.

{{$balanceAlgo := index $cfg.Annotations "haproxy.router.openshift.io/balance"}}

The index extracts the value for the given annotation, if any. Therefore, $balanceAlgo will contain the string associated with the annotation or nil. As above, you can test for a non-nil string and act on it with the with construct.

{{ with $balanceAlgo }}
  balance {{ $balanceAlgo }}
{{ end }}

Here, when $balanceAlgo is not nil, a balance line with the value of $balanceAlgo is copied to the output file.

In a second example, you want to set a server timeout based on a timeout value set in an annotation.

{{ $value := index $cfg.Annotations "haproxy.router.openshift.io/timeout" }}

The $value can now be evaluated to make sure it contains a properly constructed string. The matchPattern function accepts a regular expression and returns true if the argument satisfies the expression.

matchPattern "[1-9][0-9]*(us\|ms\|s\|m\|h\|d)?" $value

This would accept 5000ms but not 7y. The results can be used in a test.

{{if (matchPattern "[1-9][0-9]*(us\|ms\|s\|m\|h\|d)?" $value) }}
  timeout server  {{$value}}
{{ end }}

It can also be used to match tokens:

matchPattern "roundrobin|leastconn|source" $balanceAlgo

Alternatively matchValues can be used to match tokens:

matchValues $balanceAlgo "roundrobin" "leastconn" "source"

3.3.4. Using a ConfigMap to Replace the Router Configuration Template

You can use a ConfigMap to customize the router instance without rebuilding the router image. The haproxy-config.template, reload-haproxy, and other scripts can be modified, and router environment variables can be created and modified.

  1. Copy the haproxy-config.template that you want to modify as described above. Modify it as desired.
  2. Create a ConfigMap:

    $ oc create configmap customrouter --from-file=haproxy-config.template

    The customrouter ConfigMap now contains a copy of the modified haproxy-config.template file.

  3. Modify the router deployment configuration to mount the ConfigMap as a file and point the TEMPLATE_FILE environment variable to it. This can be done via oc set env and oc volume commands, or alternatively by editing the router deployment configuration.

    Using oc commands
    $ oc volume dc/router --add --overwrite \
        --name=config-volume \
        --mount-path=/var/lib/haproxy/conf/custom \
        --source='{"configMap": { "name": "customrouter"}}'
    $ oc set env dc/router \
        TEMPLATE_FILE=/var/lib/haproxy/conf/custom/haproxy-config.template
    Editing the Router Deployment Configuration

    Use oc edit dc router to edit the router deployment configuration with a text editor.

    ...
            - name: STATS_USERNAME
              value: admin
            - name: TEMPLATE_FILE  1
              value: /var/lib/haproxy/conf/custom/haproxy-config.template
            image: openshift/origin-haproxy-router
    ...
            terminationMessagePath: /dev/termination-log
            volumeMounts: 2
            - mountPath: /var/lib/haproxy/conf/custom
              name: config-volume
          dnsPolicy: ClusterFirst
    ...
          terminationGracePeriodSeconds: 30
          volumes: 3
          - configMap:
              name: customrouter
            name: config-volume
    ...
    1
    In the spec.container.env field, add the TEMPLATE_FILE environment variable to point to the mounted haproxy-config.template file.
    2
    Add the spec.container.volumeMounts field to create the mount point.
    3
    Add a new spec.volumes field to mention the ConfigMap.

    Save the changes and exit the editor. This restarts the router.

3.3.5. Using Stick Tables

The following example customization can be used in a highly-available routing setup to use stick-tables that synchronize between peers.

Adding a Peer Section

In order to synchronize stick-tables amongst peers, you must define a peers section in your HAProxy configuration. This section determines how HAProxy will identify and connect to peers. The plug-in provides data to the template under the .PeerEndpoints variable to allow you to easily identify members of the router service. You may add a peer section to the haproxy-config.template file inside the router image by adding:

{{ if gt (len .PeerEndpoints) 0 }}
peers openshift_peers
  {{ range $endpointID, $endpoint := .PeerEndpoints }}
    peer {{$endpoint.TargetName}} {{$endpoint.IP}}:1937
  {{ end }}
{{ end }}

Changing the Reload Script

When using stick-tables, you have the option of telling HAProxy what it should consider the name of the local host in the peer section. When creating endpoints, the plug-in attempts to set the TargetName to the value of the endpoint’s TargetRef.Name. If TargetRef is not set, it will set the TargetName to the IP address. The TargetRef.Name corresponds to the Kubernetes host name, so you can add the -L option to the reload-haproxy script to identify the local host in the peer section.

peer_name=$HOSTNAME 1

if [ -n "$old_pid" ]; then
  /usr/sbin/haproxy -f $config_file -p $pid_file -L $peer_name -sf $old_pid
else
  /usr/sbin/haproxy -f $config_file -p $pid_file -L $peer_name
fi
1
Must match an endpoint target name that is used in the peer section.

Modifying Back Ends

Finally, to use the stick-tables within back ends, you can modify the HAProxy configuration to use the stick-tables and peer set. The following is an example of changing the existing back end for TCP connections to use stick-tables:

            {{ if eq $cfg.TLSTermination "passthrough" }}
backend be_tcp_{{$cfgIdx}}
  balance leastconn
  timeout check 5000ms
  stick-table type ip size 1m expire 5m{{ if gt (len $.PeerEndpoints) 0 }} peers openshift_peers {{ end }}
  stick on src
                {{ range $endpointID, $endpoint := $serviceUnit.EndpointTable }}
  server {{$endpointID}} {{$endpoint.IP}}:{{$endpoint.Port}} check inter 5000ms
                {{ end }}
            {{ end }}

After this modification, you can rebuild your router.

3.3.6. Rebuilding Your Router

In order to rebuild the router, you need copies of several files that are present on a running router. Make a work directory and copy the files from the router:

# mkdir -p myrouter/conf
# cd myrouter
# oc get po
NAME                       READY     STATUS    RESTARTS   AGE
router-2-40fc3             1/1       Running   0          11d
# oc rsh router-2-40fc3 cat haproxy-config.template > conf/haproxy-config.template
# oc rsh router-2-40fc3 cat error-page-503.http > conf/error-page-503.http
# oc rsh router-2-40fc3 cat default_pub_keys.pem > conf/default_pub_keys.pem
# oc rsh router-2-40fc3 cat ../Dockerfile > Dockerfile
# oc rsh router-2-40fc3 cat ../reload-haproxy > reload-haproxy

You can edit or replace any of these files. However, conf/haproxy-config.template and reload-haproxy are the most likely to be modified.

After updating the files:

# docker build -t openshift/origin-haproxy-router-myversion .
# docker tag openshift/origin-haproxy-router-myversion 172.30.243.98:5000/openshift/haproxy-router-myversion 1
# docker push 172.30.243.98:5000/openshift/haproxy-router-myversion:latest 2
1
Tag the version with the repository. In this case the repository is 172.30.243.98:5000.
2
Push the tagged version to the repository. It may be necessary to docker login to the repository first.

To use the new router, edit the router deployment configuration either by changing the image: string or by adding the --images=<repo>/<image>:<tag> flag to the oc adm router command.

When debugging the changes, it is helpful to set imagePullPolicy: Always in the deployment configuration to force an image pull on each pod creation. When debugging is complete, you can change it back to imagePullPolicy: IfNotPresent to avoid the pull on each pod start.
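
For example, assuming the router container is named router and reusing the image name from the example above, a sketch of switching the image and forcing pulls while debugging might look like:

$ oc set image dc/router router=172.30.243.98:5000/openshift/haproxy-router-myversion
$ oc patch dc/router -p '{"spec":{"template":{"spec":{"containers":[{"name":"router","imagePullPolicy":"Always"}]}}}}'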

3.4. Configuring the HAProxy Router to Use the PROXY Protocol

3.4.1. Overview

By default, the HAProxy router expects incoming connections to unsecure, edge, and re-encrypt routes to use HTTP. However, you can configure the router to expect incoming requests to use the PROXY protocol instead. This topic describes how to configure the HAProxy router and an external load balancer to use the PROXY protocol.

3.4.2. Why Use the PROXY Protocol?

When an intermediary service such as a proxy server or load balancer forwards an HTTP request, it appends the source address of the connection to the request’s "Forwarded" header in order to provide this information to subsequent intermediaries and to the back-end service to which the request is ultimately forwarded. However, if the connection is encrypted, intermediaries cannot modify the "Forwarded" header. In this case, the HTTP header will not accurately communicate the original source address when the request is forwarded.

To solve this problem, some load balancers encapsulate HTTP requests using the PROXY protocol as an alternative to simply forwarding HTTP. Encapsulation enables the load balancer to add information to the request without modifying the forwarded request itself. In particular, this means that the load balancer can communicate the source address even when forwarding an encrypted connection.

The HAProxy router can be configured to accept the PROXY protocol and decapsulate the HTTP request. Because the router terminates encryption for edge and re-encrypt routes, the router can then update the "Forwarded" HTTP header (and related HTTP headers) in the request, appending any source address that is communicated using the PROXY protocol.

Warning

The PROXY protocol and HTTP are incompatible and cannot be mixed. If you use a load balancer in front of the router, both must use either the PROXY protocol or HTTP. Configuring one to use one protocol and the other to use the other protocol will cause routing to fail.

3.4.3. Using the PROXY Protocol

By default, the HAProxy router does not use the PROXY protocol. The router can be configured using the ROUTER_USE_PROXY_PROTOCOL environment variable to expect the PROXY protocol for incoming connections:

Enable the PROXY Protocol

$ oc env dc/router ROUTER_USE_PROXY_PROTOCOL=true

Set the variable to any value other than true or TRUE to disable the PROXY protocol:

Disable the PROXY Protocol

$ oc env dc/router ROUTER_USE_PROXY_PROTOCOL=false

If you enable the PROXY protocol in the router, you must configure your load balancer in front of the router to use the PROXY protocol as well. Following is an example of configuring Amazon’s Elastic Load Balancer (ELB) service to use the PROXY protocol. This example assumes that ELB is forwarding ports 80 (HTTP), 443 (HTTPS), and 5000 (for the image registry) to the router running on one or more EC2 instances.

Configure Amazon ELB to Use the PROXY Protocol

  1. To simplify subsequent steps, first set some shell variables:

    $ lb='infra-lb' 1
    $ instances=( 'i-079b4096c654f563c' ) 2
    $ secgroups=( 'sg-e1760186' ) 3
    $ subnets=( 'subnet-cf57c596' ) 4
    1
    The name of your ELB.
    2
    The instance or instances on which the router is running.
    3
    The security group or groups for this ELB.
    4
    The subnet or subnets for this ELB.
  2. Next, create the ELB with the appropriate listeners, security groups, and subnets.

    Note

    You must configure all listeners to use the TCP protocol, not the HTTP protocol.

    $ aws elb create-load-balancer --load-balancer-name "$lb" \
       --listeners \
        'Protocol=TCP,LoadBalancerPort=80,InstanceProtocol=TCP,InstancePort=80' \
        'Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=443' \
        'Protocol=TCP,LoadBalancerPort=5000,InstanceProtocol=TCP,InstancePort=5000' \
       --security-groups $secgroups \
       --subnets $subnets
    {
        "DNSName": "infra-lb-2006263232.us-east-1.elb.amazonaws.com"
    }
  3. Register your router instance or instances with the ELB:

    $ aws elb register-instances-with-load-balancer --load-balancer-name "$lb" \
       --instances $instances
    {
        "Instances": [
            {
                "InstanceId": "i-079b4096c654f563c"
            }
        ]
    }
  4. Configure the ELB’s health check:

    $ aws elb configure-health-check --load-balancer-name "$lb" \
       --health-check 'Target=HTTP:1936/healthz,Interval=30,UnhealthyThreshold=2,HealthyThreshold=2,Timeout=5'
    {
        "HealthCheck": {
            "HealthyThreshold": 2,
            "Interval": 30,
            "Target": "HTTP:1936/healthz",
            "Timeout": 5,
            "UnhealthyThreshold": 2
        }
    }
  5. Finally, create a load-balancer policy with the ProxyProtocol attribute enabled, and configure it on the ELB’s TCP ports 80 and 443:

    $ aws elb create-load-balancer-policy --load-balancer-name "$lb" \
       --policy-name "${lb}-ProxyProtocol-policy" \
       --policy-type-name 'ProxyProtocolPolicyType' \
       --policy-attributes 'AttributeName=ProxyProtocol,AttributeValue=true'
    $ for port in 80 443
      do
        aws elb set-load-balancer-policies-for-backend-server \
         --load-balancer-name "$lb" \
         --instance-port "$port" \
         --policy-names "${lb}-ProxyProtocol-policy"
      done

Verify the Configuration

You can examine the load balancer as follows to verify that the configuration is correct:

$ aws elb describe-load-balancers --load-balancer-name "$lb" |
    jq '.LoadBalancerDescriptions| [.[]|.ListenerDescriptions]'
[
  [
    {
      "Listener": {
        "InstancePort": 80,
        "LoadBalancerPort": 80,
        "Protocol": "TCP",
        "InstanceProtocol": "TCP"
      },
      "PolicyNames": ["infra-lb-ProxyProtocol-policy"] 1
    },
    {
      "Listener": {
        "InstancePort": 443,
        "LoadBalancerPort": 443,
        "Protocol": "TCP",
        "InstanceProtocol": "TCP"
      },
      "PolicyNames": ["infra-lb-ProxyProtocol-policy"] 2
    },
    {
      "Listener": {
        "InstancePort": 5000,
        "LoadBalancerPort": 5000,
        "Protocol": "TCP",
        "InstanceProtocol": "TCP"
      },
      "PolicyNames": [] 3
    }
  ]
]
1
The listener for TCP port 80 should have the policy for using the PROXY protocol.
2
The listener for TCP port 443 should have the same policy.
3
The listener for TCP port 5000 should not have the policy.

Alternatively, if you already have an ELB configured, but it is not configured to use the PROXY protocol, you will need to change the existing listener for TCP port 80 to use the TCP protocol instead of HTTP (TCP port 443 should already be using the TCP protocol):

$ aws elb delete-load-balancer-listeners --load-balancer-name "$lb" \
   --load-balancer-ports 80
$ aws elb create-load-balancer-listeners --load-balancer-name "$lb" \
   --listeners 'Protocol=TCP,LoadBalancerPort=80,InstanceProtocol=TCP,InstancePort=80'

Verify the Protocol Updates

Verify that the protocol has been updated as follows:

$ aws elb describe-load-balancers --load-balancer-name "$lb" |
   jq '[.LoadBalancerDescriptions[]|.ListenerDescriptions]'
[
  [
    {
      "Listener": {
        "InstancePort": 443,
        "LoadBalancerPort": 443,
        "Protocol": "TCP",
        "InstanceProtocol": "TCP"
      },
      "PolicyNames": []
    },
    {
      "Listener": {
        "InstancePort": 5000,
        "LoadBalancerPort": 5000,
        "Protocol": "TCP",
        "InstanceProtocol": "TCP"
      },
      "PolicyNames": []
    },
    {
      "Listener": {
        "InstancePort": 80,
        "LoadBalancerPort": 80,
        "Protocol": "TCP", 1
        "InstanceProtocol": "TCP"
      },
      "PolicyNames": []
    }
  ]
]
1
All listeners, including the listener for TCP port 80, should be using the TCP protocol.

Then, create a load-balancer policy and add it to the ELB as described in Step 5 above.

3.5. Using the F5 Router Plug-in

3.5.1. Overview

Note

The F5 router plug-in is available starting in OpenShift Container Platform 3.0.2.

Warning

The F5 router plug-in will be deprecated in OpenShift Container Platform version 3.11. The functionality of the F5 router plug-in is replaced in the F5 BIG-IP® Controller for OpenShift. For more information, see F5 BIG-IP Controller for OpenShift. For information about migrating existing deployments from the F5 router plug-in to the BIG-IP Controller for OpenShift, see Replace the F5 Router with the F5 BIG-IP Controller for OpenShift.

The F5 router plug-in is provided as a container image and runs as a pod, just like the default HAProxy router.

Important

Support relationships between F5 and Red Hat provide a full scope of support for both models of F5 integration, the F5 router plug-in and the F5 BIG-IP Controller for OpenShift. If you are currently using the F5 router plug-in, Red Hat support will provide the initial support and work with F5 support if necessary. If you are currently using the F5 BIG-IP Controller for OpenShift, F5 will provide the initial support and work with Red Hat if necessary.

3.5.2. Prerequisites and Supportability

When deploying the F5 router plug-in, ensure you meet the following requirements:

  • An F5 host IP with:

    • Credentials for API access
    • SSH access via a private key
  • An F5 user with Advanced Shell access
  • A virtual server for HTTP routes:

    • HTTP profile must be http.
  • A virtual server for HTTPS routes:

    • HTTP profile must be http
    • SSL Profile (client) must be clientssl
    • SSL Profile (server) must be serverssl
  • For edge integration (not recommended):

    • A working ramp node
    • A working tunnel to the ramp node
  • For native integration:

    • A host-internal IP capable of communicating with all nodes on port 4789/UDP
    • The sdn-services add-on license installed on the F5 host.

The F5 router plug-in for OpenShift Container Platform supports only the following F5 BIG-IP versions:

  • 11.x
  • 12.x

The F5 BIG-IP Controller for OpenShift supports the OpenShift Container Platform versions found in the F5 BIG-IP Controller for OpenShift releases and versioning page in the F5 documentation.

Important

The following features are not supported with F5 BIG-IP using the F5 router plug-in:

  • Wildcard routes together with re-encrypt routes - you must supply a certificate and a key in the route. If you provide a certificate, a key, and a certificate authority (CA), the CA is never used.
  • A pool is created for all services, even for the ones with no associated route.
  • Idling applications
  • Unencrypted HTTP traffic in redirect mode, with edge TLS termination. (insecureEdgeTerminationPolicy: Redirect)
  • Sharding, that is, having multiple vservers on the F5.
  • SSL cipher (ROUTER_CIPHERS=modern/old)
  • Customizing the endpoint health checks for time-intervals and the type of checks.
  • Serving F5 metrics by using a metrics server.
  • Specifying a different target port (PreferPort/TargetPort) rather than the ones specified in the service.
  • Customizing the source IP whitelists, that is, allowing traffic for a route only from specific IP addresses.
  • Customizing timeout values, such as max connect time, or tcp FIN timeout.
  • HA mode for the F5 BIG-IP.
3.5.2.1. Configuring the Virtual Servers

As a prerequisite to working with the F5 router plug-in, two virtual servers, one for the HTTP profile and one for the HTTPS profile, need to be set up in the F5 BIG-IP appliance.

To set up a virtual server in the F5 BIG-IP appliance, follow the instructions from F5.

While creating the virtual server, ensure the following settings are in place:

  • For the HTTP server, set the ServicePort to 'http'/80.
  • For the HTTPS server, set the ServicePort to 'https'/443.
  • In the basic configuration, set the HTTP profile to /Common/http for both of the virtual servers.
  • For the HTTPS server, create a default client-ssl profile and select it for the SSL Profile (Client).

    • To create the default client SSL profile, follow the instructions from F5, especially the Configuring the fallback (default) client SSL profile section, which explains that the certificate/key pair is the default served when custom certificates are not provided for a route or server name.

3.5.3. Deploying the F5 Router Plug-in

Important

The F5 router must be run in privileged mode, because route certificates are copied using the scp command:

$ oc adm policy remove-scc-from-user hostnetwork -z router
$ oc adm policy add-scc-to-user privileged -z router

Deploy the F5 router plug-in with the oc adm router command, but provide additional flags (or environment variables) specifying the following parameters for the F5 BIG-IP host:

Flag | Description

--type=f5-router

Specifies that an F5 router plug-in should be launched instead of the default haproxy-router.

--external-host

Specifies the F5 BIG-IP host’s management interface’s host name or IP address.

--external-host-username

Specifies the F5 BIG-IP user name (typically admin). The F5 BIG-IP user account must have access to the Advanced Shell (Bash) on the F5 BIG-IP system.

--external-host-password

Specifies the F5 BIG-IP password.

--external-host-http-vserver

Specifies the name of the F5 virtual server for HTTP connections. This must be configured by the user prior to launching the router pod.

--external-host-https-vserver

Specifies the name of the F5 virtual server for HTTPS connections. This must be configured by the user prior to launching the router pod.

--external-host-private-key

Specifies the path to the SSH private key file for the F5 BIG-IP host. Required to upload and delete key and certificate files for routes.

--external-host-insecure

A Boolean flag that indicates that the F5 router plug-in does not use strict certificate verification with the F5 BIG-IP host.

--external-host-partition-path

Specifies the F5 BIG-IP® partition path (the default is /Common).

For example:

$ oc adm router \
    --type=f5-router \
    --external-host=10.0.0.2 \
    --external-host-username=admin \
    --external-host-password=mypassword \
    --external-host-http-vserver=ose-vserver \
    --external-host-https-vserver=https-ose-vserver \
    --external-host-private-key=/path/to/key \
    --host-network=false \
    --service-account=router

As with the HAProxy router, the oc adm router command creates the service and deployment configuration objects, and thus the replication controllers and pod(s) in which the F5 router plug-in itself runs. The replication controller restarts the F5 router plug-in in case of crashes. Because the F5 router plug-in is watching routes, endpoints, and nodes and configuring F5 BIG-IP accordingly, running the F5 router in this way, along with an appropriately configured F5 BIG-IP deployment, satisfies high-availability requirements.

3.5.4. F5 Router Plug-in Partition Paths

Partition paths allow you to store your OpenShift Container Platform routing configuration in a custom F5 BIG-IP administrative partition, instead of the default /Common partition. You can use custom administrative partitions to secure F5 BIG-IP environments. This means that OpenShift Container Platform-specific configuration stored in F5 BIG-IP system objects resides within a logical container, allowing administrators to define access control policies on that specific administrative partition.

See the F5 BIG-IP documentation for more information about administrative partitions.

To configure your OpenShift Container Platform for partition paths:

  1. Optionally, perform some cleaning steps:

    1. Ensure F5 is configured to be able to switch to the /Common and /Custom paths.
    2. Delete the static FDB of vxlan5000. See the F5 BIG-IP® documentation for more information.
  2. Configure a virtual server for the custom partition.
  3. To specify a partition path, deploy the F5 router plug-in using the --external-host-partition-path flag:

    $ oc adm router --external-host-partition-path=/OpenShift/zone1 ...

3.5.5. Setting Up F5 Router Plug-in

Note

This section reviews how to set up F5 native integration with OpenShift Container Platform. The concepts of the F5 appliance and OpenShift Container Platform connection and data flow of the F5 router plug-in are discussed in the F5 Router Plug-in section of the Routes topic.

Note

Only F5 BIG-IP appliance versions 11.x and 12.x work with the F5 router plug-in presented in this section. You also need the sdn-services add-on license for the integration to work properly. For version 11.x, follow the instructions to set up a ramp node.

With the F5 router plug-in for OpenShift Container Platform, you do not need to configure a ramp node for F5 to be able to reach the pods on the overlay network created by OpenShift SDN.

The F5 router plug-in pod needs to be launched with enough information so that it can successfully directly connect to pods.

  1. Create a ghost hostsubnet on the OpenShift Container Platform cluster:

    $ cat > f5-hostsubnet.yaml << EOF
    {
        "kind": "HostSubnet",
        "apiVersion": "v1",
        "metadata": {
            "name": "openshift-f5-node",
            "annotations": {
            "pod.network.openshift.io/assign-subnet": "true",
    	"pod.network.openshift.io/fixed-vnid-host": "0"  1
            }
        },
        "host": "openshift-f5-node",
        "hostIP": "10.3.89.213"  2
    }
    EOF
    $ oc create -f f5-hostsubnet.yaml
    1
    Make F5 global.
    2
    The internal IP of the F5 appliance.
  2. Determine the subnet allocated for the ghost hostsubnet just created:

    $ oc get hostsubnets
    NAME                    HOST                    HOST IP       SUBNET
    openshift-f5-node       openshift-f5-node       10.3.89.213   10.131.0.0/23
    openshift-master-node   openshift-master-node   172.17.0.2    10.129.0.0/23
    openshift-node-1        openshift-node-1        172.17.0.3    10.128.0.0/23
    openshift-node-2        openshift-node-2        172.17.0.4    10.130.0.0/23
  3. Check the SUBNET for the newly created hostsubnet. In this example, 10.131.0.0/23.
  4. Get the entire pod network’s CIDR:

    $ oc get clusternetwork

    This value will be something like 10.128.0.0/14, noting the mask (14 in this example).

  5. To construct the gateway address, pick any IP address from the hostsubnet (for example, 10.131.0.5). Use the mask of the pod network (14). The gateway address becomes: 10.131.0.5/14.
  6. Launch the F5 router plug-in pod, following these instructions. Additionally, allow access to the 'node' cluster resource for the service account and use the two new additional options for VXLAN native integration.

    $ # Add policy to allow router to access nodes using the sdn-reader role
    $ oc adm policy add-cluster-role-to-user system:sdn-reader system:serviceaccount:default:router
    $ # Launch the F5 router plug-in pod with vxlan-gw and F5's internal IP as extra arguments
    $ #--external-host-internal-ip=10.3.89.213
    $ #--external-host-vxlan-gw=10.131.0.5/14
    $ oc adm router \
        --type=f5-router \
        --external-host=10.3.89.90 \
        --external-host-username=admin \
        --external-host-password=mypassword \
        --external-host-http-vserver=ose-vserver \
        --external-host-https-vserver=https-ose-vserver \
        --external-host-private-key=/path/to/key \
        --service-account=router \
        --host-network=false \
        --external-host-internal-ip=10.3.89.213 \
        --external-host-vxlan-gw=10.131.0.5/14
    Note

    The external-host-username is an F5 BIG-IP user account with access to the Advanced Shell (Bash) on the F5 BIG-IP system.

Chapter 4. Deploying Red Hat CloudForms

4.1. Deploying Red Hat CloudForms on OpenShift Container Platform

4.1.1. Introduction

The OpenShift Container Platform installer includes the Ansible role openshift-management and playbooks for deploying Red Hat CloudForms 4.6 (CloudForms Management Engine 5.9, or CFME) on OpenShift Container Platform.

Warning

The current implementation is incompatible with the Technology Preview deployment process of Red Hat CloudForms 4.5 as described in OpenShift Container Platform 3.6 documentation.

When deploying Red Hat CloudForms on OpenShift Container Platform, there are two major decisions to make:

  1. Do you want an external or a containerized (also referred to as podified) PostgreSQL database?
  2. Which storage class will back your persistent volumes (PVs)?

For the first decision, you can deploy Red Hat CloudForms in one of two ways, depending on the location of the PostgreSQL database to be used by Red Hat CloudForms:

Deployment Variant | Description

Fully containerized

All application services and the PostgreSQL database are run as pods on OpenShift Container Platform.

External database

The application utilizes an externally-hosted PostgreSQL database server, while all other services are run as pods on OpenShift Container Platform.

For the second decision, the openshift-management role provides customization options for overriding many default deployment parameters. This includes the following storage class options to back your PVs:

Storage Class | Description

NFS (default)

Local, on cluster

NFS External

NFS somewhere else, like a storage appliance

Cloud Provider

Use automatic storage provisioning from your cloud provider (Google Compute Engine, Amazon Web Services, or Microsoft Azure)

Preconfigured (advanced)

Assumes you created everything ahead of time

Topics in this guide include the requirements for running Red Hat CloudForms on OpenShift Container Platform, descriptions of the available configuration variables, and instructions on running the installer either during your initial OpenShift Container Platform installation or after your cluster has been provisioned.

4.2. Requirements for Red Hat CloudForms on OpenShift Container Platform

 
The default requirements are listed in the table below. These can be overridden by customizing template parameters.

Important

The application performance will suffer, or possibly even fail to deploy, if these requirements are not satisfied.

Table 4.1. Default Requirements
Item | Requirement | Description | Customization Parameter

Application Memory

≥ 4.0 Gi

Minimum required memory for the application

APPLICATION_MEM_REQ

Application Storage

≥ 5.0 Gi

Minimum PV size required for the application

APPLICATION_VOLUME_CAPACITY

PostgreSQL Memory

≥ 6.0 Gi

Minimum required memory for the database

POSTGRESQL_MEM_REQ

PostgreSQL Storage

≥ 15.0 Gi

Minimum PV size required for the database

DATABASE_VOLUME_CAPACITY

Cluster Hosts

≥ 3

Number of hosts in your cluster

N/A

To sum up these requirements:

  • You must have several cluster nodes.
  • Your cluster nodes must have lots of memory available.
  • You must have several GiB of storage available, either locally or on your cloud provider.
  • PV sizes can be changed by providing override values to template parameters, as shown in the sketch after this list.
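
For example, a minimal inventory sketch that overrides the PV sizes through the openshift_management_template_parameters variable (described later in this guide); the 10Gi and 20Gi values are only illustrative:

[OSEv3:vars]
openshift_management_template_parameters={'APPLICATION_VOLUME_CAPACITY': '10Gi', 'DATABASE_VOLUME_CAPACITY': '20Gi'}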

4.3. Configuring Role Variables

4.3.1. Overview

The following sections describe role variables that may be used in your Ansible inventory file, which is used to control the behavior of the Red Hat CloudForms installation when running the installer.

4.3.2. General Variables

Variable | Required | Default | Description

openshift_management_install_management

No

false

Boolean, set to true to install the application.

openshift_management_app_template

Yes

cfme-template

The deployment variant of Red Hat CloudForms to install. Set cfme-template for a containerized database or cfme-template-ext-db for an external database.

openshift_management_project

No

openshift-management

Namespace (project) for the Red Hat CloudForms installation.

openshift_management_project_description

No

CloudForms Management Engine

Namespace (project) description.

openshift_management_username

No

admin

Default management user name. Changing this value does not change the user name; only change this value if you have changed the name already and are running integration scripts (such as the script to add container providers).

openshift_management_password

No

smartvm

Default management password. Changing this value does not change the password; only change this value if you have changed the password already and are running integration scripts (such as the script to add container providers).
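
For example, a minimal inventory sketch using the variables above to enable a fully containerized installation (values other than the documented defaults are only illustrative):

[OSEv3:vars]
openshift_management_install_management=true
openshift_management_app_template=cfme-template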

4.3.3. Customizing Template Parameters

You can use the openshift_management_template_parameters Ansible role variable to specify any template parameters you want to override in the application or PV templates.

For example, if you wanted to reduce the memory requirement of the PostgreSQL pod, then you could set the following:

openshift_management_template_parameters={'POSTGRESQL_MEM_REQ': '1Gi'}

When the Red Hat CloudForms template is processed, 1Gi will be used for the value of the POSTGRESQL_MEM_REQ template parameter.

Not all template parameters are present in both template variants (containerized or external database). For example, while the podified database template has a POSTGRESQL_MEM_REQ parameter, no such parameter is present in the external database template, because no database pod is deployed in that case.

Therefore, be very careful if you are overriding template parameters. Including parameters not defined in a template will cause errors. If you do receive an error during the Ensure the Management App is created task, run the uninstall scripts first before running the installer again.

4.3.4. Database Variables

4.3.4.1. Containerized (Podified) Database

Any POSTGRES_* or DATABASE_* template parameters in the cfme-template.yaml file may be customized through the openshift_management_template_parameters hash in your inventory file.

4.3.4.2. External Database

Any POSTGRES_* or DATABASE_* template parameters in the cfme-template-ext-db.yaml file may be customized through the openshift_management_template_parameters hash in your inventory file.

External PostgreSQL databases require you to provide database connection parameters. You must set the required connection keys in the openshift_management_template_parameters parameter in your inventory. The following keys are required:

  • DATABASE_USER
  • DATABASE_PASSWORD
  • DATABASE_IP
  • DATABASE_PORT (Most PostgreSQL servers run on port 5432)
  • DATABASE_NAME
Note

Ensure your external database is running PostgreSQL 9.5 or you may not be able to deploy the CloudForms application successfully.

Your inventory would contain a line similar to:

[OSEv3:vars]
openshift_management_app_template=cfme-template-ext-db 1
openshift_management_template_parameters={'DATABASE_USER': 'root', 'DATABASE_PASSWORD': 'mypassword', 'DATABASE_IP': '10.10.10.10', 'DATABASE_PORT': '5432', 'DATABASE_NAME': 'cfme'}
1
Set openshift_management_app_template parameter to cfme-template-ext-db.

4.3.5. Storage Class Variables

openshift_management_storage_class (Required: No, Default: nfs)
Storage type to use. Options are nfs, nfs_external, preconfigured, or cloudprovider.

openshift_management_storage_nfs_external_hostname (Required: No, Default: false)
If you are using an external NFS server, such as a NetApp appliance, then you must set the host name here. Leave the value as false if you are not using external NFS. Additionally, external NFS requires that you create the NFS exports that will back the application PV and optionally the database PV.

openshift_management_storage_nfs_base_dir (Required: No, Default: /exports/)
If you are using external NFS, then you can set the base path to the exports location here. For local NFS, you can also change this value if you want to change the default path used for local NFS exports.

openshift_management_storage_nfs_local_hostname (Required: No, Default: false)
If you do not have an [nfs] group in your inventory, or want to manually define the local NFS host in your cluster, set this parameter to the host name of the preferred NFS server. The server must be a part of your OpenShift Container Platform cluster.

4.3.5.1. NFS (Default)

The NFS storage class is best suited for proof-of-concept and test deployments. It is also the default storage class for deployments. No additional configuration is required for this choice.

This storage class configures NFS on a cluster host (by default, the first master in the inventory file) to back the required PVs. The application requires a PV, and the database (which may be hosted externally) may require a second. PV minimum required sizes are 5GiB for the Red Hat CloudForms application, and 15GiB for the PostgreSQL database (20GiB minimum available space on a volume or partition if used specifically for NFS purposes).

Customization is provided through the following role variables:

  • openshift_management_storage_nfs_base_dir
  • openshift_management_storage_nfs_local_hostname
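
For example, a minimal inventory sketch that keeps the default NFS storage class but pins the export host and base directory explicitly (the host name nfs1.example.com is a placeholder, and both variables are optional):

[OSEv3:vars]
openshift_management_app_template=cfme-template
openshift_management_storage_class=nfs
# Cluster host that serves the local NFS exports (placeholder value)
openshift_management_storage_nfs_local_hostname=nfs1.example.com
# Optional: change the default local export base path
openshift_management_storage_nfs_base_dir=/exports/
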
4.3.5.2. NFS External

External NFS relies on pre-configured NFS servers to provide exports for the required PVs. For external NFS you must have a cfme-app export and, optionally, a cfme-db export (for the containerized database).

Configuration is provided through the following role variables:

  • openshift_management_storage_nfs_external_hostname
  • openshift_management_storage_nfs_base_dir

The openshift_management_storage_nfs_external_hostname parameter must be set to the host name or IP of your external NFS server.

If /exports is not the parent directory of your exports, then you must set the base directory via the openshift_management_storage_nfs_base_dir parameter.

For example, if your server export is /exports/hosted/prod/cfme-app, then you must set openshift_management_storage_nfs_base_dir=/exports/hosted/prod.

4.3.5.3. Cloud Provider

If you are using OpenShift Container Platform cloud provider integration for your storage class, Red Hat CloudForms can also use the cloud provider storage to back its required PVs. For this functionality to work, you must have configured the openshift_cloudprovider_kind variable (for AWS or GCE) and all associated parameters specific to your chosen cloud provider.

When the application is created using this storage class, the required PVs are automatically provisioned using the configured cloud provider storage integration.

There are no additional variables to configure the behavior of this storage class.
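
For example, a minimal inventory sketch, assuming an AWS cloud provider has already been configured (the openshift_cloudprovider_kind value and its associated credential variables are prerequisites, not part of this role):

[OSEv3:vars]
openshift_management_app_template=cfme-template
# Requires a previously configured cloud provider integration (AWS shown as an assumption)
openshift_cloudprovider_kind=aws
openshift_management_storage_class=cloudprovider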

4.3.5.4. Preconfigured (Advanced)

The preconfigured storage class implies that you know exactly what you are doing and that all storage requirements have been taken care of ahead of time. Typically this means that you have already created the correctly sized PVs. The installer will do nothing to modify any storage settings.

There are no additional variables to configure the behavior of this storage class.

4.4. Running the Installer

4.4.1. Deploying Red Hat CloudForms During or After OpenShift Container Platform Installation

You can choose to deploy Red Hat CloudForms either during initial OpenShift Container Platform installation or after the cluster has been provisioned:

  1. Ensure that openshift_management_install_management is set to true in your inventory file under the [OSEv3:vars] section:

    [OSEv3:vars]
    openshift_management_install_management=true
  2. Set any other Red Hat CloudForms role variables in your inventory file as described in Configuring Role Variables. Resources to assist in this are provided in Example Inventory Files.
  3. Choose which playbook to run depending on whether OpenShift Container Platform is already provisioned:

    1. If you want to install Red Hat CloudForms at the same time you install your OpenShift Container Platform cluster, call the standard config.yml playbook as described in Running the Installation Playbooks to begin the OpenShift Container Platform cluster and Red Hat CloudForms installation.
    2. If you want to install Red Hat CloudForms on an already provisioned OpenShift Container Platform cluster, call the Red Hat CloudForms playbook directly to begin the installation:

      # ansible-playbook -v [-i /path/to/inventory] \
          /usr/share/ansible/openshift-ansible/playbooks/openshift-management/config.yml

4.4.2. Example Inventory Files

The following sections show example snippets of inventory files showing various configurations of Red Hat CloudForms on OpenShift Container Platform that can help you get started.

Note

See Configuring Role Variables for complete variable descriptions.

4.4.2.1. All Defaults

This example is the simplest, using all of the default values and choices. This results in a fully-containerized (podified) Red Hat CloudForms installation. All application components, as well as the PostgreSQL database, are created as pods in OpenShift Container Platform:

[OSEv3:vars]
openshift_management_app_template=cfme-template
4.4.2.2. External NFS Storage

This is the same as the previous example, except that instead of using local NFS services in the cluster, it uses an existing, external NFS server (such as a storage appliance). Note the two new parameters:

[OSEv3:vars]
openshift_management_app_template=cfme-template
openshift_management_storage_class=nfs_external 1
openshift_management_storage_nfs_external_hostname=nfs.example.com 2
1
Set to nfs_external.
2
Set to the host name of the NFS server.

If the external NFS host exports directories under a different parent directory, such as /exports/hosted/prod, add the following additional variable:

openshift_management_storage_nfs_base_dir=/exports/hosted/prod
4.4.2.3. Override PV Sizes

This example overrides the persistent volume (PV) sizes. PV sizes must be set via openshift_management_template_parameters, which ensures that the application and database are able to make claims on created PVs without interfering with each other:

[OSEv3:vars]
openshift_management_app_template=cfme-template
openshift_management_template_parameters={'APPLICATION_VOLUME_CAPACITY': '10Gi', 'DATABASE_VOLUME_CAPACITY': '25Gi'}
4.4.2.4. Override Memory Requirements

In a test or proof-of-concept installation, you may need to reduce the application and database memory requirements to fit within your capacity. Note that reducing memory limits can result in reduced performance or a complete failure to initialize the application:

[OSEv3:vars]
openshift_management_app_template=cfme-template
openshift_management_template_parameters={'APPLICATION_MEM_REQ': '3000Mi', 'POSTGRESQL_MEM_REQ': '1Gi', 'ANSIBLE_MEM_REQ': '512Mi'}

This example instructs the installer to process the application template with the parameter APPLICATION_MEM_REQ set to 3000Mi, POSTGRESQL_MEM_REQ set to 1Gi, and ANSIBLE_MEM_REQ set to 512Mi.

These parameters can be combined with the parameters displayed in the previous example Override PV Sizes.

4.4.2.5. External PostgreSQL Database

To use an external database, you must change the openshift_management_app_template parameter value to cfme-template-ext-db.

Additionally, database connection information must be supplied using the openshift_management_template_parameters variable. See Configuring Role Variables for more details.

[OSEv3:vars]
openshift_management_app_template=cfme-template-ext-db
openshift_management_template_parameters={'DATABASE_USER': 'root', 'DATABASE_PASSWORD': 'mypassword', 'DATABASE_IP': '10.10.10.10', 'DATABASE_PORT': '5432', 'DATABASE_NAME': 'cfme'}
Important

Ensure you are running PostgreSQL 9.5 or you may not be able to deploy the application successfully.

4.5. Enabling Container Provider Integration

4.5.1. Adding a Single Container Provider

After deploying Red Hat CloudForms on OpenShift Container Platform as described in Running the Installer, there are two methods for enabling container provider integration. You can manually add OpenShift Container Platform as a container provider, or you can try the playbooks included with this role.

4.5.1.1. Adding Manually

See the Red Hat CloudForms documentation for steps on manually adding your OpenShift Container Platform cluster as a container provider.

4.5.1.2. Adding Automatically

Automated container provider integration can be accomplished using a playbook included with this role.

This playbook:

  1. Gathers the necessary authentication secrets.
  2. Finds the public routes to the Red Hat CloudForms application and the cluster API.
  3. Makes a REST call to add the OpenShift Container Platform cluster as a container provider.

To run the container provider playbook:

# ansible-playbook -v [-i /path/to/inventory] \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-management/add_container_provider.yml

4.5.2. Multiple Container Providers

As well as providing playbooks to integrate your current OpenShift Container Platform cluster into your Red Hat CloudForms deployment, this role includes a script which allows you to add multiple container platforms as container providers in any arbitrary Red Hat CloudForms server. The container platforms can be OpenShift Container Platform or OpenShift Origin.

Using the multiple provider script requires manual configuration and setting an EXTRA_VARS parameter on the CLI when running the playbook.

4.5.2.1. Preparing the Script

To prepare the multiple provider script, complete the following manual configuration:

  1. Copy the /usr/share/ansible/openshift-ansible/roles/openshift_management/files/examples/container_providers.yml example somewhere, such as /tmp/cp.yml. You will be modifying this file.
  2. If you changed your Red Hat CloudForms name or password, update the hostname, user, and password parameters in the management_server key in the container_providers.yml file that you copied.
  3. Fill in an entry under the container_providers key for each container platform cluster you want to add as container providers.

    1. The following parameters must be configured:

      • auth_key - This is the token of a service account that has cluster-admin privileges.
      • hostname - This is the host name that points to the cluster API. Each container provider must have a unique host name.
      • name - This is the name of the cluster to be displayed in the Red Hat CloudForms server container providers overview page. This must be unique.
      Tip

      To obtain the auth_key bearer token from your clusters:

      $ oc serviceaccounts get-token -n management-infra management-admin
    2. The following parameters may be optionally configured:

      • port - Update this key if your container platform cluster runs the API on a port other than 8443.
      • endpoint - You may enable SSL verification (verify_ssl) or change the validation setting to ssl-with-validation. Support for custom trusted CA certificates is not currently available.
4.5.2.1.1. Example

As an example, consider the following scenario:

  • You copied the container_providers.yml file to /tmp/cp.yml.
  • You want to add two OpenShift Container Platform clusters.
  • Your Red Hat CloudForms server runs on mgmt.example.com

For this scenario, you would customize /tmp/cp.yml as follows:

container_providers:
  - connection_configurations:
      - authentication: {auth_key: "<token>", authtype: bearer, type: AuthToken} 1
        endpoint: {role: default, security_protocol: ssl-without-validation, verify_ssl: 0}
    hostname: "<provider_hostname1>"
    name: <display_name1>
    port: 8443
    type: "ManageIQ::Providers::Openshift::ContainerManager"
  - connection_configurations:
      - authentication: {auth_key: "<token>", authtype: bearer, type: AuthToken} 2
        endpoint: {role: default, security_protocol: ssl-without-validation, verify_ssl: 0}
    hostname: "<provider_hostname2>"
    name: <display_name2>
    port: 8443
    type: "ManageIQ::Providers::Openshift::ContainerManager"
management_server:
  hostname: "<hostname>"
  user: <user_name>
  password: <password>
1 2
Replace <token> with the management token for this cluster.
4.5.2.2. Running the Playbook

To run the multiple-providers integration script, you must provide the path to the container providers configuration file as an EXTRA_VARS parameter to the ansible-playbook command. Use the -e (or --extra-vars) parameter to set container_providers_config to the configuration file path:

# ansible-playbook -v [-i /path/to/inventory] \
    -e container_providers_config=/tmp/cp.yml \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-management/add_many_container_providers.yml

After the playbook completes, you should find two new container providers in your Red Hat CloudForms service. Navigate to the Compute → Containers → Providers page to see an overview.

4.5.3. Refreshing Providers

After adding either a single or multiple container providers, the new provider(s) must be refreshed in Red Hat CloudForms to get all the latest data about the container provider and the containers being managed. This involves navigating to each provider in the Red Hat CloudForms web console and clicking a refresh button for each.

See the Red Hat CloudForms documentation for steps on refreshing providers.

4.6. Uninstalling Red Hat CloudForms

4.6.1. Running the Uninstall Playbook

To uninstall and erase a deployed Red Hat CloudForms installation from OpenShift Container Platform, run the following playbook:

# ansible-playbook -v [-i /path/to/inventory] \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-management/uninstall.yml
Important

NFS export definitions and data stored on NFS exports are not automatically removed. You are urged to manually erase any data from old application or database deployments before attempting to initialize a new deployment.

4.6.2. Troubleshooting

Failure to erase old PostgreSQL data can result in cascading errors, causing the postgresql pod to enter a crashloopbackoff state and blocking the cfme pod from ever starting. The crashloopbackoff is caused by incorrect file permissions on the database NFS export created during a previous deployment.

To continue, erase all data from the PostgreSQL export and delete the pod (not the deployer pod). For example, if you had the following pods:

$ oc get pods
NAME                 READY     STATUS             RESTARTS   AGE
httpd-1-cx7fk        1/1       Running            1          21h
cfme-0               0/1       Running            1          21h
memcached-1-vkc7p    1/1       Running            1          21h
postgresql-1-deploy  1/1       Running            1          21h
postgresql-1-6w2t4   0/1       CrashLoopBackOff   1          21h

Then you would:

  1. Erase the data from the database NFS export.
  2. Run:

    $ oc delete pod postgresql-1-6w2t4

The PostgreSQL deployer pod will try to scale up a new postgresql pod to replace the one you deleted. After the postgresql pod is running, the cfme pod will stop blocking and begin application initialization.

Chapter 5. Master and Node Configuration

5.1. Customizing master and node configuration after installation

The openshift start command and its subcommands (master to launch a master server and node to launch a node server) take a limited set of arguments that are sufficient for launching servers in a development or experimental environment.

However, these arguments are insufficient to describe and control the full set of configuration and security options that are necessary in a production environment. You must provide those options in the master configuration file at /etc/origin/master/master-config.yaml and in the node configuration maps.

These files define options including overriding the default plug-ins, connecting to etcd, automatically creating service accounts, building image names, customizing project requests, configuring volume plug-ins, and much more.

This topic covers the available options for customizing your OpenShift Container Platform master and node hosts, and shows you how to make changes to the configuration after installation.

These files are fully specified with no default values. Therefore, an empty value indicates that you want to start up with an empty value for that parameter. This makes it easy to reason about exactly what your configuration is, but it also makes it difficult to remember all of the options to specify. To make this easier, the configuration files can be created with the --write-config option and then used with the --config option.
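
As a rough sketch only (the output directory is a placeholder, and this workflow is intended for development or experimentation rather than production), generating a configuration and then starting from it might look like:

$ openshift start master --write-config=/tmp/openshift.local.config
$ openshift start master --config=/tmp/openshift.local.config/master-config.yaml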

5.2. Installation dependencies

Production environments should be installed using the standard cluster installation process. In production environments, it is a good idea to use multiple masters for the purposes of high availability (HA). A cluster architecture of three masters is recommended, and HAProxy is the recommended solution for this.

Caution

If etcd is installed on the master hosts, you must configure your cluster to use at least three masters, because etcd would not be able to decide which one is authoritative. The only way to successfully run only two masters is if you install etcd on hosts other than the masters.

5.3. Configuring masters and nodes

The method you use to configure your master and node configuration files must match the method that was used to install your OpenShift Container Platform cluster. If you followed the standard cluster installation process, then make your configuration changes in the Ansible inventory file.

5.4. Making configuration changes using Ansible

For this section, familiarity with Ansible is assumed.

Only a portion of the available host configuration options are exposed to Ansible. After an OpenShift Container Platform install, Ansible creates an inventory file with some substituted values. Modifying this inventory file and re-running the Ansible installer playbook is how you customize your OpenShift Container Platform cluster.

While OpenShift Container Platform supports cluster installation with an Ansible playbook and inventory file, you can also use other management tools, such as Puppet, Chef, or Salt.

Use Case: Configuring the cluster to use HTPasswd authentication

Note
  • This use case assumes you have already set up SSH keys to all the nodes referenced in the playbook.
  • The htpasswd utility is in the httpd-tools package:

    # yum install httpd-tools

To modify the Ansible inventory and make configuration changes:

  1. Open the ./hosts inventory file.
  2. Add the following new variables to the [OSEv3:vars] section of the file:

    # htpasswd auth
    openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
    # Defining htpasswd users
    #openshift_master_htpasswd_users={'<name>': '<hashed-password>', '<name>': '<hashed-password>'}
    # or
    #openshift_master_htpasswd_file=/etc/origin/master/htpasswd

    For HTPasswd authentication, the openshift_master_identity_providers variable enables the authentication type. You can configure three different authentication options that use HTPasswd:

    • Specify only openshift_master_identity_providers if /etc/origin/master/htpasswd is already configured and present on the host.
    • Specify both openshift_master_identity_providers and openshift_master_htpasswd_file to copy a local htpasswd file to the host.
    • Specify both openshift_master_identity_providers and openshift_master_htpasswd_users to generate a new htpasswd file on the host.

    Because OpenShift Container Platform requires a hashed password to configure HTPasswd authentication, you can use the htpasswd command, as shown in the following section, to generate the hashed password(s) for your user(s) or to create the flat file with the users and associated hashed passwords.

    The following example changes the authentication method from the default deny all setting to htpasswd and uses the specified file to generate user IDs and passwords for the jsmith and bloblaw users.

    # htpasswd auth
    openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
    # Defining htpasswd users
    openshift_master_htpasswd_users={'jsmith': '$apr1$wIwXkFLI$bAygtKGmPOqaJftB', 'bloblaw': '7IRJ$2ODmeLoxf4I6sUEKfiA$2aDJqLJe'}
    # or
    #openshift_master_htpasswd_file=/etc/origin/master/htpasswd
  3. Re-run the ansible playbook for these modifications to take effect:

    $ ansible-playbook -b -i ./hosts ~/src/openshift-ansible/playbooks/deploy_cluster.yml

    The playbook updates the configuration, and restarts the OpenShift Container Platform master service to apply the changes.

You have now modified the master and node configuration files using Ansible, but this is just a simple use case. From here you can see which master and node configuration options are exposed to Ansible and customize your own Ansible inventory.

5.4.1. Using the htpasswd command

To configure the OpenShift Container Platform cluster to use HTPasswd authentication, you need at least one user with a hashed password to include in the inventory file.

You can either create a user name and hashed password or create a flat file that contains the user names and hashed passwords, as described in the following procedures.

To create a user and hashed password:

  1. Run the following command to add the specified user:

    $ htpasswd -n <user_name>
    Note

    You can include the -b option to supply the password on the command line:

    $ htpasswd -nb <user_name> <password>
  2. Enter and confirm a clear-text password for the user.

    For example:

    $ htpasswd -n myuser
    New password:
    Re-type new password:
    myuser:$apr1$vdW.cI3j$WSKIOzUPs6Q

    The command generates a hashed version of the password.

You can then use the hashed password when configuring HTPasswd authentication. The hashed password is the string after the :. In the above example, you would enter:

openshift_master_htpasswd_users={'myuser': '$apr1$vdW.cI3j$WSKIOzUPs6Q'}

To create a flat file with a user name and hashed password:

  1. Execute the following command:

    $ htpasswd -c /etc/origin/master/htpasswd <user_name>
    Note

    You can include the -b option to supply the password on the command line:

    $ htpasswd -c -b /etc/origin/master/htpasswd <user_name> <password>
  2. Enter and confirm a clear-text password for the user.

    For example:

    $ htpasswd -c /etc/origin/master/htpasswd user1
    New password:
    Re-type new password:
    Adding password for user user1

    The command generates a file that includes the user name and a hashed version of the user’s password.

You can then use the password file when configuring HTPasswd authentication.

Note

For more information on the htpasswd command, see HTPasswd Identity Provider.

5.5. Making manual configuration changes

Use Case: Configure the cluster to use HTPasswd authentication

To manually modify a configuration file:

  1. Open the configuration file you want to modify, which in this case is the /etc/origin/master/master-config.yaml file.
  2. Add the following new variables to the identityProviders stanza of the file:

    oauthConfig:
      ...
      identityProviders:
      - name: my_htpasswd_provider
        challenge: true
        login: true
        mappingMethod: claim
        provider:
          apiVersion: v1
          kind: HTPasswdPasswordIdentityProvider
          file: /etc/origin/master/htpasswd
  3. Save your changes and close the file.
  4. Restart the master for the changes to take effect:

    # master-restart api
    # master-restart controllers

You have now manually modified the master and node configuration files, but this is just a simple use case. From here you can see all the master and node configuration options and customize your own cluster further by making additional modifications.

Note

To modify a node in your cluster, update the node configuration maps as needed. Do not manually edit the node-config.yaml file.

5.6. Master Configuration Files

This section reviews parameters mentioned in the master-config.yaml file.

You can create a new master configuration file to see the valid options for your installed version of OpenShift Container Platform.

Important

Whenever you modify the master-config.yaml file, you must restart the master for the changes to take effect. See Restarting OpenShift Container Platform services.

5.6.1. Admission Control Configuration

Table 5.1. Admission Control Configuration Parameters
Parameter Name | Description

AdmissionConfig

Contains the admission control plug-in configuration. OpenShift Container Platform has a configurable list of admission controller plug-ins that are triggered whenever API objects are created or modified. This option allows you to override the default list of plug-ins; for example, disabling some plug-ins, adding others, changing the ordering, and specifying configuration. Both the list of plug-ins and their configuration can be controlled from Ansible.

APIServerArguments

Key-value pairs that will be passed directly to the Kube API server that match the API server’s command line arguments. These are not migrated, but if you reference a value that does not exist the server will not start. These values may override other settings in KubernetesMasterConfig, which may cause invalid configurations. Use APIServerArguments with the event-ttl value to store events in etcd. The default is 2h, but it can be set to less to prevent memory growth:

apiServerArguments:
  event-ttl:
  - "15m"

ControllerArguments

Key-value pairs that will be passed directly to the Kube controller manager that match the controller manager’s command line arguments. These are not migrated, but if you reference a value that does not exist the server will not start. These values may override other settings in KubernetesMasterConfig, which may cause invalid configurations.
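
For illustration only, the stanza follows the same list-of-values form as apiServerArguments; the flag shown here is a standard Kube controller manager argument used as an assumed example:

controllerArguments:
  node-monitor-grace-period:
  - "40s"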

DefaultAdmissionConfig

Used to enable or disable various admission plug-ins. When this type is present as the configuration object under pluginConfig and if the admission plug-in supports it, this will cause an off-by-default admission plug-in to be enabled.

PluginConfig

Allows specifying a configuration file per admission control plug-in.

PluginOrderOverride

A list of admission control plug-in names that will be installed on the master. Order is significant. If empty, a default list of plug-ins is used.

SchedulerArguments

Key-value pairs that will be passed directly to the Kube scheduler that match the scheduler’s command line arguments. These are not migrated, but if you reference a value that does not exist the server will not start. These values may override other settings in KubernetesMasterConfig, which may cause invalid configurations.

5.6.2. Asset Configuration

Table 5.2. Asset Configuration Parameters
Parameter Name | Description

AssetConfig

If present, then the asset server starts based on the defined parameters. For example:

assetConfig:
  logoutURL: ""
  masterPublicURL: https://master.ose32.example.com:8443
  publicURL: https://master.ose32.example.com:8443/console/
  servingInfo:
    bindAddress: 0.0.0.0:8443
    bindNetwork: tcp4
    certFile: master.server.crt
    clientCA: ""
    keyFile: master.server.key
    maxRequestsInFlight: 0
    requestTimeoutSeconds: 0

corsAllowedOrigins

To access the API server from a web application using a different host name, you must whitelist that host name by specifying corsAllowedOrigins in the configuration field or by specifying the --cors-allowed-origins option on openshift start. No pinning or escaping is done to the value. See Web Console for example usage.

DisabledFeatures

A list of features that should not be started. You will likely want to set this as null. It is very unlikely that anyone will want to manually disable features and that is not encouraged.

Extensions

Files to serve from the asset server file system under a subcontext.

ExtensionDevelopment

When set to true, tells the asset server to reload extension scripts and stylesheets for every request rather than only at startup. It lets you develop extensions without having to restart the server for every change.

ExtensionProperties

Key (string) and value (string) pairs that will be injected into the console under the global variable OPENSHIFT_EXTENSION_PROPERTIES.

ExtensionScripts

File paths of scripts on the asset server to load when the web console loads.

ExtensionStylesheets

File paths of style sheets on the asset server to load when the web console loads.

LoggingPublicURL

The public endpoint for logging (optional).

LogoutURL

An optional, absolute URL to redirect web browsers to after logging out of the web console. If not specified, the built-in logout page is shown.

MasterPublicURL

How the web console can access the OpenShift Container Platform server.

MetricsPublicURL

The public endpoint for metrics (optional).

PublicURL

URL of the asset server.

5.6.3. Authentication and Authorization Configuration

Table 5.3. Authentication and Authorization Parameters
Parameter Name | Description

authConfig

Holds authentication and authorization configuration options.

AuthenticationCacheSize

Indicates how many authentication results should be cached. If 0, the default cache size is used.

AuthorizationCacheTTL

Indicates how long an authorization result should be cached. It takes a valid time duration string (e.g. "5m"). If empty, you get the default timeout. If zero (e.g. "0m"), caching is disabled.

5.6.4. Controller Configuration

Table 5.4. Controller Configuration Parameters
Parameter Name | Description

Controllers

List of the controllers that should be started. If set to none, no controllers will start automatically. The default value is *, which starts all controllers. When using *, you may exclude controllers by prepending a - in front of their name. No other values are recognized at this time.

ControllerLeaseTTL

Enables controller election, instructing the master to attempt to acquire a lease before controllers start and renewing it within a number of seconds defined by this value. Setting this value non-negative forces pauseControllers=true. This value defaults off (0, or omitted) and controller election can be disabled with -1.

PauseControllers

Instructs the master to not automatically start controllers, but instead to wait until a notification to the server is received before launching them.
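
As a sketch, these options appear as top-level fields in master-config.yaml; the values below reflect the defaults described above:

controllerLeaseTTL: 0
controllers: '*'
pauseControllers: false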

5.6.5. etcd Configuration

Table 5.5. etcd Configuration Parameters
Parameter Name | Description

Address

The advertised host:port for client connections to etcd.

etcdClientInfo

Contains information about how to connect to etcd. Specifies if etcd is run as embedded or non-embedded, and the hosts. The rest of the configuration is handled by the Ansible inventory. For example:

etcdClientInfo:
  ca: ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
  - https://m1.aos.example.com:4001

etcdConfig

If present, then etcd starts based on the defined parameters. For example:

etcdConfig:
  address: master.ose32.example.com:4001
  peerAddress: master.ose32.example.com:7001
  peerServingInfo:
    bindAddress: 0.0.0.0:7001
    certFile: etcd.server.crt
    clientCA: ca.crt
    keyFile: etcd.server.key
  servingInfo:
    bindAddress: 0.0.0.0:4001
    certFile: etcd.server.crt
    clientCA: ca.crt
    keyFile: etcd.server.key
  storageDirectory: /var/lib/origin/openshift.local.etcd

etcdStorageConfig

Contains information about how API resources are stored in etcd. These values are only relevant when etcd is the backing store for the cluster.

KubernetesStoragePrefix

The path within etcd that the Kubernetes resources will be rooted under. This value, if changed, will mean existing objects in etcd will no longer be located. The default value is kubernetes.io.

KubernetesStorageVersion

The API version that Kubernetes resources in etcd should be serialized to. This value should not be advanced until all clients in the cluster that read from etcd have code that allows them to read the new version.

OpenShiftStoragePrefix

The path within etcd that the OpenShift Container Platform resources will be rooted under. This value, if changed, will mean existing objects in etcd will no longer be located. The default value is openshift.io.

OpenShiftStorageVersion

The API version that OpenShift Container Platform resources in etcd should be serialized to. This value should not be advanced until all clients in the cluster that read from etcd have code that allows them to read the new version.

PeerAddress

The advertised host:port for peer connections to etcd.

PeerServingInfo

Describes how to start serving the etcd peer.

ServingInfo

Describes how to start serving. For example:

servingInfo:
  bindAddress: 0.0.0.0:8443
  bindNetwork: tcp4
  certFile: master.server.crt
  clientCA: ca.crt
  keyFile: master.server.key
  maxRequestsInFlight: 500
  requestTimeoutSeconds: 3600

StorageDir

The path to the etcd storage directory.

5.6.6. Grant Configuration

Table 5.6. Grant Configuration Parameters
Parameter Name | Description

GrantConfig

Describes how to handle grants.

GrantHandlerAuto

Auto-approves client authorization grant requests.

GrantHandlerDeny

Auto-denies client authorization grant requests.

GrantHandlerPrompt

Prompts the user to approve new client authorization grant requests.

Method

Determines the default strategy to use when an OAuth client requests a grant. This method is used only if the specific OAuth client does not provide a strategy of its own. Valid grant handling methods are:

  • auto: always approves grant requests, useful for trusted clients
  • prompt: prompts the end user for approval of grant requests, useful for third-party clients
  • deny: always denies grant requests, useful for black-listed clients

5.6.7. Image Configuration

Table 5.7. Image Configuration Parameters
Parameter Name | Description

Format

The format of the name to be built for the system component.

Latest

Determines if the latest tag will be pulled from the registry.
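
In master-config.yaml these two parameters appear under an imageConfig stanza. A minimal sketch, reusing the image format shown elsewhere in this guide:

imageConfig:
  format: registry.access.redhat.com/openshift3/ose-${component}:${version}
  latest: false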

5.6.8. Image Policy Configuration

Table 5.8. Image Policy Configuration Parameters
Parameter Name | Description

DisableScheduledImport

Allows scheduled background import of images to be disabled.

MaxImagesBulkImportedPerRepository

Controls the number of images that are imported when a user does a bulk import of a Docker repository. This number defaults to 5 to prevent users from importing large numbers of images accidentally. Set -1 for no limit.

MaxScheduledImageImportsPerMinute

The maximum number of scheduled image streams that will be imported in the background per minute. The default value is 60.

ScheduledImageImportMinimumIntervalSeconds

The minimum number of seconds that can elapse between when image streams scheduled for background import are checked against the upstream repository. The default value is 900 seconds (15 minutes).

AllowedRegistriesForImport

Limits the docker registries that normal users may import images from. Set this list to the registries that you trust to contain valid Docker images and that you want applications to be able to import from. Users with permission to create Images or ImageStreamMappings via the API are not affected by this policy - typically only administrators or system integrations will have those permissions.
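
A hedged sketch of how this could look under imagePolicyConfig in master-config.yaml (the registry names are placeholders, and the insecure flag marks a registry reachable only over plain HTTP):

imagePolicyConfig:
  allowedRegistriesForImport:
  - domainName: registry.access.redhat.com
  - domainName: registry.example.com:5000
    insecure: true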

InternalRegistryHostname

Sets the hostname for the default internal image registry. The value must be in hostname[:port] format. For backward compatibility, users can still use OPENSHIFT_DEFAULT_REGISTRY environment variable but this setting overrides the environment variable. When this is set, the internal registry must have its hostname set as well. See setting the registry hostname for more details.

ExternalRegistryHostname

ExternalRegistryHostname sets the hostname for the default external image registry. The external hostname should be set only when the image registry is exposed externally. The value is used in publicDockerImageRepository field in ImageStreams. The value must be in hostname[:port] format.

5.6.9. Kubernetes Master Configuration

Table 5.9. Kubernetes Master Configuration Parameters
Parameter Name | Description

APILevels

A list of API levels that should be enabled on startup; for example, v1.

DisabledAPIGroupVersions

A map of groups to the versions (or *) that should be disabled.

KubeletClientInfo

Contains information about how to connect to kubelets.

KubernetesMasterConfig

Holds the Kubernetes master configuration, including information about how to connect to kubelets. If present, the Kubernetes master is started with this process.

MasterCount

The number of expected masters that should be running. This value defaults to 1 and may be set to a positive integer, or if set to -1, indicates this is part of a cluster.

MasterIP

The public IP address of Kubernetes resources. If empty, the first result from net.InterfaceAddrs will be used.

MasterKubeConfig

File name for the .kubeconfig file that describes how to connect this node to the master.

ServicesNodePortRange

The range to use for assigning service public ports on a host. Default 30000-32767.

ServicesSubnet

The subnet to use for assigning service IPs.

StaticNodeNames

The list of nodes that are statically known.
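
Most of these parameters live under the kubernetesMasterConfig stanza of master-config.yaml. A trimmed sketch with placeholder values:

kubernetesMasterConfig:
  apiLevels:
  - v1
  masterCount: 1
  masterIP: 10.0.0.1
  servicesNodePortRange: 30000-32767
  servicesSubnet: 172.30.0.0/16
  staticNodeNames: []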

5.6.10. Network Configuration

Choose the CIDRs in the following parameters carefully, because the IPv4 address space is shared by all users of the nodes. OpenShift Container Platform reserves CIDRs from the IPv4 address space for its own internal use, and reserves CIDRs for addresses that are shared between the external user and the cluster.

Table 5.10. Network Configuration Parameters
Parameter Name | Description

ClusterNetworkCIDR

The CIDR string to specify the global overlay network’s L3 space. This is reserved for the internal use of the cluster networking.

externalIPNetworkCIDRs

Controls what values are acceptable for the service external IP field. If empty, no externalIP may be set. It may contain a list of CIDRs which are checked for access. If a CIDR is prefixed with !, IPs in that CIDR will be rejected. Rejections will be applied first, then the IP checked against one of the allowed CIDRs. You must ensure this range does not overlap with your nodes, pods, or service CIDRs for security reasons.

HostSubnetLength

The number of bits to allocate to each host’s subnet. For example, 8 would mean a /24 network on the host.

ingressIPNetworkCIDR

Controls the range to assign ingress IPs from for services of type LoadBalancer on bare metal. It may contain a single CIDR that it will be allocated from. By default 172.46.0.0/16 is configured. For security reasons, you should ensure that this range does not overlap with the CIDRs reserved for external IPs, nodes, pods, or services.

NetworkConfig

To be passed to the compiled-in-network plug-in. Many of the options here can be controlled in the Ansible inventory.

  • NetworkPluginName (string)
  • ClusterNetworkCIDR (string)
  • HostSubnetLength (unsigned integer)
  • ServiceNetworkCIDR (string)
  • externalIPNetworkCIDRs (string array): Controls which values are acceptable for the service external IP field. If empty, no external IP may be set. It can contain a list of CIDRs which are checked for access. If a CIDR is prefixed with !, then IPs in that CIDR are rejected. Rejections are applied first, then the IP is checked against one of the allowed CIDRs. For security purposes, you should ensure this range does not overlap with your nodes, pods, or service CIDRs.

For example:

networkConfig:
  clusterNetworks:
  - cidr: 10.3.0.0/16
    hostSubnetLength: 8
  networkPluginName: example/openshift-ovs-subnet
# serviceNetworkCIDR must match kubernetesMasterConfig.servicesSubnet
  serviceNetworkCIDR: 179.29.0.0/16

NetworkPluginName

The name of the network plug-in to use.

ServiceNetwork

The CIDR string to specify the service networks.

5.6.11. OAuth Authentication Configuration

Table 5.11. OAuth Configuration Parameters
Parameter Name | Description

AlwaysShowProviderSelection

Forces the provider selection page to render even when there is only a single provider.

AssetPublicURL

Used for building valid client redirect URLs for external access.

Error

A path to a file containing a go template used to render error pages during the authentication or grant flow. If unspecified, the default error page is used.

IdentityProviders

Ordered list of ways for a user to identify themselves.

Login

A path to a file containing a go template used to render the login page. If unspecified, the default login page is used.

MasterCA

CA for verifying the TLS connection back to the MasterURL.

MasterPublicURL

Used for building valid client redirect URLs for external access.

MasterURL

Used for making server-to-server calls to exchange authorization codes for access tokens.

OAuthConfig

If present, then the /oauth endpoint starts based on the defined parameters. For example:

oauthConfig:
  assetPublicURL: https://master.ose32.example.com:8443/console/
  grantConfig:
    method: auto
  identityProviders:
  - challenge: true
    login: true
    mappingMethod: claim
    name: htpasswd_all
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /etc/origin/openshift-passwd
  masterCA: ca.crt
  masterPublicURL: https://master.ose32.example.com:8443
  masterURL: https://master.ose32.example.com:8443
  sessionConfig:
    sessionMaxAgeSeconds: 3600
    sessionName: ssn
    sessionSecretsFile: /etc/origin/master/session-secrets.yaml
  tokenConfig:
    accessTokenMaxAgeSeconds: 86400
    authorizeTokenMaxAgeSeconds: 500

OAuthTemplates

Allows for customization of pages like the login page.

ProviderSelection

A path to a file containing a go template used to render the provider selection page. If unspecified, the default provider selection page is used.

SessionConfig

Holds information about configuring sessions.

Templates

Allows you to customize pages like the login page.

TokenConfig

Contains options for authorization and access tokens.

5.6.12. Project Configuration

Table 5.12. Project Configuration Parameters
Parameter Name | Description

DefaultNodeSelector

Holds default project node label selector.

ProjectConfig

Holds information about project creation and defaults:

  • DefaultNodeSelector (string): Holds the default project node label selector.
  • ProjectRequestMessage (string): The string presented to a user if they are unable to request a project via the projectrequest API endpoint.
  • ProjectRequestTemplate (string): The template to use for creating projects in response to projectrequest. It is in the format <namespace>/<template>. It is optional, and if it is not specified, a default template is used.
  • SecurityAllocator: Controls the automatic allocation of UIDs and MCS labels to a project. If nil, allocation is disabled:

    • mcsAllocatorRange (string): Defines the range of MCS categories that will be assigned to namespaces. The format is <prefix>/<numberOfLabels>[,<maxCategory>]. The default is s0/2 and will allocate from c0 → c1023, which means a total of 535k labels are available. If this value is changed after startup, new projects may receive labels that are already allocated to other projects. The prefix may be any valid SELinux set of terms (including user, role, and type). However, leaving the prefix at its default allows the server to set them automatically. For example, s0:/2 would allocate labels from s0:c0,c0 to s0:c511,c511 whereas s0:/2,512 would allocate labels from s0:c0,c0,c0 to s0:c511,c511,511.
    • mcsLabelsPerProject (integer): Defines the number of labels to reserve per project. The default is 5 to match the default UID and MCS ranges.
    • uidAllocatorRange (string): Defines the total set of Unix user IDs (UIDs) automatically allocated to projects, and the size of the block that each namespace gets. For example, 1000-1999/10 would allocate ten UIDs per namespace, and would be able to allocate up to 100 blocks before running out of space. The default is to allocate from 1 billion to 2 billion in 10k blocks, which is the expected size of ranges for container images when user namespaces are started.
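
As a sketch of the corresponding projectConfig stanza in master-config.yaml (the values shown mirror the defaults described in this table):

projectConfig:
  defaultNodeSelector: ""
  projectRequestMessage: ""
  projectRequestTemplate: ""
  securityAllocator:
    mcsAllocatorRange: s0:/2
    mcsLabelsPerProject: 5
    uidAllocatorRange: 1000000000-1999999999/10000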

ProjectRequestMessage

The string presented to a user if they are unable to request a project via the project request API endpoint.

ProjectRequestTemplate

The template to use for creating projects in response to a projectrequest. It is in the format namespace/template and it is optional. If it is not specified, a default template is used.

5.6.13. Scheduler Configuration

Table 5.13. Scheduler Configuration Parameters
Parameter Name | Description

SchedulerConfigFile

Points to a file that describes how to set up the scheduler. If empty, you get the default scheduling rules.

5.6.14. Security Allocator Configuration

Table 5.14. Security Allocator Parameters
Parameter Name | Description

MCSAllocatorRange

Defines the range of MCS categories that will be assigned to namespaces. The format is <prefix>/<numberOfLabels>[,<maxCategory>]. The default is s0/2 and will allocate from c0 to c1023, which means a total of 535k labels are available (1024 choose 2 ~ 535k). If this value is changed after startup, new projects may receive labels that are already allocated to other projects. Prefix may be any valid SELinux set of terms (including user, role, and type), although leaving them as the default will allow the server to set them automatically.

SecurityAllocator

Controls the automatic allocation of UIDs and MCS labels to a project. If nil, allocation is disabled.

UIDAllocatorRange

Defines the total set of Unix user IDs (UIDs) that will be allocated to projects automatically, and the size of the block that each namespace gets. For example, 1000-1999/10 will allocate ten UIDs per namespace, and will be able to allocate up to 100 blocks before running out of space. The default is to allocate from 1 billion to 2 billion in 10k blocks (which is the expected size of the ranges container images will use once user namespaces are started).

5.6.15. Service Account Configuration

Table 5.15. Service Account Configuration Parameters
Parameter Name | Description

LimitSecretReferences

Controls whether or not to allow a service account to reference any secret in a namespace without explicitly referencing them.

ManagedNames

A list of service account names that will be auto-created in every namespace. If no names are specified, the ServiceAccountsController will not be started.

MasterCA

The CA for verifying the TLS connection back to the master. The service account controller will automatically inject the contents of this file into pods so they can verify connections to the master.

PrivateKeyFile

A file containing a PEM-encoded private RSA key, used to sign service account tokens. If no private key is specified, the service account TokensController will not be started.

PublicKeyFiles

A list of files, each containing a PEM-encoded public RSA key. If any file contains a private key, the public portion of the key is used. The list of public keys is used to verify presented service account tokens. Each key is tried in order until the list is exhausted or verification succeeds. If no keys are specified, no service account authentication will be available.

ServiceAccountConfig

Holds options related to service accounts:

  • LimitSecretReferences (boolean): Controls whether or not to allow a service account to reference any secret in a namespace without explicitly referencing them.
  • ManagedNames (string): A list of service account names that will be auto-created in every namespace. If no names are specified, then the ServiceAccountsController will not be started.
  • MasterCA (string): The certificate authority for verifying the TLS connection back to the master. The service account controller will automatically inject the contents of this file into pods so that they can verify connections to the master.
  • PrivateKeyFile (string): Contains a PEM-encoded private RSA key, used to sign service account tokens. If no private key is specified, then the service account TokensController will not be started.
  • PublicKeyFiles (string): A list of files, each containing a PEM-encoded public RSA key. If any file contains a private key, then OpenShift Container Platform uses the public portion of the key. The list of public keys is used to verify service account tokens; each key is tried in order until either the list is exhausted or verification succeeds. If no keys are specified, then service account authentication will not be available.
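
A sketch of a typical serviceAccountConfig stanza in master-config.yaml (file names are representative placeholders):

serviceAccountConfig:
  limitSecretReferences: false
  managedNames:
  - default
  - builder
  - deployer
  masterCA: ca-bundle.crt
  privateKeyFile: serviceaccounts.private.key
  publicKeyFiles:
  - serviceaccounts.public.key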

5.6.16. Serving Information Configuration

Table 5.16. Serving Information Configuration Parameters
Parameter Name | Description

AllowRecursiveQueries

Allows the DNS server on the master to answer queries recursively. Note that open resolvers can be used for DNS amplification attacks and the master DNS should not be made accessible to public networks.

BindAddress

The ip:port to serve on.

BindNetwork

The network type to bind to, for example tcp4.

CertFile

A file containing a PEM-encoded certificate.

CertInfo

TLS cert information for serving secure traffic.

ClientCA

The certificate bundle for all the signers that you recognize for incoming client certificates.

dnsConfig

If present, then start the DNS server based on the defined parameters. For example:

dnsConfig:
  bindAddress: 0.0.0.0:8053
  bindNetwork: tcp4

DNSDomain

Holds the domain suffix.

DNSIP

Holds the IP.

KeyFile

A file containing a PEM-encoded private key for the certificate specified by CertFile.

MasterClientConnectionOverrides

Provides overrides to the client connection used to connect to the master. This parameter is not supported. To set QPS and burst values, see Setting Node QPS and Burst Values.

MaxRequestsInFlight

The number of concurrent requests allowed to the server. If zero, no limit.

NamedCertificates

A list of certificates to use to secure requests to specific host names.
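
A sketch of how namedCertificates nest under servingInfo (certificate file names and host names are placeholders):

servingInfo:
  ...
  namedCertificates:
  - certFile: custom.crt
    keyFile: custom.key
    names:
    - "*.apps.example.com"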

RequestTimeoutSeconds

The number of seconds before requests are timed out. The default is 60 minutes. If -1, there is no limit on requests.

ServingInfo

The HTTP serving information for the assets.

5.6.17. Volume Configuration

Table 5.17. Volume Configuration Parameters
Parameter Name | Description

DynamicProvisioningEnabled

A boolean to enable or disable dynamic provisioning. Default is true.

FSGroup

Enables local storage quotas on each node for each FSGroup. At present this is only implemented for emptyDir volumes, and if the underlying volumeDirectory is on an XFS filesystem.

MasterVolumeConfig

Contains options for configuring volume plug-ins in the master node.

NodeVolumeConfig

Contains options for configuring volumes on the node.

VolumeConfig

Contains options for configuring volume plug-ins in the node:

  • DynamicProvisioningEnabled (boolean): Default value is true, and toggles dynamic provisioning off when false.

VolumeDirectory

The directory that volumes are stored under.
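
For example, a minimal volumeConfig stanza in master-config.yaml that leaves dynamic provisioning enabled:

volumeConfig:
  dynamicProvisioningEnabled: true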

5.6.18. Basic Audit

Audit provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system, by individual users, administrators, or other components of the system.

Audit works at the API server level, logging all requests coming to the server. Each audit log contains two entries:

  1. The request line containing:

    1. A unique ID used to match the response line (see #2)
    2. The source IP of the request
    3. The HTTP method being invoked
    4. The original user invoking the operation
    5. The impersonated user for the operation (self means the user acted as themselves)
    6. The impersonated group for the operation (lookup means the user’s groups were looked up)
    7. The namespace of the request or <none>
    8. The URI as requested
  2. The response line containing:

    1. The unique ID from #1
    2. The response code

Example output for user admin asking for a list of pods:

AUDIT: id="5c3b8227-4af9-4322-8a71-542231c3887b" ip="127.0.0.1" method="GET" user="admin" as="<self>" asgroups="<lookup>" namespace="default" uri="/api/v1/namespaces/default/pods"
AUDIT: id="5c3b8227-4af9-4322-8a71-542231c3887b" response="200"

The openshift_master_audit_config variable enables API service auditing. It takes an array of the following options:

Table 5.18. Audit Configuration Parameters
Parameter Name | Description

enabled

A boolean to enable or disable audit logs. Default is false.

auditFilePath

File path where the requests should be logged to. If not set, logs are printed to master logs.

maximumFileRetentionDays

Specifies maximum number of days to retain old audit log files based on the time stamp encoded in their filename.

maximumRetainedFiles

Specifies the maximum number of old audit log files to retain.

maximumFileSizeMegabytes

Specifies maximum size in megabytes of the log file before it gets rotated. Defaults to 100MB.

Important

Because the OpenShift Container Platform master API now runs as a static pod, you must define the auditFilePath location under /var/lib/origin or /etc/origin/master/.

Example Audit Configuration

auditConfig:
  auditFilePath: "/var/lib/origin/audit-ocp.log"
  enabled: true
  maximumFileRetentionDays: 10
  maximumFileSizeMegabytes: 10
  maximumRetainedFiles: 10

Advanced Setup for the Audit Log

The directory /var/lib/origin will be created if it does not exist.

You can specify advanced audit log parameters by using the following parameter value format:

openshift_master_audit_config={"enabled": true, "auditFilePath": "/var/lib/origin/openpaas-oscp-audit.log", "maximumFileRetentionDays": 14, "maximumFileSizeMegabytes": 500, "maximumRetainedFiles": 5}

5.6.19. Advanced Audit

The advanced audit feature provides several improvements over the basic audit functionality, including fine-grained events filtering and multiple output back ends.

To enable the advanced audit feature, provide the following values in the openshift_master_audit_config parameter:

openshift_master_audit_config={"enabled": true, "auditFilePath": "/var/lib/origin/oscp-audit.log", "maximumFileRetentionDays": 14, "maximumFileSizeMegabytes": 500, "maximumRetainedFiles": 5, "policyFile": "/etc/origin/master/adv-audit.yaml", "logFormat":"json"}
Important

The policy file /etc/origin/master/adv-audit.yaml must be available on each master node.

The following table contains additional options you can use.

Table 5.19. Advanced Audit Configuration Parameters
Parameter Name | Description

policyFile

Path to the file that defines the audit policy configuration.

policyConfiguration

An embedded audit policy configuration.

logFormat

Specifies the format of the saved audit logs. Allowed values are legacy (the format used in basic audit), and json.

webHookKubeConfig

Path to a .kubeconfig-formatted file that defines the audit webhook configuration, where the events are sent to.

webHookMode

Specifies the strategy for sending audit events. Allowed values are block (blocks processing another event until the previous has fully processed) and batch (buffers events and delivers in batches).

Important

To enable the advanced audit feature, you must provide either policyFile or policyConfiguration describing the audit policy rules:

Sample Audit Policy Configuration

apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:

  # Do not log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None 1
    users: ["system:kube-proxy"] 2
    verbs: ["watch"] 3
    resources: 4
    - group: ""
      resources: ["endpoints", "services"]

  # Do not log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"] 5
    nonResourceURLs: 6
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"] 7

  # Log configmap and secret changes in all other namespaces at the metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata 8

  # Log login failures from the web console or CLI. Review the logs and refine your policies.
  - level: Metadata
    nonResourceURLs:
    - /login* 9
    - /oauth* 10

1 8
There are four possible levels every event can be logged at:
  • None - Do not log events that match this rule.
  • Metadata - Log request metadata (requesting user, time stamp, resource, verb, etc.), but not request or response body. This is the same level as the one used in basic audit.
  • Request - Log event metadata and request body, but not response body.
  • RequestResponse - Log event metadata, request, and response bodies.
2
A list of users the rule applies to. An empty list implies every user.
3
A list of verbs this rule applies to. An empty list implies every verb. This is the Kubernetes verb associated with the API request (including get, list, watch, create, update, patch, delete, deletecollection, and proxy).
4
A list of resources the rule applies to. An empty list implies every resource. Each resource is specified as the group it belongs to (for example, an empty string for the Kubernetes core API, batch, build.openshift.io, and so on) and a list of resources from that group.
5
A list of groups the rule applies to. An empty list implies every group.
6
A list of non-resource URLs the rule applies to.
7
A list of namespaces the rule applies to. An empty list implies every namespace.
9
Endpoint used by the web console.
10
Endpoint used by the CLI.
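
If you prefer to embed the policy in the master configuration rather than reference an external file, the policyConfiguration parameter described in Table 5.19 accepts the same rules inline. The following is a minimal sketch only; the auditConfig values shown are assumptions carried over from the earlier examples, and the exact embedded form should be verified against your master-config.yaml and the Kubernetes documentation.

auditConfig:
  enabled: true
  auditFilePath: "/var/lib/origin/adv-audit.log"
  logFormat: json
  policyConfiguration:
    apiVersion: audit.k8s.io/v1beta1
    kind: Policy
    rules:
    # A catch-all rule that logs every request at the Metadata level.
    - level: Metadata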

For more information on advanced audit, see the Kubernetes documentation.

5.6.20. Specifying TLS ciphers for etcd

You can specify the supported TLS ciphers to use in communication between the master and etcd servers.

  1. On each etcd node, upgrade etcd:

    # yum update etcd iptables-services
  2. Confirm that your etcd version is 3.2.22 or later:

    # etcd --version
    etcd Version: 3.2.22
  3. On each master host, specify the ciphers to enable in the /etc/origin/master/master-config.yaml file:

    servingInfo:
      ...
      minTLSVersion: VersionTLS12
      cipherSuites:
      - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
      - TLS_RSA_WITH_AES_256_CBC_SHA
      - TLS_RSA_WITH_AES_128_CBC_SHA
    ...
  4. On each master host, restart the master service:

    # master-restart api
    # master-restart controllers
  5. Confirm that the cipher is applied. For example, for TLSv1.2 cipher ECDHE-RSA-AES128-GCM-SHA256, run the following command:

    # openssl s_client -connect etcd1.example.com:2379 1
    CONNECTED(00000003)
    depth=0 CN = etcd1.example.com
    verify error:num=20:unable to get local issuer certificate
    verify return:1
    depth=0 CN = etcd1.example.com
    verify error:num=21:unable to verify the first certificate
    verify return:1
    139905367488400:error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate:s3_pkt.c:1493:SSL alert number 42
    139905367488400:error:140790E5:SSL routines:ssl23_write:ssl handshake failure:s23_lib.c:177:
    ---
    Certificate chain
     0 s:/CN=etcd1.example.com
       i:/CN=etcd-signer@1529635004
    ---
    Server certificate
    -----BEGIN CERTIFICATE-----
    MIIEkjCCAnqgAwIBAgIBATANBgkqhkiG9w0BAQsFADAhMR8wHQYDVQQDDBZldGNk
    ........
    ....
    eif87qttt0Sl1vS8DG1KQO1oOBlNkg==
    -----END CERTIFICATE-----
    subject=/CN=etcd1.example.com
    issuer=/CN=etcd-signer@1529635004
    ---
    Acceptable client certificate CA names
    /CN=etcd-signer@1529635004
    Client Certificate Types: RSA sign, ECDSA sign
    Requested Signature Algorithms: RSA+SHA256:ECDSA+SHA256:RSA+SHA384:ECDSA+SHA384:RSA+SHA1:ECDSA+SHA1
    Shared Requested Signature Algorithms: RSA+SHA256:ECDSA+SHA256:RSA+SHA384:ECDSA+SHA384:RSA+SHA1:ECDSA+SHA1
    Peer signing digest: SHA384
    Server Temp Key: ECDH, P-256, 256 bits
    ---
    SSL handshake has read 1666 bytes and written 138 bytes
    ---
    New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256
    Server public key is 2048 bit
    Secure Renegotiation IS supported
    Compression: NONE
    Expansion: NONE
    No ALPN negotiated
    SSL-Session:
        Protocol  : TLSv1.2
        Cipher    : ECDHE-RSA-AES128-GCM-SHA256
        Session-ID:
        Session-ID-ctx:
        Master-Key: 1EFA00A91EE5FC5EDDCFC67C8ECD060D44FD3EB23D834EDED929E4B74536F273C0F9299935E5504B562CD56E76ED208D
        Key-Arg   : None
        Krb5 Principal: None
        PSK identity: None
        PSK identity hint: None
        Start Time: 1529651744
        Timeout   : 300 (sec)
        Verify return code: 21 (unable to verify the first certificate)
    1
    etcd1.example.com is the name of an etcd host.

5.7. Node Configuration Files

During installation, OpenShift Container Platform creates a configmap in the openshift-node project for each type of node group:

  • node-config-master
  • node-config-infra
  • node-config-compute
  • node-config-all-in-one
  • node-config-master-infra

To make configuration changes to an existing node, edit the appropriate configuration map. A sync pod on each node watches for changes in the configuration maps. During installation, the sync pods are created by using sync DaemonSets, and a /etc/origin/node/node-config.yaml file, where the node configuration parameters reside, is added to each node. When a sync pod detects a configuration map change, it updates the node-config.yaml on all nodes in that node group and restarts the appropriate nodes.

$ oc get cm -n openshift-node
NAME                       DATA      AGE
node-config-all-in-one     1         1d
node-config-compute        1         1d
node-config-infra          1         1d
node-config-master         1         1d
node-config-master-infra   1         1d
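
For example, to change settings for nodes in the compute group, edit the node-config-compute configuration map. This is a sketch only; substitute the configuration map for your node group.

$ oc edit cm node-config-compute -n openshift-node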

Sample configuration map for the node-config-compute group

apiVersion: v1
authConfig:      1
  authenticationCacheSize: 1000
  authenticationCacheTTL: 5m
  authorizationCacheSize: 1000
  authorizationCacheTTL: 5m
dnsBindAddress: 127.0.0.1:53
dnsDomain: cluster.local
dnsIP: 0.0.0.0               2
dnsNameservers: null
dnsRecursiveResolvConf: /etc/origin/node/resolv.conf
dockerConfig:
  dockerShimRootDirectory: /var/lib/dockershim
  dockerShimSocket: /var/run/dockershim.sock
  execHandlerName: native
enableUnidling: true
imageConfig:
  format: registry.reg-aws.openshift.com/openshift3/ose-${component}:${version}
  latest: false
iptablesSyncPeriod: 30s
kind: NodeConfig
kubeletArguments: 3
  bootstrap-kubeconfig:
  - /etc/origin/node/bootstrap.kubeconfig
  cert-dir:
  - /etc/origin/node/certificates
  cloud-config:
  - /etc/origin/cloudprovider/aws.conf
  cloud-provider:
  - aws
  enable-controller-attach-detach:
  - 'true'
  feature-gates:
  - RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true
  node-labels:
  - node-role.kubernetes.io/compute=true
  pod-manifest-path:
  - /etc/origin/node/pods  4
  rotate-certificates:
  - 'true'
masterClientConnectionOverrides:
  acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
  burst: 40
  contentType: application/vnd.kubernetes.protobuf
  qps: 20
masterKubeConfig: node.kubeconfig
networkConfig:   5
  mtu: 8951
  networkPluginName: redhat/openshift-ovs-subnet  6
servingInfo:                   7
  bindAddress: 0.0.0.0:10250
  bindNetwork: tcp4
  clientCA: client-ca.crt
volumeConfig:
  localQuota:
    perFSGroup: null    8
volumeDirectory: /var/lib/origin/openshift.local.volumes

1
Authentication and authorization configuration options.
2
IP address prepended to a pod’s /etc/resolv.conf.
3
Key-value pairs that are passed directly to the Kubelet and that match the Kubelet’s command-line arguments.
4
The path to the pod manifest file or directory. A directory must contain one or more manifest files. OpenShift Container Platform uses the manifest files to create pods on the node.
5
The pod network settings on the node.
6
Software defined network (SDN) plug-in. Set to redhat/openshift-ovs-subnet for the ovs-subnet plug-in; redhat/openshift-ovs-multitenant for the ovs-multitenant plug-in; or redhat/openshift-ovs-networkpolicy for the ovs-networkpolicy plug-in.
7
Certificate information for the node.
8
Optional: Local emptyDir volume quota settings for the node. Set perFSGroup to a resource quantity to enable a quota per FSGroup, as described in Local Storage Configuration. The default value of null disables local quotas.
Note

Do not manually modify the /etc/origin/node/node-config.yaml file.

The node configuration file determines the resources of a node. See the Allocating node resources section in the Cluster Administrator guide for more information.

5.7.1. Pod and Node Configuration

Table 5.20. Pod and Node Configuration Parameters
Parameter NameDescription

NodeConfig

The fully specified configuration for starting an OpenShift Container Platform node.

NodeIP

A node may have multiple IPs, so this specifies the IP to use for pod traffic routing. If not specified, a network lookup on the nodeName is performed and the first non-loopback address is used.

NodeName

The value used to identify this particular node in the cluster. If possible, this should be your fully qualified hostname. If you are describing a set of static nodes to the master, this value must match one of the values in the list.

PodEvictionTimeout

Controls the grace period for deleting pods on failed nodes. It takes a valid time duration string. If empty, the default pod eviction timeout is used.

ProxyClientInfo

Specifies the client cert/key to use when proxying to pods.

5.7.2. Docker Configuration

Table 5.21. Docker Configuration Parameters
Parameter NameDescription

AllowDisabledDocker

If true, the kubelet will ignore errors from Docker. This means that a node can start on a machine that does not have docker started.

DockerConfig

Holds Docker-related configuration options.

ExecHandlerName

The handler to use for executing commands in Docker containers.

5.7.3. Local Storage Configuration

You can use the XFS quota subsystem to limit the size of emptyDir volumes and volumes based on an emptyDir volume, such as secrets and configuration maps, on each node.

To limit the size of emptyDir volumes in an XFS filesystem, configure local volume quota for each unique FSGroup using the node-config-compute configuration map in the openshift-node project.

apiVersion: kubelet.config.openshift.io/v1
kind: VolumeConfig
localQuota: 1
  perFSGroup: 1Gi 2
1
Contains options for controlling local volume quota on the node.
2
Set this value to a resource quantity representing the desired quota per FSGroup, per node, such as 1Gi, 512Mi, and so forth. Requires the volumeDirectory to be on an XFS filesystem mounted with the grpquota option. The matching security context constraint fsGroup type must be set to MustRunAs.

If no FSGroup is specified, indicating the request matched an SCC with RunAsAny, the quota application is skipped.
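
For reference only, an XFS mount with group quotas enabled might look like the following /etc/fstab entry. The device path is a placeholder and your storage layout will differ; the key detail is the grpquota mount option on the filesystem that backs the volumeDirectory.

/dev/mapper/vg0-ocpvolumes  /var/lib/origin/openshift.local.volumes  xfs  defaults,grpquota  0 0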

Note

Do not edit the /etc/origin/node/volume-config.yaml file directly. The file is created from the node-config-compute configuration map. Use the node-config-compute configuration map to create or edit the parameters in the volume-config.yaml file.

5.7.4. Setting Node Queries per Second (QPS) Limits and Burst Values

The rate at which the Kubelet talks to the API server depends on the Queries per Second (QPS) and burst values. The default values are sufficient if only a limited number of pods are running on each node. Provided there are enough CPU and memory resources on the node, the QPS and burst values can be adjusted in the /etc/origin/node/node-config.yaml file:

kubeletArguments:
  kube-api-qps:
  - "20"
  kube-api-burst:
  - "40"

Then restart OpenShift Container Platform node services.
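
For example, run the following on each affected node host (the same command appears in the Restarting master and node services section):

# systemctl restart atomic-openshift-node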

Note

The QPS and burst values above are defaults for OpenShift Container Platform.

5.7.5. Parallel Image Pulls with Docker 1.9+

If you are using Docker 1.9+, you may want to consider enabling parallel image pulling, as the default is to pull images one at a time.

Note

There is a potential issue with data corruption prior to Docker 1.9. However, starting with 1.9, the corruption issue is resolved and it is safe to switch to parallel pulls.

kubeletArguments:
  serialize-image-pulls:
  - "false" 1
1
Change to true to disable parallel pulls. (This is the default configuration.)

5.8. Passwords and Other Sensitive Data

For some authentication configurations, an LDAP bindPassword or OAuth clientSecret value is required. Instead of specifying these values directly in the master configuration file, these values may be provided as environment variables, external files, or in encrypted files.

Environment Variable Example

  ...
  bindPassword:
    env: BIND_PASSWORD_ENV_VAR_NAME

External File Example

  ...
  bindPassword:
    file: bindPassword.txt

Encrypted External File Example

  ...
  bindPassword:
    file: bindPassword.encrypted
    keyFile: bindPassword.key

To create the encrypted file and key file for the above example:

$ oc adm ca encrypt --genkey=bindPassword.key --out=bindPassword.encrypted
> Data to encrypt: B1ndPass0rd!

Run oc adm commands only from the first master listed in the Ansible host inventory file, by default /etc/ansible/hosts.

Warning

Encrypted data is only as secure as the decrypting key. Care should be taken to limit filesystem permissions and access to the key file.
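
For example, a minimal sketch of restricting the key file from the previous example to the root user; adjust the path and ownership for your environment:

# chown root:root bindPassword.key
# chmod 600 bindPassword.key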

5.9. Creating New Configuration Files

When defining an OpenShift Container Platform configuration from scratch, start by creating new configuration files.

For master host configuration files, use the openshift start command with the --write-config option to write the configuration files. For node hosts, use the oc adm create-node-config command to write the configuration files.

The following commands write the relevant launch configuration file(s), certificate files, and any other necessary files to the specified --write-config or --node-dir directory.

Generated certificate files are valid for two years, while the certificate authority (CA) certificate is valid for five years. This can be altered with the --expire-days and --signer-expire-days options, but for security reasons, it is recommended not to make them greater than these values.
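
For example, the following sketch makes the default validity periods explicit when writing the master configuration (730 days is two years and 1825 days is five years); the directory is taken from the examples below:

$ openshift start master --write-config=/openshift.local.config/master \
    --expire-days=730 --signer-expire-days=1825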

To create configuration files for an all-in-one server (a master and a node on the same host) in the specified directory:

$ openshift start --write-config=/openshift.local.config

To create a master configuration file and other required files in the specified directory:

$ openshift start master --write-config=/openshift.local.config/master

To create a node configuration file and other related files in the specified directory:

$ oc adm create-node-config \
    --node-dir=/openshift.local.config/node-<node_hostname> \
    --node=<node_hostname> \
    --hostnames=<node_hostname>,<ip_address> \
    --certificate-authority="/path/to/ca.crt" \
    --signer-cert="/path/to/ca.crt" \
    --signer-key="/path/to/ca.key" \
    --signer-serial="/path/to/ca.serial.txt" \
    --node-client-certificate-authority="/path/to/ca.crt"

When creating node configuration files, the --hostnames option accepts a comma-delimited list of every host name or IP address you want server certificates to be valid for.

5.10. Launching Servers Using Configuration Files

Once you have modified the master and/or node configuration files to your specifications, you can use them when launching servers by specifying them as an argument. Keep in mind that if you specify a configuration file, none of the other command line options you pass are respected.

Note

To modify a node in your cluster, update the node configuration maps as needed. Do not manually edit the node-config.yaml file.

To launch an all-in-one server using a master configuration and a node configuration file:

$ openshift start --master-config=/openshift.local.config/master/master-config.yaml --node-config=/openshift.local.config/node-<node_hostname>/node-config.yaml

To launch a master server using a master configuration file:

$ openshift start master --config=/openshift.local.config/master/master-config.yaml

To launch a node server using a node configuration file:

$ openshift start node --config=/openshift.local.config/node-<node_hostname>/node-config.yaml

5.11. Viewing Master and Node Logs

OpenShift Container Platform collects log messages for debugging, using the systemd-journald.service for nodes and a script, called master-logs, for masters.

Note

The number of lines displayed in the web console is hard-coded at 5000 and cannot be changed. To see the entire log, use the CLI.

The logging uses five log message severities based on Kubernetes logging conventions, as follows:

Table 5.22. Log Level Options
OptionDescription

0

Errors and warnings only

2

Normal information

4

Debugging-level information

6

API-level debugging information (request / response)

8

Body-level API debugging information

You can change the log levels independently for masters or nodes as needed.

View node logs

To view logs for the node system, run the following command:

# journalctl -r -u <journal_name>

Use the -r option to show the newest entries first.
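
For example, to view logs for the atomic-openshift-node service (the node service referenced elsewhere in this guide):

# journalctl -r -u atomic-openshift-node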

View master logs

To view logs for the master components, run the following command:

# /usr/local/bin/master-logs <component> <container>

For example:

# /usr/local/bin/master-logs controllers controllers
# /usr/local/bin/master-logs api api
# /usr/local/bin/master-logs etcd etcd

Redirect master log to a file

To redirect the output of master log into a file, run the following command:

master-logs api api 2> file

5.11.1. Configuring Logging Levels

You can control which INFO messages are logged by setting the DEBUG_LOGLEVEL option in the node configuration files or the /etc/origin/master/master.env file. Configuring the logs to collect all messages can lead to large logs that are difficult to interpret and can take up excessive space. Only collect all messages when you need to debug your cluster.

Note

Messages with FATAL, ERROR, WARNING, and some INFO severities appear in the logs regardless of the log configuration.

To change the logging level:

  1. Edit the /etc/origin/master/master.env file for the master or /etc/sysconfig/atomic-openshift-node file for the nodes.
  2. Enter a value from the Log Level Options table in the DEBUG_LOGLEVEL field.

    For example:

    DEBUG_LOGLEVEL=4
  3. Restart the master or node host as appropriate. See Restarting OpenShift Container Platform services.

After the restart, all new log messages will conform to the new setting. Older messages do not change.

Note

The default log level can be set using the standard cluster installation process. For more information, see Cluster Variables.

The following examples are excerpts of redirected master log files at various log levels. System information has been removed from these examples.

Excerpt of master-logs api api 2> file output at loglevel=2

W1022 15:08:09.787705       1 server.go:79] Unable to keep dnsmasq up to date, 0.0.0.0:8053 must point to port 53
I1022 15:08:09.787894       1 logs.go:49] skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:8053 [rcache 0]
I1022 15:08:09.787913       1 logs.go:49] skydns: ready for queries on cluster.local. for udp4://0.0.0.0:8053 [rcache 0]
I1022 15:08:09.889022       1 dns_server.go:63] DNS listening at 0.0.0.0:8053
I1022 15:08:09.893156       1 feature_gate.go:190] feature gates: map[AdvancedAuditing:true]
I1022 15:08:09.893500       1 master.go:431] Starting OAuth2 API at /oauth
I1022 15:08:09.914759       1 master.go:431] Starting OAuth2 API at /oauth
I1022 15:08:09.942349       1 master.go:431] Starting OAuth2 API at /oauth
W1022 15:08:09.977088       1 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W1022 15:08:09.977176       1 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2018/10/22 15:08:09 log.go:33: [restful/swagger] listing is available at https://openshift.com:443/swaggerapi
[restful] 2018/10/22 15:08:09 log.go:33: [restful/swagger] https://openshift.com:443/swaggerui/ is mapped to folder /swagger-ui/
I1022 15:08:10.231405       1 master.go:431] Starting OAuth2 API at /oauth
W1022 15:08:10.259523       1 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W1022 15:08:10.259555       1 swagger.go:38] No API exists for predefined swagger description /api/v1
I1022 15:08:23.895493       1 logs.go:49] http: TLS handshake error from 10.10.94.10:46322: EOF
I1022 15:08:24.449577       1 crdregistration_controller.go:110] Starting crd-autoregister controller
I1022 15:08:24.449916       1 controller_utils.go:1019] Waiting for caches to sync for crd-autoregister controller
I1022 15:08:24.496147       1 logs.go:49] http: TLS handshake error from 127.0.0.1:39140: EOF
I1022 15:08:24.821198       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1022 15:08:24.833022       1 cache.go:39] Caches are synced for AvailableConditionController controller
I1022 15:08:24.865087       1 controller.go:537] quota admission added evaluator for: { events}
I1022 15:08:24.865393       1 logs.go:49] http: TLS handshake error from 127.0.0.1:39162: read tcp4 127.0.0.1:443->127.0.0.1:39162: read: connection reset by peer
I1022 15:08:24.966917       1 controller_utils.go:1026] Caches are synced for crd-autoregister controller
I1022 15:08:24.967961       1 autoregister_controller.go:136] Starting autoregister controller
I1022 15:08:24.967977       1 cache.go:32] Waiting for caches to sync for autoregister controller
I1022 15:08:25.015924       1 controller.go:537] quota admission added evaluator for: { serviceaccounts}
I1022 15:08:25.077984       1 cache.go:39] Caches are synced for autoregister controller
W1022 15:08:25.304265       1 lease_endpoint_reconciler.go:176] Resetting endpoints for master service "kubernetes" to [10.10.94.10]
E1022 15:08:25.472536       1 memcache.go:153] couldn't get resource list for servicecatalog.k8s.io/v1beta1: the server could not find the requested resource
E1022 15:08:25.550888       1 memcache.go:153] couldn't get resource list for servicecatalog.k8s.io/v1beta1: the server could not find the requested resource
I1022 15:08:29.480691       1 healthz.go:72] /healthz/log check
I1022 15:08:30.981999       1 controller.go:105] OpenAPI AggregationController: Processing item v1beta1.servicecatalog.k8s.io
E1022 15:08:30.990914       1 controller.go:111] loading OpenAPI spec for "v1beta1.servicecatalog.k8s.io" failed with: OpenAPI spec does not exists
I1022 15:08:30.990965       1 controller.go:119] OpenAPI AggregationController: action for item v1beta1.servicecatalog.k8s.io: Rate Limited Requeue.
I1022 15:08:31.530473       1 trace.go:76] Trace[1253590531]: "Get /api/v1/namespaces/openshift-infra/serviceaccounts/serviceaccount-controller" (started: 2018-10-22 15:08:30.868387562 +0000 UTC m=+24.277041043) (total time: 661.981642ms):
Trace[1253590531]: [661.903178ms] [661.89217ms] About to write a response
I1022 15:08:31.531366       1 trace.go:76] Trace[83808472]: "Get /api/v1/namespaces/aws-sb/secrets/aws-servicebroker" (started: 2018-10-22 15:08:30.831296749 +0000 UTC m=+24.239950203) (total time: 700.049245ms):

Excerpt of master-logs api api 2> file output at loglevel=4

I1022 15:08:09.746980       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: AlwaysDeny.
I1022 15:08:09.747597       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: ResourceQuota.
I1022 15:08:09.748038       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: openshift.io/ClusterResourceQuota.
I1022 15:08:09.786771       1 start_master.go:458] Starting master on 0.0.0.0:443 (v3.10.45)
I1022 15:08:09.786798       1 start_master.go:459] Public master address is https://openshift.com:443
I1022 15:08:09.786844       1 start_master.go:463] Using images from "registry.access.redhat.com/openshift3/ose-<component>:v3.10.45"
W1022 15:08:09.787046       1 dns_server.go:37] Binding DNS on port 8053 instead of 53, which may not be resolvable from all clients
W1022 15:08:09.787705       1 server.go:79] Unable to keep dnsmasq up to date, 0.0.0.0:8053 must point to port 53
I1022 15:08:09.787894       1 logs.go:49] skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:8053 [rcache 0]
I1022 15:08:09.787913       1 logs.go:49] skydns: ready for queries on cluster.local. for udp4://0.0.0.0:8053 [rcache 0]
I1022 15:08:09.889022       1 dns_server.go:63] DNS listening at 0.0.0.0:8053
I1022 15:08:09.893156       1 feature_gate.go:190] feature gates: map[AdvancedAuditing:true]
I1022 15:08:09.893500       1 master.go:431] Starting OAuth2 API at /oauth
I1022 15:08:09.914759       1 master.go:431] Starting OAuth2 API at /oauth
I1022 15:08:09.942349       1 master.go:431] Starting OAuth2 API at /oauth
W1022 15:08:09.977088       1 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W1022 15:08:09.977176       1 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2018/10/22 15:08:09 log.go:33: [restful/swagger] listing is available at https://openshift.com:443/swaggerapi
[restful] 2018/10/22 15:08:09 log.go:33: [restful/swagger] https://openshift.com:443/swaggerui/ is mapped to folder /swagger-ui/
I1022 15:08:10.231405       1 master.go:431] Starting OAuth2 API at /oauth
W1022 15:08:10.259523       1 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W1022 15:08:10.259555       1 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2018/10/22 15:08:10 log.go:33: [restful/swagger] listing is available at https://openshift.com:443/swaggerapi
[restful] 2018/10/22 15:08:10 log.go:33: [restful/swagger] https://openshift.com:443/swaggerui/ is mapped to folder /swagger-ui/
I1022 15:08:10.444303       1 master.go:431] Starting OAuth2 API at /oauth
W1022 15:08:10.492409       1 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W1022 15:08:10.492507       1 swagger.go:38] No API exists for predefined swagger description /api/v1
[restful] 2018/10/22 15:08:10 log.go:33: [restful/swagger] listing is available at https://openshift.com:443/swaggerapi
[restful] 2018/10/22 15:08:10 log.go:33: [restful/swagger] https://openshift.com:443/swaggerui/ is mapped to folder /swagger-ui/
I1022 15:08:10.774824       1 master.go:431] Starting OAuth2 API at /oauth
I1022 15:08:23.808685       1 logs.go:49] http: TLS handshake error from 10.128.0.11:39206: EOF
I1022 15:08:23.815311       1 logs.go:49] http: TLS handshake error from 10.128.0.14:53054: EOF
I1022 15:08:23.822286       1 customresource_discovery_controller.go:174] Starting DiscoveryController
I1022 15:08:23.822349       1 naming_controller.go:276] Starting NamingConditionController
I1022 15:08:23.822705       1 logs.go:49] http: TLS handshake error from 10.128.0.14:53056: EOF
+24.277041043) (total time: 661.981642ms):
Trace[1253590531]: [661.903178ms] [661.89217ms] About to write a response
I1022 15:08:31.531366       1 trace.go:76] Trace[83808472]: "Get /api/v1/namespaces/aws-sb/secrets/aws-servicebroker" (started: 2018-10-22 15:08:30.831296749 +0000 UTC m=+24.239950203) (total time: 700.049245ms):
Trace[83808472]: [700.049245ms] [700.04027ms] END
I1022 15:08:31.531695       1 trace.go:76] Trace[1916801734]: "Get /api/v1/namespaces/aws-sb/secrets/aws-servicebroker" (started: 2018-10-22 15:08:31.031163449 +0000 UTC m=+24.439816907) (total time: 500.514208ms):
Trace[1916801734]: [500.514208ms] [500.505008ms] END
I1022 15:08:44.675371       1 healthz.go:72] /healthz/log check
I1022 15:08:46.589759       1 controller.go:537] quota admission added evaluator for: { endpoints}
I1022 15:08:46.621270       1 controller.go:537] quota admission added evaluator for: { endpoints}
I1022 15:08:57.159494       1 healthz.go:72] /healthz/log check
I1022 15:09:07.161315       1 healthz.go:72] /healthz/log check
I1022 15:09:16.297982       1 trace.go:76] Trace[2001108522]: "GuaranteedUpdate etcd3: *core.Node" (started: 2018-10-22 15:09:15.139820419 +0000 UTC m=+68.548473981) (total time: 1.158128974s):
Trace[2001108522]: [1.158012755s] [1.156496534s] Transaction committed
I1022 15:09:16.298165       1 trace.go:76] Trace[1124283912]: "Patch /api/v1/nodes/master-0.com/status" (started: 2018-10-22 15:09:15.139695483 +0000 UTC m=+68.548348970) (total time: 1.158434318s):
Trace[1124283912]: [1.158328853s] [1.15713683s] Object stored in database
I1022 15:09:16.298761       1 trace.go:76] Trace[24963576]: "GuaranteedUpdate etcd3: *core.Node" (started: 2018-10-22 15:09:15.13159057 +0000 UTC m=+68.540244112) (total time: 1.167151224s):
Trace[24963576]: [1.167106144s] [1.165570379s] Transaction committed
I1022 15:09:16.298882       1 trace.go:76] Trace[222129183]: "Patch /api/v1/nodes/node-0.com/status" (started: 2018-10-22 15:09:15.131269234 +0000 UTC m=+68.539922722) (total time: 1.167595526s):
Trace[222129183]: [1.167517296s] [1.166135605s] Object stored in database

Excerpt of master-logs api api 2> file output at loglevel=8

I1022 15:11:58.829357       1 plugins.go:84] Registered admission plugin "NamespaceLifecycle"
I1022 15:11:58.839967       1 plugins.go:84] Registered admission plugin "Initializers"
I1022 15:11:58.839994       1 plugins.go:84] Registered admission plugin "ValidatingAdmissionWebhook"
I1022 15:11:58.840012       1 plugins.go:84] Registered admission plugin "MutatingAdmissionWebhook"
I1022 15:11:58.840025       1 plugins.go:84] Registered admission plugin "AlwaysAdmit"
I1022 15:11:58.840082       1 plugins.go:84] Registered admission plugin "AlwaysPullImages"
I1022 15:11:58.840105       1 plugins.go:84] Registered admission plugin "LimitPodHardAntiAffinityTopology"
I1022 15:11:58.840126       1 plugins.go:84] Registered admission plugin "DefaultTolerationSeconds"
I1022 15:11:58.840146       1 plugins.go:84] Registered admission plugin "AlwaysDeny"
I1022 15:11:58.840176       1 plugins.go:84] Registered admission plugin "EventRateLimit"
I1022 15:11:59.850825       1 feature_gate.go:190] feature gates: map[AdvancedAuditing:true]
I1022 15:11:59.859108       1 register.go:154] Admission plugin AlwaysAdmit is not enabled.  It will not be started.
I1022 15:11:59.859284       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: AlwaysAdmit.
I1022 15:11:59.859809       1 register.go:154] Admission plugin NamespaceAutoProvision is not enabled.  It will not be started.
I1022 15:11:59.859939       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: NamespaceAutoProvision.
I1022 15:11:59.860594       1 register.go:154] Admission plugin NamespaceExists is not enabled.  It will not be started.
I1022 15:11:59.860778       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: NamespaceExists.
I1022 15:11:59.863999       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: NamespaceLifecycle.
I1022 15:11:59.864626       1 register.go:154] Admission plugin EventRateLimit is not enabled.  It will not be started.
I1022 15:11:59.864768       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: EventRateLimit.
I1022 15:11:59.865259       1 register.go:154] Admission plugin ProjectRequestLimit is not enabled.  It will not be started.
I1022 15:11:59.865376       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: ProjectRequestLimit.
I1022 15:11:59.866126       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: OriginNamespaceLifecycle.
I1022 15:11:59.866709       1 register.go:154] Admission plugin openshift.io/RestrictSubjectBindings is not enabled.  It will not be started.
I1022 15:11:59.866761       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: openshift.io/RestrictSubjectBindings.
I1022 15:11:59.867304       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: openshift.io/JenkinsBootstrapper.
I1022 15:11:59.867823       1 plugins.go:149] Loaded 1 admission controller(s) successfully in the following order: openshift.io/BuildConfigSecretInjector.
I1022 15:12:00.015273       1 master_config.go:476] Initializing cache sizes based on 0MB limit
I1022 15:12:00.015896       1 master_config.go:539] Using the lease endpoint reconciler with TTL=15s and interval=10s
I1022 15:12:00.018396       1 storage_factory.go:285] storing { apiServerIPInfo} in v1, reading as __internal from storagebackend.Config{Type:"etcd3", Prefix:"kubernetes.io", ServerList:[]string{"https://master-0.com:2379"}, KeyFile:"/etc/origin/master/master.etcd-client.key", CertFile:"/etc/origin/master/master.etcd-client.crt", CAFile:"/etc/origin/master/master.etcd-ca.crt", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1022 15:12:00.037710       1 storage_factory.go:285] storing { endpoints} in v1, reading as __internal from storagebackend.Config{Type:"etcd3", Prefix:"kubernetes.io", ServerList:[]string{"https://master-0.com:2379"}, KeyFile:"/etc/origin/master/master.etcd-client.key", CertFile:"/etc/origin/master/master.etcd-client.crt", CAFile:"/etc/origin/master/master.etcd-ca.crt", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1022 15:12:00.054112       1 compact.go:54] compactor already exists for endpoints [https://master-0.com:2379]
I1022 15:12:00.054678       1 start_master.go:458] Starting master on 0.0.0.0:443 (v3.10.45)
I1022 15:12:00.054755       1 start_master.go:459] Public master address is https://openshift.com:443
I1022 15:12:00.054837       1 start_master.go:463] Using images from "registry.access.redhat.com/openshift3/ose-<component>:v3.10.45"
W1022 15:12:00.056957       1 dns_server.go:37] Binding DNS on port 8053 instead of 53, which may not be resolvable from all clients
W1022 15:12:00.065497       1 server.go:79] Unable to keep dnsmasq up to date, 0.0.0.0:8053 must point to port 53
I1022 15:12:00.066061       1 logs.go:49] skydns: ready for queries on cluster.local. for tcp4://0.0.0.0:8053 [rcache 0]
I1022 15:12:00.066265       1 logs.go:49] skydns: ready for queries on cluster.local. for udp4://0.0.0.0:8053 [rcache 0]
I1022 15:12:00.158725       1 dns_server.go:63] DNS listening at 0.0.0.0:8053
I1022 15:12:00.167910       1 htpasswd.go:118] Loading htpasswd file /etc/origin/master/htpasswd...
I1022 15:12:00.168182       1 htpasswd.go:118] Loading htpasswd file /etc/origin/master/htpasswd...
I1022 15:12:00.231233       1 storage_factory.go:285] storing {apps.openshift.io deploymentconfigs} in apps.openshift.io/v1, reading as apps.openshift.io/__internal from storagebackend.Config{Type:"etcd3", Prefix:"openshift.io", ServerList:[]string{"https://master-0.com:2379"}, KeyFile:"/etc/origin/master/master.etcd-client.key", CertFile:"/etc/origin/master/master.etcd-client.crt", CAFile:"/etc/origin/master/master.etcd-ca.crt", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1022 15:12:00.248136       1 compact.go:54] compactor already exists for endpoints [https://master-0.com:2379]
I1022 15:12:00.248697       1 store.go:1391] Monitoring deploymentconfigs.apps.openshift.io count at <storage-prefix>//deploymentconfigs
W1022 15:12:00.256861       1 swagger.go:38] No API exists for predefined swagger description /oapi/v1
W1022 15:12:00.258106       1 swagger.go:38] No API exists for predefined swagger description /api/v1

5.12. Restarting master and node services

To apply master or node configuration changes, you must restart the respective services.

To reload master configuration changes, restart master services running in control plane static pods using the master-restart command:

# master-restart api
# master-restart controllers

To reload node configuration changes, restart the node service on the node host:

# systemctl restart atomic-openshift-node

Chapter 6. OpenShift Ansible Broker Configuration

6.1. Overview

When the OpenShift Ansible broker (OAB) is deployed in a cluster, its behavior is largely dictated by the broker’s configuration file loaded on startup. The broker’s configuration is stored as a ConfigMap object in the broker’s namespace (openshift-ansible-service-broker by default).
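
To inspect the currently deployed configuration, you can display the ConfigMap object directly. For example:

$ oc get configmap broker-config -n openshift-ansible-service-broker -o yaml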

Example OpenShift Ansible Broker Configuration File

registry: 1
  - type: dockerhub
    name: docker
    url: https://registry.hub.docker.com
    org: <dockerhub_org>
    fail_on_error: false
  - type: rhcc
    name: rhcc
    url: https://registry.access.redhat.com
    fail_on_error: true
    white_list:
      - "^foo.*-apb$"
      - ".*-apb$"
    black_list:
      - "bar.*-apb$"
      - "^my-apb$"
  - type: local_openshift
    name: lo
    namespaces:
      - openshift
    white_list:
      - ".*-apb$"
dao: 2
  etcd_host: localhost
  etcd_port: 2379
log: 3
  logfile: /var/log/ansible-service-broker/asb.log
  stdout: true
  level: debug
  color: true
openshift: 4
  host: ""
  ca_file: ""
  bearer_token_file: ""
  image_pull_policy: IfNotPresent
  sandbox_role: "edit"
  keep_namespace: false
  keep_namespace_on_error: true
broker: 5
  bootstrap_on_startup: true
  dev_broker: true
  launch_apb_on_bind: false
  recovery: true
  output_request: true
  ssl_cert_key: /path/to/key
  ssl_cert: /path/to/cert
  refresh_interval: "600s"
  auth:
    - type: basic
      enabled: true
secrets: 6
  - title: Database credentials
    secret: db_creds
    apb_name: dh-rhscl-postgresql-apb

1
See Registry Configuration for details.
2
See DAO Configuration for details.
3
See Log Configuration for details.
4
See OpenShift Configuration for details.
5
See Broker Configuration for details.
6
See Secrets Configuration for details.

6.2. Modifying the OpenShift Ansible Broker Configuration

To modify the OAB’s default broker configuration after it has been deployed:

  1. Edit the broker-config ConfigMap object in the OAB’s namespace as a user with cluster-admin privileges:

    $ oc edit configmap broker-config -n openshift-ansible-service-broker
  2. After saving any updates, redeploy the OAB’s deployment configuration for the changes to take effect:

    $ oc rollout latest dc/asb -n openshift-ansible-service-broker

6.3. Registry Configuration

The registry section allows you to define the registries that the broker should look at for APBs.

Table 6.1. registry Section Configuration Options
FieldDescriptionRequired

name

The name of the registry. Used by the broker to identify APBs from this registry.

Y

user

The user name for authenticating to the registry. Not used when auth_type is set to secret or file.

N

pass

The password for authenticating to the registry. Not used when auth_type is set to secret or file.

N

auth_type

How the broker should read the registry credentials if they are not defined in the broker configuration via user and pass. Can be secret (uses a secret in the broker namespace) or file (uses a mounted file).

N

auth_name

Name of the secret, or path to the file, storing the registry credentials that should be read. Used when auth_type is set to secret or file.

N, only required when auth_type is set to secret or file.

org

The namespace or organization that the image is contained in.

N

type

The type of registry. Available adapters are mock, rhcc, openshift, dockerhub, and local_openshift.

Y

namespaces

The list of namespaces to configure the local_openshift registry type with. By default, a user should use openshift.

N

url

The URL that is used to retrieve image information. Used extensively for RHCC while the dockerhub type uses hard-coded URLs.

N

fail_on_error

Whether the bootstrap request should fail if this registry fails to load. When set to true, a failure in this registry also stops the loading of the remaining registries.

N

white_list

The list of regular expressions used to define which image names should be allowed through. A white list is required for APBs to be added to the catalog. The most permissive regular expression you can use is .*-apb$, which retrieves all APBs in a registry. See APB Filtering for more details.

N

black_list

The list of regular expressions used to define which image names should never be allowed through. See APB Filtering for more details.

N

images

The list of images to be used with an OpenShift Container Registry.

N

6.3.1. Production or Development

A production broker configuration is designed to be pointed at a trusted container distribution registry, such as the Red Hat Container Catalog (RHCC):

registry:
  - name: rhcc
    type: rhcc
    url: https://registry.access.redhat.com
    tag: v3.10
    white_list:
      - ".*-apb$"
  - type: local_openshift
    name: localregistry
    namespaces:
      - openshift
    white_list: []

However, a development broker configuration is primarily used by developers working on the broker. To enable developer settings, set the registry name to dev and the dev_broker field in the broker section to true:

registry:
  name: dev
broker:
  dev_broker: true

6.3.2. Storing Registry Credentials

The broker configuration determines how the broker should read any registry credentials. They can be read from the user and pass values in the registry section, for example:

registry:
  - name: isv
    type: openshift
    url: https://registry.connect.redhat.com
    user: <user>
    pass: <password>

If you want to ensure these credentials are not publicly accessible, the auth_type field in the registry section can be set to the secret or file type. The secret type configures a registry to use a secret from the broker’s namespace, while the file type configures a registry to use a secret that has been mounted as a volume.

To use the secret or file type:

  1. The associated secret should have the values username and password defined. When using a secret, you must ensure that the openshift-ansible-service-broker namespace exists, as this is where the secret will be read from.

    For example, create a reg-creds.yaml file:

    $ cat reg-creds.yaml
    ---
    username: <user_name>
    password: <password>
  2. Create a secret from this file in the openshift-ansible-service-broker namespace:

    $ oc create secret generic \
        registry-credentials-secret \
        --from-file reg-creds.yaml \
        -n openshift-ansible-service-broker
  3. Choose whether you want to use the secret or file type:

    • To use the secret type:

      1. In the broker configuration, set auth_type to secret and auth_name to the name of the secret:

        registry:
          - name: isv
            type: openshift
            url: https://registry.connect.redhat.com
            auth_type: secret
            auth_name: registry-credentials-secret
      2. Set the namespace where the secret is located:

        openshift:
          namespace: openshift-ansible-service-broker
    • To use the file type:

      1. Edit the asb deployment configuration to mount your file into /tmp/registry-credentials/reg-creds.yaml:

        $ oc edit dc/asb -n openshift-ansible-service-broker

        In the containers.volumeMounts section, add:

        volumeMounts:
          - mountPath: /tmp/registry-credentials
            name: reg-auth

        In the volumes section, add:

            volumes:
              - name: reg-auth
                secret:
                  defaultMode: 420
                  secretName: registry-credentials-secret
      2. In the broker configuration, set auth_type to file and auth_name to the location of the file:

        registry:
          - name: isv
            type: openshift
            url: https://registry.connect.redhat.com
            auth_type: file
            auth_name: /tmp/registry-credentials/reg-creds.yaml

6.3.3. Mock Registry

A mock registry is useful for reading local APB specs. Instead of going out to a registry to search for image specs, it uses a list of local specs. Set the type of the registry to mock to use the mock registry.

registry:
  - name: mock
    type: mock

6.3.4. Dockerhub Registry

The dockerhub type allows you to load APBs from a specific organization on Docker Hub, for example the ansibleplaybookbundle organization.

registry:
  - name: dockerhub
    type: dockerhub
    org: ansibleplaybookbundle
    user: <user>
    pass: <password>
    white_list:
      - ".*-apb$"

6.3.5. APB Filtering

APBs can be filtered out by their image name using a combination of the white_list or black_list parameters, set on a registry basis inside the broker’s configuration.

Both are optional lists of regular expressions that will be run over the total set of discovered APBs for a given registry to determine matches.

Table 6.2. APB Filter Behavior
PresentAllowedBlocked

Only whitelist

Matches a regex in list.

Any APB that does not match.

Only blacklist

All APBs that do not match.

APBs that match a regex in list.

Both present

Matches regex in whitelist but not in blacklist.

APBs that match a regex in blacklist.

None

No APBs from the registry.

All APBs from that registry.

For example:

Whitelist Only

white_list:
  - "foo.*-apb$"
  - "^my-apb$"

In this case, anything matching foo.*-apb$, as well as the exact name my-apb, will be allowed through. All other APBs will be rejected.

Blacklist Only

black_list:
  - "bar.*-apb$"
  - "^foobar-apb$"

In this case, anything matching bar.*-apb$, as well as the exact name foobar-apb, will be blocked. All other APBs will be allowed through.

Whitelist and Blacklist

white_list:
  - "foo.*-apb$"
  - "^my-apb$"
black_list:
  - "^foo-rootkit-apb$"

Here, foo-rootkit-apb is specifically blocked because the blacklist overrides its match in the whitelist.

Otherwise, only those matching on foo.*-apb$ and my-apb will be allowed through.

Example Broker Configuration registry Section:

registry:
  - type: dockerhub
    name: dockerhub
    url: https://registry.hub.docker.com
    user: <user>
    pass: <password>
    org: <org>
    white_list:
      - "foo.*-apb$"
      - "^my-apb$"
    black_list:
      - "bar.*-apb$"
      - "^foobar-apb$"

6.3.6. Local OpenShift Container Registry

Using the local_openshift type will allow you to load APBs from the OpenShift Container Registry that is internal to the OpenShift Container Platform cluster. You can configure the namespaces in which you want to look for published APBs.

registry:
  - type: local_openshift
    name: lo
    namespaces:
      - openshift
    white_list:
      - ".*-apb$"

6.3.7. Red Hat Container Catalog Registry

Using the rhcc type will allow you to load APBs that are published to the Red Hat Container Catalog (RHCC) registry.

registry:
  - name: rhcc
    type: rhcc
    url: https://registry.access.redhat.com
    white_list:
      - ".*-apb$"

6.3.8. Red Hat Connect Partner Registry

Third-party images in the Red Hat Container Catalog are served from the Red Hat Connect Partner Registry at https://registry.connect.redhat.com. The partner_rhcc type allows the broker to be bootstrapped from the Partner Registry to retrieve a list of APBs and load their specs. The Partner Registry requires authentication for pulling images with a valid Red Hat Customer Portal user name and password.

registry:
  - name: partner_reg
    type: partner_rhcc
    url:  https://registry.connect.redhat.com
    user: <registry_user>
    pass: <registry_password>
    white_list:
      - ".*-apb$"

Because the Partner Registry requires authentication, the following manual step is also required to configure the broker to use the Partner Registry URL:

  1. Run the following command on all nodes of an OpenShift Container Platform cluster:

    # docker --config=/var/lib/origin/.docker \
        login -u <registry_user> -p <registry_password> \
        registry.connect.redhat.com

6.3.9. Multiple Registries

You can use more than one registry to separate APBs into logical organizations and be able to manage them from the same broker. Each registry must have a unique, non-empty name. If a unique name is missing, the service broker fails to start with an error message alerting you to the problem.

registry:
  - name: dockerhub
    type: dockerhub
    org: ansibleplaybookbundle
    user: <user>
    pass: <password>
    white_list:
      - ".*-apb$"
  - name: rhcc
    type: rhcc
    url: <rhcc_url>
    white_list:
      - ".*-apb$"

6.4. Broker Authentication

The broker supports authentication, meaning that when connecting to the broker, the caller must supply Basic Auth or Bearer Auth credentials with each request. Using curl, it is as simple as supplying:

-u <user_name>:<password>

or

-h "Authorization: bearer <token>

to the command. The service catalog must be configured with a secret containing the user name and password combinations or the bearer token.
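
As an illustrative sketch only, a Basic Auth request against the broker's catalog endpoint might look like the following. The service host, port, URL prefix, and the Open Service Broker API version header value are assumptions based on the examples later in this section; adjust them for your deployment.

$ curl -k -u <user_name>:<password> \
    -H "X-Broker-API-Version: 2.13" \
    https://asb.openshift-ansible-service-broker.svc:1338/ansible-service-broker/v2/catalog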

6.4.1. Basic Auth

To enable Basic Auth usage, set the following in the broker configuration:

broker:
   ...
   auth:
     - type: basic 1
       enabled: true 2
1
The type field specifies the type of authentication to use.
2
The enabled field allows you to disable a particular authentication type. This keeps you from having to delete the entire section of auth just to disable it.
6.4.1.1. Deployment Template and Secrets

Typically the broker is configured using a ConfigMap in a deployment template. You supply the authentication configuration the same way as in the file configuration.

The following is an example of the deployment template:

auth:
  - type: basic
    enabled: ${ENABLE_BASIC_AUTH}

Another part of Basic Auth is the user name and password used to authenticate against the broker. While the Basic Auth implementation can be backed by different back-end services, the currently supported one is backed by a secret. The secret must be injected into the pod via a volume mount at the /var/run/asb-auth location. The broker reads the user name and password from this location.

In the deployment template, a secret must be specified. For example:

- apiVersion: v1
  kind: Secret
  metadata:
    name: asb-auth-secret
    namespace: openshift-ansible-service-broker
  data:
    username: ${BROKER_USER}
    password: ${BROKER_PASS}

The secret must contain a user name and password. The values must be base64 encoded. The easiest way to generate the values for those entries is to use the echo and base64 commands:

$ echo -n admin | base64 1
YWRtaW4=
1
The -n option is important: it prevents echo from appending a trailing newline, which would otherwise become part of the encoded value.

This secret must now be injected to the pod via a volume mount. This is configured in the deployment template as well:

spec:
  serviceAccount: asb
  containers:
  - image: ${BROKER_IMAGE}
    name: asb
    imagePullPolicy: IfNotPresent
    volumeMounts:
      ...
      - name: asb-auth-volume
        mountPath: /var/run/asb-auth

Then, in the volumes section, mount the secret:

volumes:
  ...
  - name: asb-auth-volume
    secret:
      secretName: asb-auth-secret

The above creates a volume mount located at /var/run/asb-auth. This volume contains two files, username and password, populated from the asb-auth-secret secret.

6.4.1.2. Configuring Service Catalog and Broker Communication

Now that the broker is configured to use Basic Auth, you must tell the service catalog how to communicate with the broker. This is accomplished by the authInfo section of the broker resource.

The following is an example of creating a broker resource in the service catalog. The spec tells the service catalog what URL the broker is listening at. The authInfo tells it what secret to read to get the authentication information.

apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Broker
metadata:
  name: ansible-service-broker
spec:
  url: https://asb-1338-openshift-ansible-service-broker.172.17.0.1.nip.io
  authInfo:
    basicAuthSecret:
      namespace: openshift-ansible-service-broker
      name: asb-auth-secret

Starting with v0.0.17 of the service catalog, the broker resource configuration changes:

apiVersion: servicecatalog.k8s.io/v1alpha1
kind: ServiceBroker
metadata:
  name: ansible-service-broker
spec:
  url: https://asb-1338-openshift-ansible-service-broker.172.17.0.1.nip.io
  authInfo:
    basic:
      secretRef:
        namespace: openshift-ansible-service-broker
        name: asb-auth-secret

6.4.2. Bearer Auth

By default, if no authentication is specified, the broker uses bearer token authentication (Bearer Auth). Bearer Auth uses delegated authentication from the Kubernetes apiserver library.

Note

Bearer Auth is only available starting in OpenShift Container Platform 3.7.

The configuration grants access, through Kubernetes RBAC roles and role bindings, to the URL prefix. The broker has added a configuration option cluster_url to specify the url_prefix. This value defaults to openshift-ansible-service-broker.

Example Cluster Role

- apiVersion: authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: access-asb-role
  rules:
  - nonResourceURLs: ["/ansible-service-broker", "/ansible-service-broker/*"]
    verbs: ["get", "post", "put", "patch", "delete"]

6.4.2.1. Deployment Template and Secrets

The following is an example of creating a secret that the service catalog can use. This example assumes that the role, access-asb-role, has been created already. From the deployment template:

- apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: ansibleservicebroker-client
    namespace: openshift-ansible-service-broker

- apiVersion: authorization.openshift.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: ansibleservicebroker-client
  subjects:
  - kind: ServiceAccount
    name: ansibleservicebroker-client
    namespace: openshift-ansible-service-broker
  roleRef:
    kind: ClusterRole
    name: access-asb-role

- apiVersion: v1
  kind: Secret
  metadata:
    name: ansibleservicebroker-client
    annotations:
      kubernetes.io/service-account.name: ansibleservicebroker-client
  type: kubernetes.io/service-account-token

The above example creates a service account, grants it access through access-asb-role, and creates a secret containing that service account’s token.
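
To confirm that a token was generated for the service account, you can retrieve it with the following command; the service account name matches the deployment template above:

$ oc serviceaccounts get-token ansibleservicebroker-client -n openshift-ansible-service-broker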

6.4.2.2. Configuring Service Catalog and Broker Communication

Now that the broker is configured to use Bearer Auth tokens, you must tell the service catalog how to communicate with the broker. This is accomplished by the authInfo section of the broker resource.

The following is an example of creating a broker resource in the service catalog. The spec tells the service catalog what URL the broker is listening at. The authInfo tells it what secret to read to get the authentication information.

apiVersion: servicecatalog.k8s.io/v1alpha1
kind: ServiceBroker
metadata:
  name: ansible-service-broker
spec:
  url: https://asb.openshift-ansible-service-broker.svc:1338${BROKER_URL_PREFIX}/
  authInfo:
    bearer:
      secretRef:
        kind: Secret
        namespace: openshift-ansible-service-broker
        name: ansibleservicebroker-client

6.5. DAO Configuration

FieldDescriptionRequired

etcd_host

The URL of the etcd host.

Y

etcd_port

The port to use when communicating with etcd_host.

Y

6.6. Log Configuration

Field | Description | Required
logfile | Where to write the broker’s logs. | Y
stdout | Write logs to stdout. | Y
level | Level of the log output. | Y
color | Color the logs. | Y
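
As a minimal sketch, a log section might look like the following; the file path and level are illustrative values:

log:
  logfile: /var/log/ansible-service-broker/asb.log
  stdout: true
  level: debug
  color: true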

6.7. OpenShift Configuration

Field | Description | Required
host | OpenShift Container Platform host. | N
ca_file | Location of the certificate authority file. | N
bearer_token_file | Location of bearer token to be used. | N
image_pull_policy | When to pull the image. | Y
namespace | The namespace that the broker has been deployed to. Important for things like passing parameter values via secret. | Y
sandbox_role | Role to give to an APB sandbox environment. | Y
keep_namespace | Always keep namespace after an APB execution. | N
keep_namespace_on_error | Keep namespace after an APB execution has an error. | N
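
As a minimal sketch, an openshift section might look like the following; the values are illustrative and should be adjusted for your environment:

openshift:
  host: ""
  ca_file: ""
  bearer_token_file: ""
  image_pull_policy: IfNotPresent
  namespace: openshift-ansible-service-broker
  sandbox_role: edit
  keep_namespace: false
  keep_namespace_on_error: true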

6.8. Broker Configuration

The broker section tells the broker which functionality to enable and disable. It also tells the broker where to find the files on disk that enable the full functionality.

Field | Description | Default Value | Required
dev_broker | Allow development routes to be accessible. | false | N
launch_apb_on_bind | Allow bind to be a no-op. | false | N
bootstrap_on_startup | Allow the broker to attempt to bootstrap itself on startup, retrieving the APBs from the configured registries. | false | N
recovery | Allow the broker to attempt to recover itself by dealing with pending jobs noted in etcd. | false | N
output_request | Allow the broker to output the requests to the log file as they come in, for easier debugging. | false | N
ssl_cert_key | Tells the broker where to find the TLS key file. If not set, the API server attempts to create one. | "" | N
ssl_cert | Tells the broker where to find the TLS .crt file. If not set, the API server attempts to create one. | "" | N
refresh_interval | The interval to query registries for new image specs. | "600s" | N
auto_escalate | Allows the broker to escalate the permissions of a user while running the APB. | false | N
cluster_url | Sets the prefix for the URL that the broker is expecting. | openshift-ansible-service-broker | N
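
As a minimal sketch, a broker section that enables bootstrapping and recovery while leaving the other options at their defaults might look like the following; the values are illustrative:

broker:
  dev_broker: false
  launch_apb_on_bind: false
  bootstrap_on_startup: true
  recovery: true
  output_request: false
  ssl_cert_key: ""
  ssl_cert: ""
  refresh_interval: "600s"
  auto_escalate: false
  cluster_url: openshift-ansible-service-broker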

Note

Async bind and unbind is an experimental feature and is not supported or enabled by default. In the absence of async bind, setting launch_apb_on_bind to true can cause the bind action to time out and spawn a retry. The broker will handle this with "409 Conflicts" because it is the same bind request with different parameters.

6.9. Secrets Configuration

The secrets section creates associations between secrets in the broker’s namespace and APBs the broker runs. The broker uses these rules to mount secrets into running APBs, allowing the user to use secrets to pass parameters without exposing them to the catalog or users.

The section is a list where each entry has the following structure:

Field | Description | Required
title | The title of the rule. This is just for display and output purposes. | Y
apb_name | The name of the APB to associate with the specified secret. This is the fully qualified name (<registry_name>-<image_name>). | Y
secret | The name of the secret to pull parameters from. | Y

You can download and use the create_broker_secret.py file to create and format this configuration section.

secrets:
- title: Database credentials
  secret: db_creds
  apb_name: dh-rhscl-postgresql-apb

6.10. Running Behind a Proxy

When running the OAB inside of a proxied OpenShift Container Platform cluster, it is important to understand its core concepts and consider them within the context of a proxy used for external network access.

As an overview, the broker itself runs as a pod within the cluster. It requires external network access depending on how its registries have been configured.

6.10.1. Registry Adapter Whitelists

The broker’s configured registry adapters must be able to communicate with their external registries in order to bootstrap successfully and load remote APB manifests. These requests can be made through the proxy; however, the proxy must allow access to the required remote hosts.

Example required whitelisted hosts:

Registry Adapter Type | Whitelisted Hosts
rhcc | registry.access.redhat.com, access.redhat.com
dockerhub | docker.io

6.10.2. Configuring the Broker Behind a Proxy Using Ansible

If you configured your OpenShift Container Platform cluster to run behind a proxy during initial installation (see Configuring Global Proxy Options), when the OAB is deployed it will:

  • inherit those cluster-wide proxy settings automatically and
  • generate the required NO_PROXY list, including the cidr fields and serviceNetworkCIDR,

and no further configuration is needed.

6.10.3. Configuring the Broker Behind a Proxy Manually

If your cluster’s global proxy options were not configured during initial installation or prior to the broker being deployed, or if you have modified the global proxy settings, you must manually configure the broker for external access via proxy:

  1. Before attempting to run the OAB behind a proxy, review Working with HTTP Proxies and ensure your cluster is configured accordingly to run behind a proxy.

    In particular, the cluster must be configured to not proxy internal cluster requests. This is typically configured with a NO_PROXY setting of:

    .cluster.local,.svc,<serviceNetworkCIDR_value>,<master_IP>,<master_domain>,.default

    in addition to any other desired NO_PROXY settings. See Configuring NO_PROXY for more details.

    Note

    Brokers deploying unversioned, or v1 APBs must also add 172.30.0.1 to their NO_PROXY list. APBs prior to v2 extracted their credentials from running APB pods via an exec HTTP request, rather than a secret exchange. Unless you are running a broker with experimental proxy support in a cluster prior to OpenShift Container Platform 3.9, you probably do not have to worry about this.

  2. Edit the broker’s DeploymentConfig as a user with cluster-admin privileges:

    $ oc edit dc/asb -n openshift-ansible-service-broker
  3. Set the following environment variables (see the example deployment configuration snippet after this procedure):

    • HTTP_PROXY
    • HTTPS_PROXY
    • NO_PROXY
  4. After saving any updates, redeploy the OAB’s deployment configuration for the changes to take effect:

    $ oc rollout latest dc/asb -n openshift-ansible-service-broker
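
The following is a minimal sketch of how the resulting environment variables might appear in the broker’s deployment configuration; the proxy URLs and NO_PROXY entries are placeholders, and the container name asb is an assumption about the default deployment:

spec:
  template:
    spec:
      containers:
      - name: asb
        env:
        - name: HTTP_PROXY
          value: http://proxy.example.com:8080
        - name: HTTPS_PROXY
          value: http://proxy.example.com:8080
        - name: NO_PROXY
          value: .cluster.local,.svc,172.30.0.0/16,master.example.com,.default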

6.10.4. Setting Proxy Environment Variables in Pods

APB pods themselves commonly require external access through the proxy as well. If the broker recognizes that it has a proxy configuration, it transparently applies these environment variables to the APB pods that it spawns. As long as the modules used within the APB respect proxy configuration via environment variables, the APB also uses these settings to perform its work.

Finally, it is possible that the services spawned by the APB may also require external network access via the proxy. The APB must be authored to set these environment variables explicitly if it recognizes them in its own execution environment, or the cluster operator must manually modify the required services to inject them into their environments.

Chapter 7. Adding Hosts to an Existing Cluster

7.1. Adding hosts

You can add new hosts to your cluster by running the scaleup.yml playbook. This playbook queries the master, generates and distributes new certificates for the new hosts, and then runs the configuration playbooks on only the new hosts. Before running the scaleup.yml playbook, complete all prerequisite host preparation steps.

Important

The scaleup.yml playbook configures only the new host. It does not update NO_PROXY in master services, and it does not restart master services.

You must have an existing inventory file, for example /etc/ansible/hosts, that is representative of your current cluster configuration in order to run the scaleup.yml playbook. If you previously used the atomic-openshift-installer command to run your installation, you can check ~/.config/openshift/hosts for the last inventory file that the installer generated and use that file as your inventory file. You can modify this file as required. You must then specify the file location with -i when you run the ansible-playbook command.

Important

See the cluster limits section for the recommended maximum number of nodes.

Procedure
  1. Ensure you have the latest playbooks by updating the atomic-openshift-utils package:

    # yum update atomic-openshift-utils
  2. Edit your /etc/ansible/hosts file and add new_<host_type> to the [OSEv3:children] section:

    For example, to add a new node host, add new_nodes:

    [OSEv3:children]
    masters
    nodes
    new_nodes

    To add new master hosts, add new_masters.

  3. Create a [new_<host_type>] section to specify host information for the new hosts. Format this section like an existing section, as shown in the following example of adding a new node:

    [nodes]
    master[1:3].example.com
    node1.example.com openshift_node_group_name='node-config-compute'
    node2.example.com openshift_node_group_name='node-config-compute'
    infra-node1.example.com openshift_node_group_name='node-config-infra'
    infra-node2.example.com openshift_node_group_name='node-config-infra'
    
    [new_nodes]
    node3.example.com openshift_node_group_name='node-config-infra'

    See Configuring Host Variables for more options.

    When adding new masters, add hosts to both the [new_masters] section and the [new_nodes] section to ensure that the new master host is part of the OpenShift SDN.

    [masters]
    master[1:2].example.com
    
    [new_masters]
    master3.example.com
    
    [nodes]
    master[1:2].example.com
    node1.example.com openshift_node_group_name='node-config-compute'
    node2.example.com openshift_node_group_name='node-config-compute'
    infra-node1.example.com openshift_node_group_name='node-config-infra'
    infra-node2.example.com openshift_node_group_name='node-config-infra'
    
    [new_nodes]
    master3.example.com
    Important

    If you label a master host with the node-role.kubernetes.io/infra=true label and have no other dedicated infrastructure nodes, you must also explicitly mark the host as schedulable by adding openshift_schedulable=true to the entry. Otherwise, the registry and router pods cannot be placed anywhere.

  4. Run the scaleup.yml playbook. If your inventory file is located somewhere other than the default of /etc/ansible/hosts, specify the location with the -i option.

    • For additional nodes:

      # ansible-playbook [-i /path/to/file] \
          /usr/share/ansible/openshift-ansible/playbooks/openshift-node/scaleup.yml
    • For additional masters:

      # ansible-playbook [-i /path/to/file] \
          /usr/share/ansible/openshift-ansible/playbooks/openshift-master/scaleup.yml
  5. Set the node label to logging-infra-fluentd=true, if you deployed the EFK stack in your cluster.

    # oc label node/new-node.example.com logging-infra-fluentd=true
  6. After the playbook runs, verify the installation.
  7. Move any hosts that you defined in the [new_<host_type>] section to their appropriate section. By moving these hosts, subsequent playbook runs that use this inventory file treat the nodes correctly. You can keep the empty [new_<host_type>] section. For example, when adding new nodes:

    [nodes]
    master[1:3].example.com
    node1.example.com openshift_node_group_name='node-config-compute'
    node2.example.com openshift_node_group_name='node-config-compute'
    node3.example.com openshift_node_group_name='node-config-compute'
    infra-node1.example.com openshift_node_group_name='node-config-infra'
    infra-node2.example.com openshift_node_group_name='node-config-infra'
    
    [new_nodes]

7.2. Adding etcd Hosts to existing cluster

You can add new etcd hosts to your cluster by running the etcd scaleup playbook. This playbook queries the master, generates and distributes new certificates for the new hosts, and then runs the configuration playbooks on the new hosts only. Before running the etcd scaleup.yml playbook, complete all prerequisite host preparation steps.

To add an etcd host to an existing cluster:

  1. Ensure you have the latest playbooks by updating the openshift-ansible package:

    $ yum update openshift-ansible
  2. Edit your /etc/ansible/hosts file, add new_<host_type> to the [OSEv3:children] group and add hosts under the new_<host_type> group:

    For example, to add a new etcd host, add new_etcd:

    [OSEv3:children]
    masters
    nodes
    etcd
    new_etcd
    
    [etcd]
    etcd1.example.com
    etcd2.example.com
    
    [new_etcd]
    etcd3.example.com
  3. Run the etcd scaleup.yml playbook. If your inventory file is located somewhere other than the default of /etc/ansible/hosts, specify the location with the -i option.

    $ ansible-playbook [-i /path/to/file] \
      /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/scaleup.yml
  4. After the playbook completes successfully, verify the installation.

7.3. Replacing existing masters with etcd colocated

Follow these steps when you are migrating your machines to a different data center and the networks and IPs assigned to them will change.

  1. Back up the primary etcd and master nodes.

    Important

    Ensure that you back up the /etc/etcd/ directory, as noted in the etcd backup instructions.

  2. Provision as many new machines as there are masters to replace.
  3. Add or expand the cluster. For example, if you want to add 3 masters with etcd colocated, scale up 3 master nodes.
Important

In the initial release of OpenShift Container Platform version 3.11, the scaleup.yml playbook does not scale up etcd. This will be fixed in a future release (see BZ#1628201).

  1. Add a master. In step 3 of that process, add the host of the new data center in [new_masters] and [new_nodes] and run the master scaleup.yml playbook.
  2. Put the same host in the etcd section and run the etcd scaleup.yml playbook.
  3. Verify that the host was added:

    # oc get nodes
  4. Verify that the master host IP was added:

    # oc get ep kubernetes
  5. Verify that etcd was added. The value of ETCDCTL_API depends on the version being used:

    # source /etc/etcd/etcd.conf
    # ETCDCTL_API=2 etcdctl --cert-file=$ETCD_PEER_CERT_FILE --key-file=$ETCD_PEER_KEY_FILE \
      --ca-file=/etc/etcd/ca.crt --endpoints=$ETCD_LISTEN_CLIENT_URLS member list
  6. Copy /etc/origin/master/ca.serial.txt to the new master host that is listed first in your inventory file. By default, this is /etc/ansible/hosts.

    1. Remove the etcd hosts.
  7. Copy the /etc/etcd/ca directory to the new etcd host that is listed first in your inventory file. By default, this is /etc/ansible/hosts.
  8. Remove the old etcd clients from the master-config.yaml file:

    # grep etcdClientInfo -A 11 /etc/origin/master/master-config.yaml
  9. Restart the masters:

    # master-restart api
    # master-restart controllers
  10. Remove the old etcd members from the cluster. The value of ETCDCTL_API depends on the version being used:

    # source /etc/etcd/etcd.conf
    # ETCDCTL_API=2 etcdctl --cert-file=$ETCD_PEER_CERT_FILE --key-file=$ETCD_PEER_KEY_FILE \
      --ca-file=/etc/etcd/ca.crt --endpoints=$ETCD_LISTEN_CLIENT_URLS member list
  11. Take the IDs from the output of the command above and remove the old members using the IDs:

    # etcdctl --cert-file=$ETCD_PEER_CERT_FILE --key-file=$ETCD_PEER_KEY_FILE \
      --ca-file=/etc/etcd/ca.crt --endpoints=$ETCD_LISTEN_CLIENT_URLS member remove 1609b5a3a078c227
  12. Stop the etcd services on the old etcd hosts by removing the etcd pod definition:

    # mkdir -p /etc/origin/node/pods-stopped
    # mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
    1. Shut down the old master API and controller services by moving their definition files out of the static pod directory /etc/origin/node/pods:

      # mkdir -p /etc/origin/node/pods/disabled
      # mv /etc/origin/node/pods/controller.yaml /etc/origin/node/pods/disabled/
    2. Remove the master nodes from the HA proxy configuration, which was installed as a load balancer by default during the native installation process.
    3. Decommission the machine.
  13. Stop the node service on the master to be removed by removing the pod definition and rebooting the host:

    # mkdir -p /etc/origin/node/pods-stopped
    # mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
    # reboot
  14. Delete the node resource:

    # oc delete node <node_name>

7.4. Migrating the nodes

You can migrate nodes individually or in groups (of 2, 5, 10, and so on), depending on what you are comfortable with and how the services on the node are run and scaled.

  1. For the migration node or nodes, provision new VMs for the node’s use in the new data center.
  2. To add the new node, scale up the infrastructure. Ensure the labels for the new node are set properly and that your new API servers are added to your load balancer and successfully serving traffic.
  3. Evaluate and scale down.

    1. Mark the current node (in the old data center) unschedulable (see the example commands after this procedure).
    2. Evacuate the node, so that pods on it are scheduled to other nodes.
    3. Verify that the evacuated services are running on the new nodes.
  4. Remove the node.

    1. Verify that the node is empty and does not have running processes.
    2. Stop the service or delete the node.
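
As a minimal sketch of steps 3 and 4, assuming the node being retired is named node1.example.com, the following commands mark the node unschedulable, drain it, confirm that it no longer runs application pods, and then delete it:

$ oc adm manage-node node1.example.com --schedulable=false
$ oc adm drain node1.example.com --ignore-daemonsets --delete-local-data
$ oc adm manage-node node1.example.com --list-pods
$ oc delete node node1.example.com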

Chapter 8. Adding the Default Image Streams and Templates

8.1. Overview

If you installed OpenShift Container Platform on servers with x86_64 architecture, your cluster includes useful sets of Red Hat-provided image streams and templates to make it easy for developers to create new applications. By default, the cluster installation process automatically creates these sets in the openshift project, which is a default global project to which all users have view access.

If you installed OpenShift Container Platform on servers with IBM POWER architecture, you can add image streams and templates to your cluster.

8.2. Offerings by Subscription Type

Depending on the active subscriptions on your Red Hat account, the following sets of image streams and templates are provided and supported by Red Hat. Contact your Red Hat sales representative for further subscription details.

8.2.1. OpenShift Container Platform Subscription

The core set of image streams and templates is provided and supported with an active OpenShift Container Platform subscription. This includes technologies in the following categories:

  • Languages & Frameworks
  • Databases
  • Middleware Services
  • Other Services

8.2.2. xPaaS Middleware Add-on Subscriptions

Support for xPaaS middleware images is provided by xPaaS Middleware add-on subscriptions, which are separate subscriptions for each xPaaS product. If the relevant subscription is active on your account, image streams and templates are provided and supported for the corresponding Middleware Services technologies.

8.3. Before You Begin

Before you consider performing the tasks in this topic, confirm if these image streams and templates are already registered in your OpenShift Container Platform cluster by doing one of the following:

  • Log into the web console and click Add to Project.
  • List them for the openshift project using the CLI:

    $ oc get is -n openshift
    $ oc get templates -n openshift

If the default image streams and templates are ever removed or changed, you can follow this topic to create the default objects yourself. Otherwise, the following instructions are not necessary.

8.4. Prerequisites

Before you can create the default image streams and templates:

  • The integrated Docker registry service must be deployed in your OpenShift Container Platform installation.
  • You must be able to run the oc create command with cluster-admin privileges, because they operate on the default openshift project.
  • You must have installed the openshift-ansible RPM package. See Software Prerequisites for instructions.
  • Define shell variables for the directories containing image streams and templates. This significantly shortens the commands in the following sections. To do this:

    • For cloud installations and on-premise installations on x86_64 servers:
$ IMAGESTREAMDIR="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v3.10/image-streams"; \
    XPAASSTREAMDIR="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v3.10/xpaas-streams"; \
    XPAASTEMPLATES="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v3.10/xpaas-templates"; \
    DBTEMPLATES="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v3.10/db-templates"; \
    QSTEMPLATES="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v3.10/quickstart-templates"
  • For on-premise installations on IBM POWER8 or IBM POWER9 servers:
$ IMAGESTREAMDIR="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/ppc64le/image-streams"; \
    DBTEMPLATES="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/ppc64le/db-templates"; \
    QSTEMPLATES="/usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/ppc64le/quickstart-templates"

8.5. Creating Image Streams for OpenShift Container Platform Images

If your node hosts are subscribed using Red Hat Subscription Manager and you want to use the core set of image streams that use Red Hat Enterprise Linux (RHEL) 7 based images:

$ oc create -f $IMAGESTREAMDIR/image-streams-rhel7.json -n openshift

Alternatively, to create the core set of image streams that use the CentOS 7 based images:

$ oc create -f $IMAGESTREAMDIR/image-streams-centos7.json -n openshift

Creating both the CentOS and RHEL sets of image streams is not possible, because they use the same names. To have both sets of image streams available to users, either create one set in a different project, or edit one of the files and modify the image stream names to make them unique.
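
For example, one way to make both sets available is to create the CentOS image streams in a separate project; the project name used here is arbitrary:

$ oc new-project openshift-centos
$ oc create -f $IMAGESTREAMDIR/image-streams-centos7.json -n openshift-centos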

8.6. Creating Image Streams for xPaaS Middleware Images

The xPaaS Middleware image streams provide images for JBoss EAP, JBoss JWS, JBoss A-MQ, JBoss Fuse Integration Services, Decision Server, JBoss Data Virtualization and JBoss Data Grid. They can be used to build applications for those platforms using the provided templates.

To create the xPaaS Middleware set of image streams:

$ oc create -f $XPAASSTREAMDIR/jboss-image-streams.json -n openshift
Note

Access to the images referenced by these image streams requires the relevant xPaaS Middleware subscriptions.

8.7. Creating Database Service Templates

The database service templates make it easy to run a database image which can be utilized by other components. For each database (MongoDB, MySQL, and PostgreSQL), two templates are defined.

One template uses ephemeral storage in the container which means data stored will be lost if the container is restarted, for example if the pod moves. This template should be used for demonstration purposes only.

The other template defines a persistent volume for storage, however it requires your OpenShift Container Platform installation to have persistent volumes configured.

To create the core set of database templates:

$ oc create -f $DBTEMPLATES -n openshift

After creating the templates, users are able to easily instantiate the various templates, giving them quick access to a database deployment.

8.8. Creating Instant App and Quickstart Templates

The Instant App and Quickstart templates define a full set of objects for a running application, including the build configuration, deployment configuration, service, and route.

Some of the templates also define a database deployment and service so the application can perform database operations.

Note

The templates which define a database use ephemeral storage for the database content. These templates should be used for demonstration purposes only as all database data will be lost if the database pod restarts for any reason.

Using these templates, users are able to easily instantiate full applications using the various language images provided with OpenShift Container Platform. They can also customize the template parameters during instantiation so that the build uses source from their own repository rather than the sample repository, which provides a simple starting point for building new applications.

To create the core Instant App and Quickstart templates:

$ oc create -f $QSTEMPLATES -n openshift

There is also a set of templates for creating applications using various xPaaS Middleware products (JBoss EAP, JBoss JWS, JBoss A-MQ, JBoss Fuse Integration Services, Decision Server, and JBoss Data Grid), which can be registered by running:

$ oc create -f $XPAASTEMPLATES -n openshift
Note

The xPaaS Middleware templates require the xPaaS Middleware image streams, which in turn require the relevant xPaaS Middleware subscriptions.

8.9. What’s Next?

With these artifacts created, developers can now log into the web console and follow the flow for creating from a template. Any of the database or application templates can be selected to create a running database service or application in the current project. Note that some of the application templates define their own database services as well.

The example applications are all built out of GitHub repositories which are referenced in the templates by default, as seen in the SOURCE_REPOSITORY_URL parameter value. Those repositories can be forked, and the fork can be provided as the SOURCE_REPOSITORY_URL parameter value when creating from the templates. This allows developers to experiment with creating their own applications.

You can direct your developers to the Using the Instant App and Quickstart Templates section in the Developer Guide for these instructions.

Chapter 9. Configuring Custom Certificates

9.1. Overview

Administrators can configure custom serving certificates for the public host names of the OpenShift Container Platform API and web console. This can be done during a cluster installation or configured after installation.

9.2. Configuring a Certificate Chain

If a certificate chain is used, then all certificates must be manually concatenated into a single named certificate file. These certificates must be placed in the following order:

  • OpenShift Container Platform master host certificate
  • Intermediate CA certificate
  • Root CA certificate
  • Third party certificate

To create this certificate chain, concatenate the certificates into a common file. You must run this command for each certificate and ensure that they are in the previously defined order.

$ cat <certificate>.pem >> ca-chain.cert.pem
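
For example, assuming hypothetical file names for each certificate in the chain, the commands would be run in the order defined above:

$ cat master.example.com.cert.pem >> ca-chain.cert.pem
$ cat intermediate-ca.cert.pem >> ca-chain.cert.pem
$ cat root-ca.cert.pem >> ca-chain.cert.pem
$ cat third-party.cert.pem >> ca-chain.cert.pem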

9.3. Configuring Custom Certificates During Installation

During cluster installations, custom certificates can be configured using the openshift_master_named_certificates and openshift_master_overwrite_named_certificates parameters, which are configurable in the inventory file. More details are available about configuring custom certificates with Ansible.

Custom Certificate Configuration Parameters

openshift_master_overwrite_named_certificates=true 1
openshift_master_named_certificates=[{"certfile": "/path/on/host/to/crt-file", "keyfile": "/path/on/host/to/key-file", "names": ["public-master-host.com"], "cafile": "/path/on/host/to/ca-file"}] 2
openshift_hosted_router_certificate={"certfile": "/path/on/host/to/app-crt-file", "keyfile": "/path/on/host/to/app-key-file", "cafile": "/path/on/host/to/app-ca-file"} 3

1
If you provide a value for the openshift_master_named_certificates parameter, set this parameter to true.
2
Provisions a master API certificate.
3
Provisions a router wildcard certificate.

Example parameters for a master API certificate:

openshift_master_overwrite_named_certificates=true
openshift_master_named_certificates=[{"names": ["master.148.251.233.173.nip.io"], "certfile": "/home/cloud-user/master-bundle.cert.pem", "keyfile": "/home/cloud-user/master.148.251.233.173.nip.io.key.pem"}]

Example parameters for a router wildcard certificate:

openshift_hosted_router_certificate={"certfile": "/home/cloud-user/star-apps.148.251.233.173.nip.io.cert.pem", "keyfile": "/home/cloud-user/star-apps.148.251.233.173.nip.io.key.pem", "cafile": "/home/cloud-user/ca-chain.cert.pem"}

9.4. Configuring Custom Certificates for the Web Console or CLI

You can specify custom certificates for the web console and for the CLI through the servingInfo section of the master configuration file:

  • The servingInfo.namedCertificates section serves up custom certificates for the web console.
  • The servingInfo section serves up custom certificates for the CLI and other API calls.

You can configure multiple certificates this way, and each certificate can be associated with multiple host names, multiple routers, or the OpenShift Container Platform image registry.

A default certificate must be configured in the servingInfo.certFile and servingInfo.keyFile configuration sections in addition to namedCertificates.

Note

The namedCertificates section should be configured only for the host name associated with the masterPublicURL and oauthConfig.assetPublicURL settings in the /etc/origin/master/master-config.yaml file. Using a custom serving certificate for the host name associated with the masterURL will result in TLS errors as infrastructure components will attempt to contact the master API using the internal masterURL host.

Custom Certificates Configuration

servingInfo:
  logoutURL: ""
  masterPublicURL: https://openshift.example.com:8443
  publicURL: https://openshift.example.com:8443/console/
  bindAddress: 0.0.0.0:8443
  bindNetwork: tcp4
  certFile: master.server.crt 1
  clientCA: ""
  keyFile: master.server.key 2
  maxRequestsInFlight: 0
  requestTimeoutSeconds: 0
  namedCertificates:
    - certFile: wildcard.example.com.crt 3
      keyFile: wildcard.example.com.key 4
      names:
        - "openshift.example.com"
  metricsPublicURL: "https://metrics.os.example.com/hawkular/metrics"

1 2
Path to certificate and key files for the CLI and other API calls.
3 4
Path to certificate and key files for the web console.

The openshift_master_cluster_public_hostname and openshift_master_cluster_hostname parameters in the Ansible inventory file, by default /etc/ansible/hosts, must be different. If they are the same, the named certificates will fail and you will need to re-install them.

# Native HA with External LB VIPs
openshift_master_cluster_hostname=internal.paas.example.com
openshift_master_cluster_public_hostname=external.paas.example.com

For more information on using DNS with OpenShift Container Platform, see the DNS installation prerequisites.

This approach allows you to take advantage of the self-signed certificates generated by OpenShift Container Platform and add custom trusted certificates to individual components as needed.

Note that the internal infrastructure certificates remain self-signed, which might be perceived as bad practice by some security or PKI teams. However, any risk here is minimal, as the only clients that trust these certificates are other components within the cluster. All external users and systems use custom trusted certificates.

Relative paths are resolved based on the location of the master configuration file. Restart the server to pick up the configuration changes.

9.5. Configuring a Custom Master Host Certificate

In order to facilitate trusted connections with external users of OpenShift Container Platform, you can provision a named certificate that matches the domain name provided in the openshift_master_cluster_public_hostname parameter in the Ansible inventory file, by default /etc/ansible/hosts.

You must place this certificate in a directory accessible to Ansible and add the path in the Ansible inventory file, as follows:

openshift_master_named_certificates=[{"certfile": "/path/to/console.ocp-c1.myorg.com.crt", "keyfile": "/path/to/console.ocp-c1.myorg.com.key", "names": ["console.ocp-c1.myorg.com"]}]

Where the parameter values are:

  • certfile is the path to the file that contains the OpenShift Container Platform custom master API certificate.
  • keyfile is the path to the file that contains the OpenShift Container Platform custom master API certificate key.
  • names is the cluster public hostname.

The file paths must be local to the system where Ansible runs. Certificates are copied to master hosts and are deployed within the /etc/origin/master directory.

When securing the registry, add the service hostnames and IP addresses to the server certificate for the registry. The Subject Alternative Names (SAN) must contain the following.

  • Two service hostnames:

    docker-registry.default.svc.cluster.local
    docker-registry.default.svc
  • Service IP address.

    For example:

    172.30.252.46

    Use the following command to get the Docker registry service IP address:

    oc get service docker-registry --template='{{.spec.clusterIP}}'
  • Public hostname.

    docker-registry-default.apps.example.com

    Use the following command to get the Docker registry public hostname:

    oc get route docker-registry --template '{{.spec.host}}'

For example, the server certificate should contain SAN details similar to the following:

X509v3 Subject Alternative Name:
               DNS:docker-registry-public.openshift.com, DNS:docker-registry.default.svc, DNS:docker-registry.default.svc.cluster.local, DNS:172.30.2.98, IP Address:172.30.2.98
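
To confirm that a certificate contains the expected SAN entries, you can inspect it with openssl. For example, assuming the registry serving certificate is stored at /etc/origin/master/registry.crt:

$ openssl x509 -in /etc/origin/master/registry.crt -noout -text | grep -A 1 "Subject Alternative Name"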

9.6. Configuring a Custom Wildcard Certificate for the Default Router

You can configure the OpenShift Container Platform default router with a default wildcard certificate. A default wildcard certificate provides a convenient way for applications that are deployed in OpenShift Container Platform to use default encryption without needing custom certificates.

Note

Default wildcard certificates are recommended for non-production environments only.

To configure a default wildcard certificate, provision a certificate that is valid for *.<app_domain>, where <app_domain> is the value of openshift_master_default_subdomain in the Ansible inventory file, by default /etc/ansible/hosts. Once provisioned, place the certificate, key, and ca certificate files on your Ansible host, and add the following line to your Ansible inventory file.

openshift_hosted_router_certificate={"certfile": "/path/to/apps.c1-ocp.myorg.com.crt", "keyfile": "/path/to/apps.c1-ocp.myorg.com.key", "cafile": "/path/to/apps.c1-ocp.myorg.com.ca.crt"}

For example:

openshift_hosted_router_certificate={"certfile": "/home/cloud-user/star-apps.148.251.233.173.nip.io.cert.pem", "keyfile": "/home/cloud-user/star-apps.148.251.233.173.nip.io.key.pem", "cafile": "/home/cloud-user/ca-chain.cert.pem"}

Where the parameter values are:

  • certfile is the path to the file that contains the OpenShift Container Platform router wildcard certificate.
  • keyfile is the path to the file that contains the OpenShift Container Platform router wildcard certificate key.
  • cafile is the path to the file that contains the root CA for this key and certificate. If an intermediate CA is in use, the file should contain both the intermediate and root CA.

If these certificate files are new to your OpenShift Container Platform cluster, run the Ansible deploy_router.yml playbook to add these files to the OpenShift Container Platform configuration files. The playbook adds the certificate files to the /etc/origin/master/ directory.

# ansible-playbook [-i /path/to/inventory] \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-hosted/deploy_router.yml

If the certificates are not new, for example, you want to change existing certificates or replace expired certificates, run the following playbook:

ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/redeploy-certificates.yml
Note

For this playbook to run, the certificate names must not change. If the certificate names change, rerun the Ansible deploy_cluster.yml playbook as if the certificates were new.

9.7. Configuring a Custom Certificate for the Image Registry

The OpenShift Container Platform image registry is an internal service that facilitates builds and deployments. Most of the communication with the registry is handled by internal components in OpenShift Container Platform. As such, you should not need to replace the certificate used by the registry service itself.

However, by default, the registry uses routes to allow external systems and users to do pulls and pushes of images. You can use a re-encrypt route with a custom certificate that is presented to external users instead of using the internal, self-signed certificate.

To configure this, add the following lines of code to the [OSEv3:vars] section of the Ansible inventory file, by default /etc/ansible/hosts. Specify the certificates to use with the registry route.

openshift_hosted_registry_routehost=registry.apps.c1-ocp.myorg.com 1
openshift_hosted_registry_routecertificates={"certfile": "/path/to/registry.apps.c1-ocp.myorg.com.crt", "keyfile": "/path/to/registry.apps.c1-ocp.myorg.com.key", "cafile": "/path/to/registry.apps.c1-ocp.myorg.com-ca.crt"} 2
openshift_hosted_registry_routetermination=reencrypt 3
1
The host name of the registry.
2
The locations of the cacert, cert, and key files.
  • certfile is the path to the file that contains the OpenShift Container Platform registry certificate.
  • keyfile is the path to the file that contains the OpenShift Container Platform registry certificate key.
  • cafile is the path to the file that contains the root CA for this key and certificate. If an intermediate CA is in use, the file should contain both the intermediate and root CA.
3
Specify where encryption is performed:
  • Set to reencrypt with a re-encrypt route to terminate encryption at the edge router and re-encrypt it with a new certificate supplied by the destination.
  • Set to passthrough to terminate encryption at the destination. The destination is responsible for decrypting traffic.

9.8. Configuring a Custom Certificate for a Load Balancer

If your OpenShift Container Platform cluster uses the default load balancer or an enterprise-level load balancer, you can use custom certificates to make the web console and API available externally using a publicly signed custom certificate, leaving the existing internal certificates for the internal endpoints.

To configure OpenShift Container Platform to use custom certificates in this way:

  1. Edit the servingInfo section of the master configuration file:

    servingInfo:
      logoutURL: ""
      masterPublicURL: https://openshift.example.com:8443
      publicURL: https://openshift.example.com:8443/console/
      bindAddress: 0.0.0.0:8443
      bindNetwork: tcp4
      certFile: master.server.crt
      clientCA: ""
      keyFile: master.server.key
      maxRequestsInFlight: 0
      requestTimeoutSeconds: 0
      namedCertificates:
        - certFile: wildcard.example.com.crt 1
          keyFile: wildcard.example.com.key 2
          names:
            - "openshift.example.com"
      metricsPublicURL: "https://metrics.os.example.com/hawkular/metrics"
    1
    Path to the certificate file for the web console.
    2
    Path to the key file for the web console.
    Note

    Configure the namedCertificates section for only the host name associated with the masterPublicURL and oauthConfig.assetPublicURL settings. Using a custom serving certificate for the host name associated with the masterURL causes TLS errors as infrastructure components attempt to contact the master API using the internal masterURL host.

  2. Specify the openshift_master_cluster_public_hostname and openshift_master_cluster_hostname parameters in the Ansible inventory file, by default /etc/ansible/hosts. These values must be different. If they are the same, the named certificates will fail.

    # Native HA with External LB VIPs
    openshift_master_cluster_hostname=paas.example.com 1
    openshift_master_cluster_public_hostname=public.paas.example.com 2
    1
    The FQDN of the internal load balancer, which is configured for SSL passthrough.
    2
    The FQDN of the external load balancer with the custom (public) certificate.

For information specific to your load balancer environment, refer to the OpenShift Container Platform Reference Architecture for your provider and Custom Certificate SSL Termination (Production).

9.9. Retrofit Custom Certificates into a Cluster

You can retrofit custom master and custom router certificates into an existing OpenShift Container Platform cluster.

9.9.1. Retrofit Custom Master Certificates into a Cluster

To retrofit custom certificates:

  1. Edit the Ansible inventory file to set openshift_master_overwrite_named_certificates=true.
  2. Specify the path to the certificate using the openshift_master_named_certificates parameter.

    openshift_master_overwrite_named_certificates=true
    openshift_master_named_certificates=[{"certfile": "/path/on/host/to/crt-file", "keyfile": "/path/on/host/to/key-file", "names": ["public-master-host.com"], "cafile": "/path/on/host/to/ca-file"}]
  3. Run the following playbook:

    ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/redeploy-certificates.yml
  4. If you use named certificates:

    1. Update the certificate parameters in the master-config.yaml file on each master node.
    2. Restart the OpenShift Container Platform master service to apply the changes.

      # master-restart api
      # master-restart controllers

9.9.2. Retrofit Custom Router Certificates into a Cluster

To retrofit custom router certificates:

  1. Edit the Ansible inventory file to set openshift_master_overwrite_named_certificates=true.
  2. Specify the path to the certificate using the openshift_hosted_router_certificate parameter.

    openshift_master_overwrite_named_certificates=true
    openshift_hosted_router_certificate={"certfile": "/path/on/host/to/app-crt-file", "keyfile": "/path/on/host/to/app-key-file", "cafile": "/path/on/host/to/app-ca-file"}
  3. Run the following playbook:

    ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-hosted/redeploy-router-certificates.yml

9.10. Using Custom Certificates with Other Components

For information on how other components, such as Logging & Metrics, use custom certificates, see Certificate Management.

Chapter 10. Redeploying Certificates

10.1. Overview

OpenShift Container Platform uses certificates to provide secure connections for the following components:

  • masters (API server and controllers)
  • etcd
  • nodes
  • registry
  • router

You can use Ansible playbooks provided with the installer to automate checking expiration dates for cluster certificates. Playbooks are also provided to automate backing up and redeploying these certificates, which can fix common certificate errors.

Possible use cases for redeploying certificates include:

  • The installer detected the wrong host names and the issue was identified too late.
  • The certificates are expired and you need to update them.
  • You have a new CA and want to create certificates using it instead.

10.2. Checking Certificate Expirations

You can use the installer to warn you about any certificates expiring within a configurable window of days and notify you about any certificates that have already expired. Certificate expiry playbooks use the Ansible role openshift_certificate_expiry.

Certificates examined by the role include:

  • Master and node service certificates
  • Router and registry service certificates from etcd secrets
  • Master, node, router, registry, and kubeconfig files for cluster-admin users
  • etcd certificates (including embedded)

10.2.1. Role Variables

The openshift_certificate_expiry role uses the following variables:

Table 10.1. Core Variables
Variable Name | Default Value | Description
openshift_certificate_expiry_config_base | /etc/origin | Base OpenShift Container Platform configuration directory.
openshift_certificate_expiry_warning_days | 30 | Flag certificates that will expire in this many days from now.
openshift_certificate_expiry_show_all | no | Include healthy (non-expired and non-warning) certificates in results.

Table 10.2. Optional Variables
Variable Name | Default Value | Description
openshift_certificate_expiry_generate_html_report | no | Generate an HTML report of the expiry check results.
openshift_certificate_expiry_html_report_path | /tmp/cert-expiry-report.html | The full path for saving the HTML report.
openshift_certificate_expiry_save_json_results | no | Save expiry check results as a JSON file.
openshift_certificate_expiry_json_results_path | /tmp/cert-expiry-report.json | The full path for saving the JSON report.

10.2.2. Running Certificate Expiration Playbooks

The OpenShift Container Platform installer provides a set of example certificate expiration playbooks, using different sets of configuration for the openshift_certificate_expiry role.

These playbooks must be used with an inventory file that is representative of the cluster. For best results, run ansible-playbook with the -v option.

Using the easy-mode.yaml example playbook, you can try the role out before tweaking it to your specifications as needed. This playbook:

  • Produces JSON and stylized HTML reports in /tmp/.
  • Sets the warning window very large, so you will almost always get results back.
  • Includes all certificates (healthy or not) in the results.

easy-mode.yaml Playbook

- name: Check cert expirys
  hosts: nodes:masters:etcd
  become: yes
  gather_facts: no
  vars:
    openshift_certificate_expiry_warning_days: 1500
    openshift_certificate_expiry_save_json_results: yes
    openshift_certificate_expiry_generate_html_report: yes
    openshift_certificate_expiry_show_all: yes
  roles:
    - role: openshift_certificate_expiry

To run the easy-mode.yaml playbook:

$ ansible-playbook -v -i <inventory_file> \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-checks/certificate_expiry/easy-mode.yaml
Other Example Playbooks

The other example playbooks are also available to run directly out of the /usr/share/ansible/openshift-ansible/playbooks/openshift-checks/certificate_expiry/ directory.

Table 10.3. Other Example Playbooks
File Name | Usage
default.yaml | Produces the default behavior of the openshift_certificate_expiry role.
html_and_json_default_paths.yaml | Generates HTML and JSON artifacts in their default paths.
longer_warning_period.yaml | Changes the expiration warning window to 1500 days.
longer-warning-period-json-results.yaml | Changes the expiration warning window to 1500 days and saves the results as a JSON file.

To run any of these example playbooks:

$ ansible-playbook -v -i <inventory_file> \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-checks/certificate_expiry/<playbook>

10.2.3. Output Formats

As noted above, there are two ways to format your check report: as JSON for machine parsing, or as a stylized HTML page for easy skimming.

HTML Report

An example of an HTML report is provided with the installer. You can open the following file in your browser to view it:

/usr/share/ansible/openshift-ansible/roles/openshift_certificate_expiry/examples/cert-expiry-report.html

JSON Report

There are two top-level keys in the saved JSON results: data and summary.

The data key is a hash where the keys are the names of each host examined and the values are the check results for the certificates identified on each respective host.

The summary key is a hash that summarizes the total number of certificates:

  • examined on the entire cluster
  • that are OK
  • expiring within the configured warning window
  • already expired

For an example of the full JSON report, see /usr/share/ansible/openshift-ansible/roles/openshift_certificate_expiry/examples/cert-expiry-report.json.

The summary from the JSON data can be easily checked for warnings or expirations using a variety of command-line tools. For example, using grep you can look for the word summary and print out the two lines after the match (-A2):

$ grep -A2 summary /tmp/cert-expiry-report.json
    "summary": {
        "warning": 16,
        "expired": 0

If available, the jq tool can also be used to pick out specific values. The first two examples below show how to select just one value, either warning or expired. The third example shows how to select both values at once:

$ jq '.summary.warning' /tmp/cert-expiry-report.json
16

$ jq '.summary.expired' /tmp/cert-expiry-report.json
0

$ jq '.summary.warning,.summary.expired' /tmp/cert-expiry-report.json
16
0

10.3. Redeploying Certificates

Use the following playbooks to redeploy master, etcd, node, registry, and router certificates on all relevant hosts. You can redeploy all of them at once using the current CA, redeploy certificates for specific components only, or redeploy a newly generated or custom CA on its own.

Just like the certificate expiry playbooks, these playbooks must be run with an inventory file that is representative of the cluster.

In particular, the inventory must specify or override all host names and IP addresses set via the following variables such that they match the current cluster configuration:

  • openshift_public_hostname
  • openshift_public_ip
  • openshift_master_cluster_hostname
  • openshift_master_cluster_public_hostname

The playbooks you need are provided by:

# yum install openshift-ansible
Note

The validity (length in days until they expire) for any certificates auto-generated while redeploying can be configured via Ansible as well. See Configuring Certificate Validity.

Note

OpenShift Container Platform CA and etcd certificates expire after five years. Signed OpenShift Container Platform certificates expire after two years.

10.3.1. Redeploying All Certificates Using the Current OpenShift Container Platform and etcd CA

The redeploy-certificates.yml playbook does not regenerate the OpenShift Container Platform CA certificate. New master, etcd, node, registry, and router certificates are created using the current CA certificate to sign new certificates.

This also includes serial restarts of:

  • etcd
  • master services
  • node services

To redeploy master, etcd, and node certificates using the current OpenShift Container Platform CA, run this playbook, specifying your inventory file:

$ ansible-playbook -i <inventory_file> \
    /usr/share/ansible/openshift-ansible/playbooks/redeploy-certificates.yml
Important

If the OpenShift Container Platform CA was redeployed with the openshift-master/redeploy-openshift-ca.yml playbook you must add -e openshift_redeploy_openshift_ca=true to this command.

10.3.2. Redeploying a New or Custom OpenShift Container Platform CA

The openshift-master/redeploy-openshift-ca.yml playbook redeploys the OpenShift Container Platform CA certificate by generating a new CA certificate and distributing an updated bundle to all components including client kubeconfig files and the node’s database of trusted CAs (the CA-trust).

This also includes serial restarts of:

  • master services
  • node services
  • docker

Additionally, you can specify a custom CA certificate when redeploying certificates instead of relying on a CA generated by OpenShift Container Platform.

When the master services are restarted, the registry and routers can continue to communicate with the master without being redeployed because the master’s serving certificate is the same, and the CA the registry and routers have are still valid.

To redeploy a newly generated or custom CA:

  1. Optionally, specify a custom CA. The certfile that you specify as part of the custom CA parameter, openshift_master_ca_certificate, must contain only the single certificate that signs the OpenShift Container Platform certificates. If you have intermediate certificates in your chain, you must bundle them into a different file.

    1. To specify a CA without intermediate certificates, set the following variable in your inventory file:

      # Configure custom ca certificate
      # NOTE: CA certificate will not be replaced with existing clusters.
      # This option may only be specified when creating a new cluster or
      # when redeploying cluster certificates with the redeploy-certificates
      # playbook.
      openshift_master_ca_certificate={'certfile': '</path/to/ca.crt>', 'keyfile': '</path/to/ca.key>'}
    2. To specify a CA certificate that is issued by an intermediate CA:

      1. Create a bundled certificate that contains the full chain of intermediate and root certificates for the CA:

        # cat intermediate/certs/<intermediate.cert.pem> \
              certs/ca.cert.pem >> intermediate/certs/ca-chain.cert.pem
      2. Set the following variables in your inventory file:

        # Configure custom ca certificate
        # NOTE: CA certificate will not be replaced with existing clusters.
        # This option may only be specified when creating a new cluster or
        # when redeploying cluster certificates with the redeploy-certificates
        # playbook.
        openshift_master_ca_certificate={'certfile': '</path/to/ca.crt>', 'keyfile': '</path/to/ca.key>'}
        openshift_additional_ca=intermediate/certs/ca-chain.cert.pem
  2. Run the openshift-master/redeploy-openshift-ca.yml playbook, specifying your inventory file:

    $ ansible-playbook -i <inventory_file> \
        /usr/share/ansible/openshift-ansible/playbooks/openshift-master/redeploy-openshift-ca.yml

    With the new OpenShift Container Platform CA in place, you can then use the redeploy-certificates.yml playbook at your discretion whenever you want to redeploy certificates signed by the new CA on all components.

    Important

    When using the redeploy-certificates.yml playbook after the new OpenShift Container Platform CA is in place, you must add -e openshift_redeploy_openshift_ca=true to the playbook command.

10.3.3. Redeploying a New etcd CA

The openshift-etcd/redeploy-ca.yml playbook redeploys the etcd CA certificate by generating a new CA certificate and distributing an updated bundle to all etcd peers and master clients.

This also includes serial restarts of:

  • etcd
  • master services

To redeploy a newly generated etcd CA:

  1. Run the openshift-etcd/redeploy-ca.yml playbook, specifying your inventory file:

    $ ansible-playbook -i <inventory_file> \
        /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/redeploy-ca.yml

With the new etcd CA in place, you can then use the openshift-etcd/redeploy-certificates.yml playbook at your discretion whenever you want to redeploy certificates signed by the new etcd CA on etcd peers and master clients. Alternatively, you can use the redeploy-certificates.yml playbook to redeploy certificates for OpenShift Container Platform components in addition to etcd peers and master clients.

Note

The etcd certificate redeployment can result in copying the serial to all master hosts.

10.3.4. Redeploying Master Certificates Only

The openshift-master/redeploy-certificates.yml playbook only redeploys master certificates. This also includes serial restarts of master services.

To redeploy master certificates, run this playbook, specifying your inventory file:

$ ansible-playbook -i <inventory_file> \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-master/redeploy-certificates.yml
Important

After running this playbook, you must regenerate any service signing certificate or key pairs by deleting existing secrets that contain service serving certificates or removing and re-adding annotations to appropriate services.
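
For example, one way to trigger regeneration of a service serving certificate, assuming a service that uses the service.alpha.openshift.io/serving-cert-secret-name annotation and hypothetical secret and service names, is to delete the generated secret so that it is recreated:

$ oc delete secret <secret_name>

Alternatively, remove and re-add the annotation on the service:

$ oc annotate service <service_name> service.alpha.openshift.io/serving-cert-secret-name-
$ oc annotate service <service_name> service.alpha.openshift.io/serving-cert-secret-name=<secret_name>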

10.3.5. Redeploying etcd Certificates Only

The openshift-etcd/redeploy-certificates.yml playbook only redeploys etcd certificates including master client certificates.

This also includes serial restarts of:

  • etcd
  • master services

To redeploy etcd certificates, run this playbook, specifying your inventory file:

$ ansible-playbook -i <inventory_file> \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/redeploy-certificates.yml

10.3.6. Redeploying Node Certificates

OpenShift Container Platform automatically rotates node certificates when they get close to expiring. If you need to redeploy certificates because the CA certificate was changed, you can use the playbooks/redeploy-certificates.yml playbook with the -e openshift_redeploy_openshift_ca=true flag. See Redeploying All Certificates Using the Current OpenShift Container Platform and etcd CA for details.

10.3.7. Redeploying Registry or Router Certificates Only

The openshift-hosted/redeploy-registry-certificates.yml and openshift-hosted/redeploy-router-certificates.yml playbooks replace installer-created certificates for the registry and router. If custom certificates are in use for these components, see Redeploying Custom Registry or Router Certificates to replace them manually.

10.3.7.1. Redeploying Registry Certificates Only

To redeploy registry certificates, run the following playbook, specifying your inventory file:

$ ansible-playbook -i <inventory_file> \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-hosted/redeploy-registry-certificates.yml
10.3.7.2. Redeploying Router Certificates Only

To redeploy router certificates, run the following playbook, specifying your inventory file:

$ ansible-playbook -i <inventory_file> \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-hosted/redeploy-router-certificates.yml

10.3.8. Redeploying Custom Registry or Router Certificates

When nodes are evacuated due to a redeployed CA, registry and router pods are restarted. If the registry and router certificates were not also redeployed with the new CA, this can cause outages because they cannot reach the masters using their old certificates.

The playbooks for redeploying certificates cannot redeploy custom registry or router certificates, so to address this issue, you can manually redeploy the registry and router certificates.

10.3.8.1. Redeploying Registry Certificates Manually

To redeploy registry certificates manually, you must add new registry certificates to a secret named registry-certificates, then redeploy the registry:

  1. Switch to the default project for the remainder of these steps:

    $ oc project default
  2. If your registry was initially created on OpenShift Container Platform 3.1 or earlier, it might still use environment variables to store certificates, which has been deprecated in favor of using secrets.

    1. Run the following and look for the OPENSHIFT_CA_DATA, OPENSHIFT_CERT_DATA, OPENSHIFT_KEY_DATA environment variables:

      $ oc env dc/docker-registry --list
    2. If they do not exist, skip this step. If they do, create the following ClusterRoleBinding:

      $ cat <<EOF |
      apiVersion: v1
      groupNames: null
      kind: ClusterRoleBinding
      metadata:
        creationTimestamp: null
        name: registry-registry-role
      roleRef:
        kind: ClusterRole
        name: system:registry
      subjects:
      - kind: ServiceAccount
        name: registry
        namespace: default
      userNames:
      - system:serviceaccount:default:registry
      EOF
      oc create -f -

      Then, run the following to remove the environment variables:

      $ oc env dc/docker-registry OPENSHIFT_CA_DATA- OPENSHIFT_CERT_DATA- OPENSHIFT_KEY_DATA- OPENSHIFT_MASTER-
  3. Set the following environment variables locally to make later commands less complex:

    $ REGISTRY_IP=`oc get service docker-registry -o jsonpath='{.spec.clusterIP}'`
    $ REGISTRY_HOSTNAME=`oc get route/docker-registry -o jsonpath='{.spec.host}'`
  4. Create new registry certificates:

    $ oc adm ca create-server-cert \
        --signer-cert=/etc/origin/master/ca.crt \
        --signer-key=/etc/origin/master/ca.key \
        --hostnames=$REGISTRY_IP,docker-registry.default.svc,docker-registry.default.svc.cluster.local,$REGISTRY_HOSTNAME \
        --cert=/etc/origin/master/registry.crt \
        --key=/etc/origin/master/registry.key \
        --signer-serial=/etc/origin/master/ca.serial.txt

    Run oc adm commands only from the first master listed in the Ansible host inventory file, by default /etc/ansible/hosts.

  5. Update the registry-certificates secret with the new registry certificates:

    $ oc create secret generic registry-certificates \
        --from-file=/etc/origin/master/registry.crt,/etc/origin/master/registry.key \
        -o json --dry-run | oc replace -f -
  6. Redeploy the registry:

    $ oc rollout latest dc/docker-registry
10.3.8.2. Redeploying Router Certificates Manually

To redeploy router certificates manually, you must add new router certificates to a secret named router-certs, then redeploy the router:

  1. Switch to the default project for the remainder of these steps:

    $ oc project default
  2. If your router was initially created on OpenShift Container Platform 3.1 or earlier, it might still use environment variables to store certificates, which has been deprecated in favor of using a service serving certificate secret.

    1. Run the following command and look for the OPENSHIFT_CA_DATA, OPENSHIFT_CERT_DATA, OPENSHIFT_KEY_DATA environment variables:

      $ oc env dc/router --list
    2. If those variables exist, create the following ClusterRoleBinding:

      $ cat <<EOF |
      apiVersion: v1
      groupNames: null
      kind: ClusterRoleBinding
      metadata:
        creationTimestamp: null
        name: router-router-role
      roleRef:
        kind: ClusterRole
        name: system:router
      subjects:
      - kind: ServiceAccount
        name: router
        namespace: default
      userNames:
      - system:serviceaccount:default:router
      EOF
      oc create -f -
    3. If those variables exist, run the following command to remove them:

      $ oc env dc/router OPENSHIFT_CA_DATA- OPENSHIFT_CERT_DATA- OPENSHIFT_KEY_DATA- OPENSHIFT_MASTER-
  3. Obtain a certificate.

    • If you use an external Certificate Authority (CA) to sign your certificates, create a new certificate and provide it to OpenShift Container Platform by following your internal processes.
    • If you use the internal OpenShift Container Platform CA to sign certificates, run the following commands:

      Important

      The following commands generate a certificate that is internally signed. It is trusted only by clients that trust the OpenShift Container Platform CA.

      $ cd /root
      $ mkdir cert ; cd cert
      $ oc adm ca create-server-cert \
          --signer-cert=/etc/origin/master/ca.crt \
          --signer-key=/etc/origin/master/ca.key \
          --signer-serial=/etc/origin/master/ca.serial.txt \
          --hostnames='*.hostnames.for.the.certificate' \
          --cert=router.crt \
          --key=router.key

      These commands generate the following files:

      • A new certificate named router.crt.
      • A copy of the signing CA certificate chain, /etc/origin/master/ca.crt. This chain can contain more than one certificate if you use intermediate CAs.
      • A corresponding private key named router.key.
  4. Create a new file that concatenates the generated certificates:

    $ cat router.crt /etc/origin/master/ca.crt router.key > router.pem
    Note

    This step is only valid if you are using a certificate signed by the OpenShift CA. If a custom certificate is used, a file with the correct CA chain should be used instead of /etc/origin/master/ca.crt.

  5. Before you generate a new secret, back up the current one:

    $ oc get -o yaml --export secret router-certs > ~/old-router-certs-secret.yaml
  6. Create a new secret to hold the new certificate and key, and replace the contents of the existing secret:

    $ oc create secret tls router-certs --cert=router.pem \ 1
        --key=router.key -o json --dry-run | \
        oc replace -f -
    1
    router.pem is the file that contains the concatenation of the certificates that you generated.
  7. Redeploy the router:

    $ oc rollout latest dc/router

    When routers are initially deployed, an annotation is added to the router’s service that automatically creates a service serving certificate secret named router-metrics-tls.

    To redeploy the router-metrics-tls certificate manually, trigger recreation of that service serving certificate by removing the annotations from the router service, deleting the secret, and then re-adding the annotation, as in the following steps:

  8. Remove the following annotations from the router service:

    $ oc annotate service router \
        service.alpha.openshift.io/serving-cert-secret-name- \
        service.alpha.openshift.io/serving-cert-signed-by-
  9. Remove the existing router-metrics-tls secret.

    $ oc delete secret router-metrics-tls
  10. Re-add the annotations:

    $ oc annotate service router \
        service.alpha.openshift.io/serving-cert-secret-name=router-metrics-tls

Chapter 11. Configuring authentication and user agent

11.1. Overview

The OpenShift Container Platform master includes a built-in OAuth server. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API.

As an administrator, you can configure OAuth using the master configuration file to specify an identity provider. It is a best practice to configure your identity provider during cluster installation, but you can configure it after installation.

Note

OpenShift Container Platform user names containing /, :, and % are not supported.

The Deny All identity provider is used by default, which denies access for all user names and passwords. To allow access, you must choose a different identity provider and configure the master configuration file appropriately (located at /etc/origin/master/master-config.yaml by default).

When you run a master without a configuration file, the Allow All identity provider is used by default, which allows any non-empty user name and password to log in. This is useful for testing purposes. To use other identity providers, or to modify any token, grant, or session options, you must run the master from a configuration file.

Note

When using an external identity provider, you must assign roles to external users before they can administer the cluster.
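
For example, to allow a user from an external identity provider to administer the cluster, a cluster administrator might bind a cluster role to that user; the user name below is a placeholder:

$ oc adm policy add-cluster-role-to-user cluster-admin <external_user>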

After making changes to an identity provider, you must restart the master services for the changes to take effect:

# master-restart api
# master-restart controllers

11.2. Identity provider parameters

There are four parameters common to all identity providers:


name

The provider name is prefixed to provider user names to form an identity name.

challenge

When true, unauthenticated token requests from non-web clients (like the CLI) are sent a WWW-Authenticate challenge header. Not supported by all identity providers.

To prevent cross-site request forgery (CSRF) attacks against browser clients, Basic authentication challenges are only sent if an X-CSRF-Token header is present on the request. Clients that expect to receive Basic WWW-Authenticate challenges should set this header to a non-empty value.

login

When true, unauthenticated token requests from web clients (like the web console) are redirected to a login page backed by this provider. Not supported by all identity providers.

If you want users to be sent to a branded page before being redirected to the identity provider’s login, then set oauthConfig → alwaysShowProviderSelection: true in the master configuration file. This provider selection page can be customized.

mappingMethod

Defines how new identities are mapped to users when they log in. Enter one of the following values:

claim
The default value. Provisions a user with the identity’s preferred user name. Fails if a user with that user name is already mapped to another identity.
lookup
Looks up an existing identity, user identity mapping, and user, but does not automatically provision users or identities. This allows cluster administrators to set up identities and users manually, or using an external process. Using this method requires you to manually provision users. See Manually Provisioning a User When Using the Lookup Mapping Method.
generate
Provisions a user with the identity’s preferred user name. If a user with the preferred user name is already mapped to an existing identity, a unique user name is generated. For example, myuser2. This method should not be used in combination with external processes that require exact matches between OpenShift Container Platform user names and identity provider user names, such as LDAP group sync.
add
Provisions a user with the identity’s preferred user name. If a user with that user name already exists, the identity is mapped to the existing user, adding to any existing identity mappings for the user. Required when multiple identity providers are configured that identify the same set of users and map to the same user names.
Note

When adding or changing identity providers, you can map identities from the new provider to existing users by setting the mappingMethod parameter to add.
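
For example, a minimal sketch of an identity provider stanza in the master configuration file that uses the add mapping method might look like the following. The provider name is a placeholder, the HTPasswd provider shown here is described in detail later in this topic, and the alwaysShowProviderSelection option mentioned above is also included:

oauthConfig:
  ...
  alwaysShowProviderSelection: true
  identityProviders:
  - name: my_additional_provider
    challenge: true
    login: true
    mappingMethod: add
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /etc/origin/master/htpasswd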

11.3. Configuring identity providers

OpenShift Container Platform supports configuring only a single identity provider. However, you can extend the basic authentication for more complex configurations such as LDAP failover.

You can use these parameters to define the identity provider during installation or after installation.

11.3.1. Configuring identity providers with Ansible

For initial cluster installations, the Deny All identity provider is configured by default, though it can be overridden during installation by configuring the openshift_master_identity_providers parameter in the inventory file. Session options in the OAuth configuration are also configurable in the inventory file.

Example identity provider configuration with Ansible

# htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Defining htpasswd users
#openshift_master_htpasswd_users={'user1': '<pre-hashed password>', 'user2': '<pre-hashed password>'}
# or
#openshift_master_htpasswd_file=/etc/origin/master/htpasswd

# Allow all auth
#openshift_master_identity_providers=[{'name': 'allow_all', 'login': 'true', 'challenge': 'true', 'kind': 'AllowAllPasswordIdentityProvider'}]

# LDAP auth
#openshift_master_identity_providers=[{'name': 'my_ldap_provider', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': '', 'bindPassword': '', 'insecure': 'false', 'url': 'ldap://ldap.example.com:389/ou=users,dc=example,dc=com?uid'}]
# Configuring the ldap ca certificate 1
#openshift_master_ldap_ca=<ca text>
# or
#openshift_master_ldap_ca_file=<path to local ca file to use> 2

# Available variables for configuring certificates for other identity providers:
#openshift_master_openid_ca
#openshift_master_openid_ca_file 3
#openshift_master_request_header_ca
#openshift_master_request_header_ca_file 4

1
If you specified 'insecure': 'true' in the openshift_master_identity_providers parameter for only an LDAP identity provider, you can omit the CA certificate.
2 3 4
If you specify a file on the host you run the playbook on, its contents are copied to the /etc/origin/master/<identity_provider_name>_<identity_provider_type>_ca.crt file. The identity provider name is the value of the openshift_master_identity_providers parameter, ldap, openid, or request_header. If you do not specify the CA text or the path to the local CA file, you must place the CA certificate in this location. If you specify multiple identity providers, you must manually place the CA certificate for each provider in this location. You cannot change this location.

You can specify multiple identity providers. If you do, you must place the CA certificate for each identity provider in the /etc/origin/master/ directory. For example, suppose that you include the following providers in your openshift_master_identity_providers value:

openshift_master_identity_providers:
- name: foo
  provider:
    kind: OpenIDIdentityProvider
    ...
- name: bar
  provider:
    kind: OpenIDIdentityProvider
    ...
- name: baz
  provider:
    kind: RequestHeaderIdentityProvider
    ...

You must place the CA certificates for these identity providers in the following files:

  • /etc/origin/master/foo_openid_ca.crt
  • /etc/origin/master/bar_openid_ca.crt
  • /etc/origin/master/baz_requestheader_ca.crt

11.3.2. Configuring identity providers in the master configuration file

You can configure the master host for authentication using your desired identity provider by modifying the master configuration file.

Example 11.1. Example identity provider configuration in the master configuration file

...
oauthConfig:
  identityProviders:
  - name: htpasswd_auth
    challenge: true
    login: true
    mappingMethod: "claim"
...

When set to the default claim value, OAuth will fail if the identity is mapped to a previously-existing user name.

11.3.2.1. Manually provisioning a user when using the lookup mapping method

When using the lookup mapping method, user provisioning is done by an external system, via the API. Typically, identities are automatically mapped to users during login. The lookup mapping method disables this automatic mapping, which requires you to provision users manually.

For more information on identity objects, see the Identity user API object.

If you are using the lookup mapping method, use the following steps for each user after configuring the identity provider:

  1. Create an OpenShift Container Platform User, if not created already:

    $ oc create user <username>

    For example, the following command creates an OpenShift Container Platform User named bob:

    $ oc create user bob
  2. Create an OpenShift Container Platform Identity, if not created already. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:

    $ oc create identity <identity-provider>:<user-id-from-identity-provider>

    The <identity-provider> is the name of the identity provider in the master configuration, as shown in the appropriate identity provider section below.

    For example, the following command creates an Identity with the identity provider ldap_provider and the identity provider user name bob_s:

    $ oc create identity ldap_provider:bob_s
  3. Create a user/identity mapping for the created user and identity:

    $ oc create useridentitymapping <identity-provider>:<user-id-from-identity-provider> <username>

    For example, the following command maps the identity to the user:

    $ oc create useridentitymapping ldap_provider:bob_s bob

11.3.3. Allow all

Set AllowAllPasswordIdentityProvider in the identityProviders stanza to allow any non-empty user name and password to log in.

Example 11.2. Master Configuration Using AllowAllPasswordIdentityProvider

oauthConfig:
  ...
  identityProviders:
  - name: my_allow_provider 1
    challenge: true 2
    login: true 3
    mappingMethod: claim 4
    provider:
      apiVersion: v1
      kind: AllowAllPasswordIdentityProvider
1
This provider name is prefixed to provider user names to form an identity name.
2
When true, unauthenticated token requests from non-web clients (like the CLI) are sent a WWW-Authenticate challenge header for this provider.
3
When true, unauthenticated token requests from web clients (like the web console) are redirected to a login page backed by this provider.
4
Controls how mappings are established between this provider’s identities and user objects, as described above.

11.3.4. Deny all

Set DenyAllPasswordIdentityProvider in the identityProviders stanza to deny access for all user names and passwords.

Example 11.3. Master Configuration Using DenyAllPasswordIdentityProvider

oauthConfig:
  ...
  identityProviders:
  - name: my_deny_provider 1
    challenge: true 2
    login: true 3
    mappingMethod: claim 4
    provider:
      apiVersion: v1
      kind: DenyAllPasswordIdentityProvider
1
This provider name is prefixed to provider user names to form an identity name.
2
When true, unauthenticated token requests from non-web clients (like the CLI) are sent a WWW-Authenticate challenge header for this provider.
3
When true, unauthenticated token requests from web clients (like the web console) are redirected to a login page backed by this provider.
4
Controls how mappings are established between this provider’s identities and user objects, as described above.

11.3.5. HTPasswd

Set HTPasswdPasswordIdentityProvider in the identityProviders stanza to validate user names and passwords against a flat file generated using htpasswd.

Note

The htpasswd utility is in the httpd-tools package:

# yum install httpd-tools

OpenShift Container Platform supports the Bcrypt, SHA-1, and MD5 cryptographic hash functions, and MD5 is the default for htpasswd. Plaintext, encrypted text, and other hash functions are not currently supported.
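
For example, if your version of the htpasswd utility supports it, you can pass the -B option to store Bcrypt hashes instead of the MD5 default:

$ htpasswd -c -B /etc/origin/master/htpasswd <user_name>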

The flat file is reread if its modification time changes, without requiring a server restart.

Important

Because the OpenShift Container Platform master API now runs as a static pod, you must create the HTPasswdPasswordIdentityProvider htpasswd file in /etc/origin/master/ so it can be read by the container.

To use the htpasswd command:

  • To create a flat file with a user name and hashed password, run:

    $ htpasswd -c /etc/origin/master/htpasswd <user_name>

    Then, enter and confirm a clear-text password for the user. The command generates a hashed version of the password.

    For example:

    htpasswd -c /etc/origin/master/htpasswd user1
    New password:
    Re-type new password:
    Adding password for user user1
    Note

    You can include the -b option to supply the password on the command line:

    $ htpasswd -c -b <file_name> <user_name> <password>

    For example:

    $ htpasswd -c -b file user1 MyPasswo