Installation and Configuration
OpenShift Container Platform 3.7 Installation and Configuration
Abstract
Chapter 1. Overview
OpenShift Container Platform Installation and Configuration topics cover the basics of installing and configuring OpenShift Container Platform in your environment. Configuration, management, and logging are also covered. Use these topics for the one-time tasks required to quickly set up your OpenShift Container Platform environment and configure it based on your organizational needs.
For day-to-day cluster administration tasks, see Cluster Administration.
Chapter 2. Installing a Cluster
2.1. Planning
2.1.1. Initial Planning
For production environments, several factors influence installation. Consider the following questions as you read through the documentation:
- Which installation method do you want to use? The Installation Methods section provides some information about the quick and advanced installation methods.
- How many pods are required in your cluster? The Sizing Considerations section provides limits for nodes and pods so you can calculate how large your environment needs to be.
- How many hosts do you require in the cluster? The Environment Scenarios section provides multiple examples of Single Master and Multiple Master configurations.
- Is high availability required? High availability is recommended for fault tolerance. In this situation, you might aim to use the Multiple Masters Using Native HA example as a basis for your environment.
- Which installation type do you want to use: RPM or containerized? Both installations provide a working OpenShift Container Platform environment, but you might have a preference for a particular method of installing, managing, and updating your services.
- Which identity provider do you use for authentication? If you already use a supported identity provider, it is a best practice to configure OpenShift Container Platform to use that identity provider during advanced installation.
- Is my installation supported if integrating with other technologies? See the OpenShift Container Platform Tested Integrations for a list of tested integrations.
2.1.2. Installation Methods
Both the quick and advanced installation methods are supported for development and production environments. If you want to quickly get OpenShift Container Platform up and running to try out for the first time, use the quick installer and let the interactive CLI guide you through the configuration options relevant to your environment.
For the most control over your cluster’s configuration, you can use the advanced installation method. This method is particularly suited if you are already familiar with Ansible. However, following along with the OpenShift Container Platform documentation should equip you with enough information to reliably deploy your cluster and continue to manage its configuration post-deployment using the provided Ansible playbooks directly.
If you install initially using the quick installer, you can always further tweak your cluster’s configuration and adjust the number of hosts in the cluster using the same installer tool. If you want to switch to the advanced method later, you can create an inventory file for your configuration and carry on that way.
2.1.3. Sizing Considerations
Determine how many nodes and pods you require for your OpenShift Container Platform cluster. Cluster scalability correlates to the number of pods in a cluster environment. That number influences the other numbers in your setup. See Cluster Limits for the latest limits for objects in OpenShift Container Platform.
2.1.4. Environment Scenarios
This section outlines different examples of scenarios for your OpenShift Container Platform environment. Use these scenarios as a basis for planning your own OpenShift Container Platform cluster, based on your sizing needs.
Moving from a single master cluster to multiple masters after installation is not supported.
For information on updating labels, see Updating Labels on Nodes.
2.1.4.1. Single Master and Node on One System
OpenShift Container Platform can be installed on a single system for a development environment only. An all-in-one environment is not considered a production environment.
2.1.4.2. Single Master and Multiple Nodes
The following table describes an example environment for a single master (with embedded etcd) and two nodes:
Host Name | Infrastructure Component to Install
---|---
master.example.com | Master and node
node1.example.com | Node
node2.example.com | Node
2.1.4.3. Single Master, Multiple etcd, and Multiple Nodes
The following table describes an example environment for a single master, three etcd hosts, and two nodes:
Host Name | Infrastructure Component to Install
---|---
master.example.com | Master and node
etcd1.example.com | etcd
etcd2.example.com | etcd
etcd3.example.com | etcd
node1.example.com | Node
node2.example.com | Node
When specifying multiple etcd hosts, external etcd is installed and configured. Clustering of OpenShift Container Platform’s embedded etcd is not supported.
2.1.4.4. Multiple Masters Using Native HA
The following table describes an example environment for three masters, one HAProxy load balancer, three etcd hosts, and two nodes using the native HA method:
Host Name | Infrastructure Component to Install
---|---
master1.example.com | Master (clustered using native HA) and node
master2.example.com | Master (clustered using native HA) and node
master3.example.com | Master (clustered using native HA) and node
lb.example.com | HAProxy to load balance API master endpoints
etcd1.example.com | etcd
etcd2.example.com | etcd
etcd3.example.com | etcd
node1.example.com | Node
node2.example.com | Node
When specifying multiple etcd hosts, external etcd is installed and configured. Clustering of OpenShift Container Platform’s embedded etcd is not supported.
2.1.4.5. Stand-alone Registry
You can also install OpenShift Container Platform to act as a stand-alone registry using the OpenShift Container Platform’s integrated registry. See Installing a Stand-alone Registry for details on this scenario.
2.1.5. RPM Versus Containerized
An RPM installation installs all services through package management and configures services to run within the same user space, while a containerized installation installs services using container images and runs separate services in individual containers.
See the Installing on Containerized Hosts topic for more details on configuring your installation to use containerized services.
2.2. Prerequisites
2.2.1. System Requirements
The following sections identify the hardware specifications and system-level requirements of all hosts within your OpenShift Container Platform environment.
2.2.1.1. Red Hat Subscriptions
You must have an active OpenShift Container Platform subscription on your Red Hat account to proceed. If you do not, contact your sales representative for more information.
OpenShift Container Platform 3.7 requires Docker 1.12.
2.2.1.2. Minimum Hardware Requirements
The system requirements vary per host type:
[Minimum hardware requirements table for master hosts, node hosts, and external etcd nodes is not recoverable from this extraction; masters require at least 2 CPU cores and 16 GB of RAM, as referenced in the production sizing example below.]
Meeting the /var/ file system sizing requirements in RHEL Atomic Host requires making changes to the default configuration. See Managing Storage with Docker-formatted Containers for instructions on configuring this during or after installation.
The system’s temporary directory is determined according to the rules defined in the tempfile module in Python’s standard library.
OpenShift Container Platform only supports servers with the x86_64 architecture.
2.2.1.3. Production Level Hardware Requirements
Test or sample environments function with the minimum requirements. For production environments, the following recommendations apply:
- Master Hosts
- In a highly available OpenShift Container Platform cluster with external etcd, a master host should have, in addition to the minimum requirements in the table above, 1 CPU core and 1.5 GB of memory for each 1000 pods. Therefore, the recommended size of a master host in an OpenShift Container Platform cluster of 2000 pods would be the minimum requirements of 2 CPU cores and 16 GB of RAM, plus 2 CPU cores and 3 GB of RAM, totaling 4 CPU cores and 19 GB of RAM.
A minimum of three etcd hosts and a load-balancer between the master hosts are required.
The OpenShift Container Platform master caches deserialized versions of resources aggressively to ease CPU load. However, in smaller clusters of less than 1000 pods, this cache can waste a lot of memory for negligible CPU load reduction. The default cache size is 50,000 entries, which, depending on the size of your resources, can grow to occupy 1 to 2 GB of memory. This cache size can be reduced using the following setting in the /etc/origin/master/master-config.yaml:
kubernetesMasterConfig:
  apiServerArguments:
    deserialization-cache-size:
    - "1000"
- Node Hosts
- The size of a node host depends on the expected size of its workload. As an OpenShift Container Platform cluster administrator, you will need to calculate the expected workload, then add about 10 percent for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity.
Use the above with the following table to plan the maximum loads for nodes and pods:
Host | Sizing Recommendation
---|---
Maximum nodes per cluster | 2000
Maximum pods per cluster | 120000
Maximum pods per node | 250
Maximum pods per core | 10
Oversubscribing the physical resources on a node affects resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping.
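One common measure, for example, is to disable swap on node hosts so that the scheduler's memory guarantees are not undermined by the kernel swapping pod memory to disk; this is a general illustration rather than a required step in this procedure:

# swapoff -a

To make the change persistent, also comment out any swap entries in /etc/fstab.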
2.2.1.4. Storage management
Directory | Notes | Sizing | Expected Growth
---|---|---|---
/var/lib/openshift | Used for etcd storage only when in single master mode and etcd is embedded in the atomic-openshift-master process. | Less than 10 GB. | Will grow slowly with the environment. Only storing metadata.
/var/lib/etcd | Used for etcd storage when in Multi-Master mode or when etcd is made standalone by an administrator. | Less than 20 GB. | Will grow slowly with the environment. Only storing metadata.
/var/lib/docker | When the runtime is docker, this is the mount point. Storage used for active container runtimes (including pods) and storage of local images (not used for registry storage). Mount point should be managed by docker-storage rather than manually. | 50 GB for a node with 16 GB memory. Additional 20-25 GB for every additional 8 GB of memory. | Growth is limited by the capacity for running containers.
/var/lib/containers | When the runtime is CRI-O, this is the mount point. Storage used for active container runtimes (including pods) and storage of local images (not used for registry storage). | 50 GB for a node with 16 GB memory. Additional 20-25 GB for every additional 8 GB of memory. | Growth is limited by the capacity for running containers.
/var/lib/origin/openshift.local.volumes | Ephemeral volume storage for pods. This includes anything external that is mounted into a container at runtime. Includes environment variables, kube secrets, and data volumes not backed by persistent storage PVs. | Varies | Minimal if pods requiring storage are using persistent volumes. If using ephemeral storage, this can grow quickly.
/var/log | Log files for all components. | 10 to 30 GB. | Log files can grow quickly; size can be managed by growing disks or by using log rotation.
2.2.1.5. Configuring Core Usage
By default, OpenShift Container Platform masters and nodes use all available cores in the system they run on. You can choose the number of cores you want OpenShift Container Platform to use by setting the GOMAXPROCS environment variable.
For example, run the following before starting the server to make OpenShift Container Platform only run on one core:
# export GOMAXPROCS=1
2.2.1.6. SELinux
Security-Enhanced Linux (SELinux) must be enabled on all of the servers before installing OpenShift Container Platform or the installer will fail. Also, configure SELINUXTYPE=targeted in the /etc/selinux/config file:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
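After editing the file, one quick sanity check is to confirm that SELinux is actually running in enforcing mode with the getenforce command:

# getenforce
Enforcing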
Using OverlayFS
OverlayFS is a union file system that allows you to overlay one file system on top of another.
As of Red Hat Enterprise Linux 7.4, you have the option to configure your OpenShift Container Platform environment to use OverlayFS. The overlay2 graph driver is fully supported in addition to the older overlay driver. However, Red Hat recommends using overlay2 instead of overlay because of its speed and simple implementation.
See the Overlay Graph Driver section of the Atomic Host documentation for instructions on how to enable the overlay2 graph driver for the Docker service.
2.2.1.7. Security Warning
OpenShift Container Platform runs containers on hosts in the cluster, and in some cases, such as build operations and the registry service, it does so using privileged containers. Furthermore, those containers access the hosts' Docker daemon and perform docker build and docker push operations. As such, cluster administrators should be aware of the inherent security risks associated with performing docker run operations on arbitrary images, as they effectively have root access. This is particularly relevant for docker build operations.
Exposure to harmful containers can be limited by assigning specific builds to nodes so that any exposure is limited to those nodes. To do this, see the Assigning Builds to Specific Nodes section of the Developer Guide. For cluster administrators, see the Configuring Global Build Defaults and Overrides section of the Installation and Configuration Guide.
You can also use security context constraints to control the actions that a pod can perform and what it has the ability to access. For instructions on how to enable images to run with USER in the Dockerfile, see Managing Security Context Constraints (requires a user with cluster-admin privileges).
2.2.2. Environment Requirements
The following section defines the requirements of the environment containing your OpenShift Container Platform configuration. This includes networking considerations and access to external services, such as Git repository access, storage, and cloud infrastructure providers.
2.2.2.1. DNS
OpenShift Container Platform requires a fully functional DNS server in the environment. This is ideally a separate host running DNS software and can provide name resolution to hosts and containers running on the platform.
Adding entries into the /etc/hosts file on each host is not enough. This file is not copied into containers running on the platform.
Key components of OpenShift Container Platform run themselves inside of containers and use the following process for name resolution:
- By default, containers receive their DNS configuration file (/etc/resolv.conf) from their host.
- OpenShift Container Platform then inserts one DNS value into the pods (above the node’s nameserver values). That value is defined in the /etc/origin/node/node-config.yaml file by the dnsIP parameter, which by default is set to the address of the host node because the host is using dnsmasq.
- If the dnsIP parameter is omitted from the node-config.yaml file, then the value defaults to the kubernetes service IP, which is the first nameserver in the pod’s /etc/resolv.conf file.
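For illustration, a minimal node-config.yaml excerpt with dnsIP set explicitly might look like the following; the IP address is only a placeholder representing the node host's own address:

# /etc/origin/node/node-config.yaml (excerpt)
dnsIP: 10.64.33.101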
As of OpenShift Container Platform 3.2, dnsmasq is automatically configured on all masters and nodes. The pods use the nodes as their DNS, and the nodes forward the requests. By default, dnsmasq is configured on the nodes to listen on port 53, therefore the nodes cannot run any other type of DNS application.
NetworkManager is required on the nodes in order to populate dnsmasq with the DNS IP addresses. DNS does not work properly when the network interface for OpenShift Container Platform has NM_CONTROLLED=no.
The following is an example set of DNS records:
master1    A   10.64.33.100
master2    A   10.64.33.103
node1      A   10.64.33.101
node2      A   10.64.33.102
If you do not have a properly functioning DNS environment, you could experience failure with:
- Product installation via the reference Ansible-based scripts
- Deployment of the infrastructure containers (registry, routers)
- Access to the OpenShift Container Platform web console, because it is not accessible via IP address alone
2.2.2.1.1. Configuring Hosts to Use DNS
Make sure each host in your environment is configured to resolve hostnames from your DNS server. The configuration for hosts' DNS resolution depends on whether DHCP is enabled. If DHCP is:
- Disabled, then configure your network interface to be static, and add DNS nameservers to NetworkManager (a minimal sketch of this follows this list).
- Enabled, then the NetworkManager dispatch script automatically configures DNS based on the DHCP configuration. Optionally, you can add a value to dnsIP in the node-config.yaml file to prepend the pod’s resolv.conf file. The second nameserver is then defined by the host’s first nameserver. By default, this will be the IP address of the node host.

Note: For most configurations, do not set the openshift_dns_ip option during the advanced installation of OpenShift Container Platform (using Ansible), because this option overrides the default IP address set by dnsIP. Instead, allow the installer to configure each node to use dnsmasq and forward requests to SkyDNS or the external DNS provider. If you do set the openshift_dns_ip option, then it should be set either with a DNS IP that queries SkyDNS first, or to the SkyDNS service or endpoint IP (the Kubernetes service IP).
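For the disabled-DHCP case, the following is one way to set a static configuration with nmcli; the connection name (eth0), address, DNS server, and search domain are placeholders that must be adapted to your environment:

# nmcli con mod eth0 ipv4.method manual \
    ipv4.addresses 10.64.33.101/24 \
    ipv4.dns "10.64.33.1" \
    ipv4.dns-search example.com
# nmcli con up eth0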
To verify that hosts can be resolved by your DNS server:
Check the contents of /etc/resolv.conf:
$ cat /etc/resolv.conf
# Generated by NetworkManager
search example.com
nameserver 10.64.33.1
# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
In this example, 10.64.33.1 is the address of our DNS server.
Test that the DNS servers listed in /etc/resolv.conf are able to resolve host names to the IP addresses of all masters and nodes in your OpenShift Container Platform environment:
$ dig <node_hostname> @<IP_address> +short
For example:
$ dig master.example.com @10.64.33.1 +short
10.64.33.100
$ dig node1.example.com @10.64.33.1 +short
10.64.33.101
2.2.2.1.2. Configuring a DNS Wildcard
Optionally, configure a wildcard for the router to use, so that you do not need to update your DNS configuration when new routes are added.
A wildcard for a DNS zone must ultimately resolve to the IP address of the OpenShift Container Platform router.
For example, create a wildcard DNS entry for cloudapps that has a low time-to-live value (TTL) and points to the public IP address of the host where the router will be deployed:
*.cloudapps.example.com. 300 IN A 192.168.133.2
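As a quick check that the wildcard behaves as intended, any otherwise-undefined name under the wildcard domain should resolve to the router address; the host name and DNS server IP below are illustrative and reuse the example values from this section:

$ dig myapp.cloudapps.example.com @10.64.33.1 +short
192.168.133.2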
In almost all cases, when referencing VMs you must use host names, and the host names that you use must match the output of the hostname -f command on each node.
In your /etc/resolv.conf file on each node host, ensure that the DNS server that has the wildcard entry is not listed as a nameserver or that the wildcard domain is not listed in the search list. Otherwise, containers managed by OpenShift Container Platform may fail to resolve host names properly.
2.2.2.2. Network Access
A shared network must exist between the master and node hosts. If you plan to configure multiple masters for high-availability using the advanced installation method, you must also select an IP to be configured as your virtual IP (VIP) during the installation process. The IP that you select must be routable between all of your nodes, and if you configure using an FQDN, it should resolve on all nodes.
2.2.2.2.1. NetworkManager
NetworkManager, a program for providing detection and configuration for systems to automatically connect to the network, is required. DNS does not work properly when the network interface for OpenShift Container Platform has NM_CONTROLLED=no.
2.2.2.2.2. Configuring firewalld as the firewall
While iptables is the default firewall, firewalld is recommended for new installations. You can enable firewalld by setting os_firewall_use_firewalld=true in the Ansible inventory file:

[OSEv3:vars]
os_firewall_use_firewalld=True

Setting this variable to true opens the required ports and adds rules to the default zone, which ensures that firewalld is configured correctly.
Using the firewalld default configuration comes with limited configuration options, and cannot be overridden. For example, while you can set up a storage network with interfaces in multiple zones, the interface that nodes communicate on must be in the default zone.
2.2.2.2.3. Required Ports
The OpenShift Container Platform installation automatically creates a set of internal firewall rules on each host using iptables. However, if your network configuration uses an external firewall, such as a hardware-based firewall, you must ensure infrastructure components can communicate with each other through specific ports that act as communication endpoints for certain processes or services.
Ensure the following ports required by OpenShift Container Platform are open on your network and configured to allow access between hosts. Some ports are optional depending on your configuration and usage.
Node to Node

Port | Protocol | Purpose
---|---|---
4789 | UDP | Required for SDN communication between pods on separate hosts.

Nodes to Master

Port | Protocol | Purpose
---|---|---
53 or 8053 | TCP/UDP | Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured.
4789 | UDP | Required for SDN communication between pods on separate hosts.
443 or 8443 | TCP | Required for node hosts to communicate to the master API, for the node hosts to post back status, to receive tasks, and so on.

Master to Nodes

Port | Protocol | Purpose
---|---|---
4789 | UDP | Required for SDN communication between pods on separate hosts.
10250 | TCP | The master proxies to node hosts via the Kubelet for oc commands.

Master to Master

Port | Protocol | Purpose
---|---|---
53 or 8053 | TCP/UDP | Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured.
2049 | TCP/UDP | Required when provisioning an NFS host as part of the installer.
2379 | TCP | Used for standalone etcd (clustered) to accept changes in state.
2380 | TCP | etcd requires this port be open between masters for leader election and peering connections when using standalone etcd (clustered).
4001 | TCP | Used for embedded etcd (non-clustered) to accept changes in state.
4789 | UDP | Required for SDN communication between pods on separate hosts.

External to Load Balancer

Port | Protocol | Purpose
---|---|---
9000 | TCP | If you choose the native HA method, optional to allow access to the HAProxy statistics page.

External to Master

Port | Protocol | Purpose
---|---|---
443 or 8443 | TCP | Required for node hosts to communicate to the master API, for node hosts to post back status, to receive tasks, and so on.
8444 | TCP | Port that the controller manager and scheduler services listen on. Required to be open for the /metrics and /healthz endpoints.

IaaS Deployments

Port | Protocol | Purpose
---|---|---
22 | TCP | Required for SSH by the installer or system administrator.
53 or 8053 | TCP/UDP | Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured. Only required to be internally open on master hosts.
80 or 443 | TCP | For HTTP/HTTPS use for the router. Required to be externally open on node hosts, especially on nodes running the router.
1936 | TCP | (Optional) Required to be open when running the template router to access statistics. Can be open externally or internally to connections depending on if you want the statistics to be expressed publicly. Can require extra configuration to open. See the Notes section below for more information.
4001 | TCP | For embedded etcd (non-clustered) use. Only required to be internally open on the master host. 4001 is for server-client connections.
2379 and 2380 | TCP | For standalone etcd use. Only required to be internally open on the master host. 2379 is for server-client connections. 2380 is for server-server connections, and is only required if you have clustered etcd.
4789 | UDP | For VxLAN use (OpenShift SDN). Required only internally on node hosts.
8443 | TCP | For use by the OpenShift Container Platform web console, shared with the API server.
10250 | TCP | For use by the Kubelet. Required to be externally open on nodes.
Notes
- In the above examples, port 4789 is used for User Datagram Protocol (UDP).
- When deployments are using the SDN, the pod network is accessed via a service proxy, unless it is accessing the registry from the same node the registry is deployed on.
- OpenShift Container Platform internal DNS cannot be received over SDN. Depending on the detected values of openshift_facts, or if the openshift_ip and openshift_public_ip values are overridden, it will be the computed value of openshift_ip. For non-cloud deployments, this will default to the IP address associated with the default route on the master host. For cloud deployments, it will default to the IP address associated with the first internal interface as defined by the cloud metadata.
- The master host uses port 10250 to reach the nodes and does not go over SDN. It depends on the target host of the deployment and uses the computed values of openshift_hostname and openshift_public_hostname.
- Port 1936 can still be inaccessible due to your iptables rules. Use the following to configure iptables to open port 1936:
# iptables -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp \
    --dport 1936 -j ACCEPT
Aggregated Logging

Port | Protocol | Purpose
---|---|---
9200 | TCP | For Elasticsearch API use. Required to be internally open on any infrastructure nodes so Kibana is able to retrieve logs for display. It can be externally opened for direct access to Elasticsearch by means of a route. The route can be created using oc expose.
9300 | TCP | For Elasticsearch inter-cluster use. Required to be internally open on any infrastructure node so the members of the Elasticsearch cluster may communicate with each other.
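To spot-check that one of these ports is reachable from a peer host once a service is listening on it, a plain TCP probe from bash is often enough; the host name and port below are placeholders matching the examples in this guide:

$ timeout 1 bash -c '</dev/tcp/master.example.com/8443' && echo open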
2.2.2.3. Persistent Storage
The Kubernetes persistent volume framework allows you to provision an OpenShift Container Platform cluster with persistent storage using networked storage available in your environment. This can be done after completing the initial OpenShift Container Platform installation depending on your application needs, giving users a way to request those resources without having any knowledge of the underlying infrastructure.
The Installation and Configuration Guide provides instructions for cluster administrators on provisioning an OpenShift Container Platform cluster with persistent storage using NFS, GlusterFS, Ceph RBD, OpenStack Cinder, AWS Elastic Block Store (EBS), GCE Persistent Disks, and iSCSI.
2.2.2.4. Cloud Provider Considerations
There are certain aspects to take into consideration if installing OpenShift Container Platform on a cloud provider.
- For Amazon Web Services, see the Permissions and the Configuring a Security Group sections.
- For OpenStack, see the Permissions and the Configuring a Security Group sections.
2.2.2.4.1. Overriding Detected IP Addresses and Host Names
Some deployments require that the user override the detected host names and IP addresses for the hosts. To see the default values, run the openshift_facts playbook:
# ansible-playbook [-i /path/to/inventory] \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift_facts.yml
For Amazon Web Services, see the Overriding Detected IP Addresses and Host Names section.
Now, verify the detected common settings. If they are not what you expect them to be, you can override them.
The Advanced Installation topic discusses the available Ansible variables in greater detail.
Variable | Usage
---|---
hostname | Resolves to the internal IP address from the instances themselves. Overridden by openshift_hostname.
ip | The internal IP address of the instance. Overridden by openshift_ip.
public_hostname | Resolves to the external IP address from hosts outside of the cloud. Overridden by openshift_public_hostname.
public_ip | The externally accessible IP address associated with the instance. Overridden by openshift_public_ip.
use_openshift_sdn | For all clouds but GCE, set to true. Overridden by openshift_use_openshift_sdn.
If openshift_hostname is set to a value other than the metadata-provided private-dns-name value, the native cloud integration for those providers will no longer work.
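As an illustration, these overrides are typically supplied per host in the advanced-install inventory; the host name and addresses below are placeholders, and the openshift_hostname override is subject to the cloud-integration caveat above:

[nodes]
node1.example.com openshift_ip=10.0.0.2 openshift_public_ip=24.222.0.2 openshift_hostname=node1-private.example.com openshift_public_hostname=node1.example.com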
2.2.2.4.2. Post-Installation Configuration for Cloud Providers
Following the installation process, you can configure OpenShift Container Platform for AWS, OpenStack, or GCE.
2.2.2.5. Containerized GlusterFS Considerations
If you choose to configure containerized GlusterFS persistent storage for your cluster, or if you choose to configure a containerized GlusterFS-backed OpenShift Container Registry, you must consider the following prerequisites.
2.2.2.5.1. Storage Nodes
To use containerized GlusterFS persistent storage:
- A minimum of 3 storage nodes is required.
- Each storage node must have at least 1 raw block device with at least 100 GB available.
To run a containerized GlusterFS-backed OpenShift Container Registry:
- A minimum of 3 storage nodes is required.
- Each storage node must have at least 1 raw block device with at least 10 GB of free storage.
While containerized GlusterFS persistent storage can be configured and deployed on the same OpenShift Container Platform cluster as a containerized GlusterFS-backed registry, their storage should be kept separate from each other, which requires additional storage nodes. For example, if both are configured, a total of 6 storage nodes would be needed: 3 for the registry and 3 for persistent storage. This limitation is imposed to avoid potential impacts on performance in I/O and volume creation.
2.2.2.5.2. Required Software Components
For any RHEL (non-Atomic) storage nodes, the following RPM repository must be enabled:
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms
The mount.glusterfs command must be available on all nodes that will host pods that will use GlusterFS volumes. For RPM-based systems, the glusterfs-fuse package must be installed:
# yum install glusterfs-fuse
If GlusterFS is already installed on the nodes, ensure the latest version is installed:
# yum update glusterfs-fuse
2.3. Host Preparation
2.3.1. Setting PATH
The PATH for the root user on each host must contain the following directories:
- /bin
- /sbin
- /usr/bin
- /usr/sbin
These should all be included by default in a fresh RHEL 7.x installation.
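A quick way to confirm this is to print the root user's PATH; the output shown is only a typical example and will vary by system:

# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin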
2.3.2. Operating System Requirements
A base installation of RHEL 7.3 or 7.4 (with the latest packages from the Extras channel) or RHEL Atomic Host 7.4.2 or later is required for master and node hosts. See the following documentation for the respective installation instructions, if required:
2.3.3. Host Registration
Each host must be registered using Red Hat Subscription Manager (RHSM) and have an active OpenShift Container Platform subscription attached to access the required packages.
On each host, register with RHSM:
# subscription-manager register --username=<user_name> --password=<password>
Pull the latest subscription data from RHSM:
# subscription-manager refresh
List the available subscriptions:
# subscription-manager list --available --matches '*OpenShift*'
In the output for the previous command, find the pool ID for an OpenShift Container Platform subscription and attach it:
# subscription-manager attach --pool=<pool_id>
Disable all yum repositories:
Disable all the enabled RHSM repositories:
# subscription-manager repos --disable="*"
List the remaining yum repositories and note their names under repo id, if any:

# yum repolist
Use yum-config-manager to disable the remaining yum repositories:

# yum-config-manager --disable <repo_id>
Alternatively, disable all repositories:

# yum-config-manager --disable \*

Note that this could take a few minutes if you have a large number of available repositories.
Enable only the repositories required by OpenShift Container Platform 3.7:
# subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.7-rpms" \
    --enable="rhel-7-fast-datapath-rpms"
2.3.4. Installing Base Packages
For RHEL 7 systems:
Install the following base packages:
# yum install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct
Update the system to the latest packages:
# yum update
# systemctl reboot
If you plan to use the RPM-based installer to run an advanced installation, you can skip this step. However, if you plan to use the containerized installer (currently a Technology Preview feature):
Install the atomic package:
# yum install atomic
- Skip to Installing Docker.
Install the following package, which provides RPM-based OpenShift Container Platform installer utilities and pulls in other tools required by the quick and advanced installation methods, such as Ansible and related configuration files:
# yum install atomic-openshift-utils
For RHEL Atomic Host 7 systems:
Ensure the host is up to date by upgrading to the latest Atomic tree if one is available:
# atomic host upgrade
After the upgrade is completed and prepared for the next boot, reboot the host:
# systemctl reboot
2.3.5. Installing Docker
At this point, you should install Docker on all master and node hosts. This allows you to configure your Docker storage options before installing OpenShift Container Platform.
On RHEL Atomic Host 7 systems, Docker should already be installed, configured, and running by default.
For RHEL 7 systems, install Docker 1.12:
# yum install docker-1.12.6
After the package installation is complete, verify that version 1.12 was installed:
# rpm -V docker-1.12.6
# docker version
The Advanced Installation method automatically changes /etc/sysconfig/docker.
2.3.6. Configuring Docker Storage
Containers and the images they are created from are stored in Docker’s storage back end. This storage is ephemeral and separate from any persistent storage allocated to meet the needs of your applications.
For RHEL Atomic Host
The default storage back end for Docker on RHEL Atomic Host is a thin pool logical volume, which is supported for production environments. You must ensure that enough space is allocated for this volume per the Docker storage requirements mentioned in System Requirements.
If you do not have enough allocated, see Managing Storage with Docker Formatted Containers for details on using docker-storage-setup and basic instructions on storage management in RHEL Atomic Host.
For RHEL
The default storage back end for Docker on RHEL 7 is a thin pool on loopback devices, which is not supported for production use and only appropriate for proof of concept environments. For production environments, you must create a thin pool logical volume and re-configure Docker to use that volume.
Docker stores images and containers in a graph driver, which is a pluggable storage technology, such as DeviceMapper, OverlayFS, and Btrfs. Each has advantages and disadvantages. For example, OverlayFS is faster than DeviceMapper at starting and stopping containers, but is not Portable Operating System Interface for Unix (POSIX) compliant because of the architectural limitations of a union file system and is not supported prior to Red Hat Enterprise Linux 7.2. See the Red Hat Enterprise Linux release notes for information on using OverlayFS with your version of RHEL.
For more information on the benefits and limitations of DeviceMapper and OverlayFS, see Choosing a Graph Driver.
2.3.6.1. Configuring OverlayFS
OverlayFS is a type of union file system. It allows you to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified.
Comparing the Overlay Versus Overlay2 Graph Drivers has more information about the overlay and overlay2 drivers.
For information on enabling the OverlayFS storage driver for the Docker service, see the Red Hat Enterprise Linux Atomic Host documentation.
2.3.6.2. Configuring Thin Pool Storage
You can use the docker-storage-setup script included with Docker to create a thin pool device and configure Docker’s storage driver. This can be done after installing Docker and should be done before creating images or containers. The script reads configuration options from the /etc/sysconfig/docker-storage-setup file and supports three options for creating the logical volume:
- Option A) Use an additional block device.
- Option B) Use an existing, specified volume group.
- Option C) Use the remaining free space from the volume group where your root file system is located.
Option A is the most robust option; however, it requires adding an additional block device to your host before configuring Docker storage. Options B and C both require leaving free space available when provisioning your host. Option C is known to cause issues with some applications, for example Red Hat Mobile Application Platform (RHMAP).
Create the docker-pool volume using one of the following three options:
Option A) Use an additional block device.
In /etc/sysconfig/docker-storage-setup, set DEVS to the path of the block device you wish to use. Set VG to the volume group name you wish to create; docker-vg is a reasonable choice. For example:
# cat <<EOF > /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdc
VG=docker-vg
EOF
Then run docker-storage-setup and review the output to ensure the docker-pool volume was created:
# docker-storage-setup
0
Checking that no-one is using this disk right now ...
OK
Disk /dev/vdc: 31207 cylinders, 16 heads, 63 sectors/track
sfdisk:  /dev/vdc: unrecognized partition table type
Old situation:
sfdisk: No partitions found
New situation:
Units: sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/dev/vdc1          2048  31457279   31455232  8e  Linux LVM
/dev/vdc2             0         -          0   0  Empty
/dev/vdc3             0         -          0   0  Empty
/dev/vdc4             0         -          0   0  Empty
Warning: partition 1 does not start at a cylinder boundary
Warning: partition 1 does not end at a cylinder boundary
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
  Physical volume "/dev/vdc1" successfully created
  Volume group "docker-vg" successfully created
  Rounding up size to full physical extent 16.00 MiB
  Logical volume "docker-poolmeta" created.
  Logical volume "docker-pool" created.
  WARNING: Converting logical volume docker-vg/docker-pool and docker-vg/docker-poolmeta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted docker-vg/docker-pool to thin pool.
  Logical volume "docker-pool" changed.
Option B) Use an existing, specified volume group.
In /etc/sysconfig/docker-storage-setup, set VG to the desired volume group. For example:
# cat <<EOF > /etc/sysconfig/docker-storage-setup
VG=docker-vg
EOF
Then run docker-storage-setup and review the output to ensure the docker-pool volume was created:
# docker-storage-setup
  Rounding up size to full physical extent 16.00 MiB
  Logical volume "docker-poolmeta" created.
  Logical volume "docker-pool" created.
  WARNING: Converting logical volume docker-vg/docker-pool and docker-vg/docker-poolmeta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted docker-vg/docker-pool to thin pool.
  Logical volume "docker-pool" changed.
Option C) Use the remaining free space from the volume group where your root file system is located.
Verify that the volume group where your root file system resides has the desired free space, then run docker-storage-setup and review the output to ensure the docker-pool volume was created:
# docker-storage-setup
  Rounding up size to full physical extent 32.00 MiB
  Logical volume "docker-poolmeta" created.
  Logical volume "docker-pool" created.
  WARNING: Converting logical volume rhel/docker-pool and rhel/docker-poolmeta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted rhel/docker-pool to thin pool.
  Logical volume "docker-pool" changed.
Verify your configuration. You should have a dm.thinpooldev value in the /etc/sysconfig/docker-storage file and a docker-pool logical volume:
# cat /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS=--storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/docker--vg-docker--pool

# lvs
  LV          VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker-pool rhel twi-a-t--- 9.29g               0.00   0.12
Important: Before using Docker or OpenShift Container Platform, verify that the docker-pool logical volume is large enough to meet your needs. The docker-pool volume should be 60% of the available volume group and will grow to fill the volume group via LVM monitoring.
Check if Docker is running:
# systemctl is-active docker
If Docker has not yet been started on the host, enable and start the service:
# systemctl enable docker
# systemctl start docker
If Docker is already running, re-initialize Docker:
Warning: This will destroy any containers or images currently on the host.
# systemctl stop docker
# rm -rf /var/lib/docker/*
# systemctl restart docker
If there is any content in /var/lib/docker/, it must be deleted. Files will be present if Docker has been used prior to the installation of OpenShift Container Platform.
2.3.6.3. Reconfiguring Docker Storage
If you need to reconfigure Docker storage after having created the docker-pool, first remove the docker-pool logical volume. If you are using a dedicated volume group, also remove the volume group and any associated physical volumes before reconfiguring docker-storage-setup according to the instructions above.
See Logical Volume Manager Administration for more detailed information on LVM management.
2.3.6.4. Enabling Image Signature Support
OpenShift Container Platform is capable of cryptographically verifying that images are from trusted sources. The Container Security Guide provides a high-level description of how image signing works.
You can configure image signature verification using the atomic command line interface (CLI), version 1.12.5 or greater. The atomic CLI is pre-installed on RHEL Atomic Host systems.
For more on the atomic CLI, see the Atomic CLI documentation.
Install the atomic package if it is not installed on the host system:
$ yum install atomic
The atomic trust sub-command manages trust configuration. The default configuration is to whitelist all registries. This means no signature verification is configured.
$ atomic trust show
* (default)                         accept
A reasonable configuration might be to whitelist a particular registry or namespace, blacklist (reject) untrusted registries, and require signature verification on a vendor registry. The following set of commands performs this example configuration:
Example Atomic Trust Configuration
$ atomic trust add --type insecureAcceptAnything 172.30.1.1:5000

$ atomic trust add --sigstoretype atomic \
  --pubkeys pub@example.com \
  172.30.1.1:5000/production

$ atomic trust add --sigstoretype atomic \
  --pubkeys /etc/pki/example.com.pub \
  172.30.1.1:5000/production

$ atomic trust add --sigstoretype web \
  --sigstore https://access.redhat.com/webassets/docker/content/sigstore \
  --pubkeys /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release \
  registry.access.redhat.com

# atomic trust show
* (default)                         accept
172.30.1.1:5000                     accept
172.30.1.1:5000/production          signed security@example.com
registry.access.redhat.com          signed security@redhat.com,security@redhat.com
When all the signed sources are verified, nodes may be further hardened with a global reject default:
$ atomic trust default reject

$ atomic trust show
* (default)                         reject
172.30.1.1:5000                     accept
172.30.1.1:5000/production          signed security@example.com
registry.access.redhat.com          signed security@redhat.com,security@redhat.com
Use the atomic man page (man atomic-trust) for additional examples.
The following files and directories comprise the trust configuration of a host:
- /etc/containers/registries.d/*
- /etc/containers/policy.json
The trust configuration may be managed directly on each node or the generated files managed on a separate host and distributed to the appropriate nodes using Ansible, for example. See this Red Hat Knowledgebase Article for an example of automating file distribution with Ansible.
2.3.6.5. Managing Container Logs
Sometimes a container’s log file (the /var/lib/docker/containers/<hash>/<hash>-json.log file on the node where the container is running) can increase to a problematic size. You can manage this by configuring Docker’s json-file logging driver to restrict the size and number of log files.
Option | Purpose
---|---
--log-opt max-size | Sets the size at which a new log file is created.
--log-opt max-file | Sets the maximum number of log files kept on each host.
For example, to set the maximum file size to 1MB and always keep the last three log files, edit the /etc/sysconfig/docker file to configure max-size=1M and max-file=3:
OPTIONS='--insecure-registry=172.30.0.0/16 --selinux-enabled --log-opt max-size=1M --log-opt max-file=3'
Next, restart the Docker service:
# systemctl restart docker
2.3.6.6. Viewing Available Container Logs
Container logs are stored in the /var/lib/docker/containers/<hash>/ directory on the node where the container is running. For example:
# ls -lh /var/lib/docker/containers/f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8/
total 2.6M
-rw-r--r--. 1 root root 5.6K Nov 24 00:12 config.json
-rw-r--r--. 1 root root 649K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log
-rw-r--r--. 1 root root 977K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log.1
-rw-r--r--. 1 root root 977K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log.2
-rw-r--r--. 1 root root 1.3K Nov 24 00:12 hostconfig.json
drwx------. 2 root root    6 Nov 24 00:12 secrets
See Docker’s documentation for additional information on how to configure logging drivers.
2.3.6.7. Blocking Local Volume Usage
When a volume is provisioned using the VOLUME instruction in a Dockerfile or using the docker run -v <volumename> command, a host’s storage space is used. Using this storage can lead to an unexpected out of space issue and could bring down the host.
In OpenShift Container Platform, users trying to run their own images risk filling the entire storage space on a node host. One solution to this issue is to prevent users from running images with volumes. This way, the only storage a user has access to can be limited, and the cluster administrator can assign storage quota.
Using docker-novolume-plugin solves this issue by disallowing starting a container with local volumes defined. In particular, the plug-in blocks docker run commands that contain:
- The --volumes-from option
- Images that have VOLUME(s) defined
- References to existing volumes that were provisioned with the docker volume command
The plug-in does not block references to bind mounts.
To enable docker-novolume-plugin, perform the following steps on each node host:
Install the docker-novolume-plugin package:
$ yum install docker-novolume-plugin
Enable and start the docker-novolume-plugin service:
$ systemctl enable docker-novolume-plugin $ systemctl start docker-novolume-plugin
Edit the /etc/sysconfig/docker file and append the following to the OPTIONS list:

--authorization-plugin=docker-novolume-plugin
Restart the docker service:
$ systemctl restart docker
After you enable this plug-in, containers with local volumes defined fail to start and show the following error message:
runContainer: API error (500): authorization denied by plugin docker-novolume-plugin: volumes are not allowed
2.3.7. Ensuring Host Access
The quick and advanced installation methods require a user that has access to all hosts. If you want to run the installer as a non-root user, passwordless sudo rights must be configured on each destination host.
For example, you can generate an SSH key on the host where you will invoke the installation process:
# ssh-keygen
Do not use a password.
An easy way to distribute your SSH keys is by using a bash loop:
# for host in master.example.com \
    node1.example.com \
    node2.example.com; \
    do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \
    done
Modify the host names in the above command according to your configuration.
2.3.8. Setting Proxy Overrides
If the /etc/environment file on your nodes contains either an http_proxy or https_proxy value, you must also set a no_proxy value in that file to allow open communication between OpenShift Container Platform components.
The no_proxy parameter in the /etc/environment file is not the same value as the global proxy values that you set in your inventory file. The global proxy values configure specific OpenShift Container Platform services with your proxy settings. See Configuring Global Proxy Options for details.
If the /etc/environment file contains proxy values, define the following values in the no_proxy parameter of that file on each node:
- Master and node host names or their domain suffix.
- Other internal host names or their domain suffix.
- Etcd IP addresses. You must provide IP addresses and not host names because etcd access is controlled by IP address.
- Kubernetes IP address, by default 172.30.0.1. Must be the value set in the openshift_portal_net parameter in your inventory file.
- Kubernetes internal domain suffix, cluster.local.
- Kubernetes internal domain suffix, .svc.
Because no_proxy does not support CIDR, you can use domain suffixes.
If you use either an http_proxy or https_proxy value, your no_proxy parameter value resembles the following example:
no_proxy=.internal.example.com,10.0.0.1,10.0.0.2,10.0.0.3,.cluster.local,.svc,localhost,127.0.0.1,172.30.0.1
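Putting the pieces together, an /etc/environment file with a proxy configured might resemble the following; the proxy URL is a placeholder for your own proxy server:

http_proxy=http://proxy.example.com:3128/
https_proxy=http://proxy.example.com:3128/
no_proxy=.internal.example.com,10.0.0.1,10.0.0.2,10.0.0.3,.cluster.local,.svc,localhost,127.0.0.1,172.30.0.1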
2.3.9. What’s Next?
If you are interested in installing OpenShift Container Platform using the containerized method (optional for RHEL but required for RHEL Atomic Host), see Installing on Containerized Hosts to prepare your hosts.
When you are ready to proceed, you can install OpenShift Container Platform using the quick installation or advanced installation method.
If you are installing a stand-alone registry, continue with Installing a Stand-alone Registry.
2.4. Installing on Containerized Hosts
2.4.1. RPM Versus Containerized Installation
You can opt to install OpenShift Container Platform using the RPM or containerized package method. Either installation method results in a working environment, but the choice depends on your operating system and how you prefer to manage and update your hosts.
The default method for installing OpenShift Container Platform on Red Hat Enterprise Linux (RHEL) uses RPMs. When targeting a Red Hat Atomic Host system, the containerized method is the only available option, and is automatically selected for you based on the detection of the /run/ostree-booted file.
When using RPMs, all services are installed and updated through package management from an outside source, modifying the host’s existing configuration within the same user space. A containerized install, by contrast, packages each service as a complete, all-in-one resource that uses container images and runs with its own operating system inside the container. Updated, newer containers replace the existing ones on your host. Choosing one method over the other depends on how you choose to update OpenShift Container Platform in the future.
The following table outlines further differences between the RPM and Containerized methods:
 | RPM | Containerized
---|---|---
Installation Method | Packages via yum | Container images via docker
Service Management | systemd | docker and systemd units
Operating System | Red Hat Enterprise Linux | Red Hat Enterprise Linux or Red Hat Atomic Host
2.4.2. Install Methods for Containerized Hosts
As with the RPM installation, you can choose between the quick and advanced install methods for the containerized install.
For the quick installation method, you can choose between the RPM or containerized method on a per host basis during the interactive installation, or set the values manually in an installation configuration file.
For the advanced installation method, you can set the Ansible variable containerized=true in an inventory file on a cluster-wide or per host basis.
For the disconnected installation method, to install the etcd container, you can set the Ansible variable osm_etcd_image to be the fully qualified name of the etcd image on your local registry, for example, registry.example.com/rhel7/etcd.
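For example, the following inventory variables select a containerized install cluster-wide and point the etcd image at a local registry; the registry host name is the same placeholder used above:

[OSEv3:vars]
containerized=true
osm_etcd_image=registry.example.com/rhel7/etcd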
2.4.3. Required Images
Containerized installations make use of the following images:
- openshift3/ose
- openshift3/node
- openshift3/openvswitch
- registry.access.redhat.com/rhel7/etcd
By default, all of the above images are pulled from the Red Hat Registry at registry.access.redhat.com.
If you need to use a private registry to pull these images during the installation, you can specify the registry information ahead of time. For the advanced installation method, you can set the following Ansible variables in your inventory file, as required:
openshift_docker_additional_registries=<registry_hostname>
openshift_docker_insecure_registries=<registry_hostname>
openshift_docker_blocked_registries=<registry_hostname>
For the quick installation method, you can export the following environment variables on each target host:
# export OO_INSTALL_ADDITIONAL_REGISTRIES=<registry_hostname>
# export OO_INSTALL_INSECURE_REGISTRIES=<registry_hostname>
Blocked Docker registries cannot currently be specified using the quick installation method.
The configuration of additional, insecure, and blocked Docker registries occurs at the beginning of the installation process to ensure that these settings are applied before attempting to pull any of the required images.
2.4.4. Starting and Stopping Containers
The installation process creates relevant systemd units which can be used to start, stop, and poll services using normal systemctl commands. For containerized installations, these unit names match those of an RPM installation, with the exception of the etcd service which is named etcd_container.
This change is necessary as currently RHEL Atomic Host ships with the etcd package installed as part of the operating system, so a containerized version is used for the OpenShift Container Platform installation instead. The installation process disables the default etcd service. The etcd package is slated to be removed from RHEL Atomic Host in the future.
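For example, the etcd service on a containerized host is managed under the etcd_container unit name, while the other units keep their RPM-style names; atomic-openshift-node is shown here as an assumed example of an unchanged unit:

# systemctl status etcd_container
# systemctl status atomic-openshift-node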
2.4.5. File Paths
All OpenShift Container Platform configuration files are placed in the same locations during containerized installation as RPM based installations and will survive os-tree upgrades.
However, the default image stream and template files are installed at /etc/origin/examples/ for containerized installations rather than the standard /usr/share/openshift/examples/, because that directory is read-only on RHEL Atomic Host.
2.4.6. Storage Requirements
RHEL Atomic Host installations normally have a very small root file system. However, the etcd, master, and node containers persist data in the /var/lib/ directory. Ensure that you have enough space on the root file system before installing OpenShift Container Platform. See the System Requirements section for details.
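A quick way to check the space available to that directory before installing is shown below; the mount point layout is system-specific, so treat this as an illustrative check only:

# df -h /var/lib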
2.4.7. Open vSwitch SDN Initialization
OpenShift SDN initialization requires that the Docker bridge be reconfigured and that Docker is restarted. This complicates the situation when the node is running within a container. When using the Open vSwitch (OVS) SDN, you will see the node start, reconfigure Docker, restart Docker (which restarts all containers), and finally start successfully.
In this case, the node service may fail to start and be restarted a few times, because the master services are also restarted along with Docker. The current implementation uses a workaround which relies on setting the Restart=always parameter in the Docker based systemd units.
2.5. Quick Installation
2.5.1. Overview
The quick installation method allows you to use an interactive CLI utility, the atomic-openshift-installer command, to install OpenShift Container Platform across a set of hosts. This installer can deploy OpenShift Container Platform components on targeted hosts by either installing RPMs or running containerized services.
While RHEL Atomic Host is supported for running containerized OpenShift Container Platform services, the installer is provided by an RPM and not available by default in RHEL Atomic Host. Therefore, it must be run from a Red Hat Enterprise Linux 7 system. The host initiating the installation does not need to be intended for inclusion in the OpenShift Container Platform cluster, but it can be.
This installation method is provided to make the installation experience easier by interactively gathering the data needed to run on each host. The installer is a self-contained wrapper intended for usage on a Red Hat Enterprise Linux (RHEL) 7 system.
In addition to running interactive installations from scratch, the atomic-openshift-installer
command can also be run or re-run using a predefined installation configuration file. This file can be used with the installer to:
- run an unattended installation,
- add nodes to an existing cluster,
- upgrade your cluster, or
- reinstall the OpenShift Container Platform cluster completely.
Alternatively, you can use the advanced installation method for more complex environments.
To install OpenShift Container Platform as a stand-alone registry, see Installing a Stand-alone Registry.
2.5.2. Before You Begin
The installer allows you to install OpenShift Container Platform master and node components on a defined set of hosts.
By default, any hosts you designate as masters during the installation process are automatically also configured as nodes so that the masters are configured as part of the OpenShift Container Platform SDN. The node component on the masters, however, is marked unschedulable, which blocks pods from being scheduled on those hosts. After the installation, you can mark them schedulable if you want.
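For example, after the installation you could mark a master schedulable with a command like the following (an illustrative sketch; substitute your master's host name):
# oc adm manage-node master.example.com --schedulable=true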
Before installing OpenShift Container Platform, you must first satisfy the prerequisites on your hosts, which includes verifying system and environment requirements and properly installing and configuring Docker. You must also be prepared to provide or validate the following information for each of your targeted hosts during the course of the installation:
- User name on the target host that should run the Ansible-based installation (can be root or non-root)
- Host name
- Whether to install components for master, node, or both
- Whether to use the RPM or containerized method
- Internal and external IP addresses
If you are installing OpenShift Container Platform using the containerized method (optional for RHEL but required for RHEL Atomic Host), see the Installing on Containerized Hosts topic to ensure that you understand the differences between these methods, then return to this topic to continue.
After following the instructions in the Prerequisites topic and deciding between the RPM and containerized methods, you can continue to run an interactive or unattended installation.
2.5.3. Running an Interactive Installation
Ensure you have read through Before You Begin.
You can start the interactive installation by running:
$ atomic-openshift-installer install
Then follow the on-screen instructions to install a new OpenShift Container Platform cluster.
After it has finished, ensure that you back up the ~/.config/openshift/installer.cfg.yml installation configuration file that is created, as it is required if you later want to re-run the installation, add hosts to the cluster, or upgrade your cluster. Then, verify the installation.
2.5.4. Defining an Installation Configuration File
The installer can use a predefined installation configuration file, which contains information about your installation, individual hosts, and cluster. When running an interactive installation, an installation configuration file based on your answers is created for you in ~/.config/openshift/installer.cfg.yml. The file is created if you are instructed to exit the installation to manually modify the configuration or when the installation completes. You can also create the configuration file manually from scratch to perform an unattended installation.
Installation Configuration File Specification
version: v2 1
variant: openshift-enterprise 2
variant_version: 3.7 3
ansible_log_path: /tmp/ansible.log 4
deployment:
  ansible_ssh_user: root 5
  hosts: 6
  - ip: 10.0.0.1 7
    hostname: master-private.example.com 8
    public_ip: 24.222.0.1 9
    public_hostname: master.example.com 10
    roles: 11
      - master
      - node
    containerized: true 12
    connect_to: 24.222.0.1 13
  - ip: 10.0.0.2
    hostname: node1-private.example.com
    public_ip: 24.222.0.2
    public_hostname: node1.example.com
    node_labels: {'region': 'infra'} 14
    roles:
      - node
    connect_to: 10.0.0.2
  - ip: 10.0.0.3
    hostname: node2-private.example.com
    public_ip: 24.222.0.3
    public_hostname: node2.example.com
    roles:
      - node
    connect_to: 10.0.0.3
  roles: 15
    master:
      <variable_name1>: "<value1>" 16
      <variable_name2>: "<value2>"
    node:
      <variable_name1>: "<value1>" 17
- 1: The version of this installation configuration file. As of OpenShift Container Platform 3.3, the only valid version here is v2.
- 2: The OpenShift Container Platform variant to install. For OpenShift Container Platform, set this to openshift-enterprise.
- 3: A valid version of your selected variant: 3.7, 3.6, 3.5, 3.4, 3.3, 3.2, or 3.1. If not specified, this defaults to the latest version for the specified variant.
- 4: Defines where the Ansible logs are stored. By default, this is the /tmp/ansible.log file.
- 5: Defines which user Ansible uses to SSH in to remote systems for gathering facts and for the installation. By default, this is the root user, but you can set it to any user that has sudo privileges.
- 6: Defines a list of the hosts onto which you want to install the OpenShift Container Platform master and node components.
- 7, 8: Required. Allows the installer to connect to the system and gather facts before proceeding with the install.
- 9, 10: Required for unattended installations. If these details are not specified, then this information is pulled from the facts gathered by the installer, and you are asked to confirm the details. If undefined for an unattended installation, the installation fails.
- 11: Determines the type of services that are installed. Specified as a list.
- 12: If set to true, containerized OpenShift Container Platform services are run on target master and node hosts instead of installed using RPM packages. If set to false or unset, the default RPM method is used. RHEL Atomic Host requires the containerized method, and it is automatically selected for you based on the detection of the /run/ostree-booted file. See Installing on Containerized Hosts for more details.
- 13: The IP address that Ansible attempts to connect to when installing, upgrading, or uninstalling the systems. If the configuration file was auto-generated, then this is the value you first enter for the host during that interactive install process.
- 14: Node labels can optionally be set per-host.
- 15: Defines a dictionary of roles across the deployment.
- 16, 17: Any Ansible variables that should only be applied to hosts assigned a role can be defined. For examples, see Configuring Ansible.
2.5.5. Running an Unattended Installation
Ensure you have read through Before You Begin.
Unattended installations allow you to define your hosts and cluster configuration in an installation configuration file before running the installer so that you do not have to go through all of the interactive installation questions and answers. It also allows you to resume an interactive installation you may have left unfinished, and quickly get back to where you left off.
To run an unattended installation, first define an installation configuration file at ~/.config/openshift/installer.cfg.yml. Then, run the installer with the -u
flag:
$ atomic-openshift-installer -u install
By default in interactive or unattended mode, the installer uses the configuration file located at ~/.config/openshift/installer.cfg.yml if the file exists. If it does not exist, attempting to start an unattended installation fails.
Alternatively, you can specify a different location for the configuration file using the -c
option, but doing so will require you to specify the file location every time you run the installation:
$ atomic-openshift-installer -u -c </path/to/file> install
After the unattended installation finishes, ensure that you back up the ~/.config/openshift/installer.cfg.yml file that was used, as it is required if you later want to re-run the installation, add hosts to the cluster, or upgrade your cluster. Then, verify the installation.
2.5.6. Verifying the Installation
Verify that the master is started and nodes are registered and reporting in Ready status. On the master host, run the following as root:
# oc get nodes
NAME                  STATUS                     AGE
master.example.com    Ready,SchedulingDisabled   165d
node1.example.com     Ready                      165d
node2.example.com     Ready                      165d
To verify that the web console is installed correctly, use the master host name and the web console port number to access the web console with a web browser.
For example, for a master host with a host name of master.openshift.com and using the default port of 8443, the web console would be found at https://master.openshift.com:8443/console.
Then, see What’s Next for the next steps on configuring your OpenShift Container Platform cluster.
2.5.7. Uninstalling OpenShift Container Platform
You can uninstall OpenShift Container Platform from all hosts in your cluster using the installer’s uninstall
command. By default, the installer uses the installation configuration file located at ~/.config/openshift/installer.cfg.yml if the file exists:
$ atomic-openshift-installer uninstall
Alternatively, you can specify a different location for the configuration file using the -c
option:
$ atomic-openshift-installer -c </path/to/file> uninstall
See the advanced installation method for more options.
2.5.8. What’s Next?
Now that you have a working OpenShift Container Platform instance, you can:
- Configure authentication; by default, authentication is set to Deny All.
- Configure the automatically-deployed integrated Docker registry.
- Configure the automatically-deployed router.
2.6. Advanced Installation
2.6.1. Overview
A reference configuration implemented using Ansible playbooks is available as the advanced installation method for installing an OpenShift Container Platform cluster. Familiarity with Ansible is assumed; however, you can use this configuration as a reference to create your own implementation using the configuration management tool of your choosing.
While RHEL Atomic Host is supported for running containerized OpenShift Container Platform services, the advanced installation method utilizes Ansible, which is not available in RHEL Atomic Host, and must therefore be run from a RHEL 7 system. The host initiating the installation does not need to be intended for inclusion in the OpenShift Container Platform cluster, but it can be.
Alternatively, a containerized version of the installer is available as a system container, which is currently a Technology Preview feature.
Alternatively, you can use the quick installation method if you prefer an interactive installation experience.
To install OpenShift Container Platform as a stand-alone registry, see Installing a Stand-alone Registry.
Running Ansible playbooks with the --tags
or --check
options is not supported by Red Hat.
2.6.2. Before You Begin
Before installing OpenShift Container Platform, you must first see the Prerequisites and Host Preparation topics to prepare your hosts. This includes verifying system and environment requirements per component type and properly installing and configuring Docker. It also includes installing Ansible version 2.3 or later, as the advanced installation method is based on Ansible playbooks and as such requires directly invoking Ansible.
If you are interested in installing OpenShift Container Platform using the containerized method (optional for RHEL but required for RHEL Atomic Host), see Installing on Containerized Hosts to ensure that you understand the differences between these methods, then return to this topic to continue.
For large-scale installs, including suggestions for optimizing install time, see the Scaling and Performance Guide.
After following the instructions in the Prerequisites topic and deciding between the RPM and containerized methods, you can continue in this topic to Configuring Ansible Inventory Files.
2.6.3. Configuring Ansible Inventory Files
The /etc/ansible/hosts file is Ansible’s inventory file for the playbook used to install OpenShift Container Platform. The inventory file describes the configuration for your OpenShift Container Platform cluster. You must replace the default contents of the file with your desired configuration.
The following sections describe commonly-used variables to set in your inventory file during an advanced installation, followed by example inventory files you can use as a starting point for your installation.
Many of the Ansible variables described are optional. Accepting the default values should suffice for development environments, but for production environments, it is recommended you read through and become familiar with the various options available.
The example inventories describe various environment topographies, including using multiple masters for high availability. You can choose an example that matches your requirements, modify it to match your own environment, and use it as your inventory file when running the advanced installation.
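As a minimal illustration only (host names and labels are placeholders, and production clusters typically set many more variables), a single-master inventory file might look like this:
[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

[masters]
master.example.com

[nodes]
master.example.com
node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"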
Image Version Policy
Images require a version number policy in order to maintain updates. See the Image Version Tag Policy section in the Architecture Guide for more information.
2.6.3.1. Configuring Cluster Variables
To assign environment variables during the Ansible install that apply more globally to your OpenShift Container Platform cluster overall, indicate the desired variables in the /etc/ansible/hosts file on separate, single lines within the [OSEv3:vars] section. For example:
[OSEv3:vars]
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_default_subdomain=apps.test.example.com
If a parameter value in the Ansible inventory file contains special characters, such as #, {, or }, you must double-escape the value (that is, enclose the value in both single and double quotation marks). For example, to use mypasswordwith###hashsigns as a value for the variable openshift_cloudprovider_openstack_password, declare it as openshift_cloudprovider_openstack_password='"mypasswordwith###hashsigns"' in the Ansible host inventory file.
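In the inventory file, that declaration would appear as follows (a sketch reusing the example value from above):
[OSEv3:vars]
openshift_cloudprovider_openstack_password='"mypasswordwith###hashsigns"'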
The following table describes variables for use with the Ansible installer that can be assigned cluster-wide:
Variable | Purpose |
---|---|
|
This variable sets the SSH user for the installer to use and defaults to |
|
If |
|
This variable sets which INFO messages are logged to the
For more information on debug log levels, see Configuring Logging Levels. |
|
If set to |
|
Whether to enable Network Time Protocol (NTP) on cluster nodes. Important: To prevent masters and nodes in the cluster from going out of sync, do not change the default value of this parameter. |
| This variable sets the parameter and arbitrary JSON values as per the requirement in your inventory hosts file. For example: openshift_master_admission_plugin_config={"ClusterResourceOverride":{"configuration":{"apiVersion":"v1","kind":"ClusterResourceOverrideConfig","memoryRequestToLimitPercent":"25","cpuRequestToLimitPercent":"25","limitCPUToMemoryPercent":"200"}}} |
| This variable enables API service auditing. See Audit Configuration for more information. |
| This variable overrides the host name for the cluster, which defaults to the host name of the master. |
| This variable overrides the public host name for the cluster, which defaults to the host name of the master. If you use an external load balancer, specify the address of the external load balancer. For example: openshift_master_cluster_public_hostname=openshift-ansible.public.example.com |
|
Optional. This variable defines the HA method when deploying multiple masters. Supports the |
|
This variable enables rolling restarts of HA masters (i.e., masters are taken down one at a time) when running the upgrade playbook directly. It defaults to |
|
This variable configures which OpenShift SDN plug-in to use for the pod network, which defaults to |
| This variable sets the identity provider. The default value is Deny All. If you use a supported identity provider, configure OpenShift Container Platform to use it. |
| These variables are used to configure custom certificates which are deployed as part of the installation. See Configuring Custom Certificates for more information. |
| |
| Provide the location of the custom certificates for the hosted router. |
|
Validity of the auto-generated registry certificate in days. Defaults to |
|
Validity of the auto-generated CA certificate in days. Defaults to |
|
Validity of the auto-generated node certificate in days. Defaults to |
|
Validity of the auto-generated master certificate in days. Defaults to |
|
Validity of the auto-generated external etcd certificates in days. Controls validity for etcd CA, peer, server and client certificates. Defaults to |
|
Set to |
| These variables override defaults for session options in the OAuth configuration. See Configuring Session Options for more information. |
| |
| |
| |
|
This variable configures |
|
This variable configures the subnet in which services will be created within the OpenShift Container Platform SDN. This network block should be private and must not conflict with any existing network blocks in your infrastructure to which pods, nodes, or the master may require access, or the installation will fail. Defaults to |
| This variable overrides the default subdomain to use for exposed routes. |
|
This variable specifies the service proxy mode to use: either |
| Default node selector for automatically deploying router pods. See Configuring Node Host Labels for details. |
| Default node selector for automatically deploying registry pods. See Configuring Node Host Labels for details. |
| This variable enables the template service broker by specifying one or more namespaces whose templates will be served by the broker. |
|
Default node selector for automatically deploying template service broker pods, defaults |
| This variable overrides the node selector that projects will use by default when placing pods. |
|
This variable overrides the SDN cluster network CIDR block. This is the network from which pod IPs are assigned. This network block should be a private block and must not conflict with existing network blocks in your infrastructure to which pods, nodes, or the master may require access. Defaults to |
|
This variable specifies the size of the per host subnet allocated for pod IPs by OpenShift Container Platform SDN. Defaults to |
|
This variable enables flannel as an alternative networking layer instead of the default SDN. If enabling flannel, disable the default SDN with the |
| OpenShift Container Platform adds the specified additional registry or registries to the docker configuration. These are the registries to search. |
|
OpenShift Container Platform adds the specified additional insecure registry or registries to the docker configuration. For any of these registries, secure sockets layer (SSL) is not verified. Also, add these registries to |
|
OpenShift Container Platform adds the specified blocked registry or registries to the docker configuration. Block the listed registries. Setting this to |
|
This variable sets the host name for integration with the metrics console by overriding |
| This variable is a cluster identifier unique to the AWS Availability Zone. Using this avoids potential issues in Amazon Web Service (AWS) with multiple zones or multiple clusters. See Labeling Clusters for AWS for details. |
| Use this variable to specify a container image tag to install or configure. |
| Use this variable to specify an RPM version to install or configure. |
If you modify the openshift_image_tag
or the openshift_pkg_version
variables after the cluster is set up, then an upgrade can be triggered, resulting in downtime.
- If openshift_image_tag is set, its value is used for all hosts in containerized environments, even those that have another version installed.
- If openshift_pkg_version is set, its value is used for all hosts in RPM-based environments, even those that have another version installed.
2.6.3.2. Configuring Deployment Type
Various defaults used throughout the playbooks and roles used by the installer are based on the deployment type configuration (usually defined in an Ansible inventory file).
Ensure the openshift_deployment_type
parameter in your inventory file’s [OSEv3:vars]
section is set to openshift-enterprise
to install the OpenShift Container Platform variant:
[OSEv3:vars]
openshift_deployment_type=openshift-enterprise
2.6.3.3. Configuring Host Variables
To assign environment variables to hosts during the Ansible installation, indicate the desired variables in the /etc/ansible/hosts file after the host entry in the [masters] or [nodes] sections. For example:
[masters]
ec2-52-6-179-239.compute-1.amazonaws.com openshift_public_hostname=ose3-master.public.example.com
The following table describes variables for use with the Ansible installer that can be assigned to individual host entries:
Variable | Purpose |
---|---|
| This variable overrides the internal cluster host name for the system. Use this when the system’s default IP address does not resolve to the system host name. |
| This variable overrides the system’s public host name. Use this for cloud installations, or for hosts on networks using a network address translation (NAT). |
|
This variable overrides the cluster internal IP address for the system. Use this when using an interface that is not configured with the default route. |
| This variable overrides the system’s public IP address. Use this for cloud installations, or for hosts on networks using a network address translation (NAT). |
| If set to true, containerized OpenShift Container Platform services are run on the target master and node hosts instead of installed using RPM packages. If set to false or unset, the default RPM method is used. RHEL Atomic Host requires the containerized method, and is automatically selected for you based on the detection of the /run/ostree-booted file. See Installing on Containerized Hosts for more details. Containerized installations are supported starting in OpenShift Container Platform 3.1.1. |
| This variable adds labels to nodes during installation. See Configuring Node Host Labels for more details. |
|
This variable is used to configure |
|
This variable configures additional "--log-driver json-file --log-opt max-size=1M --log-opt max-file=3" "--log-driver journald"
Do not use when running |
| This variable configures whether the host is marked as a schedulable node, meaning that it is available for placement of new pods. See Configuring Schedulability on Masters. |
2.6.3.4. Configuring Master API and Console Ports
To configure the default ports used by the master API and web console, configure the following variables in the /etc/ansible/hosts file:
Variable | Purpose |
---|---|
| This variable sets the port number to access the OpenShift Container Platform API. |
| This variable sets the console port number to access the OpenShift Container Platform console with a web browser. |
For example:
openshift_master_api_port=3443
openshift_master_console_port=8756
2.6.3.5. Configuring Cluster Pre-install Checks
Pre-install checks are a set of diagnostic tasks that run as part of the openshift_health_checker Ansible role. They run prior to an Ansible installation of OpenShift Container Platform, ensure that required inventory values are set, and identify potential issues on a host that can prevent or interfere with a successful installation.
The following table describes available pre-install checks that will run before every Ansible installation of OpenShift Container Platform:
Check Name | Purpose |
---|---|
|
This check ensures that a host has the recommended amount of memory for the specific deployment of OpenShift Container Platform. Default values have been derived from the latest installation documentation. A user-defined value for minimum memory requirements may be set by setting the |
|
This check only runs on etcd, master, and node hosts. It ensures that the mount path for an OpenShift Container Platform installation has sufficient disk space remaining. Recommended disk values are taken from the latest installation documentation. A user-defined value for minimum disk space requirements may be set by setting |
|
Only runs on hosts that depend on the docker daemon (nodes and containerized installations). Checks that docker's total usage does not exceed a user-defined limit. If no user-defined limit is set, docker's maximum usage threshold defaults to 90% of the total size available. The threshold limit for total percent usage can be set with a variable in your inventory file: |
|
Ensures that the docker daemon is using a storage driver supported by OpenShift Container Platform. If the |
| Attempts to ensure that images required by an OpenShift Container Platform installation are available either locally or in at least one of the configured container image registries on the host machine. |
|
Specifies the generic release of OpenShift Container Platform for containerized installations. For RPM installations, set a |
|
Runs on |
| Runs prior to non-containerized installations of OpenShift Container Platform. Ensures that RPM packages required for the current installation are available. |
|
Checks whether a |
To disable specific pre-install checks, include the variable openshift_disable_check
with a comma-delimited list of check names in your inventory file. For example:
openshift_disable_check=memory_availability,disk_availability
A similar set of health checks meant to run for diagnostics on existing clusters can be found in Ansible-based Health Checks. Another set of checks for checking certificate expiration can be found in Redeploying Certificates.
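Because these are ordinary Ansible variables, the same setting can also be passed on the command line when invoking the playbook, for example (an illustrative invocation; the playbook path assumes the standard openshift-ansible RPM layout):
$ ansible-playbook -i /etc/ansible/hosts \
    /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml \
    -e openshift_disable_check=memory_availability,disk_availability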
2.6.3.6. Configuring System Containers
All system container components are Technology Preview features in OpenShift Container Platform 3.7. They must not be used in production and they are not supported for upgrades to OpenShift Container Platform 3.7. During this phase, they are only meant for use with new cluster installations in non-production environments.
System containers provide a way to containerize services that need to run before the docker
daemon is running. They are Docker-formatted containers that rely on host technologies such as OSTree for storage, runC for the runtime, and systemd for service management.
System containers are therefore stored and run outside of the traditional docker
service. For more details on system container technology, see Running System Containers in the Red Hat Enterprise Linux Atomic Host: Managing Containers documentation.
You can configure your OpenShift Container Platform installation to run certain components as system containers instead of their RPM or standard containerized methods. Currently, the docker
and etcd components can be run as system containers in OpenShift Container Platform.
System containers are currently OS-specific because they require specific versions of atomic
and systemd. For example, different system containers are created for RHEL, Fedora, or CentOS. Ensure that the system containers you are using match the OS of the host they will run on. OpenShift Container Platform only supports RHEL and RHEL Atomic as the host OS, so by default system containers built for RHEL are used.
2.6.3.6.1. Running Docker as a System Container
All system container components are Technology Preview features in OpenShift Container Platform 3.7. They must not be used in production and they are not supported for upgrades to OpenShift Container Platform 3.7. During this phase, they are only meant for use with new cluster installations in non-production environments.
The traditional method for using docker
in an OpenShift Container Platform cluster is an RPM package installation. For Red Hat Enterprise Linux (RHEL) systems, it must be specifically installed; for RHEL Atomic Host systems, it is provided by default.
However, you can configure your OpenShift Container Platform installation to alternatively run docker
on node hosts as a system container. When using the system container method, the container-engine
container image and systemd service is used on the host instead of the docker
package and service.
To run docker
as a system container:
Because the default storage back end for Docker on RHEL 7 is a thin pool on loopback devices, for any RHEL systems you must still configure a thin pool logical volume for docker to use before running the OpenShift Container Platform installation. You can skip these steps for any RHEL Atomic Host systems.
For any RHEL systems, perform the Docker storage configuration steps described in the Host Preparation topic. After completing the storage configuration steps, you can leave the RPM installed.
Set the following cluster variable to True in your inventory file in the [OSEv3:vars] section:
openshift_docker_use_system_container=True
When using the system container method, the following inventory variables for docker are ignored:
- docker_version
- docker_upgrade
Further, the following inventory variable must not be used:
- openshift_docker_options
You can also force docker
in the system container to use a specific container registry and repository when pulling the container-engine
image instead of from the default registry.access.redhat.com/openshift3/
. To do so, set the following cluster variable in your inventory file in the [OSEv3:vars]
section:
openshift_docker_systemcontainer_image_override="<registry>/<user>/<image>:<tag>"
2.6.3.6.2. Running etcd as a System Container
All system container components are Technology Preview features in OpenShift Container Platform 3.7. They must not be used in production and they are not supported for upgrades to OpenShift Container Platform 3.7. During this phase, they are only meant for use with new cluster installations in non-production environments.
When using the RPM-based installation method for OpenShift Container Platform, etcd is installed using RPM packages on any RHEL systems. When using the containerized installation method, the rhel7/etcd
image is used instead for RHEL or RHEL Atomic Hosts.
However, you can configure your OpenShift Container Platform installation to alternatively run etcd as a system container. Whereas the standard containerized method uses a systemd service named etcd_container
, the system container method uses the service name etcd, the same as the RPM-based method. The data directory for etcd using this method is /var/lib/etcd.
To run etcd as a system container, set the following cluster variable in your inventory file in the [OSEv3:vars]
section:
openshift_use_etcd_system_container=True
2.6.3.7. Configuring a Registry Location
If you are using an image registry other than the default at registry.access.redhat.com
, specify the desired registry within the /etc/ansible/hosts file.
oreg_url=example.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true
Variable | Purpose |
---|---|
|
Set to the alternate image location. Necessary if you are not using the default registry at |
|
Set to |
2.6.3.8. Configuring a Registry Route
To allow users to push and pull images to the internal Docker registry from outside of the OpenShift Container Platform cluster, configure the registry route in the /etc/ansible/hosts file. By default, the registry route is docker-registry-default.router.default.svc.cluster.local.
Variable | Purpose |
---|---|
|
Set to the value of the desired registry route. The route contains either a name that resolves to an infrastructure node where a router manages communication or the subdomain that you set as the default application subdomain wildcard value. For example, if you set the |
| Set the paths to the registry certificates. If you do not provide values for the certificate locations, certificates are generated. You can define locations for the following certificates:
|
| Set to one of the following values:
|
For example:
openshift_hosted_registry_routehost=<path>
openshift_hosted_registry_routetermination=reencrypt
openshift_hosted_registry_routecertificates= "{'certfile': '<path>/org-cert.pem', 'keyfile': '<path>/org-privkey.pem', 'cafile': '<path>/org-chain.pem'}"
2.6.3.9. Configuring the Registry Console
If you are using a Cockpit registry console image other than the default or require a specific version of the console, specify the desired registry within the /etc/ansible/hosts file.
openshift_cockpit_deployer_prefix=<registry-name>/<namespace>/
openshift_cockpit_deployer_version=<cockpit-image-tag>
Variable | Purpose |
---|---|
| Specify the URL and path to the directory where the image is located. |
| Specify the Cockpit image version. |
For example: If your image is at registry.example.com/openshift3/registry-console and you require version 1.4.1, enter:
openshift_cockpit_deployer_prefix='registry.example.com/openshift3/'
openshift_cockpit_deployer_version='1.4.1'
2.6.3.9.1. Configuring Registry Storage
There are several options for enabling registry storage when using the advanced install:
Option A: NFS Host Group
The use of NFS for registry storage is not recommended in OpenShift Container Platform.
When the following variables are set, an NFS volume is created during an advanced install with the path <nfs_directory>/<volume_name> on the host within the [nfs]
host group. For example, the volume path using these options would be /exports/registry:
[OSEv3:vars]
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
Option B: External NFS Host
The use of NFS for registry storage is not recommended in OpenShift Container Platform.
To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host. The remote volume path using the following options would be nfs.example.com:/exports/registry.
[OSEv3:vars]
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_host=nfs.example.com
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
Option C: OpenStack Platform
An OpenStack storage configuration must already exist.
[OSEv3:vars]
openshift_hosted_registry_storage_kind=openstack
openshift_hosted_registry_storage_access_modes=['ReadWriteOnce']
openshift_hosted_registry_storage_openstack_filesystem=ext4
openshift_hosted_registry_storage_openstack_volumeID=3a650b4f-c8c5-4e0a-8ca5-eaee11f16c57
openshift_hosted_registry_storage_volume_size=10Gi
Option D: AWS or Another S3 Storage Solution
The simple storage solution (S3) bucket must already exist.
[OSEv3:vars]
#openshift_hosted_registry_storage_kind=object
#openshift_hosted_registry_storage_provider=s3
#openshift_hosted_registry_storage_s3_accesskey=access_key_id
#openshift_hosted_registry_storage_s3_secretkey=secret_access_key
#openshift_hosted_registry_storage_s3_bucket=bucket_name
#openshift_hosted_registry_storage_s3_region=bucket_region
#openshift_hosted_registry_storage_s3_chunksize=26214400
#openshift_hosted_registry_storage_s3_rootdirectory=/registry
#openshift_hosted_registry_pullthrough=true
#openshift_hosted_registry_acceptschema2=true
#openshift_hosted_registry_enforcequota=true
If you are using a different S3 service, such as Minio or ExoScale, also add the region endpoint parameter:
openshift_hosted_registry_storage_s3_regionendpoint=https://myendpoint.example.com/
Option E: Google Cloud Storage (GCS) bucket on Google Compute Engine (GCE)
A GCS bucket must already exist.
[OSEv3:vars]
openshift_hosted_registry_storage_provider=gcs
openshift_hosted_registry_storage_gcs_bucket=bucket01
openshift_hosted_registry_storage_gcs_keyfile=test.key
openshift_hosted_registry_storage_gcs_rootdirectory=/registry
2.6.3.10. Configuring Router Sharding
Router sharding support is enabled by supplying the correct data to the inventory. The variable openshift_hosted_routers
holds the data, which is in the form of a list. If no data is passed, then a default router is created. There are multiple combinations of router sharding. The following example supports routers on separate nodes:
openshift_hosted_routers=[{'name': 'router1', 'certificate': {'certfile': '/path/to/certificate/abc.crt', 'keyfile': '/path/to/certificate/abc.key', 'cafile': '/path/to/certificate/ca.crt'}, 'replicas': 1, 'serviceaccount': 'router', 'namespace': 'default', 'stats_port': 1936, 'edits': [], 'images': 'openshift3/ose-${component}:${version}', 'selector': 'type=router1', 'ports': ['80:80', '443:443']}, {'name': 'router2', 'certificate': {'certfile': '/path/to/certificate/xyz.crt', 'keyfile': '/path/to/certificate/xyz.key', 'cafile': '/path/to/certificate/ca.crt'}, 'replicas': 1, 'serviceaccount': 'router', 'namespace': 'default', 'stats_port': 1936, 'edits': [{'action': 'append', 'key': 'spec.template.spec.containers[0].env', 'value': {'name': 'ROUTE_LABELS', 'value': 'route=external'}}], 'images': 'openshift3/ose-${component}:${version}', 'selector': 'type=router2', 'ports': ['80:80', '443:443']}]
2.6.3.11. Configuring GlusterFS Persistent Storage
GlusterFS can be configured to provide persistent storage and dynamic provisioning for OpenShift Container Platform. It can be used both containerized within OpenShift Container Platform and non-containerized on its own nodes.
2.6.3.11.1. Configuring Containerized GlusterFS Persistent Storage
This option utilizes Red Hat Container Native Storage (CNS) for configuring containerized GlusterFS persistent storage in OpenShift Container Platform.
See Containerized GlusterFS Considerations for specific host preparations and prerequisites.
In your inventory file, add glusterfs in the [OSEv3:children] section to enable the [glusterfs] group:
[OSEv3:children]
masters
nodes
glusterfs
(Optional) Include any of the following role variables in the [OSEv3:vars] section that you wish to change:
[OSEv3:vars]
openshift_storage_glusterfs_namespace=glusterfs 1
openshift_storage_glusterfs_name=storage 2
Add a [glusterfs] section with entries for each storage node that will host the GlusterFS storage and include the glusterfs_ip and glusterfs_devices parameters in the form:
<hostname_or_ip> glusterfs_ip=<ip_address> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs]
192.168.10.11 glusterfs_ip=192.168.10.11 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
192.168.10.12 glusterfs_ip=192.168.10.12 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
192.168.10.13 glusterfs_ip=192.168.10.13 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
Set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Set glusterfs_ip to the IP address that will be used by pods to communicate with the GlusterFS node.
Add the hosts listed under [glusterfs] to the [nodes] group as well:
[nodes]
192.168.10.11
192.168.10.12
192.168.10.13
After completing the cluster installation per Running the Advanced Installation, run the following from a master to verify the necessary objects were successfully created:
Verify that the GlusterFS StorageClass was created:
# oc get storageclass
NAME                TYPE
glusterfs-storage   kubernetes.io/glusterfs
Verify that the route was created:
# oc get routes
NAME                     HOST/PORT                                        PATH      SERVICES           PORT      TERMINATION   WILDCARD
heketi-glusterfs-route   heketi-glusterfs-default.cloudapps.example.com             heketi-glusterfs   <all>                   None
Note: The name for the route will be heketi-glusterfs-route unless the default glusterfs value was overridden using the openshift_glusterfs_storage_name variable in the inventory file.
Use curl to verify the route works correctly:
# curl http://heketi-glusterfs-default.cloudapps.example.com/hello
Hello from Heketi.
After successful installation, see Operations on a Red Hat Gluster Storage Pod in an OpenShift Environment to check the status of the GlusterFS clusters.
Dynamic provisioning of GlusterFS volumes can occur by creating a PVC to request storage.
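For example, a minimal PersistentVolumeClaim that requests storage from the glusterfs-storage storage class shown above might look like the following (a sketch; the claim name and size are placeholders):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  storageClassName: glusterfs-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi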
2.6.3.12. Configuring the OpenShift Container Registry
Additional configuration options are available at installation time for the OpenShift Container Registry.
If no registry storage options are used, the default OpenShift Container Platform registry is ephemeral and all data will be lost if the pod no longer exists. OpenShift Container Platform also supports a single-node NFS-backed registry, but this option lacks redundancy and reliability compared with the GlusterFS-backed option.
2.6.3.12.1. Configuring a Containerized GlusterFS-Backed Registry
Similar to configuring containerized GlusterFS for persistent storage, GlusterFS storage can be configured and deployed for an OpenShift Container Registry during the initial installation of the cluster to offer redundant and more reliable storage for the registry.
See Containerized GlusterFS Considerations for specific host preparations and prerequisites.
Configuration of storage for an OpenShift Container Registry is very similar to configuration for GlusterFS persistent storage in that it can be either containerized or non-containerized. For this containerized method, the following exceptions and additions apply:
In your inventory file, add glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs_registry] group:
[OSEv3:children]
masters
nodes
glusterfs_registry
Add the following role variable in the [OSEv3:vars] section to enable the GlusterFS-backed registry, provided that the glusterfs_registry group name and the [glusterfs_registry] group exist:
[OSEv3:vars]
openshift_hosted_registry_storage_kind=glusterfs
It is recommended to have at least three registry pods, so set the following role variable in the [OSEv3:vars] section:
openshift_hosted_registry_replicas=3
If you want to specify the volume size for the GlusterFS-backed registry, set the following role variable in the [OSEv3:vars] section:
openshift_hosted_registry_storage_volume_size=10Gi
If unspecified, the volume size defaults to 5Gi.
The installer will deploy the OpenShift Container Registry pods and associated routers on nodes containing the region=infra label. Add this label on at least one node entry in the [nodes] section, otherwise the registry deployment will fail. For example:
[nodes]
192.168.10.14 openshift_schedulable=True openshift_node_labels="{'region': 'infra'}"
Add a [glusterfs_registry] section with entries for each storage node that will host the GlusterFS-backed registry and include the glusterfs_ip and glusterfs_devices parameters in the form:
<hostname_or_ip> glusterfs_ip=<ip_address> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs_registry]
192.168.10.14 glusterfs_ip=192.168.10.14 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
192.168.10.15 glusterfs_ip=192.168.10.15 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
192.168.10.16 glusterfs_ip=192.168.10.16 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
Set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Set glusterfs_ip to the IP address that will be used by pods to communicate with the GlusterFS node.
Add the hosts listed under [glusterfs_registry] to the [nodes] group as well:
[nodes]
192.168.10.14
192.168.10.15
192.168.10.16
After successful installation, see Operations on a Red Hat Gluster Storage Pod in an OpenShift Environment to check the status of the GlusterFS clusters.
2.6.3.13. Configuring Global Proxy Options
If your hosts require use of an HTTP or HTTPS proxy in order to connect to external hosts, there are many components that must be configured to use the proxy, including masters, Docker, and builds. Node services connect only to the master API, which does not require external access, so they do not need to be configured to use a proxy.
In order to simplify this configuration, the following Ansible variables can be specified at a cluster or host level to apply these settings uniformly across your environment.
See Configuring Global Build Defaults and Overrides for more information on how the proxy environment is defined for builds.
Variable | Purpose |
---|---|
|
This variable specifies the |
|
This variable specifies the |
|
This variable is used to set the The host names that do not use the defined proxy include:
|
|
This boolean variable specifies whether or not the names of all defined OpenShift hosts and |
|
This variable defines the |
|
This variable defines the |
|
This variable defines the |
|
This variable defines the HTTP proxy used by |
|
This variable defines the HTTPS proxy used by |
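For illustration, cluster-wide proxy settings in the [OSEv3:vars] section might look like the following (an illustrative sketch; the proxy URLs and domain are placeholders, and you should verify the exact variable names against your openshift-ansible version):
[OSEv3:vars]
openshift_http_proxy=http://proxy.example.com:3128
openshift_https_proxy=http://proxy.example.com:3128
openshift_no_proxy='.internal.example.com'
openshift_generate_no_proxy_hosts=True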
2.6.3.14. Configuring the Firewall
- If you are changing the default firewall, ensure that each host in your cluster is using the same firewall type to prevent inconsistencies.
- Do not use firewalld with OpenShift Container Platform installed on Atomic Host. firewalld is not supported on Atomic Host.
While iptables is the default firewall, firewalld is recommended for new installations.
OpenShift Container Platform uses iptables as the default firewall, but you can configure your cluster to use firewalld during the install process.
Because iptables is the default firewall, OpenShift Container Platform is designed to have it configured automatically. However, iptables rules can break OpenShift Container Platform if not configured correctly. The advantages of firewalld include allowing multiple objects to safely share the firewall rules.
To use firewalld as the firewall for an OpenShift Container Platform installation, add the os_firewall_use_firewalld
variable to the list of configuration variables in the Ansible host file at install:
[OSEv3:vars]
os_firewall_use_firewalld=True 1
- 1: Setting this variable to true opens the required ports and adds rules to the default zone, ensuring that firewalld is configured correctly.
Using the firewalld default configuration comes with limited configuration options, and cannot be overridden. For example, while you can set up a storage network with interfaces in multiple zones, the interface that nodes communicate on must be in the default zone.
2.6.3.15. Configuring Schedulability on Masters
Any hosts you designate as masters during the installation process should also be configured as nodes so that the masters are configured as part of the OpenShift SDN. You must do so by adding entries for these hosts to the [nodes]
section:
[nodes]
master.example.com
In order to ensure that your masters are not burdened with running pods, they are automatically marked unschedulable by default by the installer, meaning that new pods cannot be placed on the hosts. This is the same as setting the openshift_schedulable=False
host variable.
You can manually set a master host to schedulable during installation using the openshift_schedulable=true
host variable, though this is not recommended in production environments:
[nodes]
master.example.com openshift_schedulable=true
If you want to change the schedulability of a host post-installation, see Marking Nodes as Unschedulable or Schedulable.
2.6.3.16. Configuring Node Host Labels
You can assign labels to node hosts during the Ansible install by configuring the /etc/ansible/hosts file. Labels are useful for determining the placement of pods onto nodes using the scheduler. Other than region=infra
(discussed in Configuring Dedicated Infrastructure Nodes), the actual label names and values are arbitrary and can be assigned however you see fit per your cluster’s requirements.
To assign labels to a node host during an Ansible install, use the openshift_node_labels
variable with the desired labels added to the desired node host entry in the [nodes]
section. In the following example, labels are set for a region called primary
and a zone called east
:
[nodes]
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
2.6.3.16.1. Configuring Dedicated Infrastructure Nodes
The openshift_router_selector
and openshift_registry_selector
Ansible settings determine the label selectors used when placing registry and router pods. They are set to region=infra
by default:
# default selectors for router and registry services
# openshift_router_selector='region=infra'
# openshift_registry_selector='region=infra'
The default router and registry will be automatically deployed during installation if nodes exist in the [nodes]
section that match the selector settings. For example:
[nodes]
infra-node1.example.com openshift_node_labels="{'region': 'infra','zone': 'default'}"
The registry and router are only able to run on node hosts with the region=infra
label. Ensure that at least one node host in your OpenShift Container Platform environment has the region=infra
label.
It is recommended for production environments that you maintain dedicated infrastructure nodes where the registry and router pods can run separately from pods used for user applications.
If you do not intend to use OpenShift Container Platform to manage the registry and router, configure the following Ansible settings:
openshift_hosted_manage_registry=false
openshift_hosted_manage_router=false
If you are using an image registry other than the default registry.access.redhat.com
, you need to specify the desired registry in the /etc/ansible/hosts file.
As described in Configuring Schedulability on Masters, master hosts are marked unschedulable by default. If you label a master host with region=infra
and have no other dedicated infrastructure nodes, you must also explicitly mark these master hosts as schedulable. Otherwise, the registry and router pods cannot be placed anywhere:
[nodes]
master.example.com openshift_node_labels="{'region': 'infra','zone': 'default'}" openshift_schedulable=true
2.6.3.17. Configuring Session Options
Session options in the OAuth configuration are configurable in the inventory file. By default, Ansible populates a sessionSecretsFile
with generated authentication and encryption secrets so that sessions generated by one master can be decoded by the others. The default location is /etc/origin/master/session-secrets.yaml, and this file will only be re-created if deleted on all masters.
You can set the session name and maximum number of seconds with openshift_master_session_name
and openshift_master_session_max_seconds
:
openshift_master_session_name=ssn
openshift_master_session_max_seconds=3600
If provided, openshift_master_session_auth_secrets and openshift_master_session_encryption_secrets must be of equal length.
For openshift_master_session_auth_secrets
, used to authenticate sessions using HMAC, it is recommended to use secrets with 32 or 64 bytes:
openshift_master_session_auth_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
For openshift_master_encryption_secrets
, used to encrypt sessions, secrets must be 16, 24, or 32 characters long, to select AES-128, AES-192, or AES-256:
openshift_master_session_encryption_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
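One convenient way to generate random values of suitable lengths is the openssl rand command (an illustrative approach, not part of the installer): base64-encoding 24 random bytes yields a 32-character string, and 48 bytes yields 64 characters.
$ openssl rand -base64 24
$ openssl rand -base64 48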
2.6.3.18. Configuring Custom Certificates
Custom serving certificates for the public host names of the OpenShift Container Platform API and web console can be deployed during an advanced installation and are configurable in the inventory file.
Custom certificates should only be configured for the host name associated with the publicMasterURL
which can be set using openshift_master_cluster_public_hostname
. Using a custom serving certificate for the host name associated with the masterURL
(openshift_master_cluster_hostname
) will result in TLS errors as infrastructure components will attempt to contact the master API using the internal masterURL
host.
Certificate and key file paths can be configured using the openshift_master_named_certificates
cluster variable:
openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "cafile": "/path/to/custom-ca1.crt"}]
File paths must be local to the system where Ansible will be run. Certificates are copied to master hosts and are deployed within the /etc/origin/master/named_certificates/ directory.
Ansible detects a certificate’s Common Name
and Subject Alternative Names
. Detected names can be overridden by providing the "names"
key when setting openshift_master_named_certificates
:
openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "names": ["public-master-host.com"], "cafile": "/path/to/custom-ca1.crt"}]
Certificates configured using openshift_master_named_certificates
are cached on masters, meaning that each additional Ansible run with a different set of certificates results in all previously deployed certificates remaining in place on master hosts and within the master configuration file.
If you would like openshift_master_named_certificates
to be overwritten with the provided value (or no value), specify the openshift_master_overwrite_named_certificates
cluster variable:
openshift_master_overwrite_named_certificates=true
For a more complete example, consider the following cluster variables in an inventory file:
openshift_master_cluster_method=native
openshift_master_cluster_hostname=lb-internal.openshift.com
openshift_master_cluster_public_hostname=custom.openshift.com
To overwrite the certificates on a subsequent Ansible run, you could set the following:
openshift_master_named_certificates=[{"certfile": "/root/STAR.openshift.com.crt", "keyfile": "/root/STAR.openshift.com.key", "names": ["custom.openshift.com"]}]
openshift_master_overwrite_named_certificates=true
2.6.3.19. Configuring Certificate Validity
By default, the certificates used to govern the etcd, master, and kubelet expire after two to five years. The validity (length in days until they expire) for the auto-generated registry, CA, node, and master certificates can be configured during installation using the following variables (default values shown):
[OSEv3:vars]
openshift_hosted_registry_cert_expire_days=730
openshift_ca_cert_expire_days=1825
openshift_node_cert_expire_days=730
openshift_master_cert_expire_days=730
etcd_ca_default_days=1825
These values are also used when redeploying certificates via Ansible post-installation.
2.6.3.20. Configuring Cluster Metrics
The OpenShift Container Platform web console uses the data coming from the Hawkular Metrics service to display its graphs. The metrics public URL can be set during cluster installation using the openshift_metrics_hawkular_hostname
Ansible variable, which defaults to:
https://hawkular-metrics.{{openshift_master_default_subdomain}}/hawkular/metrics
If you alter this variable, ensure the host name is accessible via your router.
In accordance with upstream Kubernetes rules, metrics can be collected only on the default interface of eth0.
You must set an openshift_master_default_subdomain
value to deploy metrics.
2.6.3.20.1. Configuring Metrics Storage
The openshift_metrics_cassandra_storage_type
variable must be set in order to use persistent storage for metrics. If openshift_metrics_cassandra_storage_type
is not set, then cluster metrics data is stored in an emptyDir
volume, which will be deleted when the Cassandra pod terminates.
There are three options for enabling cluster metrics storage when using the advanced install:
Option A: Dynamic
If your OpenShift Container Platform environment supports dynamic volume provisioning for your cloud provider, use the following variable:
[OSEv3:vars]
openshift_metrics_cassandra_storage_type=dynamic
If there are multiple default dynamically provisioned volume types, such as gluster-storage and glusterfs-storage-block, you can specify the provisioned volume type by variable. For example, openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-storage-block
.
Check Volume Configuration for more information on using DynamicProvisioningEnabled
to enable or disable dynamic provisioning.
Option B: NFS Host Group
The use of NFS for metrics storage is not recommended in OpenShift Container Platform.
When the following variables are set, an NFS volume is created during an advanced install with path <nfs_directory>/<volume_name> on the host within the [nfs]
host group. For example, the volume path using these options would be /exports/metrics:
[OSEv3:vars]
openshift_metrics_storage_kind=nfs
openshift_metrics_storage_access_modes=['ReadWriteOnce']
openshift_metrics_storage_nfs_directory=/exports
openshift_metrics_storage_nfs_options='*(rw,root_squash)'
openshift_metrics_storage_volume_name=metrics
openshift_metrics_storage_volume_size=10Gi
Option C: External NFS Host
The use of NFS for metrics storage is not recommended in OpenShift Container Platform.
To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host.
[OSEv3:vars]

openshift_metrics_storage_kind=nfs
openshift_metrics_storage_access_modes=['ReadWriteOnce']
openshift_metrics_storage_host=nfs.example.com
openshift_metrics_storage_nfs_directory=/exports
openshift_metrics_storage_volume_name=metrics
openshift_metrics_storage_volume_size=10Gi
The remote volume path using the following options would be nfs.example.com:/exports/metrics.
2.6.3.21. Configuring Cluster Logging
Cluster logging is not set to automatically deploy by default. Set the following to enable cluster logging when using the advanced installation method:
[OSEv3:vars]

openshift_logging_install_logging=true
2.6.3.21.1. Configuring Logging Storage
The openshift_logging_es_pvc_dynamic variable must be set in order to use persistent storage for logging. If openshift_logging_es_pvc_dynamic is not set, then cluster logging data is stored in an emptyDir volume, which will be deleted when the Elasticsearch pod terminates.
There are three options for enabling cluster logging storage when using the advanced install:
Option A: Dynamic
If your OpenShift Container Platform environment supports dynamic volume provisioning for your cloud provider, use the following variable:
[OSEv3:vars]

openshift_logging_es_pvc_dynamic=true
If there are multiple default dynamically provisioned volume types, such as gluster-storage and glusterfs-storage-block, you can specify the provisioned volume type by variable. For example, openshift_logging_es_pvc_storage_class_name=glusterfs-storage-block.
Check Volume Configuration for more information on using DynamicProvisioningEnabled to enable or disable dynamic provisioning.
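For illustration, dynamic logging storage with an explicit storage class might be requested as follows; again, the storage class name is an example and must match one defined in your cluster:

[OSEv3:vars]
openshift_logging_install_logging=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_storage_class_name=glusterfs-storage-block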
Option B: NFS Host Group
The use of NFS for logging storage is not recommended in OpenShift Container Platform.
When the following variables are set, an NFS volume is created during an advanced install with path <nfs_directory>/<volume_name> on the host within the [nfs] host group. For example, the volume path using these options would be /exports/logging:
[OSEv3:vars]

openshift_logging_storage_kind=nfs
openshift_logging_storage_access_modes=['ReadWriteOnce']
openshift_logging_storage_nfs_directory=/exports
openshift_logging_storage_nfs_options='*(rw,root_squash)'
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=10Gi
Option C: External NFS Host
The use of NFS for logging storage is not recommended in OpenShift Container Platform.
To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host.
[OSEv3:vars]

openshift_logging_storage_kind=nfs
openshift_logging_storage_access_modes=['ReadWriteOnce']
openshift_logging_storage_host=nfs.example.com
openshift_logging_storage_nfs_directory=/exports
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=10Gi
The remote volume path using the following options would be nfs.example.com:/exports/logging.
2.6.3.22. Customizing Service Catalog Options
Starting with OpenShift Container Platform 3.7, the service catalog is enabled by default during installation. Enabling the service broker allows you to register service brokers with the catalog. When the service catalog is enabled, the OpenShift Ansible broker and template service broker are both installed as well; see Configuring the OpenShift Ansible Broker and Configuring the Template Service Broker for more information. If you disable the service catalog, the OpenShift Ansible broker and template service broker are not installed.
To disable automatic deployment of the service catalog, set the following cluster variable in your inventory file:
openshift_enable_service_catalog=false
When the service catalog is enabled, the web console shows the updated landing page. The OpenShift Ansible broker and template service broker are both enabled as well; see Configuring the OpenShift Ansible Broker and Configuring the Template Service Broker for more information.
2.6.3.22.1. Configuring the OpenShift Ansible Broker
Starting with OpenShift Container Platform 3.7, the OpenShift Ansible broker (OAB) is enabled by default during installation.
If you do not want to install the OAB, set the ansible_service_broker_install parameter value to false in the inventory file:
ansible_service_broker_install=false
2.6.3.22.1.1. Configuring Persistent Storage for the OpenShift Ansible Broker
The OAB deploys its own etcd instance separate from the etcd used by the rest of the OpenShift Container Platform cluster. The OAB’s etcd instance requires separate storage using persistent volumes (PVs) to function. If no PV is available, etcd will wait until the PV can be satisfied. The OAB application will enter a CrashLoop state until its etcd instance is available.
You can use the installer with the following variables to configure persistent storage for the OAB using NFS.
Variable | Purpose |
---|---|
openshift_hosted_etcd_storage_kind | Storage type to use for the etcd PV. |
openshift_hosted_etcd_storage_volume_name | Name of the etcd PV. |
openshift_hosted_etcd_storage_access_modes | Access modes for the etcd PV. |
openshift_hosted_etcd_storage_volume_size | Size of the etcd PV. |
openshift_hosted_etcd_storage_labels | Labels to use for the etcd PV. |
openshift_hosted_etcd_storage_nfs_options | NFS options to use. |
openshift_hosted_etcd_storage_nfs_directory | Directory for NFS exports. |
Some Ansible playbook bundles (APBs) may also require a PV for their own usage. Two APBs are currently provided with OpenShift Container Platform 3.7: MediaWiki and PostgreSQL. Both of these require their own PV to deploy.
To configure persistent storage for the OAB:
In your inventory file, add nfs to the [OSEv3:children] section to enable the [nfs] group:

[OSEv3:children]
masters
nodes
nfs
Add a [nfs] group section and add the host name for the system that will be the NFS host:

[nfs]
master1.example.com
Add the following in the [OSEv3:vars] section:

openshift_hosted_etcd_storage_kind=nfs
openshift_hosted_etcd_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)"
openshift_hosted_etcd_storage_nfs_directory=/opt/osev3-etcd 1
openshift_hosted_etcd_storage_volume_name=etcd-vol2 2
openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
openshift_hosted_etcd_storage_volume_size=1G
openshift_hosted_etcd_storage_labels={'storage': 'etcd'}
These settings create a persistent volume that is attached to the OAB’s etcd instance during cluster installation.
2.6.3.22.1.2. Configuring the OpenShift Ansible Broker for Local APB Development
In order to do APB development with the OpenShift Container Registry in conjunction with the OAB, a whitelist of images the OAB can access must be defined. If a whitelist is not defined, the broker will ignore APBs and users will not see any APBs available.
By default, the whitelist is empty so that a user cannot add APB images to the broker without a cluster administrator configuring the broker. To whitelist all images that end in -apb:
In your inventory file, add the following to the [OSEv3:vars] section:

ansible_service_broker_local_registry_whitelist=['.*-apb$']
2.6.3.22.2. Configuring the Template Service Broker
Starting with OpenShift Container Platform 3.7, the template service broker (TSB) is enabled by default.
If you do not want to install the TSB, set the template_service_broker_install parameter value to false:
template_service_broker_install=false
To configure the TSB, one or more projects must be defined as the broker’s source namespace(s) for loading templates and image streams into the service catalog. Set the desired projects by modifying the following in your inventory file’s [OSEv3:vars] section:
openshift_template_service_broker_namespaces=['openshift','myproject']
By default, the TSB will use the node selector {"region": "infra"} for deploying its pods. You can modify this by setting the desired node selector in your inventory file’s [OSEv3:vars] section:
template_service_broker_selector={"region": "infra"}
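As a summary, the service catalog and broker options discussed in this section might appear together in an inventory as follows; the values shown are illustrative defaults and examples only, and should be adjusted to your environment:

[OSEv3:vars]
# Service catalog (enabled by default in OpenShift Container Platform 3.7)
openshift_enable_service_catalog=true
# OpenShift Ansible broker and template service broker
ansible_service_broker_install=true
template_service_broker_install=true
openshift_template_service_broker_namespaces=['openshift','myproject']
template_service_broker_selector={"region": "infra"}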
2.6.3.23. Configuring Web Console Customization
The following Ansible variables set master configuration options for customizing the web console. See Customizing the Web Console for more details on these customization options.
Variable | Purpose |
---|---|
|
Sets |
|
Sets |
|
Sets |
|
Sets |
|
Sets the OAuth template in the master configuration. See Customizing the Login Page for details. Example value: |
|
Sets |
|
Sets |
2.6.4. Example Inventory Files
2.6.4.1. Single Master Examples
You can configure an environment with a single master and multiple nodes, and either a single etcd host or multiple external etcd hosts.
Moving from a single master cluster to multiple masters after installation is not supported.
Single Master, Single etcd, and Multiple Nodes
The following table describes an example environment for a single master (with a single etcd instance on the same host), two nodes for hosting user applications, and two nodes with the region=infra label for hosting dedicated infrastructure:
Host Name | Infrastructure Component to Install |
---|---|
master.example.com | Master, etcd, and node |
node1.example.com | Node |
node2.example.com | Node |
infra-node1.example.com | Node (with region=infra label) |
infra-node2.example.com | Node (with region=infra label) |
You can see these example hosts present in the [masters], [etcd], and [nodes] sections of the following example inventory file:
Single Master, Single etcd, and Multiple Nodes Inventory File
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true

openshift_deployment_type=openshift-enterprise

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
master.example.com

# host group for etcd
[etcd]
master.example.com

# host group for nodes, includes region info
[nodes]
master.example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
Single Master, Multiple etcd, and Multiple Nodes
The following table describes an example environment for a single master, three etcd hosts, two nodes for hosting user applications, and two nodes with the region=infra label for hosting dedicated infrastructure:
Host Name | Infrastructure Component to Install |
---|---|
master.example.com | Master and node |
etcd1.example.com | etcd |
etcd2.example.com | etcd |
etcd3.example.com | etcd |
node1.example.com | Node |
node2.example.com | Node |
infra-node1.example.com | Node (with region=infra label) |
infra-node2.example.com | Node (with region=infra label) |
You can see these example hosts present in the [masters], [nodes], and [etcd] sections of the following example inventory file:
Single Master, Multiple etcd, and Multiple Nodes Inventory File
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
master.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# host group for nodes, includes region info
[nodes]
master.example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
2.6.4.2. Multiple Masters Examples
You can configure an environment with multiple masters, multiple etcd hosts, and multiple nodes. Configuring multiple masters for high availability (HA) ensures that the cluster has no single point of failure.
Moving from a single master cluster to multiple masters after installation is not supported.
When configuring multiple masters, the advanced installation supports the native high availability (HA) method. This method leverages the native HA master capabilities built into OpenShift Container Platform and can be combined with any load balancing solution.
If a host is defined in the [lb] section of the inventory file, Ansible installs and configures HAProxy automatically as the load balancing solution. If no host is defined, it is assumed you have pre-configured an external load balancing solution of your choice to balance the master API (port 8443) on all master hosts.
This HAProxy load balancer is intended to demonstrate the API server’s HA mode and is not recommended for production environments. If you are deploying to a cloud provider, Red Hat recommends deploying a cloud-native TCP-based load balancer or taking other steps to provide a highly available load balancer.
For an external load balancing solution, you must have:
- A pre-created load balancer VIP configured for SSL passthrough.
- A VIP listening on the port specified by the openshift_master_api_port and openshift_master_console_port values (8443 by default) and proxying back to all master hosts on that port.
- A domain name for the VIP registered in DNS. The domain name will become the value of both openshift_master_cluster_public_hostname and openshift_master_cluster_hostname in the OpenShift Container Platform installer.
See External Load Balancer Integrations for more information. For more on the high availability master architecture, see Kubernetes Infrastructure.
The advanced installation method does not currently support multiple HAProxy load balancers in an active-passive setup. See the Load Balancer Administration documentation for post-installation amendments.
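For reference, a minimal sketch of what an external load balancer configuration satisfying the requirements above might look like, assuming HAProxy is the chosen solution; the host names are examples only:

# /etc/haproxy/haproxy.cfg (fragment): TCP (SSL passthrough) balancing of the master API
frontend main
    bind *:8443
    mode tcp
    option tcplog
    default_backend atomic-openshift-api

backend atomic-openshift-api
    mode tcp
    balance source
    server master1 master1.example.com:8443 check
    server master2 master2.example.com:8443 check
    server master3 master3.example.com:8443 check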
To configure multiple masters, refer to the following section.
Multiple Masters with Multiple etcd
The following describes an example environment for three masters using the native HA method, one HAProxy load balancer, three etcd hosts, two nodes for hosting user applications, and two nodes with the region=infra label for hosting dedicated infrastructure:
Host Name | Infrastructure Component to Install |
---|---|
master1.example.com | Master (clustered using native HA) and node |
master2.example.com | Master (clustered using native HA) and node |
master3.example.com | Master (clustered using native HA) and node |
lb.example.com | HAProxy to load balance API master endpoints |
etcd1.example.com | etcd |
etcd2.example.com | etcd |
etcd3.example.com | etcd |
node1.example.com | Node |
node2.example.com | Node |
infra-node1.example.com | Node (with region=infra label) |
infra-node2.example.com | Node (with region=infra label) |
You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the following example inventory file:
Multiple Masters Using HAProxy Inventory File
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# apply updated node defaults
openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}

# enable ntp on masters to ensure proper failover
openshift_clock_enabled=true

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
Multiple Masters with Master and etcd on the Same Host
The following describes an example environment for three masters using the native HA method (with etcd on each host), one HAProxy load balancer, two nodes for hosting user applications, and two nodes with the region=infra label for hosting dedicated infrastructure:
Host Name | Infrastructure Component to Install |
---|---|
master1.example.com | Master (clustered using native HA) and node, with etcd on each host |
master2.example.com | Master (clustered using native HA) and node, with etcd on each host |
master3.example.com | Master (clustered using native HA) and node, with etcd on each host |
lb.example.com | HAProxy to load balance API master endpoints |
node1.example.com | Node |
node2.example.com | Node |
infra-node1.example.com | Node (with region=infra label) |
infra-node2.example.com | Node (with region=infra label) |
You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the following example inventory file:
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
master1.example.com
master2.example.com
master3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
2.6.5. Running the Advanced Installation
After you have configured Ansible by defining an inventory file in /etc/ansible/hosts, you run the advanced installation playbook via Ansible. OpenShift Container Platform installations are currently supported using the RPM-based installer, while the containerized installer is currently a Technology Preview feature.
The installer uses modularized playbooks, allowing administrators to install specific components as needed. Breaking up the roles and playbooks allows better targeting of ad hoc administration tasks, which gives you more control during installation and saves time.
The playbooks and their ordering are detailed below in Running Individual Component Playbooks.
Due to a known issue, after running the installation, if NFS volumes are provisioned for any component, the following directories might be created whether their components are being deployed to NFS volumes or not:
- /exports/logging-es
- /exports/logging-es-ops/
- /exports/metrics/
- /exports/prometheus
- /exports/prometheus-alertbuffer/
- /exports/prometheus-alertmanager/
You can delete these directories after installation, as needed.
2.6.5.1. Running the RPM-based Installer
The RPM-based installer uses Ansible installed via RPM packages to run playbooks and configuration files available on the local host. To run the installer, use the following command, specifying -i if your inventory file is located somewhere other than /etc/ansible/hosts:
If you are using a proxy, you must add the IP address of the etcd endpoints to the openshift_no_proxy cluster variable in your inventory file.
If you are not using a proxy, you can skip this step.
Do not run OpenShift Ansible playbooks under nohup. Using nohup with the playbooks causes file descriptors to be created and not closed. Therefore, the system can run out of files to open and the playbook will fail.
In OpenShift Container Platform:
# ansible-playbook [-i /path/to/inventory] \
    /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
If for any reason the installation fails, before re-running the installer, see Known Issues to check for any specific instructions or workarounds.
The installer caches playbook configuration values for 10 minutes, by default. If you change any system, network, or inventory configuration, and then re-run the installer within that 10 minute period, the new values are not used, and the previous values are used instead. You can delete the contents of the cache, which is defined by the fact_caching_connection value in the /etc/ansible/ansible.cfg file. An example of this file is shown in Recommended Installation Practices.
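For example, one way to locate and clear the cache before re-running the installer might be the following sketch; substitute the path that is actually configured on your system:

# grep fact_caching_connection /etc/ansible/ansible.cfg
# rm -rf <path_shown_by_the_previous_command>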
2.6.5.2. Running the Containerized Installer
The openshift3/ose-ansible image is a containerized version of the OpenShift Container Platform installer. This installer image provides the same functionality as the RPM-based installer, but it runs in a containerized environment that provides all of its dependencies rather than being installed directly on the host. The only requirement to use it is the ability to run a container.
2.6.5.2.1. Running the Installer as a System Container
All system container components are Technology Preview features in OpenShift Container Platform 3.7. They must not be used in production and they are not supported for upgrades to OpenShift Container Platform 3.7. During this phase, they are only meant for use with new cluster installations in non-production environments.
The installer image can be used as a system container. System containers are stored and run outside of the traditional docker service. This enables running the installer image from one of the target hosts without concern for the install restarting docker on the host.
As the root user, use the Atomic CLI to run the installer as a run-once system container:

# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \ 1
    registry.access.redhat.com/openshift3/ose-ansible:v3.7
1. Specify the location on the local host for your inventory file.
This command initiates the cluster installation by using the inventory file specified and the root user’s SSH configuration. It logs the output on the terminal and also saves it in the /var/log/ansible.log file. The first time this command is run, the image is imported into OSTree storage (system containers use this rather than docker daemon storage). On subsequent runs, it reuses the stored image.
If for any reason the installation fails, before re-running the installer, see Known Issues to check for any specific instructions or workarounds.
2.6.5.2.2. Running Other Playbooks
You can use the PLAYBOOK_FILE environment variable to specify other playbooks you want to run by using the containerized installer. The default value of PLAYBOOK_FILE is /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml, which is the main cluster installation playbook, but you can set it to the path of another playbook inside the container.
For example, to run the pre-install checks playbook before installation, use the following command:
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml \ 1
    --set OPTS="-v" \ 2
    registry.access.redhat.com/openshift3/ose-ansible:v3.7
2.6.5.2.3. Running the Installer as a Docker Container
The installer image can also run as a docker container anywhere that docker can run.
This method must not be used to run the installer on one of the hosts being configured, as the install may restart docker on the host, disrupting the installer container execution.
Although this method and the system container method above use the same image, they run with different entry points and contexts, so runtime parameters are not the same.
At a minimum, when running the installer as a docker container you must provide:
- SSH key(s), so that Ansible can reach your hosts.
- An Ansible inventory file.
- The location of the Ansible playbook to run against that inventory.
Here is an example of how to run an install via docker. Note that this must be run by a non-root user with access to docker.
$ docker run -t -u `id -u` \ 1
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \ 2
    -v $HOME/ansible/hosts:/tmp/inventory:Z \ 3
    -e INVENTORY_FILE=/tmp/inventory \ 4
    -e PLAYBOOK_FILE=playbooks/byo/config.yml \ 5
    -e OPTS="-v" \ 6
    registry.access.redhat.com/openshift3/ose-ansible:v3.7
1. -u `id -u` makes the container run with the same UID as the current user, which allows that user to use the SSH key inside the container (SSH private keys are expected to be readable only by their owner).

2. -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z mounts your SSH key ($HOME/.ssh/id_rsa) under the container user’s $HOME/.ssh (/opt/app-root/src is the $HOME of the user in the container). If you mount the SSH key into a non-standard location, you can add an environment variable with -e ANSIBLE_PRIVATE_KEY_FILE=/the/mount/point or set ansible_ssh_private_key_file=/the/mount/point as a variable in the inventory to point Ansible at it.

Note that the SSH key is mounted with the :Z flag. This is required so that the container can read the SSH key under its restricted SELinux context. This also means that your original SSH key file will be re-labeled to something like system_u:object_r:container_file_t:s0:c113,c247. For more details about :Z, check the docker-run(1) man page. Keep this in mind when providing these volume mount specifications because this might have unexpected consequences: for example, if you mount (and therefore re-label) your whole $HOME/.ssh directory, it will block the host’s sshd from accessing your public keys to log in. For this reason, you may want to use a separate copy of the SSH key (or directory), so that the original file labels remain untouched.

3, 4. -v $HOME/ansible/hosts:/tmp/inventory:Z and -e INVENTORY_FILE=/tmp/inventory mount a static Ansible inventory file into the container as /tmp/inventory and set the corresponding environment variable to point at it. As with the SSH key, the inventory file SELinux labels may need to be relabeled by using the :Z flag to allow reading in the container, depending on the existing label (for files in a user $HOME directory this is likely to be needed). So again, you may prefer to copy the inventory to a dedicated location before mounting it.

The inventory file can also be downloaded from a web server if you specify the INVENTORY_URL environment variable, or generated dynamically using DYNAMIC_SCRIPT_URL to specify an executable script that provides a dynamic inventory.

5. -e PLAYBOOK_FILE=playbooks/byo/config.yml specifies the playbook to run (in this example, the BYO installer) as a relative path from the top level directory of openshift-ansible content. The full path from the RPM can also be used, as well as the path to any other playbook file in the container.

6. -e OPTS="-v" supplies arbitrary command line options (in this case, -v to increase verbosity) to the ansible-playbook command that runs inside the container.
2.6.5.3. Running Individual Component Playbooks
The main installation playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml runs a set of individual component playbooks in a specific order, and the installer reports back at the end what phases you have gone through. If the installation fails, you are notified which phase failed along with the errors from the Ansible run.
After you resolve the errors, you can continue installation:
- You can run the remaining individual installation playbooks.
- If you are installing in a new environment, you can run the main installation playbook (/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml) again.
If you want to run only the remaining playbooks, start by running the playbook for the phase that failed and then run each of the remaining playbooks in order:
# ansible-playbook [-i /path/to/inventory] <playbook_file_location>
The following table lists the playbooks in the order that they must run:
Playbook Name | File Location |
---|---|
Health Check | /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-checks/pre-install.yml |
etcd Install | /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-etcd/config.yml |
NFS Install | /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-nfs/config.yml |
Load Balancer Install | /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-loadbalancer/config.yml |
Master Install | /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-master/config.yml |
Master Additional Install | /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-master/additional_config.yml |
Node Install | /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-node/config.yml |
GlusterFS Install | /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-glusterfs/config.yml |
Hosted Install | /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-hosted.yml |
Metrics Install | /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-metrics.yml |
Logging Install | /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml |
Prometheus Install | /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-prometheus.yml |
Service Catalog Install | /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/service-catalog.yml |
Management Install | /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-management/config.yml |
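For example, if the installation failed during the metrics phase, you might resume from that point by running that playbook and then each remaining playbook from the table in order; the inventory path is an example:

# ansible-playbook -i /path/to/inventory \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-metrics.yml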
2.6.6. Verifying the Installation
After the installation completes:
Verify that the master is started and nodes are registered and reporting in Ready status. On the master host, run the following as root:
# oc get nodes

NAME                      STATUS                     AGE
master.example.com        Ready,SchedulingDisabled   165d
node1.example.com         Ready                      165d
node2.example.com         Ready                      165d
To verify that the web console is installed correctly, use the master host name and the web console port number to access the web console with a web browser.
For example, for a master host with a host name of master.openshift.com and using the default port of 8443, the web console would be found at https://master.openshift.com:8443/console.
The default port for the console is 8443. If this was changed during the installation, the port can be found at openshift_master_console_port in the /etc/ansible/hosts file.
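As a quick command-line check, you can also verify that the master responds on that port; the host name below is the same example value, and -k skips certificate verification for the self-signed certificates generated during installation:

$ curl -k https://master.openshift.com:8443/healthz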
Verifying Multiple etcd Hosts
If you installed multiple etcd hosts:
First, verify that the etcd package, which provides the etcdctl command, is installed:

# yum install etcd
On a master host, verify the etcd cluster health, substituting for the FQDNs of your etcd hosts in the following:
# etcdctl -C \
    https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
    --ca-file=/etc/origin/master/master.etcd-ca.crt \
    --cert-file=/etc/origin/master/master.etcd-client.crt \
    --key-file=/etc/origin/master/master.etcd-client.key cluster-health
Also verify the member list is correct:
# etcdctl -C \
    https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
    --ca-file=/etc/origin/master/master.etcd-ca.crt \
    --cert-file=/etc/origin/master/master.etcd-client.crt \
    --key-file=/etc/origin/master/master.etcd-client.key member list
Verifying Multiple Masters Using HAProxy
If you installed multiple masters using HAProxy as a load balancer, browse to the following URL according to your [lb] section definition and check HAProxy’s status:
http://<lb_hostname>:9000
You can verify your installation by consulting the HAProxy Configuration documentation.
2.6.7. Optionally Securing Builds
Running docker build is a privileged process, so the container has more access to the node than might be considered acceptable in some multi-tenant environments. If you do not trust your users, you can use a more secure option at the time of installation: disable Docker builds on the cluster and require that users build images outside of the cluster. See Securing Builds by Strategy for more information on this optional process.
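For reference, disabling the Docker build strategy cluster-wide is typically done with a policy change along these lines; see Securing Builds by Strategy for the authoritative procedure:

# oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated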
2.6.8. Uninstalling OpenShift Container Platform
You can uninstall OpenShift Container Platform hosts in your cluster by running the uninstall.yml playbook. This playbook deletes OpenShift Container Platform content installed by Ansible, including:
- Configuration
- Containers
- Default templates and image streams
- Images
- RPM packages
The playbook will delete content for any hosts defined in the inventory file that you specify when running the playbook. If you want to uninstall OpenShift Container Platform across all hosts in your cluster, run the playbook using the inventory file you used when installing OpenShift Container Platform initially or ran most recently:
# ansible-playbook [-i /path/to/file] \
    /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
2.6.8.1. Uninstalling Nodes
You can also uninstall node components from specific hosts using the uninstall.yml playbook while leaving the remaining hosts and cluster alone:
This method should only be used when attempting to uninstall specific node hosts and not for specific masters or etcd hosts, which would require further configuration changes within the cluster.
- First follow the steps in Deleting Nodes to remove the node object from the cluster, then continue with the remaining steps in this procedure.
Create a different inventory file that only references those hosts. For example, to only delete content from one node:
[OSEv3:children]
nodes 1

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

[nodes]
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}" 2
Specify that new inventory file using the -i option when running the uninstall.yml playbook:

# ansible-playbook -i /path/to/new/file \
    /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
When the playbook completes, all OpenShift Container Platform content should be removed from any specified hosts.
2.6.9. Known Issues
- On failover in multiple master clusters, it is possible for the controller manager to overcorrect, which causes the system to run more pods than what was intended. However, this is a transient event and the system does correct itself over time. See https://github.com/kubernetes/kubernetes/issues/10030 for details.
If the Ansible installer fails, you can still install OpenShift Container Platform:
- If you did not modify the SDN configuration or generate new certificates, run the main installation playbook (/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml) again.
- If you modified the SDN configuration, generated new certificates, or the installer fails again, you must either start over with a clean operating system installation or uninstall and install again.
- If you use virtual machines, start from a fresh image or uninstall and install again.
- If you use bare metal machines, uninstall and install again.
2.6.10. What’s Next?
Now that you have a working OpenShift Container Platform instance, you can:
- Deploy an integrated Docker registry.
- Deploy a router.
2.7. Disconnected Installation
2.7.1. Overview
Frequently, portions of a datacenter may not have access to the Internet, even via proxy servers. Installing OpenShift Container Platform in these environments is considered a disconnected installation.
An OpenShift Container Platform disconnected installation differs from a regular installation in two primary ways:
- The OpenShift Container Platform software channels and repositories are not available via Red Hat’s content distribution network.
- OpenShift Container Platform uses several containerized components. Normally, these images are pulled directly from Red Hat’s Docker registry. In a disconnected environment, this is not possible.
A disconnected installation ensures the OpenShift Container Platform software is made available to the relevant servers, then follows the same installation process as a standard connected installation. This topic additionally details how to manually download the container images and transport them onto the relevant servers.
Once installed, in order to use OpenShift Container Platform, you will need source code in a source control repository (for example, Git). This topic assumes that an internal Git repository is available that can host source code and this repository is accessible from the OpenShift Container Platform nodes. Installing the source control repository is outside the scope of this document.
Also, when building applications in OpenShift Container Platform, your build may have some external dependencies, such as a Maven Repository or Gem files for Ruby applications. For this reason, and because they might require certain tags, many of the Quickstart templates offered by OpenShift Container Platform may not work on a disconnected environment. However, while Red Hat container images try to reach out to external repositories by default, you can configure OpenShift Container Platform to use your own internal repositories. For the purposes of this document, we assume that such internal repositories already exist and are accessible from the OpenShift Container Platform nodes hosts. Installing such repositories is outside the scope of this document.
You can also have a Red Hat Satellite server that provides access to Red Hat content via an intranet or LAN. For environments with Satellite, you can synchronize the OpenShift Container Platform software onto the Satellite for use with the OpenShift Container Platform servers.
Red Hat Satellite 6.1 also introduces the ability to act as a Docker registry, and it can be used to host the OpenShift Container Platform containerized components. Doing so is outside of the scope of this document.
2.7.2. Prerequisites
This document assumes that you understand OpenShift Container Platform’s overall architecture and that you have already planned out what the topology of your environment will look like.
2.7.3. Required Software and Components
In order to pull down the required software repositories and container images, you will need a Red Hat Enterprise Linux (RHEL) 7 server with access to the Internet and at least 100GB of additional free space. All steps in this section should be performed on the Internet-connected server as the root system user.
2.7.3.1. Syncing Repositories
Before you sync with the required repositories, you may need to import the appropriate GPG key:
$ rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
If the key is not imported, the indicated package is deleted after syncing the repository.
To sync the required repositories:
Register the server with the Red Hat Customer Portal. You must use the login and password associated with the account that has access to the OpenShift Container Platform subscriptions:
$ subscription-manager register
Pull the latest subscription data from RHSM:
$ subscription-manager refresh
Attach to a subscription that provides OpenShift Container Platform channels. You can find the list of available subscriptions using:
$ subscription-manager list --available --matches '*OpenShift*'
Then, find the pool ID for the subscription that provides OpenShift Container Platform, and attach it:
$ subscription-manager attach --pool=<pool_id>
$ subscription-manager repos --disable="*"
$ subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-fast-datapath-rpms" \
    --enable="rhel-7-server-ose-3.7-rpms"
The yum-utils package provides the reposync utility, which lets you mirror yum repositories, and createrepo can create a usable yum repository from a directory:

$ sudo yum -y install yum-utils createrepo docker git
You will need up to 110GB of free space in order to sync the software. Depending on how restrictive your organization’s policies are, you could re-connect this server to the disconnected LAN and use it as the repository server. You could use USB-connected storage and transport the software to another server that will act as the repository server. This topic covers these options.
Make a path to where you want to sync the software (either locally or on your USB or other device):
$ mkdir -p </path/to/repos>
Sync the packages and create the repository for each of them. You will need to modify the command for the appropriate path you created above:
$ for repo in \
    rhel-7-server-rpms \
    rhel-7-server-extras-rpms \
    rhel-7-fast-datapath-rpms \
    rhel-7-server-ose-3.7-rpms
do
    reposync --gpgcheck -lm --repoid=${repo} --download_path=/path/to/repos
    createrepo -v </path/to/repos/>${repo} -o </path/to/repos/>${repo}
done
2.7.3.2. Syncing Images
To sync the container images:
Start the Docker daemon:
$ systemctl start docker
If you are performing a containerized install, pull all of the required OpenShift Container Platform host component images. Replace <tag> with v3.7.108 for the latest version.

# docker pull registry.access.redhat.com/rhel7/etcd
# docker pull registry.access.redhat.com/openshift3/ose:<tag>
# docker pull registry.access.redhat.com/openshift3/node:<tag>
# docker pull registry.access.redhat.com/openshift3/openvswitch:<tag>
Pull all of the required OpenShift Container Platform infrastructure component images. Replace <tag> with v3.7.108 for the latest version.

$ docker pull registry.access.redhat.com/openshift3/ose-ansible:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-cluster-capacity:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-deployer:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-docker-builder:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-docker-registry:<tag>
$ docker pull registry.access.redhat.com/openshift3/registry-console:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-egress-http-proxy:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-egress-router:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-f5-router:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-haproxy-router:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-keepalived-ipfailover:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-pod:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-sti-builder:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose:<tag>
$ docker pull registry.access.redhat.com/openshift3/container-engine:<tag>
$ docker pull registry.access.redhat.com/openshift3/node:<tag>
$ docker pull registry.access.redhat.com/openshift3/openvswitch:<tag>
Note: If you are using NFS, you need the ose-recycler image. Otherwise, the volumes will not recycle, potentially causing errors.
Pull all of the required OpenShift Container Platform component images for the additional centralized log aggregation and metrics aggregation components. Replace <tag> with v3.7.108 for the latest version.

$ docker pull registry.access.redhat.com/openshift3/logging-auth-proxy:<tag>
$ docker pull registry.access.redhat.com/openshift3/logging-curator:<tag>
$ docker pull registry.access.redhat.com/openshift3/logging-elasticsearch:<tag>
$ docker pull registry.access.redhat.com/openshift3/logging-fluentd:<tag>
$ docker pull registry.access.redhat.com/openshift3/logging-kibana:<tag>
$ docker pull registry.access.redhat.com/openshift3/metrics-cassandra:<tag>
$ docker pull registry.access.redhat.com/openshift3/metrics-hawkular-metrics:<tag>
$ docker pull registry.access.redhat.com/openshift3/metrics-hawkular-openshift-agent:<tag>
$ docker pull registry.access.redhat.com/openshift3/metrics-heapster:<tag>
$ docker pull registry.access.redhat.com/openshift3/oauth-proxy:<tag>
Important: Prometheus on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.
For the service catalog, OpenShift Ansible broker, and template service broker features (as described in Advanced Installation), pull the following images. Replace <tag> with v3.7.108 for the latest version.

$ docker pull registry.access.redhat.com/openshift3/ose-service-catalog:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-ansible-service-broker:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-template-service-broker:<tag>
Replace <tag> with v3.7.108 for the latest version.

$ docker pull registry.access.redhat.com/openshift3/mediawiki-apb:<tag>
$ docker pull registry.access.redhat.com/openshift3/postgresql-apb:<tag>
Pull the Red Hat-certified Source-to-Image (S2I) builder images that you intend to use in your OpenShift environment. You can pull the following images:
$ docker pull registry.access.redhat.com/jboss-amq-6/amq63-openshift $ docker pull registry.access.redhat.com/jboss-datagrid-7/datagrid71-openshift $ docker pull registry.access.redhat.com/jboss-datagrid-7/datagrid71-client-openshift $ docker pull registry.access.redhat.com/jboss-datavirt-6/datavirt63-openshift $ docker pull registry.access.redhat.com/jboss-datavirt-6/datavirt63-driver-openshift $ docker pull registry.access.redhat.com/jboss-decisionserver-6/decisionserver64-openshift $ docker pull registry.access.redhat.com/jboss-processserver-6/processserver64-openshift $ docker pull registry.access.redhat.com/jboss-eap-6/eap64-openshift $ docker pull registry.access.redhat.com/jboss-eap-7/eap70-openshift $ docker pull registry.access.redhat.com/jboss-webserver-3/webserver31-tomcat7-openshift $ docker pull registry.access.redhat.com/jboss-webserver-3/webserver31-tomcat8-openshift $ docker pull registry.access.redhat.com/openshift3/jenkins-1-rhel7 $ docker pull registry.access.redhat.com/openshift3/jenkins-2-rhel7 $ docker pull registry.access.redhat.com/openshift3/jenkins-slave-base-rhel7 $ docker pull registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7 $ docker pull registry.access.redhat.com/openshift3/jenkins-slave-nodejs-rhel7 $ docker pull registry.access.redhat.com/rhscl/mongodb-32-rhel7 $ docker pull registry.access.redhat.com/rhscl/mysql-57-rhel7 $ docker pull registry.access.redhat.com/rhscl/perl-524-rhel7 $ docker pull registry.access.redhat.com/rhscl/php-56-rhel7 $ docker pull registry.access.redhat.com/rhscl/postgresql-95-rhel7 $ docker pull registry.access.redhat.com/rhscl/python-35-rhel7 $ docker pull registry.access.redhat.com/redhat-sso-7/sso70-openshift $ docker pull registry.access.redhat.com/rhscl/ruby-24-rhel7 $ docker pull registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift $ docker pull registry.access.redhat.com/redhat-sso-7/sso71-openshift $ docker pull registry.access.redhat.com/rhscl/nodejs-6-rhel7 $ docker pull registry.access.redhat.com/rhscl/mariadb-101-rhel7
Make sure to indicate the correct tag specifying the desired version number. For example, to pull both the previous and latest version of the Tomcat image:
$ docker pull \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:latest
$ docker pull \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:1.1
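As a sanity check before exporting, you might confirm that the expected images are now present locally:

$ docker images | grep registry.access.redhat.com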
2.7.3.3. Preparing Images for Export
Container images can be exported from a system by first saving them to a tarball and then transporting them:
Make and change into a repository home directory:
$ mkdir </path/to/repos/images>
$ cd </path/to/repos/images>
If you are performing a containerized install, export the OpenShift Container Platform host component images:
# docker save -o ose3-host-images.tar \
    registry.access.redhat.com/rhel7/etcd \
    registry.access.redhat.com/openshift3/ose \
    registry.access.redhat.com/openshift3/node \
    registry.access.redhat.com/openshift3/openvswitch
Export the OpenShift Container Platform infrastructure component images:
$ docker save -o ose3-images.tar \
    registry.access.redhat.com/openshift3/ose-ansible \
    registry.access.redhat.com/openshift3/ose-cluster-capacity \
    registry.access.redhat.com/openshift3/ose-deployer \
    registry.access.redhat.com/openshift3/ose-docker-builder \
    registry.access.redhat.com/openshift3/ose-docker-registry \
    registry.access.redhat.com/openshift3/registry-console \
    registry.access.redhat.com/openshift3/ose-egress-http-proxy \
    registry.access.redhat.com/openshift3/ose-egress-router \
    registry.access.redhat.com/openshift3/ose-f5-router \
    registry.access.redhat.com/openshift3/ose-haproxy-router \
    registry.access.redhat.com/openshift3/ose-keepalived-ipfailover \
    registry.access.redhat.com/openshift3/ose-pod \
    registry.access.redhat.com/openshift3/ose-sti-builder \
    registry.access.redhat.com/openshift3/ose \
    registry.access.redhat.com/openshift3/container-engine \
    registry.access.redhat.com/openshift3/node \
    registry.access.redhat.com/openshift3/openvswitch
If you synchronized the metrics and log aggregation images, export them:
$ docker save -o ose3-logging-metrics-images.tar \
    registry.access.redhat.com/openshift3/logging-auth-proxy \
    registry.access.redhat.com/openshift3/logging-curator \
    registry.access.redhat.com/openshift3/logging-elasticsearch \
    registry.access.redhat.com/openshift3/logging-fluentd \
    registry.access.redhat.com/openshift3/logging-kibana \
    registry.access.redhat.com/openshift3/metrics-cassandra \
    registry.access.redhat.com/openshift3/metrics-hawkular-metrics \
    registry.access.redhat.com/openshift3/metrics-hawkular-openshift-agent \
    registry.access.redhat.com/openshift3/metrics-heapster
Export the S2I builder images that you synced in the previous section. For example, if you synced only the Jenkins and Tomcat images:
$ docker save -o ose3-builder-images.tar \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:latest \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:1.1 \
    registry.access.redhat.com/openshift3/jenkins-1-rhel7 \
    registry.access.redhat.com/openshift3/jenkins-2-rhel7 \
    registry.access.redhat.com/openshift3/jenkins-slave-base-rhel7 \
    registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7 \
    registry.access.redhat.com/openshift3/jenkins-slave-nodejs-rhel7
2.7.4. Repository Server
During the installation (and for later updates, should you so choose), you will need a webserver to host the repositories. RHEL 7 can provide the Apache webserver.
Option 1: Re-configuring as a Web server
If you can re-connect the server where you synchronized the software and images to your LAN, then you can simply install Apache on the server:
$ sudo yum install httpd
Skip to Placing the Software.
Option 2: Building a Repository Server
If you need to build a separate server to act as the repository server, install a new RHEL 7 system with at least 110GB of space. On this repository server during the installation, make sure you select the Basic Web Server option.
2.7.4.1. Placing the Software
If necessary, attach the external storage, and then copy the repository files into Apache’s root folder. Note that the copy step below (cp -a) should be substituted with move (mv) if you are repurposing the server you used to sync:

$ cp -a /path/to/repos /var/www/html/
$ chmod -R +r /var/www/html/repos
$ restorecon -vR /var/www/html
Add the firewall rules:
$ sudo firewall-cmd --permanent --add-service=http
$ sudo firewall-cmd --reload
Enable and start Apache for the changes to take effect:
$ systemctl enable httpd
$ systemctl start httpd
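To confirm that the repositories are being served, you might fetch one of the repodata files from another machine on the LAN; the repository name below is one of the repositories synced earlier and is shown as an example:

$ curl http://<server_IP>/repos/rhel-7-server-ose-3.7-rpms/repodata/repomd.xml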
2.7.5. OpenShift Container Platform Systems
2.7.5.1. Building Your Hosts
At this point you can perform the initial creation of the hosts that will be part of the OpenShift Container Platform environment. It is recommended to use the latest version of RHEL 7 and to perform a minimal installation. You will also want to pay attention to the other OpenShift Container Platform-specific prerequisites.
Once the hosts are initially built, the repositories can be set up.
2.7.5.2. Connecting the Repositories
On all of the relevant systems that will need OpenShift Container Platform software components, create the required repository definitions. Place the following text in the /etc/yum.repos.d/ose.repo file, replacing <server_IP> with the IP or host name of the Apache server hosting the software repositories:

[rhel-7-server-rpms]
name=rhel-7-server-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-rpms
enabled=1
gpgcheck=0

[rhel-7-server-extras-rpms]
name=rhel-7-server-extras-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-extras-rpms
enabled=1
gpgcheck=0

[rhel-7-fast-datapath-rpms]
name=rhel-7-fast-datapath-rpms
baseurl=http://<server_IP>/repos/rhel-7-fast-datapath-rpms
enabled=1
gpgcheck=0

[rhel-7-server-ose-3.7-rpms]
name=rhel-7-server-ose-3.7-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-ose-3.7-rpms
enabled=1
gpgcheck=0
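After creating the file, you might verify on each host that the internal repositories are reachable:

$ sudo yum clean all
$ sudo yum repolist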
2.7.5.3. Host Preparation
At this point, the systems are ready to continue to be prepared following the OpenShift Container Platform documentation.
Skip the section titled Host Registration and start with Installing Base Packages.
2.7.6. Installing OpenShift Container Platform
2.7.6.1. Importing OpenShift Container Platform Component Images
To import the relevant components, securely copy the images from the connected host to the individual OpenShift Container Platform hosts:
$ scp /var/www/html/repos/images/ose3-images.tar root@<openshift_host_name>:
$ ssh root@<openshift_host_name> "docker load -i ose3-images.tar"
$ scp /var/www/html/images/ose3-builder-images.tar root@<openshift_master_host_name>:
$ ssh root@<openshift_master_host_name> "docker load -i ose3-builder-images.tar"
Perform the same steps for the host components if your install will be containerized. Perform the same steps for the metrics and logging images, if your cluster will use them.
If you prefer, you could use wget on each OpenShift Container Platform host to fetch the tar file, and then perform the Docker import command locally.
2.7.6.2. Running the OpenShift Container Platform Installer
You can now choose to follow the quick or advanced OpenShift Container Platform installation instructions in the documentation.
2.7.6.3. Creating the Internal Docker Registry
You now need to create the internal Docker registry.
If you want to install a stand-alone registry, you must pull the registry-console container image and set deployment_subtype=registry in the inventory file.
2.7.7. Post-Installation Changes
In one of the previous steps, the S2I images were imported into the Docker daemon running on one of the OpenShift Container Platform master hosts. In a connected installation, these images would be pulled from Red Hat’s registry on demand. Since the Internet is not available to do this, the images must be made available in another Docker registry.
OpenShift Container Platform provides an internal registry for storing the images that are built as a result of the S2I process, but it can also be used to hold the S2I builder images. The following steps assume you did not customize the service IP subnet (172.30.0.0/16) or the Docker registry port (5000).
2.7.7.1. Re-tagging S2I Builder Images
On the master host where you imported the S2I builder images, obtain the service address of your Docker registry that you installed on the master:
$ export REGISTRY=$(oc get service -n default \
    docker-registry --output=go-template='{{.spec.clusterIP}}{{"\n"}}')
Next, tag all of the builder images that you synced and exported before pushing them into the OpenShift Container Platform Docker registry. For example, if you synced and exported only the Tomcat image:
$ docker tag \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:1.1 \
    $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:1.1

$ docker tag \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:latest \
    $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:1.2

$ docker tag \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:latest \
    $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:latest
2.7.7.2. Configuring a Registry Location
If you are using an image registry other than the default at registry.access.redhat.com, specify the desired registry within the /etc/ansible/hosts file.
oreg_url=example.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true
Depending on your registry, you may need to configure:
openshift_docker_additional_registries=example.com
openshift_docker_insecure_registries=example.com
You can also set the openshift_docker_insecure_registries variable to the IP address of the host. 0.0.0.0/0 is not a valid setting.
Variable | Purpose
---|---
oreg_url | Set to the alternate image location. Necessary if you are not using the default registry at registry.access.redhat.com.
openshift_examples_modify_imagestreams | Set to true if pointing to a registry other than the default, so that the example image streams reference the value of oreg_url.
openshift_docker_additional_registries | Set to add the alternate registry to the list of registries that hosts are permitted to pull from.
openshift_docker_insecure_registries | Set to the alternate registry host name or IP address if the registry is not secured with TLS. 0.0.0.0/0 is not a valid setting.
2.7.7.3. Creating an Administrative User
Pushing the container images into OpenShift Container Platform’s Docker registry requires a user with cluster-admin privileges. Because the default OpenShift Container Platform system administrator does not have a standard authorization token, that account cannot be used to log in to the Docker registry.
To create an administrative user:
Create a new user account in the authentication system you are using with OpenShift Container Platform. For example, if you are using local htpasswd-based authentication:
$ htpasswd -b /etc/openshift/openshift-passwd <admin_username> <password>
The external authentication system now has a user account, but a user must log in to OpenShift Container Platform before an account is created in the internal database. Log in to OpenShift Container Platform for this account to be created. This assumes you are using the self-signed certificates generated by OpenShift Container Platform during the installation:
$ oc login --certificate-authority=/etc/origin/master/ca.crt \
    -u <admin_username> https://<openshift_master_host>:8443
Get the user’s authentication token:
$ MYTOKEN=$(oc whoami -t)
$ echo $MYTOKEN
iwo7hc4XilD2KOLL4V1O55ExH2VlPmLD-W2-JOd6Fko
2.7.7.4. Modifying the Security Policies
Using oc login switches to the new user. Switch back to the OpenShift Container Platform system administrator in order to make policy changes:
$ oc login -u system:admin
In order to push images into the OpenShift Container Platform Docker registry, an account must have the image-builder security role. Add this to your OpenShift Container Platform administrative user:
$ oc adm policy add-role-to-user system:image-builder <admin_username>
Next, add the administrative role to the user in the openshift project. This allows the administrative user to edit the openshift project, and, in this case, push the container images:
$ oc adm policy add-role-to-user admin <admin_username> -n openshift
2.7.7.5. Editing the Image Stream Definitions
The openshift project is where all of the image streams for builder images are created by the installer. They are loaded by the installer from the /usr/share/openshift/examples directory. Change all of the definitions by deleting the image streams which had been loaded into OpenShift Container Platform’s database, then re-create them:
Delete the existing image streams:
$ oc delete is -n openshift --all
Make a backup of the files in /usr/share/openshift/examples/ if you desire. Next, edit the file image-streams-rhel7.json in the /usr/share/openshift/examples/image-streams folder. You will find an image stream section for each of the builder images. Edit the spec stanza to point to your internal Docker registry.
For example, change:
"from": { "kind": "DockerImage", "name": "registry.access.redhat.com/rhscl/httpd-24-rhel7" }
to:
"from": { "kind": "DockerImage", "name": "172.30.69.44:5000/openshift/httpd-24-rhel7" }
In the above example, the repository name was changed from rhscl to openshift. Make this change in every definition, regardless of whether the original repository is rhscl, openshift3, or another directory. Every definition should have the following format:
<registry_ip>:5000/openshift/<image_name>
Repeat this change for every image stream in the file. Ensure you use the correct IP address that you determined earlier. When you are finished, save and exit. Repeat the same process for the JBoss image streams in the /usr/share/openshift/examples/xpaas-streams/jboss-image-streams.json file.
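Because every definition follows the same pattern, you could also script the substitution instead of editing each entry by hand. The following is only a sketch that assumes the example registry address 172.30.69.44:5000 used above; back up the file first and review the result before loading it:
$ cp /usr/share/openshift/examples/image-streams/image-streams-rhel7.json{,.bak}
$ sed -i 's|registry.access.redhat.com/[^/"]*/|172.30.69.44:5000/openshift/|g' \
    /usr/share/openshift/examples/image-streams/image-streams-rhel7.json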
2.7.7.6. Loading the Container Images
At this point the system is ready to load the container images.
Log in to the Docker registry using the token and registry service IP obtained earlier:
$ docker login -u adminuser -e adminuser@abc.com \
    -p $MYTOKEN $REGISTRY:5000
Push the Docker images:
$ docker push $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:1.1
$ docker push $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:1.2
$ docker push $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:latest
Load the updated image stream definitions:
$ oc create -f /usr/share/openshift/examples/image-streams/image-streams-rhel7.json -n openshift
$ oc create -f /usr/share/openshift/examples/xpaas-streams/jboss-image-streams.json -n openshift
Verify that all the image streams now have the tags populated:
$ oc get imagestreams -n openshift
NAME                                  DOCKER REPO                                                        TAGS                          UPDATED
jboss-webserver30-tomcat7-openshift   $REGISTRY/jboss-webserver-3/webserver30-jboss-tomcat7-openshift   1.1,1.1-2,1.1-6 + 2 more...   2 weeks ago
...
2.7.8. Installing a Router
At this point, the OpenShift Container Platform environment is almost ready for use. It is likely that you will want to install and configure a router.
2.8. Installing a Stand-alone Deployment of OpenShift Container Registry
2.8.1. About OpenShift Container Registry
OpenShift Container Platform is a fully-featured enterprise solution that includes an integrated container registry called OpenShift Container Registry (OCR). Alternatively, instead of deploying OpenShift Container Platform as a full PaaS environment for developers, you can install OCR as a stand-alone container registry to run on-premise or in the cloud.
When installing a stand-alone deployment of OCR, a cluster of masters and nodes is still installed, similar to a typical OpenShift Container Platform installation. Then, the container registry is deployed to run on the cluster. This stand-alone deployment option is useful for administrators that want a container registry, but do not require the full OpenShift Container Platform environment that includes the developer-focused web console and application build and deployment tools.
OCR has replaced the upstream Atomic Registry project, which was a different implementation that used a non-Kubernetes deployment method leveraging systemd and local configuration files to manage services.
OCR provides the following capabilities:
- A user-focused registry web console, Cockpit.
- Secured traffic by default, served via TLS.
- Global identity provider authentication.
- A project namespace model to enable teams to collaborate through role-based access control (RBAC) authorization.
- A Kubernetes-based cluster to manage services.
- An image abstraction called image streams to enhance image management.
Administrators may want to deploy a stand-alone OCR to manage a registry separately that supports multiple OpenShift Container Platform clusters. A stand-alone OCR also enables administrators to separate their registry to satisfy their own security or compliance requirements.
2.8.2. Minimum Hardware Requirements
Installing a stand-alone OCR has the following hardware requirements:
- Physical or virtual system, or an instance running on a public or private IaaS.
- Base OS: RHEL 7.3 or 7.4 with the "Minimal" installation option and the latest packages from the RHEL 7 Extras channel, or RHEL Atomic Host 7.4.2 or later.
- NetworkManager 1.0 or later.
- 2 vCPU.
- Minimum 16 GB RAM.
- Minimum 15 GB hard disk space for the file system containing /var/.
- An additional minimum 15 GB unallocated space to be used for Docker’s storage back end; see Configuring Docker Storage for details.
OpenShift Container Platform only supports servers with x86_64 architecture.
Meeting the /var/ file system sizing requirements in RHEL Atomic Host requires making changes to the default configuration. See Managing Storage in Red Hat Enterprise Linux Atomic Host for instructions on configuring this during or after installation.
2.8.3. Supported System Topologies
The following system topologies are supported for stand-alone OCR:
All-in-one | A single host that includes the master, node, etcd, and registry components. |
Multiple Masters (Highly-Available) | Three hosts with all components included on each (master, node, etcd, and registry), with the masters configured for native high-availability. |
2.8.4. Host Preparation
Before installing stand-alone OCR, all of the same steps detailed in the Host Preparation topic for installing a full OpenShift Container Platform PaaS must be performed. This includes registering and subscribing the host(s) to the proper repositories, installing or updating certain packages, and setting up Docker and its storage requirements.
Follow the steps in the Host Preparation topic, then continue to Installation Methods.
2.8.5. Installation Methods
To install a stand-alone registry, use either of the standard installation methods (quick or advanced) used to install any variant of OpenShift Container Platform.
2.8.5.1. Quick Installation for Stand-alone OpenShift Container Registry
The following shows the step-by-step process for running the quick install tool to install an OpenShift Container Registry, instead of the full OpenShift Container Platform install.
Start the interactive installation:
$ atomic-openshift-installer install
Follow the on-screen instructions to install a new registry. The installation questions will be largely the same as if you were installing a full OpenShift Container Platform PaaS. When you reach the following screen, choose 2 to follow the registry installation path:
Which variant would you like to install?

(1) OpenShift Container Platform
(2) Registry
Specify the hosts that make up your cluster:
Enter hostname or IP address:
Will this host be an OpenShift master? [y/N]:
Will this host be RPM or Container based (rpm/container)? [rpm]:
See the Installing on Containerized Hosts topic for information about RPM versus containerized hosts.
Change the cluster host name, if desired:
Enter hostname or IP address [None]:
Choose the host to act as the storage host (the master host by default):
Enter hostname or IP address [master.host.example.com]:
Change the default subdomain, if desired:
New default subdomain (ENTER for none) []:
Note: All certificates and routes are created with this subdomain. Ensure this is set to the correct desired subdomain to avoid having to change the configuration after installation.
Specify an HTTP or HTTPS proxy, if needed:
Specify your http proxy ? (ENTER for none) []:
Specify your https proxy ? (ENTER for none) []:
After you enter these values, the next page summarizes your installation and starts gathering the host information.
For further usage details on the quick installer in general, including next steps, see the full topic at Quick Installation.
2.8.5.2. Advanced Installation for Stand-alone OpenShift Container Registry
When using the advanced installation method to install stand-alone OCR, use the same steps for installing a full OpenShift Container Platform PaaS using Ansible described in the full Advanced Installation topic. The main difference is that you must set deployment_subtype=registry in the inventory file within the [OSEv3:vars] section for the playbooks to follow the registry installation path.
See the following example inventory files for the different supported system topologies:
All-in-one Stand-alone OpenShift Container Registry Inventory File
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

openshift_master_default_subdomain=apps.test.example.com

# If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true

openshift_deployment_type=openshift-enterprise
deployment_subtype=registry 1

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
registry.example.com

# host group for nodes, includes region info
[nodes]
registry.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true 2
Multiple Masters (Highly-Available) Stand-alone OpenShift Container Registry Inventory File
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise
deployment_subtype=registry 1
openshift_master_default_subdomain=apps.test.example.com
# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com
# apply updated node defaults
openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}
# override the default controller lease ttl
#osm_controller_lease_ttl=30
# enable ntp on masters to ensure proper failover
openshift_clock_enabled=true
# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com
# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com
# Specify load balancer host
[lb]
lb.example.com
# host group for nodes, includes region info
[nodes]
master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
- 1
- Set deployment_subtype=registry to ensure installation of stand-alone OCR and not a full OpenShift Container Platform environment.
After you have configured Ansible by defining an inventory file in /etc/ansible/hosts, you can run the advanced installation using the following playbook:
# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
For more detailed usage information on the advanced installation method, including a comprehensive list of available Ansible variables, see the full topic at Advanced Installation.
Chapter 3. Setting up the Registry
3.1. Registry Overview
3.1.1. About the Registry
OpenShift Container Platform can build container images from your source code, deploy them, and manage their lifecycle. To enable this, OpenShift Container Platform provides an internal, integrated Docker registry that can be deployed in your OpenShift Container Platform environment to locally manage images.
3.1.2. Integrated or Stand-alone Registries
During an initial installation of a full OpenShift Container Platform cluster, it is likely that the registry was deployed automatically during the installation process. If it was not, or if you want to further customize the configuration of your registry, see Deploying a Registry on Existing Clusters.
While it can be deployed to run as an integrated part of your full OpenShift Container Platform cluster, the OpenShift Container Platform registry can alternatively be installed separately as a stand-alone container image registry.
To install a stand-alone registry, follow Installing a Stand-alone Registry. This installation path deploys an all-in-one cluster running a registry and specialized web console.
3.2. Deploying a Registry on Existing Clusters
3.2.1. Overview
If the integrated registry was not previously deployed automatically during the initial installation of your OpenShift Container Platform cluster, or if it is no longer running successfully and you need to redeploy it on your existing cluster, see the following sections for options on deploying a new registry.
This topic is not required if you installed a stand-alone registry.
3.2.2. Deploying the Registry
To deploy the integrated Docker registry, use the oc adm registry command as a user with cluster administrator privileges. For example:
$ oc adm registry --config=/etc/origin/master/admin.kubeconfig \ 1
    --service-account=registry \ 2
    --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' 3
This creates a service and a deployment configuration, both called docker-registry. Once deployed successfully, a pod is created with a name similar to docker-registry-1-cpty9.
To see a full list of options that you can specify when creating the registry:
$ oc adm registry --help
The value for --fs-group must be permitted by the SCC used by the registry (typically, the restricted SCC).
3.2.3. Deploying the Registry as a DaemonSet
Use the oc adm registry command to deploy the registry as a DaemonSet with the --daemonset option.
DaemonSets ensure that when nodes are created, they contain copies of a specified pod. When the nodes are removed, the pods are garbage collected.
For more information on DaemonSets, see Using Daemonsets.
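For example, a minimal sketch that combines the --daemonset option with the flags used in the deployment example above; treat it as illustrative rather than a prescribed command:
$ oc adm registry --daemonset \
    --config=/etc/origin/master/admin.kubeconfig \
    --service-account=registry \
    --images='registry.access.redhat.com/openshift3/ose-${component}:${version}'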
3.2.4. Registry Compute Resources
By default, the registry is created with no settings for compute resource requests or limits. For production, it is highly recommended that the deployment configuration for the registry be updated to set resource requests and limits for the registry pod. Otherwise, the registry pod will be considered a BestEffort pod.
See Compute Resources for more information on configuring requests and limits.
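For example, one way to apply requests and limits to the existing deployment configuration is with oc set resources. The CPU and memory values below are placeholders to adapt to your environment, not sizing recommendations:
$ oc set resources dc/docker-registry \
    --requests=cpu=100m,memory=256Mi \
    --limits=cpu=500m,memory=512Mi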
3.2.5. Storage for the Registry
The registry stores container images and metadata. If you simply deploy a pod with the registry, it uses an ephemeral volume that is destroyed if the pod exits. Any images anyone has built or pushed into the registry would disappear.
This section lists the supported registry storage drivers.
The following list includes storage drivers that need to be configured in the registry’s configuration file:
- Filesystem. Filesystem is the default and does not need to be configured.
- S3. Learn more about CloudFront configuration.
- OpenStack Swift
- Google Cloud Storage (GCS)
- Microsoft Azure
- Aliyun OSS
General registry storage configuration options are supported.
The following storage options need to be configured through the filesystem driver:
For more information on supported persistent storage drivers, see Configuring Persistent Storage and Persistent Storage Examples.
3.2.5.1. Production Use
For production use, attach a remote volume or define and use the persistent storage method of your choice.
For example, to use an existing persistent volume claim:
$ oc volume deploymentconfigs/docker-registry --add --name=registry-storage -t pvc \
    --claim-name=<pvc_name> --overwrite
Testing shows issues with using the RHEL NFS server as a storage backend for the container image registry. This includes the OpenShift Container Registry and Quay. Therefore, using the RHEL NFS server to back PVs used by core services is not recommended.
Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift core components.
3.2.5.1.1. Use Amazon S3 as a Storage Back-end
There is also an option to use Amazon Simple Storage Service (S3) storage with the internal Docker registry. S3 is secure cloud storage that can be managed through the AWS Management Console. To use it, the registry’s configuration file must be manually edited and mounted to the registry pod. However, before you start with the configuration, look at upstream’s recommended steps.
Take a default YAML configuration file as a base and replace the filesystem entry in the storage section with an s3 entry such as the one below. The resulting storage section may look like this:
storage:
  cache:
    layerinfo: inmemory
  delete:
    enabled: true
  s3:
    accesskey: awsaccesskey 1
    secretkey: awssecretkey 2
    region: us-west-1
    regionendpoint: http://myobjects.local
    bucket: bucketname
    encrypt: true
    keyid: mykeyid
    secure: true
    v4auth: false
    chunksize: 5242880
    rootdirectory: /s3/object/name/prefix
All of the s3 configuration options are documented in upstream’s driver reference documentation.
Overriding the registry configuration will take you through the additional steps on mounting the configuration file into pod.
When the registry runs on the S3 storage back-end, there are reported issues.
3.2.5.2. Non-Production Use
For non-production use, you can use the --mount-host=<path> option to specify a directory for the registry to use for persistent storage. The registry volume is then created as a host-mount at the specified <path>.
The --mount-host option mounts a directory from the node on which the registry container lives. If you scale up the docker-registry deployment configuration, it is possible that your registry pods and containers will run on different nodes, which can result in two or more registry containers, each with its own local storage. This will lead to unpredictable behavior, as subsequent requests to pull the same image repeatedly may not always succeed, depending on which container the request ultimately goes to.
The --mount-host option requires that the registry container run in privileged mode. This is automatically enabled when you specify --mount-host. However, not all pods are allowed to run privileged containers by default. If you still want to use this option, create the registry and specify that it use the registry service account that was created during installation:
$ oc adm registry --service-account=registry \
    --config=/etc/origin/master/admin.kubeconfig \
    --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' \
    --mount-host=<path>
The Docker registry pod runs as user 1001. This user must be able to write to the host directory. You may need to change directory ownership to user ID 1001 with this command:
$ sudo chown 1001:root <path>
3.2.6. Enabling the Registry Console
OpenShift Container Platform provides a web-based interface to the integrated registry. This registry console is an optional component for browsing and managing images. It is deployed as a stateless service running as a pod.
If you installed OpenShift Container Platform as a stand-alone registry, the registry console is already deployed and secured automatically during installation.
If Cockpit is already running, you’ll need to shut it down before proceeding in order to avoid a port conflict (9090 by default) with the registry console.
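On a RHEL host where Cockpit was started through its standard systemd units, shutting it down might look like the following; the unit names assume a default Cockpit installation:
$ sudo systemctl stop cockpit.socket cockpit.service
$ sudo systemctl disable cockpit.socket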
3.2.6.1. Deploying the Registry Console
You must first have exposed the registry.
Create a passthrough route in the default project. You will need this when creating the registry console application in the next step.
$ oc create route passthrough --service registry-console \
    --port registry-console \
    -n default
Deploy the registry console application. Replace <openshift_oauth_url> with the URL of the OpenShift Container Platform OAuth provider, which is typically the master.
$ oc new-app -n default --template=registry-console \
    -p OPENSHIFT_OAUTH_PROVIDER_URL="https://<openshift_oauth_url>:8443" \
    -p REGISTRY_HOST=$(oc get route docker-registry -n default --template='{{ .spec.host }}') \
    -p COCKPIT_KUBE_URL=$(oc get route registry-console -n default --template='https://{{ .spec.host }}')
If the redirection URL is wrong when you are trying to log in to the registry console, check your OAuth client with oc get oauthclients.
- Finally, use a web browser to view the console using the route URI.
3.2.6.2. Securing the Registry Console
By default, the registry console generates self-signed TLS certificates if deployed manually per the steps in Deploying the Registry Console. See Troubleshooting the Registry Console for more information.
Use the following steps to add your organization’s signed certificates as a secret volume. This assumes your certificates are available on the oc client host.
Create a .cert file containing the certificate and key. Format the file with:
- One or more BEGIN CERTIFICATE blocks for the server certificate and the intermediate certificate authorities.
- A block containing a BEGIN PRIVATE KEY or similar for the key. The key must not be encrypted.
For example:
-----BEGIN CERTIFICATE-----
MIIDUzCCAjugAwIBAgIJAPXW+CuNYS6QMA0GCSqGSIb3DQEBCwUAMD8xKTAnBgNV
BAoMIGI0OGE2NGNkNmMwNTQ1YThhZTgxOTEzZDE5YmJjMmRjMRIwEAYDVQQDDAls
...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIDUzCCAjugAwIBAgIJAPXW+CuNYS6QMA0GCSqGSIb3DQEBCwUAMD8xKTAnBgNV
BAoMIGI0OGE2NGNkNmMwNTQ1YThhZTgxOTEzZDE5YmJjMmRjMRIwEAYDVQQDDAls
...
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCyOJ5garOYw0sm
8TBCDSqQ/H1awGMzDYdB11xuHHsxYS2VepPMzMzryHR137I4dGFLhvdTvJUH8lUS
...
-----END PRIVATE KEY-----
The secured registry should contain the following Subject Alternative Names (SAN) list:
Two service hostnames.
For example:
docker-registry.default.svc.cluster.local docker-registry.default.svc
Service IP address.
For example:
172.30.124.220
Use the following command to get the Docker registry service IP address:
$ oc get service docker-registry --template='{{.spec.clusterIP}}'
Public hostname.
For example:
docker-registry-default.apps.example.com
Use the following command to get the Docker registry public hostname:
$ oc get route docker-registry --template '{{.spec.host}}'
For example, the server certificate should contain SAN details similar to the following:
X509v3 Subject Alternative Name: DNS:docker-registry-public.openshift.com, DNS:docker-registry.default.svc, DNS:docker-registry.default.svc.cluster.local, DNS:172.30.2.98, IP Address:172.30.2.98
The registry console loads a certificate from the /etc/cockpit/ws-certs.d directory. It uses the last file with a .cert extension in alphabetical order. Therefore, the .cert file should contain at least two PEM blocks formatted in the OpenSSL style.
If no certificate is found, a self-signed certificate is created using the openssl command and stored in the 0-self-signed.cert file.
Create the secret:
$ oc secrets new console-secret \
    /path/to/console.cert
Add the secrets to the registry-console deployment configuration:
$ oc volume dc/registry-console --add --type=secret \
    --secret-name=console-secret -m /etc/cockpit/ws-certs.d
This triggers a new deployment of the registry console to include your signed certificates.
3.2.6.3. Troubleshooting the Registry Console
3.2.6.3.1. Debug Mode
The registry console debug mode is enabled using an environment variable. The following command redeploys the registry console in debug mode:
$ oc set env dc registry-console G_MESSAGES_DEBUG=cockpit-ws,cockpit-wrapper
Enabling debug mode allows more verbose logging to appear in the registry console’s pod logs.
3.2.6.3.2. Display SSL Certificate Path
To check which certificate the registry console is using, you can run a command from inside the console pod.
List the pods in the default project and find the registry console’s pod name:
$ oc get pods -n default
NAME                       READY     STATUS    RESTARTS   AGE
registry-console-1-rssrw   1/1       Running   0          1d
Using the pod name from the previous command, get the certificate path that the cockpit-ws process is using. This example shows the console using the auto-generated certificate:
$ oc exec registry-console-1-rssrw remotectl certificate
certificate: /etc/cockpit/ws-certs.d/0-self-signed.cert
3.3. Accessing the Registry
3.3.1. Viewing Logs
To view the logs for the Docker registry, use the oc logs command with the deployment configuration:
$ oc logs dc/docker-registry
2015-05-01T19:48:36.300593110Z time="2015-05-01T19:48:36Z" level=info msg="version=v2.0.0+unknown"
2015-05-01T19:48:36.303294724Z time="2015-05-01T19:48:36Z" level=info msg="redis not configured" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002
2015-05-01T19:48:36.303422845Z time="2015-05-01T19:48:36Z" level=info msg="using inmemory layerinfo cache" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002
2015-05-01T19:48:36.303433991Z time="2015-05-01T19:48:36Z" level=info msg="Using OpenShift Auth handler"
2015-05-01T19:48:36.303439084Z time="2015-05-01T19:48:36Z" level=info msg="listening on :5000" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002
3.3.2. File Storage
Tag and image metadata is stored in OpenShift Container Platform, but the registry stores layer and signature data in a volume that is mounted into the registry container at /registry. As oc exec does not work on privileged containers, to view a registry’s contents you must manually SSH into the node housing the registry pod’s container, then run docker exec on the container itself:
List the current pods to find the pod name of your Docker registry:
# oc get pods
Then, use oc describe to find the host name for the node running the container:
# oc describe pod <pod_name>
Log into the desired node:
# ssh node.example.com
List the running containers from the default project on the node host and identify the container ID for the Docker registry:
# docker ps --filter=name=registry_docker-registry.*_default_
List the registry contents using the oc rsh command:
# oc rsh dc/docker-registry find /registry
/registry/docker
/registry/docker/registry
/registry/docker/registry/v2
/registry/docker/registry/v2/blobs 1
/registry/docker/registry/v2/blobs/sha256
/registry/docker/registry/v2/blobs/sha256/ed
/registry/docker/registry/v2/blobs/sha256/ed/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810
/registry/docker/registry/v2/blobs/sha256/ed/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810/data 2
/registry/docker/registry/v2/blobs/sha256/a3
/registry/docker/registry/v2/blobs/sha256/a3/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
/registry/docker/registry/v2/blobs/sha256/a3/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4/data
/registry/docker/registry/v2/blobs/sha256/f7
/registry/docker/registry/v2/blobs/sha256/f7/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845
/registry/docker/registry/v2/blobs/sha256/f7/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845/data
/registry/docker/registry/v2/repositories 3
/registry/docker/registry/v2/repositories/p1
/registry/docker/registry/v2/repositories/p1/pause 4
/registry/docker/registry/v2/repositories/p1/pause/_manifests
/registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions
/registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256
/registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf
/registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures 5
/registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures/sha256
/registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures/sha256/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810
/registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures/sha256/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810/link 6
/registry/docker/registry/v2/repositories/p1/pause/_uploads 7
/registry/docker/registry/v2/repositories/p1/pause/_layers 8
/registry/docker/registry/v2/repositories/p1/pause/_layers/sha256
/registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
/registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4/link 9
/registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845
/registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845/link
- 1
- This directory stores all layers and signatures as blobs.
- 2
- This file contains the blob’s contents.
- 3
- This directory stores all the image repositories.
- 4
- This directory is for a single image repository p1/pause.
- 5
- This directory contains signatures for a particular image manifest revision.
- 6
- This file contains a reference back to a blob (which contains the signature data).
- 7
- This directory contains any layers that are currently being uploaded and staged for the given repository.
- 8
- This directory contains links to all the layers this repository references.
- 9
- This file contains a reference to a specific layer that has been linked into this repository via an image.
3.3.3. Accessing the Registry Directly
For advanced usage, you can access the registry directly to invoke docker commands. This allows you to push images to or pull them from the integrated registry directly using operations like docker push or docker pull. To do so, you must be logged in to the registry using the docker login command. The operations you can perform depend on your user permissions, as described in the following sections.
3.3.3.1. User Prerequisites
To access the registry directly, the user that you use must satisfy the following, depending on your intended usage:
For any direct access, you must have a regular user account for your preferred identity provider; create one if it does not already exist. A regular user can generate an access token required for logging in to the registry. System users, such as system:admin, cannot obtain access tokens and, therefore, cannot access the registry directly.
For example, if you are using HTPASSWD authentication, you can create one using the following command:
# htpasswd /etc/origin/openshift-htpasswd <user_name>
For pulling images, for example when using the docker pull command, the user must have the registry-viewer role. To add this role:
$ oc policy add-role-to-user registry-viewer <user_name>
For writing or pushing images, for example when using the docker push command, the user must have the registry-editor role. To add this role:
$ oc policy add-role-to-user registry-editor <user_name>
For more information on user permissions, see Managing Role Bindings.
3.3.3.2. Logging in to the Registry
Ensure your user satisfies the prerequisites for accessing the registry directly.
To log in to the registry directly:
Ensure you are logged in to OpenShift Container Platform as a regular user:
$ oc login
Log in to the Docker registry by using your access token:
$ docker login -u openshift -p $(oc whoami -t) <registry_ip>:<port>
You can pass any value for the username; the token contains all of the necessary information. Passing a username that contains colons will result in a login failure.
3.3.3.3. Pushing and Pulling Images
After logging in to the registry, you can perform docker pull
and docker push
operations against your registry.
You can pull arbitrary images, but if you have the system:registry role added, you can only push images to the registry in your project.
In the following examples, we use:
Component | Value
---|---
<registry_ip> | 172.30.124.220
<port> | 5000
<project> | openshift
<image> | busybox
<tag> | omitted (defaults to latest)
Pull an arbitrary image:
$ docker pull docker.io/busybox
Tag the new image with the form <registry_ip>:<port>/<project>/<image>. The project name must appear in this pull specification for OpenShift Container Platform to correctly place and later access the image in the registry.
$ docker tag docker.io/busybox 172.30.124.220:5000/openshift/busybox
Note: Your regular user must have the system:image-builder role for the specified project, which allows the user to write or push an image. Otherwise, the docker push in the next step will fail. To test, you can create a new project to push the busybox image.
Push the newly-tagged image to your registry:
$ docker push 172.30.124.220:5000/openshift/busybox
...
cf2616975b4a: Image successfully pushed
Digest: sha256:3662dd821983bc4326bee12caec61367e7fb6f6a3ee547cbaff98f77403cab55
3.3.4. Accessing Registry Metrics
The OpenShift Container Registry provides an endpoint for Prometheus metrics. Prometheus is a stand-alone, open source systems monitoring and alerting toolkit.
The metrics are exposed at the /extensions/v2/metrics path of the registry endpoint. However, this route must first be enabled; see Extended Registry Configuration for instructions.
The following is a simple example of a metrics query:
$ curl -s -u <user>:<secret> \ 1
http://172.30.30.30:5000/extensions/v2/metrics | grep openshift | head -n 10
# HELP openshift_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which OpenShift was built.
# TYPE openshift_build_info gauge
openshift_build_info{gitCommit="67275e1",gitVersion="v3.6.0-alpha.1+67275e1-803",major="3",minor="6+"} 1
# HELP openshift_registry_request_duration_seconds Request latency summary in microseconds for each operation
# TYPE openshift_registry_request_duration_seconds summary
openshift_registry_request_duration_seconds{name="test/origin-pod",operation="blobstore.create",quantile="0.5"} 0
openshift_registry_request_duration_seconds{name="test/origin-pod",operation="blobstore.create",quantile="0.9"} 0
openshift_registry_request_duration_seconds{name="test/origin-pod",operation="blobstore.create",quantile="0.99"} 0
openshift_registry_request_duration_seconds_sum{name="test/origin-pod",operation="blobstore.create"} 0
openshift_registry_request_duration_seconds_count{name="test/origin-pod",operation="blobstore.create"} 5
See the upstream Prometheus documentation for more advanced queries and recommended visualizers.
3.4. Securing and Exposing the Registry
3.4.1. Overview
By default, the OpenShift Container Registry is secured during cluster installation so that it serves traffic via TLS. A passthrough route is also created by default to expose the service externally.
If for any reason your registry has not been secured or exposed, see the following sections for steps on how to manually do so.
3.4.2. Manually Securing the Registry
To manually secure the registry to serve traffic via TLS:
- Deploy the registry.
Fetch the service IP and port of the registry:
$ oc get svc/docker-registry
NAME              LABELS                    SELECTOR                  IP(S)            PORT(S)
docker-registry   docker-registry=default   docker-registry=default   172.30.124.220   5000/TCP
You can use an existing server certificate, or create a key and server certificate valid for specified IPs and host names, signed by a specified CA. To create a server certificate for the registry service IP and the docker-registry.default.svc.cluster.local host name, run the following command from the first master listed in the Ansible host inventory file, by default /etc/ansible/hosts:
$ oc adm ca create-server-cert \
    --signer-cert=/etc/origin/master/ca.crt \
    --signer-key=/etc/origin/master/ca.key \
    --signer-serial=/etc/origin/master/ca.serial.txt \
    --hostnames='docker-registry.default.svc.cluster.local,docker-registry.default.svc,172.30.124.220' \
    --cert=/etc/secrets/registry.crt \
    --key=/etc/secrets/registry.key
If the registry route will be exposed externally, add the public route host name in the --hostnames flag:
--hostnames='mydocker-registry.example.com,docker-registry.default.svc.cluster.local,172.30.124.220'
See Redeploying Registry and Router Certificates for additional details on updating the default certificate so that the route is externally accessible.
Note: The oc adm ca create-server-cert command generates a certificate that is valid for two years. This can be altered with the --expire-days option, but for security reasons, it is recommended to not make it greater than this value.
Create the secret for the registry certificates:
$ oc secrets new registry-certificates \
    /etc/secrets/registry.crt \
    /etc/secrets/registry.key
Add the secret to the registry pod’s service accounts (including the default service account):
$ oc secrets link registry registry-certificates
$ oc secrets link default registry-certificates
Note: Limiting secrets to only the service accounts that reference them is disabled by default. This means that if serviceAccountConfig.limitSecretReferences is set to false (the default setting) in the master configuration file, linking secrets to a service account is not required.
Pause the docker-registry service:
$ oc rollout pause dc/docker-registry
Add the secret volume to the registry deployment configuration:
$ oc volume dc/docker-registry --add --type=secret \
    --secret-name=registry-certificates -m /etc/secrets
Enable TLS by adding the following environment variables to the registry deployment configuration:
$ oc set env dc/docker-registry \
    REGISTRY_HTTP_TLS_CERTIFICATE=/etc/secrets/registry.crt \
    REGISTRY_HTTP_TLS_KEY=/etc/secrets/registry.key
See more details on overriding registry options.
Update the scheme used for the registry’s liveness probe from HTTP to HTTPS:
$ oc patch dc/docker-registry -p '{"spec": {"template": {"spec": {"containers":[{ "name":"registry", "livenessProbe": {"httpGet": {"scheme":"HTTPS"}} }]}}}}'
If your registry was initially deployed on OpenShift Container Platform 3.2 or later, update the scheme used for the registry’s readiness probe from HTTP to HTTPS:
$ oc patch dc/docker-registry -p '{"spec": {"template": {"spec": {"containers":[{ "name":"registry", "readinessProbe": {"httpGet": {"scheme":"HTTPS"}} }]}}}}'
Resume the docker-registry service:
$ oc rollout resume dc/docker-registry
Validate the registry is running in TLS mode. Wait until the latest docker-registry deployment completes and verify the Docker logs for the registry container. You should find an entry for listening on :5000, tls.
$ oc logs dc/docker-registry | grep tls
time="2015-05-27T05:05:53Z" level=info msg="listening on :5000, tls" instance.id=deeba528-c478-41f5-b751-dc48e4935fc2
Copy the CA certificate to the Docker certificates directory. This must be done on all nodes in the cluster:
$ dcertsdir=/etc/docker/certs.d
$ destdir_addr=$dcertsdir/172.30.124.220:5000
$ destdir_name=$dcertsdir/docker-registry.default.svc.cluster.local:5000
$ sudo mkdir -p $destdir_addr $destdir_name
$ sudo cp ca.crt $destdir_addr 1
$ sudo cp ca.crt $destdir_name
- 1
- The ca.crt file is a copy of /etc/origin/master/ca.crt on the master.
When using authentication, some versions of docker also require you to configure your cluster to trust the certificate at the OS level.
Copy the certificate:
$ cp /etc/origin/master/ca.crt /etc/pki/ca-trust/source/anchors/myregistrydomain.com.crt
Run:
$ update-ca-trust enable
Remove the --insecure-registry option only for this particular registry in the /etc/sysconfig/docker file. Then, reload the daemon and restart the docker service to reflect this configuration change:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
Validate the docker client connection. Running docker push to the registry or docker pull from the registry should succeed. Make sure you have logged in to the registry.
$ docker tag|push <registry/image> <internal_registry/project/image>
For example:
$ docker pull busybox
$ docker tag docker.io/busybox 172.30.124.220:5000/openshift/busybox
$ docker push 172.30.124.220:5000/openshift/busybox
...
cf2616975b4a: Image successfully pushed
Digest: sha256:3662dd821983bc4326bee12caec61367e7fb6f6a3ee547cbaff98f77403cab55
3.4.3. Manually Exposing a Secure Registry
Instead of logging in to the OpenShift Container Registry from within the OpenShift Container Platform cluster, you can gain external access to it by first securing the registry and then exposing it with a route. This allows you to log in to the registry from outside the cluster using the route address, and to tag and push images using the route host.
Each of the following prerequisite steps are performed by default during a typical cluster installation. If they have not been, perform them manually:
A passthrough route should have been created by default for the registry during the initial cluster installation:
Verify whether the route exists:
$ oc get route/docker-registry -o yaml
apiVersion: v1
kind: Route
metadata:
  name: docker-registry
spec:
  host: <host> 1
  to:
    kind: Service
    name: docker-registry 2
  tls:
    termination: passthrough 3
Note: Re-encrypt routes are also supported for exposing the secure registry.
If it does not exist, create the route via the oc create route passthrough command, specifying the registry as the route’s service. By default, the name of the created route is the same as the service name:
Get the docker-registry service details:
$ oc get svc
NAME              CLUSTER_IP       EXTERNAL_IP   PORT(S)                 SELECTOR                  AGE
docker-registry   172.30.69.167    <none>        5000/TCP                docker-registry=default   4h
kubernetes        172.30.0.1       <none>        443/TCP,53/UDP,53/TCP   <none>                    4h
router            172.30.172.132   <none>        80/TCP                  router=router             4h
Create the route:
$ oc create route passthrough \
    --service=docker-registry \ 1
    --hostname=<host>

route "docker-registry" created 2
Next, you must trust the certificates being used for the registry on your host system to allow the host to push and pull images. The certificates referenced were created when you secured your registry.
$ sudo mkdir -p /etc/docker/certs.d/<host>
$ sudo cp <ca_certificate_file> /etc/docker/certs.d/<host>
$ sudo systemctl restart docker
Log in to the registry using the information from securing the registry. However, this time point to the host name used in the route rather than your service IP. When logging in to a secured and exposed registry, make sure you specify the registry in the docker login command:
# docker login -e user@company.com \
    -u f83j5h6 \
    -p Ju1PeM47R0B92Lk3AZp-bWJSck2F7aGCiZ66aFGZrs2 \
    <host>
You can now tag and push images using the route host. For example, to tag and push a busybox image in a project called test:
$ oc get imagestreams -n test
NAME      DOCKER REPO   TAGS      UPDATED

$ docker pull busybox
$ docker tag busybox <host>/test/busybox
$ docker push <host>/test/busybox
The push refers to a repository [<host>/test/busybox] (len: 1)
8c2e06607696: Image already exists
6ce2e90b0bc7: Image successfully pushed
cf2616975b4a: Image successfully pushed
Digest: sha256:6c7e676d76921031532d7d9c0394d0da7c2906f4cb4c049904c4031147d8ca31

$ docker pull <host>/test/busybox
latest: Pulling from <host>/test/busybox
cf2616975b4a: Already exists
6ce2e90b0bc7: Already exists
8c2e06607696: Already exists
Digest: sha256:6c7e676d76921031532d7d9c0394d0da7c2906f4cb4c049904c4031147d8ca31
Status: Image is up to date for <host>/test/busybox:latest

$ oc get imagestreams -n test
NAME      DOCKER REPO                       TAGS      UPDATED
busybox   172.30.11.215:5000/test/busybox   latest    2 seconds ago
Note: Your image streams will have the IP address and port of the registry service, not the route name and port. See oc get imagestreams for details.
3.4.4. Manually Exposing a Non-Secure Registry
Instead of securing the registry in order to expose the registry, you can simply expose a non-secure registry for non-production OpenShift Container Platform environments. This allows you to have an external route to the registry without using SSL certificates.
Only non-production environments should expose a non-secure registry to external access.
To expose a non-secure registry:
Expose the registry:
# oc expose service docker-registry --hostname=<hostname> -n default
This creates a route with the following definition:
apiVersion: v1
kind: Route
metadata:
  creationTimestamp: null
  labels:
    docker-registry: default
  name: docker-registry
spec:
  host: registry.example.com
  port:
    targetPort: "5000"
  to:
    kind: Service
    name: docker-registry
status: {}
Verify that the route has been created successfully:
# oc get route
NAME              HOST/PORT              PATH      SERVICE           LABELS                    INSECURE POLICY   TLS TERMINATION
docker-registry   registry.example.com             docker-registry   docker-registry=default
Check the health of the registry:
$ curl -v http://registry.example.com/healthz
Expect an HTTP 200/OK message.
After exposing the registry, update your /etc/sysconfig/docker file by adding the port number to the OPTIONS entry. For example:
OPTIONS='--selinux-enabled --insecure-registry=172.30.0.0/16 --insecure-registry registry.example.com:80'
Important: The above options should be added on the client from which you are trying to log in. Also, ensure that Docker is running on the client.
When logging in to the non-secured and exposed registry, make sure you specify the registry in the docker login command. For example:
# docker login -e user@company.com \
    -u f83j5h6 \
    -p Ju1PeM47R0B92Lk3AZp-bWJSck2F7aGCiZ66aFGZrs2 \
    <host>
3.5. Extended Registry Configuration
3.5.1. Maintaining the Registry IP Address
OpenShift Container Platform refers to the integrated registry by its service IP address, so if you decide to delete and recreate the docker-registry service, you can ensure a completely transparent transition by arranging to re-use the old IP address in the new service. If a new IP address cannot be avoided, you can minimize cluster disruption by rebooting only the masters.
- Re-using the Address
- To re-use the IP address, you must save the IP address of the old docker-registry service prior to deleting it, and arrange to replace the newly assigned IP address with the saved one in the new docker-registry service.
Make a note of the clusterIP for the service:
$ oc get svc/docker-registry -o yaml | grep clusterIP:
Delete the service:
$ oc delete svc/docker-registry dc/docker-registry
Create the registry definition in registry.yaml, replacing <options> with, for example, those used in step 3 of the instructions in the Non-Production Use section:
$ oc adm registry <options> -o yaml > registry.yaml
- Edit registry.yaml, find the Service there, and change its clusterIP to the address noted in step 1.
Create the registry using the modified registry.yaml:
$ oc create -f registry.yaml
- Rebooting the Masters
If you are unable to re-use the IP address, any operation that uses a pull specification that includes the old IP address will fail. To minimize cluster disruption, you must reboot the masters:
# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
This ensures that the old registry URL, which includes the old IP address, is cleared from the cache.
Note: We recommend against rebooting the entire cluster because that incurs unnecessary downtime for pods and does not actually clear the cache.
3.5.2. Whitelisting Docker Registries
You can specify a whitelist of docker registries, allowing you to curate a set of images and templates that are available for download by OpenShift Container Platform users. This curated set can be placed in one or more docker registries, and then added to the whitelist. When using a whitelist, only the specified registries are accessible within OpenShift Container Platform, and all other registries are denied access by default.
To configure a whitelist:
Edit the /etc/sysconfig/docker file to block all registries:
BLOCK_REGISTRY='--block-registry=all'
You may need to uncomment the BLOCK_REGISTRY line.
In the same file, add registries to which you want to allow access:
ADD_REGISTRY='--add-registry=<registry1> --add-registry=<registry2>'
Allowing Access to Registries
ADD_REGISTRY='--add-registry=registry.access.redhat.com'
This example would restrict access to images available on the Red Hat Customer Portal.
Once the whitelist is configured, if a user tries to pull from a docker registry that is not on the whitelist, they will receive an error message stating that this registry is not allowed.
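As a combined sketch, the resulting /etc/sysconfig/docker entries might look like the following, where registry.example.com:5000 is a placeholder for your own curated registry:
BLOCK_REGISTRY='--block-registry=all'
ADD_REGISTRY='--add-registry=registry.access.redhat.com --add-registry=registry.example.com:5000'
Docker typically needs to be restarted on each host before the change takes effect:
$ sudo systemctl restart docker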
3.5.3. Setting the Registry Hostname
You can configure the hostname and port the registry is known by for both internal and external references. By doing this, image streams will provide hostname-based push and pull specifications for images, allowing consumers of the images to be isolated from changes to the registry service IP and potentially allowing image streams and their references to be portable between clusters.
To set the hostname used to reference the registry from within the cluster, set the internalRegistryHostname in the imagePolicyConfig section of the master configuration file. The external hostname is controlled by setting the externalRegistryHostname value in the same location.
Image Policy Configuration
imagePolicyConfig:
  internalRegistryHostname: docker-registry.default.svc.cluster.local:5000
  externalRegistryHostname: docker-registry.mycompany.com
If you have enabled TLS for your registry, the server certificate must include the hostnames by which you expect the registry to be referenced. See securing the registry for instructions on adding hostnames to the server certificate.
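As with other changes to the master configuration file, the new hostnames typically take effect only after the master services are restarted. On an RPM-based installation this is the same restart used in the registry IP section:
# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers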
3.5.4. Overriding the Registry Configuration
You can override the integrated registry’s default configuration, found by default at /config.yml in a running registry’s container, with your own custom configuration.
Upstream configuration options in this file may also be overridden using environment variables. The middleware section is an exception, as there are just a few options that can be overridden using environment variables. Learn how to override specific configuration options.
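For example, upstream options generally map to environment variables of the form REGISTRY_<SECTION>_<OPTION>. A sketch that temporarily raises the log level on the deployed registry while troubleshooting (revert it the same way when finished):
$ oc set env dc/docker-registry REGISTRY_LOG_LEVEL=debug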
To enable management of the registry configuration file directly and deploy an updated configuration using a ConfigMap:
- Deploy the registry.
Edit the registry configuration file locally as needed. The initial YAML file deployed on the registry is provided below. Review supported options.
Registry Configuration File
version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /registry
  delete:
    enabled: true
auth:
  openshift:
    realm: openshift
middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
      options:
        acceptschema2: true
        pullthrough: true
        enforcequota: false
        projectcachettl: 1m
        blobrepositorycachettl: 10m
  storage:
    - name: openshift
openshift:
  version: 1.0
  metrics:
    enabled: false
    secret: <secret>
Create a ConfigMap holding the content of each file in this directory:
$ oc create configmap registry-config \
    --from-file=</path/to/custom/registry/config.yml>/
Add the registry-config ConfigMap as a volume to the registry’s deployment configuration to mount the custom configuration file at /etc/docker/registry/:
$ oc volume dc/docker-registry --add --type=configmap \
    --configmap-name=registry-config -m /etc/docker/registry/
Update the registry to reference the configuration path from the previous step by adding the following environment variable to the registry’s deployment configuration:
$ oc set env dc/docker-registry \
    REGISTRY_CONFIGURATION_PATH=/etc/docker/registry/config.yml
This may be performed as an iterative process to achieve the desired configuration. For example, during troubleshooting, the configuration may be temporarily updated to put it in debug mode.
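As a quick, non-authoritative way to confirm that the custom configuration is wired in, you can inspect the mounted volume and the environment variable, then trigger a new rollout (a sketch; output is not shown):
$ oc volume dc/docker-registry
$ oc set env dc/docker-registry --list | grep REGISTRY_CONFIGURATION_PATH
$ oc rollout latest docker-registry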
To update an existing configuration:
This procedure will overwrite the currently deployed registry configuration.
- Edit the local registry configuration file, config.yml.
Delete the registry-config secret:
$ oc delete secret registry-config
Recreate the secret to reference the updated configuration file:
$ oc secrets new registry-config config.yml=</path/to/custom/registry/config.yml>
Redeploy the registry to read the updated configuration:
$ oc rollout latest docker-registry
Maintain configuration files in a source control repository.
3.5.5. Registry Configuration Reference
There are many configuration options available in the upstream docker distribution library. Not all configuration options are supported or enabled. Use this section as a reference when overriding the registry configuration.
Upstream configuration options in this file may also be overridden using environment variables. However, the middleware section may not be overridden using environment variables. Learn how to override specific configuration options.
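For example, following the upstream convention of naming the variable REGISTRY_ plus the upper-cased configuration path (an assumption based on the upstream docker distribution documentation, so verify it against your registry version), the log level could be changed without editing the file:
$ oc set env dc/docker-registry REGISTRY_LOG_LEVEL=info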
3.5.5.1. Log
Upstream options are supported.
Example:
log:
  level: debug
  formatter: text
  fields:
    service: registry
    environment: staging
3.5.5.2. Hooks
Mail hooks are not supported.
3.5.5.3. Storage
This section lists the supported registry storage drivers.
The following list includes storage drivers that need to be configured in the registry’s configuration file:
- Filesystem. Filesystem is the default and does not need to be configured.
- S3. Learn more about CloudFront configuration.
- OpenStack Swift
- Google Cloud Storage (GCS)
- Microsoft Azure
- Aliyun OSS
General registry storage configuration options are supported.
The following storage options need to be configured through the filesystem driver:
For more information on supported persistent storage drivers, see Configuring Persistent Storage and Persistent Storage Examples.
General Storage Configuration Options
storage:
delete:
enabled: true 1
redirect:
disable: false
cache:
blobdescriptor: inmemory
maintenance:
uploadpurging:
enabled: true
age: 168h
interval: 24h
dryrun: false
readonly:
enabled: false
- 1
- This entry is mandatory for image pruning to work properly.
3.5.5.4. Auth
Auth options should not be altered. The openshift extension is the only supported option.
auth:
  openshift:
    realm: openshift
3.5.5.5. Middleware
The repository middleware extension allows you to configure the OpenShift Container Platform middleware responsible for interacting with OpenShift Container Platform and for image proxying.
middleware:
  registry:
    - name: openshift 1
  repository:
    - name: openshift 2
      options:
        acceptschema2: true 3
        pullthrough: true 4
        mirrorpullthrough: true 5
        enforcequota: false 6
        projectcachettl: 1m 7
        blobrepositorycachettl: 10m 8
  storage:
    - name: openshift 9
- 1 2 9
- These entries are mandatory. Their presence ensures required components are loaded. These values should not be changed.
- 3
- Allows you to store manifest schema v2 during a push to the registry. See below for more details.
- 4
- Allows the registry to act as a proxy for remote blobs. See below for more details.
- 5
- Allows blobs served from remote registries to be cached (mirrored) locally for fast access later. The mirroring starts when the blob is accessed for the first time. The option has no effect if pullthrough is disabled.
- 6
- Prevents blob uploads that exceed the size limits defined in the targeted project.
- 7
- An expiration timeout for limits cached in the registry. The lower the value, the less time it takes for the limit changes to propagate to the registry. However, the registry will query limits from the server more frequently and, as a consequence, pushes will be slower.
- 8
- An expiration timeout for remembered associations between a blob and a repository. The higher the value, the higher the probability of a fast lookup and more efficient registry operation. On the other hand, memory usage rises, as does the risk of serving an image layer to a user who is no longer authorized to access it.
3.5.5.5.1. CloudFront Middleware
The CloudFront middleware extension can be added to support the AWS CloudFront CDN storage provider. CloudFront middleware speeds up distribution of image content internationally. The blobs are distributed to several edge locations around the world. The client is always directed to the edge with the lowest latency.
The CloudFront middleware extension can only be used with S3 storage. It is used only during blob serving, so only blob downloads can be sped up, not uploads.
The following is an example of minimal configuration of S3 storage driver with a CloudFront middleware:
version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    blobdescriptor: inmemory
  delete:
    enabled: true
  s3: 1
    accesskey: BJKMSZBRESWJQXRWMAEQ
    secretkey: 5ah5I91SNXbeoUXXDasFtadRqOdy62JzlnOW1goS
    region: us-east-1
    bucket: docker.myregistry.com
auth:
  openshift:
    realm: openshift
middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
  storage:
    - name: cloudfront 2
      options:
        baseurl: https://jrpbyn0k5k88bi.cloudfront.net/ 3
        privatekey: /etc/docker/cloudfront-ABCEDFGHIJKLMNOPQRST.pem 4
        keypairid: ABCEDFGHIJKLMNOPQRST 5
    - name: openshift
- 1
- The S3 storage must be configured the same way regardless of CloudFront middleware.
- 2
- The CloudFront storage middleware needs to be listed before OpenShift middleware.
- 3
- The CloudFront base URL. In the AWS management console, this is listed as Domain Name of CloudFront distribution.
- 4
- The location of your AWS private key on the filesystem. This must not be confused with an Amazon EC2 key pair. See the AWS documentation on creating CloudFront key pairs for your trusted signers. The file needs to be mounted as a secret into the registry pod.
- 5
- The ID of your CloudFront key pair.
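The private key file itself is not created by OpenShift Container Platform. One hedged way to make it available inside the registry pod is to store it in a secret and mount it at a path of your choosing; the secret name and mount path below are illustrative assumptions, not fixed names:
$ oc create secret generic cloudfront-key \
    --from-file=cloudfront-ABCEDFGHIJKLMNOPQRST.pem=</path/to/pk-ABCEDFGHIJKLMNOPQRST.pem>
$ oc volume dc/docker-registry --add --type=secret \
    --secret-name=cloudfront-key -m /etc/docker/cloudfront
With this mount, the privatekey value in the example above would be adjusted to the mounted path, for example /etc/docker/cloudfront/cloudfront-ABCEDFGHIJKLMNOPQRST.pem.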
3.5.5.5.2. Overriding Middleware Configuration Options
In general, the middleware section cannot be overridden using environment variables. There are a few exceptions. For example:
middleware:
  repository:
    - name: openshift
      options:
        acceptschema2: true 1
        pullthrough: true 2
        mirrorpullthrough: true 3
        enforcequota: false 4
        projectcachettl: 1m 5
        blobrepositorycachettl: 10m 6
- 1
- A configuration option that can be overridden by the boolean environment variable
REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ACCEPTSCHEMA2
, which allows the registry to accept manifest schema v2 on manifest put requests. Recognized values are true and false (which applies to all the other boolean variables below).
- 2
- A configuration option that can be overridden by the boolean environment variable
REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_PULLTHROUGH
, which enables a proxy mode for remote repositories. - 3
- A configuration option that can be overridden by the boolean environment variable
REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_MIRRORPULLTHROUGH
, which instructs registry to mirror blobs locally if serving remote blobs. - 4
- A configuration option that can be overridden by the boolean environment variable
REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA
, which allows the ability to turn quota enforcement on or off. By default, quota enforcement is off. - 5
- A configuration option that can be overridden by the environment variable
REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_PROJECTCACHETTL
, specifying an eviction timeout for project quota objects. It takes a valid time duration string (for example, 2m). If empty, the default timeout is used. If zero (0m), caching is disabled.
- 6
- A configuration option that can be overridden by the environment variable
REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_BLOBREPOSITORYCACHETTL
, specifying an eviction timeout for associations between a blob and its containing repository. The format of the value is the same as for projectcachettl.
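For instance, to turn on acceptance of manifest schema v2 without editing the configuration file, the corresponding environment variable documented above can be set on the registry's deployment configuration (a sketch; a new rollout applies the change):
$ oc set env dc/docker-registry REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ACCEPTSCHEMA2=true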
3.5.5.5.3. Image Pullthrough
If enabled, the registry attempts to fetch a requested blob from a remote registry unless the blob exists locally. The remote candidates are calculated from the DockerImage entries stored in the status of the image stream that the client pulls from. All the unique remote registry references in such entries are tried in turn until the blob is found.
Pullthrough will only occur if an image stream tag exists for the image being pulled. For example, if the image being pulled is docker-registry.default.svc:5000/yourproject/yourimage:prod
then the registry will look for an image stream tag named yourimage:prod
in the project yourproject
. If it finds one, it will attempt to pull the image using the dockerImageReference
associated with that image stream tag.
When performing pullthrough, the registry uses the pull credentials found in the project associated with the image stream tag that is being referenced. This capability also makes it possible for you to pull images that reside on a registry you do not have credentials to access, as long as you have access to the image stream tag that references the image.
You must ensure that your registry has appropriate certificates to trust any external registries you do a pullthrough against. The certificates need to be placed in the /etc/pki/tls/certs directory on the pod. You can mount the certificates using a configuration map or secret. Note that the entire /etc/pki/tls/certs directory must be replaced. You must include the new certificates and replace the system certificates in your secret or configuration map that you mount.
Note that, by default, image stream tags use a reference policy type of Source, which means that when the image stream reference is resolved to an image pull specification, the specification points to the source of the image. For images hosted on external registries, this is the external registry, so the resource references and pulls the image from the external registry (for example, registry.access.redhat.com/openshift3/jenkins-2-rhel7) and pullthrough does not apply. To ensure that resources referencing image streams use a pull specification that points to the internal registry, the image stream tag should use a reference policy type of Local. More information is available on Reference Policy.
This feature is on by default. However, it can be disabled using a configuration option.
By default, all the remote blobs served this way are stored locally for subsequent faster access unless mirrorpullthrough
is disabled. The downside of this mirroring feature is an increased storage usage.
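As a sketch, using the environment variables described in the middleware section above, pullthrough itself or just the local mirroring of remote blobs could be switched off:
$ oc set env dc/docker-registry REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_PULLTHROUGH=false
$ oc set env dc/docker-registry REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_MIRRORPULLTHROUGH=false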
The mirroring starts when a client tries to fetch at least a single byte of the blob. To pre-fetch a particular image into integrated registry before it is actually needed, you can run the following command:
$ oc get imagestreamtag/${IS}:${TAG} -o jsonpath='{ .image.dockerImageLayers[*].name }' | \
  xargs -n1 -I {} curl -H "Range: bytes=0-1" -u user:${TOKEN} \
  http://${REGISTRY_IP}:${PORT}/v2/default/mysql/blobs/{}
This OpenShift Container Platform mirroring feature should not be confused with the upstream registry pull through cache feature, which is a similar but distinct capability.
3.5.5.5.4. Manifest Schema v2 Support
Each image has a manifest describing its blobs, instructions for running it, and additional metadata. The manifest is versioned, with each version having a different structure and fields as it evolves over time. The same image can be represented by multiple manifest versions, but each version has a different digest.
The registry currently supports manifest v2 schema 1 (schema1) and manifest v2 schema 2 (schema2). The former is being obsoleted but will continue to be supported for an extended period.
You should be wary of compatibility issues with various Docker clients:
- Docker clients of version 1.9 or older support only schema1. Any manifest this client pulls or pushes will be of this legacy schema.
- Docker clients of version 1.10 support both schema1 and schema2. By default, they push schema2 to the registry if it supports the newer schema.
If the registry stores an image with schema1, it always returns it unchanged to the client. Schema2 is transferred unchanged only to newer Docker clients; for older clients, it is converted on the fly to schema1.
This has significant consequences. For example, an image pushed to the registry by a newer Docker client cannot be pulled by an older Docker client by its digest, because the stored image's manifest is schema2 and its digest can only be used to pull that version of the manifest.
For this reason, the registry is configured by default not to store schema2. This ensures that any docker client can pull from the registry any image pushed there, regardless of the client's version.
Once you are confident that all registry clients support schema2, you can safely enable its support in the registry. See the middleware configuration reference above for the relevant option.
3.5.5.6. OpenShift
This section reviews the configuration of global settings for features specific to OpenShift Container Platform. In a future release, openshift
-related settings in the Middleware section will be obsoleted.
Currently, this section allows you to configure registry metrics collection:
openshift:
  version: 1.0 1
  server:
    addr: docker-registry.default.svc 2
  metrics:
    enabled: false 3
    secret: <secret> 4
  requests:
    read:
      maxrunning: 10 5
      maxinqueue: 10 6
      maxwaitinqueue: 2m 7
    write:
      maxrunning: 10 8
      maxinqueue: 10 9
      maxwaitinqueue: 2m 10
- 1
- A mandatory entry specifying configuration version of this section. The only supported value is
1.0
. - 2
- The hostname of the registry. Should be set to the same value configured on the master. It can be overridden by the environment variable
REGISTRY_OPENSHIFT_SERVER_ADDR
. - 3
- Can be set to true to enable metrics collection. It can be overridden by the boolean environment variable REGISTRY_OPENSHIFT_METRICS_ENABLED.
- 4
- A secret used to authorize client requests. Metrics clients must use it as a bearer token in the Authorization header. It can be overridden by the environment variable REGISTRY_OPENSHIFT_METRICS_SECRET.
- 5
- Maximum number of simultaneous pull requests. It can be overridden by the environment variable
REGISTRY_OPENSHIFT_REQUESTS_READ_MAXRUNNING
. Zero indicates no limit. - 6
- Maximum number of queued pull requests. It can be overridden by the environment variable
REGISTRY_OPENSHIFT_REQUESTS_READ_MAXINQUEUE
. Zero indicates no limit. - 7
- Maximum time a pull request can wait in the queue before being rejected. It can be overridden by the environment variable
REGISTRY_OPENSHIFT_REQUESTS_READ_MAXWAITINQUEUE
. Zero indicates no limit. - 8
- Maximum number of simultaneous push requests. It can be overridden by the environment variable
REGISTRY_OPENSHIFT_REQUESTS_WRITE_MAXRUNNING
. Zero indicates no limit. - 9
- Maximum number of queued push requests. It can be overridden by the environment variable
REGISTRY_OPENSHIFT_REQUESTS_WRITE_MAXINQUEUE
. Zero indicates no limit. - 10
- Maximum time a push request can wait in the queue before being rejected. It can be overridden by the environment variable
REGISTRY_OPENSHIFT_REQUESTS_WRITE_MAXWAITINQUEUE
. Zero indicates no limit.
See Accessing Registry Metrics for usage information.
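As a hedged example, metrics collection could be enabled with the environment variables above and then scraped using the bearer secret; the endpoint path shown here is an assumption, so verify it against Accessing Registry Metrics for your version:
$ oc set env dc/docker-registry \
    REGISTRY_OPENSHIFT_METRICS_ENABLED=true \
    REGISTRY_OPENSHIFT_METRICS_SECRET=<secret>
$ curl -s -H "Authorization: Bearer <secret>" \
    https://docker-registry.default.svc:5000/extensions/v2/metrics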
3.5.5.7. Reporting
Reporting is unsupported.
3.5.5.8. HTTP
Upstream options are supported. Learn how to alter these settings via environment variables. Only the tls section should be altered. For example:
http:
  addr: :5000
  tls:
    certificate: /etc/secrets/registry.crt
    key: /etc/secrets/registry.key
3.5.5.9. Notifications
Upstream options are supported.
Example:
notifications:
  endpoints:
    - name: registry
      disabled: false
      url: https://url:port/path
      headers:
        Accept:
          - text/plain
      timeout: 500
      threshold: 5
      backoff: 1000
3.5.5.10. Redis
Redis is not supported.
3.5.5.11. Health
Upstream options are supported. The registry deployment configuration provides an integrated health check at /healthz.
3.5.5.12. Proxy
Proxy configuration should not be enabled. This functionality is provided by the OpenShift Container Platform repository middleware extension, pullthrough: true.
3.6. Known Issues
3.6.1. Overview
The following are the known issues when deploying or using the integrated registry.
3.6.2. Image Push Errors with Scaled Registry Using Shared NFS Volume
When using a scaled registry with a shared NFS volume, you may see one of the following errors during the push of an image:
-
digest invalid: provided digest did not match uploaded content
-
blob upload unknown
-
blob upload invalid
These errors are returned by an internal registry service when Docker attempts to push the image. The cause originates in the synchronization of file attributes across nodes. Factors such as NFS client-side caching, network latency, and layer size can all contribute to potential errors that might occur when pushing an image using the default round-robin load balancing configuration.
You can perform the following steps to minimize the probability of such a failure:
Ensure that the sessionAffinity of your docker-registry service is set to ClientIP:
$ oc get svc/docker-registry --template='{{.spec.sessionAffinity}}'
This should return ClientIP, which is the default in recent OpenShift Container Platform versions. If not, change it:
$ oc patch svc/docker-registry -p '{"spec":{"sessionAffinity": "ClientIP"}}'
-
Ensure that the NFS export line of your registry volume on your NFS server has the no_wdelay option listed. The no_wdelay option prevents the server from delaying writes, which greatly improves read-after-write consistency, a requirement of the registry.
Testing shows issues with using the RHEL NFS server as a storage backend for the container image registry. This includes the OpenShift Container Registry and Quay. Therefore, using the RHEL NFS server to back PVs used by core services is not recommended.
Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift core components.
3.6.3. Pull of Internally Managed Image Fails with "not found" Error
This error occurs when the pulled image is pushed to an image stream different from the one it is being pulled from. This is caused by re-tagging a built image into an arbitrary image stream:
$ oc tag srcimagestream:latest anyproject/pullimagestream:latest
And subsequently pulling from it, using an image reference such as:
internal.registry.url:5000/anyproject/pullimagestream:latest
During a manual Docker pull, this will produce a similar error:
Error: image anyproject/pullimagestream:latest not found
To prevent this, avoid the tagging of internally managed images completely, or re-push the built image to the desired namespace manually.
3.6.4. Image Push Fails with "500 Internal Server Error" on S3 Storage
Problems have been reported when the registry runs on an S3 storage back-end. Pushing to a Docker registry occasionally fails with the following error:
Received unexpected HTTP status: 500 Internal Server Error
To debug this, you need to view the registry logs. In there, look for similar error messages occurring at the time of the failed push:
time="2016-03-30T15:01:21.22287816-04:00" level=error msg="unknown error completing upload: driver.Error{DriverName:\"s3\", Enclosed:(*url.Error)(0xc20901cea0)}" http.request.method=PUT ... time="2016-03-30T15:01:21.493067808-04:00" level=error msg="response completed with error" err.code=UNKNOWN err.detail="s3: Put https://s3.amazonaws.com/oso-tsi-docker/registry/docker/registry/v2/blobs/sha256/ab/abe5af443833d60cf672e2ac57589410dddec060ed725d3e676f1865af63d2e2/data: EOF" err.message="unknown error" http.request.method=PUT ... time="2016-04-02T07:01:46.056520049-04:00" level=error msg="error putting into main store: s3: The request signature we calculated does not match the signature you provided. Check your key and signing method." http.request.method=PUT atest
If you see such errors, contact your Amazon S3 support. There may be a problem in your region or with your particular bucket.
3.6.5. Image Pruning Fails
If you encounter the following error when pruning images:
BLOB sha256:49638d540b2b62f3b01c388e9d8134c55493b1fa659ed84e97cb59b87a6b8e6c error deleting blob
And your registry log contains the following information:
error deleting blob \"sha256:49638d540b2b62f3b01c388e9d8134c55493b1fa659ed84e97cb59b87a6b8e6c\": operation unsupported
It means that your custom configuration file lacks mandatory entries in the storage section, namely storage:delete:enabled
set to true. Add them, re-deploy the registry, and repeat your image pruning operation.
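If you prefer not to edit the configuration file, the same setting can usually be applied through the upstream environment variable override described in the Registry Configuration Reference (this variable name is an assumption based on that convention) before re-running the prune:
$ oc set env dc/docker-registry REGISTRY_STORAGE_DELETE_ENABLED=true
$ oc rollout latest docker-registry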
Chapter 4. Setting up a Router
4.1. Router Overview
4.1.1. About Routers
There are many ways to get traffic into the cluster. The most common approach is to use the OpenShift Container Platform router as the ingress point for external traffic destined for services in your OpenShift Container Platform installation.
OpenShift Container Platform provides and supports the following router plug-ins:
- The HAProxy template router is the default plug-in. It uses the openshift3/ose-haproxy-router image to run an HAProxy instance alongside the template router plug-in inside a container on OpenShift Container Platform. It currently supports HTTP(S) traffic and TLS-enabled traffic via SNI. The router’s container listens on the host network interface, unlike most containers that listen only on private IPs. The router proxies external requests for route names to the IPs of actual pods identified by the service associated with the route.
- The F5 router integrates with an existing F5 BIG-IP® system in your environment to synchronize routes. F5 BIG-IP® version 11.4 or newer is required in order to have the F5 iControl REST API.
The F5 router plug-in is available starting in OpenShift Container Platform 3.0.2.
4.1.2. Router Service Account
Before deploying an OpenShift Container Platform cluster, you must have a service account for the router. Starting in OpenShift Container Platform 3.1, a router service account is automatically created during a quick or advanced installation (previously, this required manual creation). This service account has permissions to a security context constraint (SCC) that allows it to specify host ports.
4.1.2.1. Permission to Access Labels
When namespace labels are used, for example in creating router shards, the service account for the router must have cluster-reader
permission.
$ oc adm policy add-cluster-role-to-user \ cluster-reader \ system:serviceaccount:default:router
With a service account in place, you can proceed to installing a default HAProxy Router, a customized HAProxy Router or F5 Router.
4.2. Using the Default HAProxy Router
4.2.1. Overview
The oc adm router
command is provided with the administrator CLI to simplify the tasks of setting up routers in a new installation. If you followed the quick installation, then a default router was automatically created for you. The oc adm router
command creates the service and deployment configuration objects. Use the --service-account
option to specify the service account the router will use to contact the master.
The router service account can be created in advance or created by the oc adm router --service-account
command.
Every form of communication between OpenShift Container Platform components is secured by TLS and uses various certificates and authentication methods. A .pem format file can be supplied with the --default-certificate option, or one is created by the oc adm router command. When routes are created, the user can provide route certificates that the router uses when handling the route.
When deleting a router, ensure the deployment configuration, service, and secret are deleted as well.
Routers are deployed on specific nodes. This makes it easier for the cluster administrator and external network manager to coordinate which IP address will run a router and which traffic the router will handle. The routers are deployed on specific nodes by using node selectors.
Routers use host networking by default, and they directly attach to port 80 and 443 on all interfaces on a host. Restrict routers to hosts where ports 80/443 are available and not being consumed by another service, and set this using node selectors and the scheduler configuration. As an example, you can achieve this by dedicating infrastructure nodes to run services such as routers.
It is recommended to use a separate, distinct openshift-router service account with your router. This can be provided using the --service-account
flag to the oc adm router
command.
$ oc adm router --dry-run --service-account=router 1
Router pods created using oc adm router
have default resource requests that a node must satisfy for the router pod to be deployed. In an effort to increase the reliability of infrastructure components, the default resource requests are used to increase the QoS tier of the router pods above pods without resource requests. The default values represent the observed minimum resources required for a basic router to be deployed. They can be edited in the router's deployment configuration, and you may want to increase them based on the load of the router.
4.2.2. Creating a Router
The quick installation process automatically creates a default router. If the router does not exist, run the following to create a router:
$ oc adm router <router_name> --replicas=<number> --service-account=router
--replicas
is usually 1
unless a high availability configuration is being created.
To find the host IP address of the router:
$ oc get po <router-pod> --template={{.status.hostIP}}
You can also use router shards to ensure that the router is filtered to specific namespaces or routes, or set any environment variables after router creation. In this case create a router for each shard.
4.2.3. Other Basic Router Commands
- Checking the Default Router
- The default router service account, named router, is automatically created during quick and advanced installations. To verify that this account already exists:
$ oc adm router --dry-run --service-account=router
- Viewing the Default Router
- To see what the default router would look like if created:
$ oc adm router --dry-run -o yaml --service-account=router
- Deploying the Router to a Labeled Node
- To deploy the router to any node(s) that match a specified node label:
$ oc adm router <router_name> --replicas=<number> --selector=<label> \ --service-account=router
For example, if you want to create a router named router
and have it placed on a node labeled with region=infra
:
$ oc adm router router --replicas=1 --selector='region=infra' \ --service-account=router
During advanced installation, the openshift_router_selector
and openshift_registry_selector
Ansible settings are set to region=infra by default. The default router and registry will only be automatically deployed if a node exists that matches the region=infra label.
For information on updating labels, see Updating Labels on Nodes.
Multiple instances are created on different hosts according to the scheduler policy.
- Using a Different Router Image
- To use a different router image and view the router configuration that would be used:
$ oc adm router <router_name> -o <format> --images=<image> \ --service-account=router
For example:
$ oc adm router region-west -o yaml --images=myrepo/somerouter:mytag \ --service-account=router
4.2.4. Filtering Routes to Specific Routers
Using the ROUTE_LABELS
environment variable, you can filter routes so that they are used only by specific routers.
For example, if you have multiple routers, and 100 routes, you can attach labels to the routes so that a portion of them are handled by one router, whereas the rest are handled by another.
After creating a router, use the ROUTE_LABELS environment variable to tag the router:
$ oc env dc/<router_name> ROUTE_LABELS="key=value"
Add the label to the desired routes:
$ oc label route <route_name> key=value
To verify that the label has been attached to the route, check the route configuration:
$ oc describe route/<route_name>
- Setting the Maximum Number of Concurrent Connections
-
The router can handle a maximum of 20000 connections by default. You can change that limit depending on your needs. Having too few connections prevents the health check from working, which causes unnecessary restarts. You need to configure the system to support the maximum number of connections. The limits shown in sysctl fs.nr_open and sysctl fs.file-max must be large enough. Otherwise, HAProxy will not start.
When the router is created, the --max-connections=
option sets the desired limit:
$ oc adm router --max-connections=10000 ....
Edit the ROUTER_MAX_CONNECTIONS
environment variable in the router’s deployment configuration to change the value. The router pods are restarted with the new value. If ROUTER_MAX_CONNECTIONS
is not present, the default value of 20000 is used.
A connection includes the frontend and the internal backend, which counts as two connections. Be sure to set ROUTER_MAX_CONNECTIONS to double the number of connections you intend to create.
4.2.5. HAProxy Strict SNI
The HAProxy strict-sni
can be controlled through the ROUTER_STRICT_SNI
environment variable in the router’s deployment configuration. It can also be set when the router is created by using the --strict-sni
command line option.
$ oc adm router --strict-sni
4.2.6. TLS Cipher Suites
Set the router cipher suite using the --ciphers
option when creating a router:
$ oc adm router --ciphers=modern ....
The values are: modern
, intermediate
, or old
, with intermediate
as the default. Alternatively, a set of ":" separated ciphers can be provided. The ciphers must be from the set displayed by:
$ openssl ciphers
Alternatively, use the ROUTER_CIPHERS
environment variable for an existing router.
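For example, to switch an already deployed router to the modern cipher profile by using that environment variable instead of recreating the router:
$ oc set env dc/router ROUTER_CIPHERS=modern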
4.2.7. Highly-Available Routers
You can set up a highly-available router on your OpenShift Container Platform cluster using IP failover. This setup has multiple replicas on different nodes so the failover software can switch to another replica if the current one fails.
4.2.8. Customizing the Router Service Ports
You can customize the service ports that a template router binds to by setting the environment variables ROUTER_SERVICE_HTTP_PORT
and ROUTER_SERVICE_HTTPS_PORT
. This can be done by creating a template router, then editing its deployment configuration.
The following example creates a router deployment with 0
replicas and customizes the router service HTTP and HTTPS ports, then scales it appropriately (to 1
replica).
$ oc adm router --replicas=0 --ports='10080:10080,10443:10443' 1
$ oc set env dc/router ROUTER_SERVICE_HTTP_PORT=10080 \
ROUTER_SERVICE_HTTPS_PORT=10443
$ oc scale dc/router --replicas=1
- 1
- Ensures exposed ports are appropriately set for routers that use the container networking mode
--host-network=false
.
If you do customize the template router service ports, you will also need to ensure that the nodes where the router pods run have those custom ports opened in the firewall (either via Ansible or iptables
, or any other custom method that you use via firewall-cmd
).
The following is an example using iptables
to open the custom router service ports.
$ iptables -A INPUT -p tcp --dport 10080 -j ACCEPT $ iptables -A INPUT -p tcp --dport 10443 -j ACCEPT
4.2.9. Working With Multiple Routers
An administrator can create multiple routers with the same definition to serve the same set of routes. Each router will be on a different node and will have a different IP address. The network administrator will need to get the desired traffic to each node.
Multiple routers can be grouped to distribute routing load in the cluster and separate tenants to different routers or shards. Each router or shard in the group admits routes based on the selectors in the router. An administrator can create shards over the whole cluster using ROUTE_LABELS
. A user can create shards over a namespace (project) by using NAMESPACE_LABELS
.
4.2.10. Adding a Node Selector to a Deployment Configuration
Making specific routers deploy on specific nodes requires two steps:
Add a label to the desired node:
$ oc label node 10.254.254.28 "router=first"
Add a node selector to the router deployment configuration:
$ oc edit dc <deploymentConfigName>
Add the template.spec.nodeSelector field with a key and value corresponding to the label:
...
  template:
    metadata:
      creationTimestamp: null
      labels:
        router: router1
    spec:
      nodeSelector: 1
        router: "first"
...
- 1
- The key and value are
router
andfirst
, respectively, corresponding to therouter=first
label.
4.2.11. Using Router Shards
Router sharding uses NAMESPACE_LABELS
and ROUTE_LABELS
, to filter router namespaces and routes. This enables you to distribute subsets of routes over multiple router deployments. By using non-overlapping subsets, you can effectively partition the set of routes. Alternatively, you can define shards comprising overlapping subsets of routes.
By default, a router selects all routes from all projects (namespaces). Sharding involves adding labels to routes or namespaces and label selectors to routers. Each router shard comprises the routes that are selected by a specific set of label selectors or belong to the namespaces that are selected by a specific set of label selectors.
The router service account must have the cluster-reader permission to allow access to labels in other namespaces.
Router Sharding and DNS
Because an external DNS server is needed to route requests to the desired shard, the administrator is responsible for making a separate DNS entry for each router in a project. A router will not forward unknown routes to another router.
Consider the following example:
-
Router A lives on host 192.168.0.5 and has routes with
*.foo.com
. -
Router B lives on host 192.168.1.9 and has routes with
*.example.com
.
Separate DNS entries must resolve *.foo.com to the node hosting Router A and *.example.com to the node hosting Router B:
-
*.foo.com A IN 192.168.0.5
-
*.example.com A IN 192.168.1.9
Router Sharding Examples
This section describes router sharding using namespace and route labels.
Figure 4.1. Router Sharding Based on Namespace Labels
Configure a router with a namespace label selector:
$ oc set env dc/router NAMESPACE_LABELS="router=r1"
Because the router has a selector on the namespace, the router will handle routes only for matching namespaces. In order to make this selector match a namespace, label the namespace accordingly:
$ oc label namespace default "router=r1"
Now, if you create a route in the default namespace, the route is available in the default router:
$ oc create -f route1.yaml
Create a new project (namespace) and create a route, route2:
$ oc new-project p1
$ oc create -f route2.yaml
Notice the route is not available in your router.
Label namespace p1 with router=r1:
$ oc label namespace p1 "router=r1"
Adding this label makes the route available in the router.
- Example
A router deployment finops-router is configured with the label selector NAMESPACE_LABELS="name in (finance, ops)", and a router deployment dev-router is configured with the label selector NAMESPACE_LABELS="name=dev".
If all routes are in namespaces labeled name=finance, name=ops, and name=dev, then this configuration effectively distributes your routes between the two router deployments.
In the above scenario, sharding becomes a special case of partitioning, with no overlapping subsets. Routes are divided between router shards. A sketch of the corresponding commands follows this example.
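A minimal sketch of creating these two router deployments and setting their selectors; the replica counts and the router service account are assumptions to adapt for your environment:
$ oc adm router finops-router --replicas=0 --service-account=router
$ oc set env dc/finops-router NAMESPACE_LABELS="name in (finance, ops)"
$ oc scale dc/finops-router --replicas=2

$ oc adm router dev-router --replicas=0 --service-account=router
$ oc set env dc/dev-router NAMESPACE_LABELS="name=dev"
$ oc scale dc/dev-router --replicas=1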
The criteria for route selection govern how the routes are distributed. It is possible to have overlapping subsets of routes across router deployments.
- Example
In addition to finops-router and dev-router in the example above, you also have devops-router, which is configured with the label selector NAMESPACE_LABELS="name in (dev, ops)".
The routes in namespaces labeled name=dev or name=ops are now serviced by two different router deployments. This becomes a case in which you have defined overlapping subsets of routes, as illustrated in the procedure in Router Sharding Based on Namespace Labels.
In addition, this enables you to create more complex routing rules, allowing the diversion of higher-priority traffic to the dedicated finops-router while sending lower-priority traffic to devops-router.
Router Sharding Based on Route Labels
NAMESPACE_LABELS
allows filtering of the projects to service and selecting all the routes from those projects, but you may want to partition routes based on other criteria associated with the routes themselves. The ROUTE_LABELS
selector allows you to slice-and-dice the routes themselves.
- Example
A router deployment prod-router is configured with the label selector ROUTE_LABELS="mydeployment=prod", and a router deployment devtest-router is configured with the label selector ROUTE_LABELS="mydeployment in (dev, test)".
This configuration partitions routes between the two router deployments according to the routes' labels, irrespective of their namespaces. A command sketch follows this example.
The example assumes you have all the routes you want to be serviced tagged with a label
"mydeployment=<tag>"
.
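A minimal sketch of the corresponding commands; the replica counts and the router service account are assumptions:
$ oc adm router prod-router --replicas=0 --service-account=router
$ oc set env dc/prod-router ROUTE_LABELS="mydeployment=prod"
$ oc scale dc/prod-router --replicas=1

$ oc adm router devtest-router --replicas=0 --service-account=router
$ oc set env dc/devtest-router ROUTE_LABELS="mydeployment in (dev, test)"
$ oc scale dc/devtest-router --replicas=1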
4.2.11.1. Creating Router Shards
This section describes an advanced example of router sharding. Suppose there are 26 routes, named a
— z
, with various labels:
Possible labels on routes
sla=high     geo=east   hw=modest   dept=finance
sla=medium   geo=west   hw=strong   dept=dev
sla=low                             dept=ops
These labels express the concepts including service level agreement, geographical location, hardware requirements, and department. The routes can have at most one label from each column. Some routes may have other labels or no labels at all.
[Table: route names a—z mapped to their SLA, Geo, HW, Dept, and Other Labels]
Here is a convenience script mkshard that illustrates how oc adm router
, oc set env
, and oc scale
can be used together to make a router shard.
#!/bin/bash
# Usage: mkshard ID SELECTION-EXPRESSION
id=$1
sel="$2"
router=router-shard-$id            1
oc adm router $router --replicas=0 2
dc=dc/router-shard-$id             3
oc set env $dc ROUTE_LABELS="$sel" 4
oc scale $dc --replicas=3          5
Running mkshard several times creates several routers:
[Table: routers created by mkshard, their selection expressions, and the routes each selects]
4.2.11.2. Modifying Router Shards
Because a router shard is a construct based on labels, you can modify either the labels (via oc label
) or the selection expression (via oc set env
).
This section extends the example started in the Creating Router Shards section, demonstrating how to change the selection expression.
Here is a convenience script modshard that modifies an existing router to use a new selection expression:
#!/bin/bash
# Usage: modshard ID SELECTION-EXPRESSION...
id=$1
shift
router=router-shard-$id   1
dc=dc/$router             2
oc scale $dc --replicas=0 3
oc set env $dc "$@"       4
oc scale $dc --replicas=3 5
- 1
- The modified router has name
router-shard-<id>
. - 2
- The deployment configuration where the modifications occur.
- 3
- Scale it down.
- 4
- Set the new selection expression using
oc set env
. Unlikemkshard
from the Creating Router Shards section, the selection expression specified as the non-ID
arguments to modshard
must include the environment variable name as well as its value. - 5
- Scale it back up.
In modshard
, the oc scale
commands are not necessary if the deployment strategy for router-shard-<id>
is Rolling
.
For example, to expand the department for router-shard-3
to include ops
as well as dev
:
$ modshard 3 ROUTE_LABELS='dept in (dev, ops)'
The result is that router-shard-3
now selects routes g
— s
(the combined sets of g
— k
and l
— s
).
This example takes into account that there are only three departments in this example scenario, and specifies a department to leave out of the shard, thus achieving the same result as the preceding example:
$ modshard 3 ROUTE_LABELS='dept != finance'
This example specifies three comma-separated qualities, and results in only route b
being selected:
$ modshard 3 ROUTE_LABELS='hw=strong,type=dynamic,geo=west'
Similarly to ROUTE_LABELS
, which involves a route’s labels, you can select routes based on the labels of the route’s namespace using the NAMESPACE_LABELS
environment variable. This example modifies router-shard-3
to serve routes whose namespace has the label frequency=weekly
:
$ modshard 3 NAMESPACE_LABELS='frequency=weekly'
The last example combines ROUTE_LABELS
and NAMESPACE_LABELS
to select routes with label sla=low
and whose namespace has the label frequency=weekly
:
$ modshard 3 \ NAMESPACE_LABELS='frequency=weekly' \ ROUTE_LABELS='sla=low'
4.2.12. Finding the Host Name of the Router
When exposing a service, a user can use the same route from the DNS name that external users use to access the application. The network administrator of the external network must make sure the host name resolves to the name of a router that has admitted the route. The user can set up their DNS with a CNAME that points to this host name. However, the user may not know the host name of the router. When it is not known, the cluster administrator can provide it.
The cluster administrator can use the --router-canonical-hostname
option with the router’s canonical host name when creating the router. For example:
# oc adm router myrouter --router-canonical-hostname="rtr.example.com"
This creates the ROUTER_CANONICAL_HOSTNAME
environment variable in the router’s deployment configuration containing the host name of the router.
For routers that already exist, the cluster administrator can edit the router’s deployment configuration and add the ROUTER_CANONICAL_HOSTNAME
environment variable:
spec:
  template:
    spec:
      containers:
        - env:
            - name: ROUTER_CANONICAL_HOSTNAME
              value: rtr.example.com
The ROUTER_CANONICAL_HOSTNAME
value is displayed in the route status for all routers that have admitted the route. The route status is refreshed every time the router is reloaded.
When a user creates a route, all of the active routers evaluate the route and, if conditions are met, admit it. When a router that defines the ROUTER_CANONICAL_HOSTNAME
environment variable admits the route, the router places the value in the routerCanonicalHostname
field in the route status. The user can examine the route status to determine which, if any, routers have admitted the route, select a router from the list, and find the host name of the router to pass along to the network administrator.
status:
  ingress:
  - conditions:
    - lastTransitionTime: 2016-12-07T15:20:57Z
      status: "True"
      type: Admitted
    host: hello.in.mycloud.com
    routerCanonicalHostname: rtr.example.com
    routerName: myrouter
    wildcardPolicy: None
oc describe
includes the host name when available:
$ oc describe route/hello-route3
...
Requested Host: hello.in.mycloud.com exposed on router myroute (host rtr.example.com) 12 minutes ago
Using the above information, the user can ask the DNS administrator to set up a CNAME from the route’s host, hello.in.mycloud.com
, to the router’s canonical hostname, rtr.example.com
. This results in any traffic to hello.in.mycloud.com
reaching the user’s application.
4.2.13. Customizing the Default Routing Subdomain
You can customize the suffix used as the default routing subdomain for your environment by modifying the master configuration file (the /etc/origin/master/master-config.yaml file by default). Routes that do not specify a host name would have one generated using this default routing subdomain.
The following example shows how you can set the configured suffix to v3.openshift.test:
routingConfig:
  subdomain: v3.openshift.test
This change requires a restart of the master if it is running.
With the OpenShift Container Platform master(s) running the above configuration, the generated host name for a route named no-route-hostname, created without a host name in the namespace mynamespace, would be:
no-route-hostname-mynamespace.v3.openshift.test
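For example, on an RPM-based installation the master restart might look like one of the following, depending on whether the master runs as a single service or as separate API and controllers services; service names vary by installation type, so treat this as a sketch:
# systemctl restart atomic-openshift-master
# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers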
4.2.14. Forcing Route Host Names to a Custom Routing Subdomain
If an administrator wants to restrict all routes to a specific routing subdomain, they can pass the --force-subdomain
option to the oc adm router
command. This forces the router to override any host names specified in a route and generate one based on the template provided to the --force-subdomain
option.
The following example runs a router, which overrides the route host names using a custom subdomain template ${name}-${namespace}.apps.example.com
.
$ oc adm router --force-subdomain='${name}-${namespace}.apps.example.com'
4.2.15. Using Wildcard Certificates
A TLS-enabled route that does not include a certificate uses the router’s default certificate instead. In most cases, this certificate should be provided by a trusted certificate authority, but for convenience you can use the OpenShift Container Platform CA to create the certificate. For example:
$ CA=/etc/origin/master
$ oc adm ca create-server-cert --signer-cert=$CA/ca.crt \
      --signer-key=$CA/ca.key --signer-serial=$CA/ca.serial.txt \
      --hostnames='*.cloudapps.example.com' \
      --cert=cloudapps.crt --key=cloudapps.key
The oc adm ca create-server-cert
command generates a certificate that is valid for two years. This can be altered with the --expire-days
option, but for security reasons, it is recommended to not make it greater than this value.
Run oc adm
commands only from the first master listed in the Ansible host inventory file, by default /etc/ansible/hosts.
The router expects the certificate and key to be in PEM format in a single file:
$ cat cloudapps.crt cloudapps.key $CA/ca.crt > cloudapps.router.pem
From there you can use the --default-cert
flag:
$ oc adm router --default-cert=cloudapps.router.pem --service-account=router
Browsers only consider wildcards valid for subdomains one level deep. So in this example, the certificate would be valid for a.cloudapps.example.com but not for a.b.cloudapps.example.com.
4.2.16. Manually Redeploy Certificates
To manually redeploy the router certificates:
Check to see if a secret containing the default router certificate was added to the router:
$ oc volumes dc/router
deploymentconfigs/router
  secret/router-certs as server-certificate
    mounted at /etc/pki/tls/private
If the certificate is added, skip the following step and overwrite the secret.
Make sure that you have a default certificate directory set with the DEFAULT_CERTIFICATE_DIR environment variable:
$ oc env dc/router --list
DEFAULT_CERTIFICATE_DIR=/etc/pki/tls/private
If it is not set, set the variable using the following command:
$ oc env dc/router DEFAULT_CERTIFICATE_DIR=/etc/pki/tls/private
Combine the key, certificate, and CA certificate into a single PEM format file. Write the output to a new file so that the shell redirection does not truncate one of the input files:
$ cat custom-router.key custom-router.crt custom-ca.crt > custom-router.pem
Overwrite or create a router certificate secret:
If the certificate secret was added to the router, overwrite the secret. If not, create a new secret.
To overwrite the secret, run the following command:
$ oc secrets new router-certs tls.crt=custom-router.pem tls.key=custom-router.key -o json --type='kubernetes.io/tls' --confirm | oc replace -f -
To create a new secret, run the following commands:
$ oc secrets new router-certs tls.crt=custom-router.pem tls.key=custom-router.key --type='kubernetes.io/tls' --confirm
$ oc volume dc/router --add --mount-path=/etc/pki/tls/private --secret-name='router-certs' --name router-certs
Deploy the router.
$ oc rollout latest dc/router
4.2.17. Using Secured Routes
Currently, password protected key files are not supported. HAProxy prompts for a password upon starting and does not have a way to automate this process. To remove a passphrase from a keyfile, you can run:
# openssl rsa -in <passwordProtectedKey.key> -out <new.key>
Here is an example of how to use a secure edge terminated route with TLS termination occurring on the router before traffic is proxied to the destination. The secure edge terminated route specifies the TLS certificate and key information. The TLS certificate is served by the router front end.
First, start up a router instance:
# oc adm router --replicas=1 --service-account=router
Next, create a private key, certificate signing request (CSR), and certificate for the edge secured route. The instructions on how to do that are specific to your certificate authority and provider. For a simple self-signed certificate for a domain named www.example.test
, see the example shown below:
# sudo openssl genrsa -out example-test.key 2048
#
# sudo openssl req -new -key example-test.key -out example-test.csr \
    -subj "/C=US/ST=CA/L=Mountain View/O=OS3/OU=Eng/CN=www.example.test"
#
# sudo openssl x509 -req -days 366 -in example-test.csr \
    -signkey example-test.key -out example-test.crt
Generate a route using the above certificate and key.
$ oc create route edge --service=my-service \
    --hostname=www.example.test \
    --key=example-test.key --cert=example-test.crt
route "my-service" created
Look at its definition.
$ oc get route/my-service -o yaml
apiVersion: v1
kind: Route
metadata:
  name: my-service
spec:
  host: www.example.test
  to:
    kind: Service
    name: my-service
  tls:
    termination: edge
    key: |
      -----BEGIN PRIVATE KEY-----
      [...]
      -----END PRIVATE KEY-----
    certificate: |
      -----BEGIN CERTIFICATE-----
      [...]
      -----END CERTIFICATE-----
Make sure your DNS entry for www.example.test points to your router instance(s) so that the route to your domain is available. The example below uses curl along with a local resolver to simulate the DNS lookup:
# routerip="4.1.1.1"  # replace with IP address of one of your router instances.
# curl -k --resolve www.example.test:443:$routerip https://www.example.test/
4.2.18. Using Wildcard Routes (for a Subdomain)
The HAProxy router has support for wildcard routes, which are enabled by setting the ROUTER_ALLOW_WILDCARD_ROUTES
environment variable to true
. Any routes with a wildcard policy of Subdomain
that pass the router admission checks will be serviced by the HAProxy router. Then, the HAProxy router exposes the associated service (for the route) per the route’s wildcard policy.
To change a route’s wildcard policy, you must remove the route and recreate it with the updated wildcard policy. Editing only the route’s wildcard policy in a route’s .yaml file does not work.
$ oc adm router --replicas=0 ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true
$ oc scale dc/router --replicas=1
Learn how to configure the web console for wildcard routes.
Using a Secure Wildcard Edge Terminated Route
This example reflects TLS termination occurring on the router before traffic is proxied to the destination. Traffic sent to any hosts in the subdomain example.org
(*.example.org
) is proxied to the exposed service.
The secure edge terminated route specifies the TLS certificate and key information. The TLS certificate is served by the router front end for all hosts that match the subdomain (*.example.org
).
Start up a router instance:
$ oc adm router --replicas=0 --service-account=router
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true
$ oc scale dc/router --replicas=1
Create a private key, certificate signing request (CSR), and certificate for the edge secured route.
The instructions on how to do this are specific to your certificate authority and provider. For a simple self-signed certificate for a domain named *.example.test, see this example:
# sudo openssl genrsa -out example-test.key 2048
#
# sudo openssl req -new -key example-test.key -out example-test.csr \
    -subj "/C=US/ST=CA/L=Mountain View/O=OS3/OU=Eng/CN=*.example.test"
#
# sudo openssl x509 -req -days 366 -in example-test.csr \
    -signkey example-test.key -out example-test.crt
Generate a wildcard route using the above certificate and key:
$ cat > route.yaml <<REOF
apiVersion: v1
kind: Route
metadata:
  name: my-service
spec:
  host: www.example.test
  wildcardPolicy: Subdomain
  to:
    kind: Service
    name: my-service
  tls:
    termination: edge
    key: "$(perl -pe 's/\n/\\n/' example-test.key)"
    certificate: "$(perl -pe 's/\n/\\n/' example-test.crt)"
REOF
$ oc create -f route.yaml
Ensure your DNS entry for *.example.test points to your router instance(s) and the route to your domain is available.
This example uses curl with a local resolver to simulate the DNS lookup:
# routerip="4.1.1.1"  # replace with IP address of one of your router instances.
# curl -k --resolve www.example.test:443:$routerip https://www.example.test/
# curl -k --resolve abc.example.test:443:$routerip https://abc.example.test/
# curl -k --resolve anyname.example.test:443:$routerip https://anyname.example.test/
For routers that allow wildcard routes (ROUTER_ALLOW_WILDCARD_ROUTES
set to true
), there are some caveats to the ownership of a subdomain associated with a wildcard route.
Prior to wildcard routes, ownership was based on the claims made for a host name with the namespace with the oldest route winning against any other claimants. For example, route r1
in namespace ns1
with a claim for one.example.test
would win over another route r2
in namespace ns2
for the same host name one.example.test
if route r1
was older than route r2
.
In addition, routes in other namespaces were allowed to claim non-overlapping hostnames. For example, route rone
in namespace ns1
could claim www.example.test
and another route rtwo
in namespace d2
could claim c3po.example.test
.
This is still the case if there are no wildcard routes claiming that same subdomain (example.test
in the above example).
However, a wildcard route needs to claim all of the host names within a subdomain (host names of the form *.example.test
). A wildcard route’s claim is allowed or denied based on whether or not the oldest route for that subdomain (example.test
) is in the same namespace as the wildcard route. The oldest route can be either a regular route or a wildcard route.
For example, if there is already a route eldest
that exists in the ns1
namespace that claimed a host named owner.example.test
and, at a later point in time, a new wildcard route wildthing requesting routes in that subdomain (example.test) is added, the claim by the wildcard route will only be allowed if it is in the same namespace (ns1) as the owning route.
The following examples illustrate various scenarios in which claims for wildcard routes will succeed or fail.
In the example below, a router that allows wildcard routes will allow non-overlapping claims for hosts in the subdomain example.test
as long as a wildcard route has not claimed a subdomain.
$ oc adm router ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true
$ oc project ns1
$ oc expose service myservice --hostname=owner.example.test
$ oc expose service myservice --hostname=aname.example.test
$ oc expose service myservice --hostname=bname.example.test
$ oc project ns2
$ oc expose service anotherservice --hostname=second.example.test
$ oc expose service anotherservice --hostname=cname.example.test
$ oc project otherns
$ oc expose service thirdservice --hostname=emmy.example.test
$ oc expose service thirdservice --hostname=webby.example.test
In the example below, a router that allows wildcard routes will not allow the claim for owner.example.test
or aname.example.test
to succeed since the owning namespace is ns1
.
$ oc adm router ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true
$ oc project ns1
$ oc expose service myservice --hostname=owner.example.test
$ oc expose service myservice --hostname=aname.example.test
$ oc project ns2
$ oc expose service secondservice --hostname=bname.example.test
$ oc expose service secondservice --hostname=cname.example.test
$ # Router will not allow this claim with a different path name `/p1` as
$ # namespace `ns1` has an older route claiming host `aname.example.test`.
$ oc expose service secondservice --hostname=aname.example.test --path="/p1"
$ # Router will not allow this claim as namespace `ns1` has an older route
$ # claiming host name `owner.example.test`.
$ oc expose service secondservice --hostname=owner.example.test
$ oc project otherns
$ # Router will not allow this claim as namespace `ns1` has an older route
$ # claiming host name `aname.example.test`.
$ oc expose service thirdservice --hostname=aname.example.test
In the example below, a router that allows wildcard routes will allow the claim for *.example.test to succeed since the owning namespace is ns1 and the wildcard route belongs to that same namespace.
$ oc adm router ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true

$ oc project ns1
$ oc expose service myservice --hostname=owner.example.test

$ # Reusing the route.yaml from the previous example.
$ # spec:
$ #   host: www.example.test
$ #   wildcardPolicy: Subdomain

$ oc create -f route.yaml               # router will allow this claim.
In the example below, a router that allows wildcard routes will not allow the claim for *.example.test to succeed since the owning namespace is ns1 and the wildcard route belongs to another namespace cyclone.
$ oc adm router ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true

$ oc project ns1
$ oc expose service myservice --hostname=owner.example.test

$ # Switch to a different namespace/project.
$ oc project cyclone

$ # Reusing the route.yaml from a prior example.
$ # spec:
$ #   host: www.example.test
$ #   wildcardPolicy: Subdomain

$ oc create -f route.yaml               # router will deny (_NOT_ allow) this claim.
Similarly, once a namespace with a wildcard route claims a subdomain, only routes within that namespace can claim any hosts in that same subdomain.
In the example below, once a route in namespace ns1 with a wildcard route claims subdomain example.test, only routes in the namespace ns1 are allowed to claim any hosts in that same subdomain.
$ oc adm router ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true

$ oc project ns1
$ oc expose service myservice --hostname=owner.example.test

$ oc project otherns

$ # namespace `otherns` is allowed to claim for other.example.test
$ oc expose service otherservice --hostname=other.example.test

$ oc project ns1

$ # Reusing the route.yaml from the previous example.
$ # spec:
$ #   host: www.example.test
$ #   wildcardPolicy: Subdomain

$ oc create -f route.yaml               # Router will allow this claim.

$ # In addition, route in namespace otherns will lose its claim to host
$ # `other.example.test` due to the wildcard route claiming the subdomain.

$ # namespace `ns1` is allowed to claim for deux.example.test
$ oc expose service mysecondservice --hostname=deux.example.test

$ # namespace `ns1` is allowed to claim for deux.example.test with path /p1
$ oc expose service mythirdservice --hostname=deux.example.test --path="/p1"

$ oc project otherns

$ # namespace `otherns` is not allowed to claim for deux.example.test
$ # with a different path '/otherpath'
$ oc expose service otherservice --hostname=deux.example.test --path="/otherpath"

$ # namespace `otherns` is not allowed to claim for owner.example.test
$ oc expose service yetanotherservice --hostname=owner.example.test

$ # namespace `otherns` is not allowed to claim for unclaimed.example.test
$ oc expose service yetanotherservice --hostname=unclaimed.example.test
In the example below, different scenarios are shown in which the owner routes are deleted and ownership is passed within and across namespaces. While a route claiming host eldest.example.test in the namespace ns1 exists, wildcard routes in that namespace can claim subdomain example.test. When the route for host eldest.example.test is deleted, the next oldest route senior.example.test would become the oldest route and would not affect any other routes. Once the route for host senior.example.test is deleted, the next oldest route junior.example.test becomes the oldest route and blocks the wildcard route claimant.
$ oc adm router ...
$ oc set env dc/router ROUTER_ALLOW_WILDCARD_ROUTES=true

$ oc project ns1
$ oc expose service myservice --hostname=eldest.example.test
$ oc expose service seniorservice --hostname=senior.example.test

$ oc project otherns

$ # namespace `otherns` is allowed to claim for junior.example.test
$ oc expose service juniorservice --hostname=junior.example.test

$ oc project ns1

$ # Reusing the route.yaml from the previous example.
$ # spec:
$ #   host: www.example.test
$ #   wildcardPolicy: Subdomain

$ oc create -f route.yaml               # Router will allow this claim.

$ # In addition, route in namespace otherns will lose its claim to host
$ # `junior.example.test` due to the wildcard route claiming the subdomain.

$ # namespace `ns1` is allowed to claim for dos.example.test
$ oc expose service mysecondservice --hostname=dos.example.test

$ # Delete route for host `eldest.example.test`, the next oldest route is
$ # the one claiming `senior.example.test`, so route claims are unaffected.
$ oc delete route myservice

$ # Delete route for host `senior.example.test`, the next oldest route is
$ # the one claiming `junior.example.test` in another namespace, so claims
$ # for a wildcard route would be affected. The route for the host
$ # `dos.example.test` would be unaffected as there are no other wildcard
$ # claimants blocking it.
$ oc delete route seniorservice
4.2.19. Using the Container Network Stack
The OpenShift Container Platform router runs inside a container and the default behavior is to use the network stack of the host (i.e., the node where the router container runs). This default behavior benefits performance because network traffic from remote clients does not need to take multiple hops through user space to reach the target service and container.
Additionally, this default behavior enables the router to get the actual source IP address of the remote connection rather than getting the node’s IP address. This is useful for defining ingress rules based on the originating IP, supporting sticky sessions, and monitoring traffic, among other uses.
This host network behavior is controlled by the --host-network router command line option, and the default behavior is the equivalent of using --host-network=true. If you wish to run the router with the container network stack, use the --host-network=false option when creating the router. For example:
$ oc adm router --service-account=router --host-network=false
Internally, this means the router container must publish the 80 and 443 ports in order for the external network to communicate with the router.
Running with the container network stack means that the router sees the source IP address of a connection to be the NATed IP address of the node, rather than the actual remote IP address.
On OpenShift Container Platform clusters using multi-tenant network isolation, routers on a non-default namespace with the --host-network=false option will load all routes in the cluster, but routes across the namespaces will not be reachable due to network isolation. With the --host-network=true option, routes bypass the container network and can access any pod in the cluster. If isolation is needed in this case, then do not add routes across the namespaces.
4.2.20. Exposing Router Metrics
The HAProxy router metrics are, by default, exposed or published in Prometheus format for consumption by external metrics collection and aggregation systems (e.g. Prometheus, statsd). Metrics are also available directly from the HAProxy router in its own HTML format for viewing in a browser or CSV download. These metrics include the HAProxy native metrics and some controller metrics.
When you create a router using the following command, OpenShift Container Platform makes metrics available in Prometheus format on the stats port, by default 1936.
$ oc adm router --service-account=router
To extract the raw statistics in Prometheus format run the following command:
$ curl <user>:<password>@<router_IP>:<STATS_PORT>/metrics
For example:
$ curl admin:sLzdR6SgDJ@10.254.254.35:1936/metrics
You can get the information you need to access the metrics from the router service annotations:
$ oc edit service <router-service-name>

apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: "1936"
    prometheus.io/scrape: "true"
    prometheus.openshift.io/password: IImoDqON02
    prometheus.openshift.io/username: admin
The prometheus.io/port is the stats port, by default 1936. You might need to configure your firewall to permit access. Use the previous user name and password to access the metrics. The path is /metrics.

$ curl <user>:<password>@<router_IP>:<STATS_PORT>/metrics

For example:

$ curl admin:sLzdR6SgDJ@10.254.254.35:1936/metrics
...
# HELP haproxy_backend_connections_total Total number of connections.
# TYPE haproxy_backend_connections_total gauge
haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route"} 0
haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route-alt"} 0
haproxy_backend_connections_total{backend="http",namespace="default",route="hello-route01"} 0
...
# HELP haproxy_exporter_server_threshold Number of servers tracked and the current threshold value.
# TYPE haproxy_exporter_server_threshold gauge
haproxy_exporter_server_threshold{type="current"} 11
haproxy_exporter_server_threshold{type="limit"} 500
...
# HELP haproxy_frontend_bytes_in_total Current total of incoming bytes.
# TYPE haproxy_frontend_bytes_in_total gauge
haproxy_frontend_bytes_in_total{frontend="fe_no_sni"} 0
haproxy_frontend_bytes_in_total{frontend="fe_sni"} 0
haproxy_frontend_bytes_in_total{frontend="public"} 119070
...
# HELP haproxy_server_bytes_in_total Current total of incoming bytes.
# TYPE haproxy_server_bytes_in_total gauge
haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_no_sni",service=""} 0
haproxy_server_bytes_in_total{namespace="",pod="",route="",server="fe_sni",service=""} 0
haproxy_server_bytes_in_total{namespace="default",pod="docker-registry-5-nk5fz",route="docker-registry",server="10.130.0.89:5000",service="docker-registry"} 0
haproxy_server_bytes_in_total{namespace="default",pod="hello-rc-vkjqx",route="hello-route",server="10.130.0.90:8080",service="hello-svc-1"} 0
...
To get metrics in a browser:
Delete the following environment variables from the router deployment configuration file:
$ oc edit dc router

- name: ROUTER_LISTEN_ADDR
  value: 0.0.0.0:1936
- name: ROUTER_METRICS_TYPE
  value: haproxy
Launch the stats window using the following URL in a browser, where the STATS_PORT value is 1936 by default:

http://admin:<Password>@<router_IP>:<STATS_PORT>
You can get the stats in CSV format by adding ;csv to the URL. For example:

http://admin:<Password>@<router_IP>:1936;csv
To get the router IP, admin name, and password:
$ oc describe pod <router_pod>
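For example, assuming the router pod shown later in this guide (router-2-40fc3 is only an illustrative name), the credentials can be filtered out of the pod description, since the default router deployment exposes them through the STATS_USERNAME, STATS_PASSWORD, and STATS_PORT environment variables:

$ oc describe pod router-2-40fc3 | grep -E 'STATS_(USERNAME|PASSWORD|PORT)'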
To suppress metrics collection:
$ oc adm router --service-account=router --stats-port=0
4.2.21. Preventing Connection Failures During Restarts
If you connect to the router while the proxy is reloading, there is a small chance that your connection will end up in the wrong network queue and be dropped. The issue is being addressed. In the meantime, it is possible to work around the problem by installing iptables rules to prevent connections during the reload window. However, doing so means that the router needs to run with elevated privilege so that it can manipulate iptables on the host. It also means that connections that happen during the reload are temporarily ignored and must retransmit their connection start, lengthening the time it takes to connect, but preventing connection failure.
To prevent this, configure the router to use iptables by changing the service account, and setting an environment variable on the router.
Use a Privileged SCC
When creating the router, allow it to use the privileged SCC. This gives the router user the ability to create containers with root privileges on the nodes:
$ oc adm policy add-scc-to-user privileged -z router
Patch the Router Deployment Configuration to Create a Privileged Container
You can now create privileged containers. Next, configure the router deployment configuration to use the privilege so that the router can set the iptables rules it needs. This patch changes the router deployment configuration so that the container that is created runs as privileged (and therefore gets correct capabilities) and runs as root:
$ oc patch dc router -p '{"spec":{"template":{"spec":{"containers":[{"name":"router","securityContext":{"privileged":true}}],"securityContext":{"runAsUser": 0}}}}}'
Configure the Router to Use iptables
Set the option on the router deployment configuration:
$ oc set env dc/router -c router DROP_SYN_DURING_RESTART=true
If you used a non-default name for the router, you must change dc/router accordingly.
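For example, assuming a router deployment configuration named myrouter (a hypothetical name), the command would be:

$ oc set env dc/myrouter DROP_SYN_DURING_RESTART=true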
4.2.22. ARP Cache Tuning for Large-scale Clusters
In OpenShift Container Platform clusters with large numbers of routes (greater than the value of net.ipv4.neigh.default.gc_thresh3, which is 65536 by default), you must increase the default values of sysctl variables on each node in the cluster running the router pod to allow more entries in the ARP cache.
When the problem is occurring, the kernel messages are similar to the following:
[ 1738.811139] net_ratelimit: 1045 callbacks suppressed
[ 1743.823136] net_ratelimit: 293 callbacks suppressed
When this issue occurs, the oc commands might start to fail with the following error:
Unable to connect to the server: dial tcp: lookup <hostname> on <ip>:<port>: write udp <ip>:<port>-><ip>:<port>: write: invalid argument
To verify the actual amount of ARP entries for IPv4, run the following:
# ip -4 neigh show nud all | wc -l
If the number begins to approach the net.ipv4.neigh.default.gc_thresh3 threshold, increase the values. Get the current values by running:
# sysctl net.ipv4.neigh.default.gc_thresh1
net.ipv4.neigh.default.gc_thresh1 = 128
# sysctl net.ipv4.neigh.default.gc_thresh2
net.ipv4.neigh.default.gc_thresh2 = 512
# sysctl net.ipv4.neigh.default.gc_thresh3
net.ipv4.neigh.default.gc_thresh3 = 1024
The following sysctl commands set the variables to the current OpenShift Container Platform default values.
# sysctl net.ipv4.neigh.default.gc_thresh1=8192
# sysctl net.ipv4.neigh.default.gc_thresh2=32768
# sysctl net.ipv4.neigh.default.gc_thresh3=65536
To make these settings permanent, see this document.
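As a hedged sketch, one common way to make the values persistent is a drop-in file under /etc/sysctl.d/ on each node that runs a router pod (the file name below is arbitrary):

# /etc/sysctl.d/99-router-arp-cache.conf
net.ipv4.neigh.default.gc_thresh1 = 8192
net.ipv4.neigh.default.gc_thresh2 = 32768
net.ipv4.neigh.default.gc_thresh3 = 65536

Apply the file with sysctl -p /etc/sysctl.d/99-router-arp-cache.conf, or reboot the node.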
4.2.23. Protecting Against DDoS Attacks
Add timeout http-request to the default HAProxy router image to protect the deployment against distributed denial-of-service (DDoS) attacks (for example, slowloris):
# and the haproxy stats socket is available at /var/run/haproxy.stats
global
stats socket ./haproxy.stats level admin
defaults
option http-server-close
mode http
timeout http-request 5s 1
timeout connect 5s
timeout server 10s
timeout client 30s
- 1
- timeout http-request is set to 5 seconds. HAProxy gives a client 5 seconds to send its whole HTTP request. Otherwise, HAProxy shuts the connection with an error.
Also, when the environment variable ROUTER_SLOWLORIS_TIMEOUT is set, it limits the amount of time a client has to send the whole HTTP request. Otherwise, HAProxy will shut down the connection.
Setting the environment variable allows information to be captured as part of the router’s deployment configuration and does not require manual modification of the template, whereas manually adding the HAProxy setting requires you to rebuild the router pod and maintain your router template file.
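For example, a minimal sketch of setting the variable on the default router deployment configuration (the 5s value is an assumption; choose a value appropriate for your clients):

$ oc set env dc/router ROUTER_SLOWLORIS_TIMEOUT=5s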
Using annotations implements basic DDoS protections in the HAProxy template router, including the ability to limit the:
- number of concurrent TCP connections
- rate at which a client can request TCP connections
- rate at which HTTP requests can be made
These are enabled on a per route basis because applications can have extremely different traffic patterns.
Setting | Description |
---|---|
haproxy.router.openshift.io/rate-limit-connections | Enables the settings to be configured (when set to true, for example). |
haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp | The number of concurrent TCP connections that can be made by the same IP address on this route. |
haproxy.router.openshift.io/rate-limit-connections.rate-tcp | The number of TCP connections that can be opened by a client IP. |
haproxy.router.openshift.io/rate-limit-connections.rate-http | The number of HTTP requests that a client IP can make in a 3-second period. |
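As an illustration, the annotations from the table can be applied to an individual route with oc annotate. The route name myroute and the numeric values here are assumptions; tune them to the route's expected traffic:

$ oc annotate route myroute --overwrite \
    haproxy.router.openshift.io/rate-limit-connections=true \
    haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp=10 \
    haproxy.router.openshift.io/rate-limit-connections.rate-tcp=10 \
    haproxy.router.openshift.io/rate-limit-connections.rate-http=30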
4.3. Deploying a Customized HAProxy Router
4.3.1. Overview
The default HAProxy router is intended to satisfy the needs of most users. However, it does not expose all of the capability of HAProxy. Therefore, users may need to modify the router for their own needs.
You may need to implement new features within the application back-ends, or modify the current operation. The router plug-in provides all the facilities necessary to make this customization.
The router pod uses a template file to create the needed HAProxy configuration file. The template file is a golang template. When processing the template, the router has access to OpenShift Container Platform information, including the router’s deployment configuration, the set of admitted routes, and some helper functions.
When the router pod starts, and every time it reloads, it creates an HAProxy configuration file, and then it starts HAProxy. The HAProxy configuration manual describes all of the features of HAProxy and how to construct a valid configuration file.
A configMap can be used to add the new template to the router pod. With this approach, the router deployment configuration is modified to mount the configMap as a volume in the router pod. The TEMPLATE_FILE environment variable is set to the full path name of the template file in the router pod.
Alternatively, you can build a custom router image and use it when deploying some or all of your routers. There is no need for all routers to run the same image. To do this, modify the haproxy-template.config file, and rebuild the router image. The new image is pushed to the cluster’s Docker repository, and the router’s deployment configuration image: field is updated with the new name. When the cluster is updated, the image needs to be rebuilt and pushed.
In either case, the router pod starts with the template file.
4.3.2. Obtaining the Router Configuration Template
The HAProxy template file is fairly large and complex. For some changes, it may be easier to modify the existing template rather than writing a complete replacement. You can obtain a haproxy-config.template file from a running router by running this on master, referencing the router pod:
# oc get po
NAME              READY     STATUS    RESTARTS   AGE
router-2-40fc3    1/1       Running   0          11d
# oc rsh router-2-40fc3 cat haproxy-config.template > haproxy-config.template
# oc rsh router-2-40fc3 cat haproxy.config > haproxy.config
Alternatively, you can log onto the node that is running the router:
# docker run --rm --interactive=true --tty --entrypoint=cat \
    registry.access.redhat.com/openshift3/ose-haproxy-router:v3.7 haproxy-config.template
The image name is from docker images.
Save this content to a file for use as the basis of your customized template. The saved haproxy.config shows what is actually running.
4.3.3. Modifying the Router Configuration Template
4.3.3.1. Background
The template is based on the golang template. It can reference any of the environment variables in the router’s deployment configuration, any configuration information that is described below, and router provided helper functions.
The structure of the template file mirrors the resulting HAProxy configuration file. As the template is processed, anything not surrounded by {{" something "}} is directly copied to the configuration file. Passages that are surrounded by {{" something "}} are evaluated. The resulting text, if any, is copied to the configuration file.
4.3.3.2. Go Template Actions
The define action names the file that will contain the processed template.
{{define "/var/lib/haproxy/conf/haproxy.config"}}pipeline{{end}}
Function | Meaning |
---|---|
| Returns the list of valid endpoints. When action is "shuffle", the order of endpoints is randomized. |
| Tries to get the named environment variable from the pod. If it is not defined or empty, it returns the optional second argument. Otherwise, it returns an empty string. |
| The first argument is a string that contains the regular expression, the second argument is the variable to test. Returns a Boolean value indicating whether the regular expression provided as the first argument matches the string provided as the second argument. |
| Determines if a given variable is an integer. |
| Compares a given string to a list of allowed strings. Returns first match scanning left to right through the list. |
| Compares a given string to a list of allowed strings. Returns "true" if the string is an allowed value, otherwise returns false. |
| Generates a regular expression matching the route hosts (and paths). The first argument is the host name, the second is the path, and the third is a wildcard Boolean. |
| Generates host name to use for serving/matching certificates. First argument is the host name and the second is the wildcard Boolean. |
| Determines if a given variable contains "true". |
These functions are provided by the HAProxy template router plug-in.
4.3.3.3. Router Provided Information
This section reviews the OpenShift Container Platform information that the router makes available to the template. The router configuration parameters are the set of data that the HAProxy router plug-in is given. The fields are accessed by (dot) .Fieldname.
The tables below the Router Configuration Parameters expand on the definitions of the various fields. In particular, .State has the set of admitted routes.
Field | Type | Description |
---|---|---|
| string | The directory that files will be written to, defaults to /var/lib/containers/router |
|
| The routes. |
|
| The service lookup. |
| string | Full path name to the default certificate in pem format. |
|
| Peers. |
| string | User name to expose stats with (if the template supports it). |
| string | Password to expose stats with (if the template supports it). |
| int | Port to expose stats with (if the template supports it). |
| bool | Whether the router should bind the default ports. |
Field | Type | Description |
---|---|---|
| string | The user-specified name of the route. |
| string | The namespace of the route. |
| string |
The host name. For example, |
| string |
Optional path. For example, |
|
| The termination policy for this back-end; drives the mapping files and router configuration. |
|
| Certificates used for securing this back-end. Keyed by the certificate ID. |
|
| Indicates the status of configuration that needs to be persisted. |
| string | Indicates the port the user wants to expose. If empty, a port will be selected for the service. |
|
|
Indicates desired behavior for insecure connections to an edge-terminated route: |
| string | Hash of the route + namespace name used to obscure the cookie ID. |
| bool | Indicates this service unit needing wildcard support. |
|
| Annotations attached to this route. |
|
| Collection of services that support this route, keyed by service name and valued on the weight attached to it with respect to other entries in the map. |
| int |
Count of the |
The ServiceAliasConfig is a route for a service, uniquely identified by host + path. The default template iterates over routes using {{range $cfgIdx, $cfg := .State }}. Within such a {{range}} block, the template can refer to any field of the current ServiceAliasConfig using $cfg.Field.
Field | Type | Description |
---|---|---|
| string |
Name corresponds to a service name + namespace. Uniquely identifies the |
|
| Endpoints that back the service. This translates into a final back-end implementation for routers. |
ServiceUnit is an encapsulation of a service, the endpoints that back that service, and the routes that point to the service. This is the data that drives the creation of the router configuration files.
Field | Type |
---|---|
| string |
| string |
| string |
| string |
| string |
| string |
| bool |
Endpoint is an internal representation of a Kubernetes endpoint.
Field | Type | Description |
---|---|---|
| string |
Represents a public/private key pair. It is identified by an ID, which will become the file name. A CA certificate will not have a PrivateKey set. |
| string | Indicates that the necessary files for this configuration have been persisted to disk. Valid values: "saved", "". |
Field | Type | Description |
---|---|---|
ID | string | |
Contents | string | The certificate. |
PrivateKey | string | The private key. |
Field | Type | Description |
---|---|---|
| string | Dictates where the secure communication will stop. |
| string | Indicates the desired behavior for insecure connections to a route. While each router may make its own decisions on which ports to expose, this is normally port 80. |
TLSTerminationType and InsecureEdgeTerminationPolicyType dictate where the secure communication will stop.
Constant | Value | Meaning |
---|---|---|
|
| Terminate encryption at the edge router. |
|
| Terminate encryption at the destination, the destination is responsible for decrypting traffic. |
|
| Terminate encryption at the edge router and re-encrypt it with a new certificate supplied by the destination. |
Type | Meaning |
---|---|
| Traffic is sent to the server on the insecure port (default). |
| No traffic is allowed on the insecure port. |
| Clients are redirected to the secure port. |
None (""
) is the same as Disable
.
4.3.3.4. Annotations
Each route can have annotations attached. Each annotation is just a name and a value.
apiVersion: v1
kind: Route
metadata:
  annotations:
    haproxy.router.openshift.io/timeout: 5500ms
[...]
The name can be anything that does not conflict with existing Annotations. The value is any string. The string can have multiple tokens separated by a space, for example, aa bb cc. The template uses {{index}} to extract the value of an annotation. For example:
{{$balanceAlgo := index $cfg.Annotations "haproxy.router.openshift.io/balance"}}
This is an example of how this could be used for mutual client authorization.
{{ with $cnList := index $cfg.Annotations "whiteListCertCommonName" }}
  {{   if ne $cnList "" }}
    acl test ssl_c_s_dn(CN) -m str {{ $cnList }}
    http-request deny if !test
  {{   end }}
{{ end }}
Then, you can handle the white-listed CNs with this command.
$ oc annotate route <route-name> --overwrite whiteListCertCommonName="CN1 CN2 CN3"
See Route-specific Annotations for more information.
4.3.3.5. Environment Variables
The template can use any environment variables that exist in the router pod. The environment variables can be set in the deployment configuration. New environment variables can be added.
They are referenced by the env function:
{{env "ROUTER_MAX_CONNECTIONS" "20000"}}
The first string is the variable, and the second string is the default when the variable is missing or nil. When ROUTER_MAX_CONNECTIONS is not set or is nil, 20000 is used. Environment variables are a map where the key is the environment variable name and the content is the value of the variable.
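As an illustrative sketch (not a required step), a variable that the template reads can be set or changed on the router deployment configuration with oc set env:

$ oc set env dc/router ROUTER_MAX_CONNECTIONS=20000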
See Route-specific Environment variables for more information.
4.3.3.6. Example Usage
Here is a simple template based on the HAProxy template file.
Start with a comment:
{{/* Here is a small example of how to work with templates taken from the HAProxy template file. */}}
The template can create any number of output files. Use a define construct to create an output file. The file name is specified as an argument to define, and everything inside the define block up to the matching end is written as the contents of that file.
{{ define "/var/lib/haproxy/conf/haproxy.config" }}
global
{{ end }}
The above will copy global to the /var/lib/haproxy/conf/haproxy.config file, and then close the file.
Set up logging based on environment variables.
{{ with (env "ROUTER_SYSLOG_ADDRESS" "") }}
  log {{.}} {{env "ROUTER_LOG_FACILITY" "local1"}} {{env "ROUTER_LOG_LEVEL" "warning"}}
{{ end }}
The env function extracts the value for the environment variable. If the environment variable is not defined or nil, the second argument is returned.
The with construct sets the value of "." (dot) within the with block to whatever value is provided as an argument to with. The with action tests Dot for nil. If not nil, the clause is processed up to the end. In the above, assume ROUTER_SYSLOG_ADDRESS contains /var/log/msg, ROUTER_LOG_FACILITY is not defined, and ROUTER_LOG_LEVEL contains info. The following will be copied to the output file:
log /var/log/msg local1 info
Each admitted route ends up generating lines in the configuration file. Use range to go through the admitted routes:
{{ range $cfgIdx, $cfg := .State }}
  backend be_http_{{$cfgIdx}}
{{end}}
.State is a map of ServiceAliasConfig, where the key is the route name. range steps through the map and, for each pass, it sets $cfgIdx with the key and sets $cfg to point to the ServiceAliasConfig that describes the route. If there are two routes named myroute and hisroute, the above will copy the following to the output file:
backend be_http_myroute
backend be_http_hisroute
Route Annotations, $cfg.Annotations, is also a map with the annotation name as the key and the content string as the value. The route can have as many annotations as desired and their use is defined by the template author. The user codes the annotation into the route and the template author customizes the HAProxy template to handle the annotation.
The common usage is to index the annotation to get the value.
{{$balanceAlgo := index $cfg.Annotations "haproxy.router.openshift.io/balance"}}
The index extracts the value for the given annotation, if any. Therefore, $balanceAlgo will contain the string associated with the annotation or nil. As above, you can test for a non-nil string and act on it with the with construct.
{{ with $balanceAlgo }}
  balance $balanceAlgo
{{ end }}
Here when $balanceAlgo is not nil, balance $balanceAlgo is copied to the output file.
In a second example, you want to set a server timeout based on a timeout value set in an annotation.
$value := index $cfg.Annotations "haproxy.router.openshift.io/timeout"
The $value can now be evaluated to make sure it contains a properly constructed string. The matchPattern function accepts a regular expression and returns true if the argument satisfies the expression.
matchPattern "[1-9][0-9]*(us\|ms\|s\|m\|h\|d)?" $value
This would accept 5000ms but not 7y. The results can be used in a test.
{{if (matchPattern "[1-9][0-9]*(us|ms|s|m|h|d)?" $value) }}
  timeout server {{$value}}
{{ end }}
It can also be used to match tokens:
matchPattern "roundrobin|leastconn|source" $balanceAlgo
Alternatively, matchValues can be used to match tokens:
matchValues $balanceAlgo "roundrobin" "leastconn" "source"
4.3.4. Using a ConfigMap to Replace the Router Configuration Template
You can use a ConfigMap to customize the router instance without rebuilding the router image. You can modify the haproxy-config.template, reload-haproxy, and other scripts, as well as create and modify router environment variables.
- Copy the haproxy-config.template that you want to modify as described above. Modify it as desired.
Create a ConfigMap:
$ oc create configmap customrouter --from-file=haproxy-config.template
The customrouter ConfigMap now contains a copy of the modified haproxy-config.template file.

Modify the router deployment configuration to mount the ConfigMap as a file and point the TEMPLATE_FILE environment variable to it. This can be done via oc set env and oc volume commands, or alternatively by editing the router deployment configuration.

- Using oc commands:

$ oc volume dc/router --add --overwrite \
    --name=config-volume \
    --mount-path=/var/lib/haproxy/conf/custom \
    --source='{"configMap": { "name": "customrouter"}}'
$ oc set env dc/router \
    TEMPLATE_FILE=/var/lib/haproxy/conf/custom/haproxy-config.template
- Editing the Router Deployment Configuration

Use oc edit dc router to edit the router deployment configuration with a text editor.

...
        - name: STATS_USERNAME
          value: admin
        - name: TEMPLATE_FILE  1
          value: /var/lib/haproxy/conf/custom/haproxy-config.template
        image: openshift/origin-haproxy-router
...
        terminationMessagePath: /dev/termination-log
        volumeMounts: 2
        - mountPath: /var/lib/haproxy/conf/custom
          name: config-volume
      dnsPolicy: ClusterFirst
...
      terminationGracePeriodSeconds: 30
      volumes: 3
      - configMap:
          name: customrouter
        name: config-volume
...
Save the changes and exit the editor. This restarts the router.
4.3.5. Using Stick Tables
The following example customization can be used in a highly-available routing setup to use stick-tables that synchronize between peers.
Adding a Peer Section
In order to synchronize stick-tables amongst peers you must define a peers section in your HAProxy configuration. This section determines how HAProxy will identify and connect to peers. The plug-in provides data to the template under the .PeerEndpoints variable to allow you to easily identify members of the router service. You may add a peer section to the haproxy-config.template file inside the router image by adding:
{{ if (len .PeerEndpoints) gt 0 }}
peers openshift_peers
  {{ range $endpointID, $endpoint := .PeerEndpoints }}
  peer {{$endpoint.TargetName}} {{$endpoint.IP}}:1937
  {{ end }}
{{ end }}
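For illustration only, with two router endpoints the block above would render output along these lines; the endpoint names and IP addresses here are made up:

peers openshift_peers
  peer router-1-abcde 10.128.0.10:1937
  peer router-1-fghij 10.129.0.12:1937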
Changing the Reload Script
When using stick-tables, you have the option of telling HAProxy what it should consider the name of the local host in the peer section. When creating endpoints, the plug-in attempts to set the TargetName to the value of the endpoint’s TargetRef.Name. If TargetRef is not set, it will set the TargetName to the IP address. The TargetRef.Name corresponds with the Kubernetes host name, therefore you can add the -L option to the reload-haproxy script to identify the local host in the peer section.
peer_name=$HOSTNAME 1
if [ -n "$old_pid" ]; then
/usr/sbin/haproxy -f $config_file -p $pid_file -L $peer_name -sf $old_pid
else
/usr/sbin/haproxy -f $config_file -p $pid_file -L $peer_name
fi
- 1
- Must match an endpoint target name that is used in the peer section.
Modifying Back Ends