Chapter 2. Installing
2.1. Overview
The quick installation method allows you to use an interactive CLI utility to install OpenShift across a set of hosts. This installer is a self-contained wrapper intended for usage on a Red Hat Enterprise Linux 7 host.
For production environments, a reference configuration implemented using Ansible playbooks is available as the advanced installation method.
Before beginning either installation method, start with the Prerequisites topic.
2.2. Prerequisites
2.2.1. Overview
OpenShift infrastructure components can be installed across multiple hosts. The following sections outline the system requirements and instructions for preparing your environment and hosts before installing OpenShift.
2.2.2. System Requirements
You must have an active OpenShift Enterprise subscription on your Red Hat account to proceed. If you do not, contact your sales representative for more information.
OpenShift Enterprise (OSE) 3.x supports Red Hat Enterprise Linux (RHEL) 7.1 or later. Starting in OSE 3.1.1, RHEL Atomic Host 7.1.6 or later is also supported, because Atomic Host requires the containerized installation method, which was introduced in OSE 3.1.1.
OSE 3.1.x requires Docker 1.8.2. Docker 1.9 is currently not supported due to performance issues. See the Red Hat Knowledgebase for details, including steps on how to configure yum so that later Docker versions are not installed via yum update. Otherwise, running yum update after installing OpenShift Enterprise will upgrade Docker and put your cluster in an unsupported configuration. Follow this topic to ensure you have the correct version of Docker installed on your hosts before installing or upgrading to OSE 3.1.
The system requirements vary per host type.
OpenShift Enterprise only supports servers with x86_64 architecture.
Meeting the /var/ file system sizing requirements in RHEL Atomic Host requires making changes to the default configuration. See Managing Storage in Red Hat Enterprise Linux Atomic Host for instructions on configuring this during or after installation.
2.2.2.1. Host Recommendations
The following apply to production environments. Test or sample environments will function with the minimum requirements.
- Master Hosts
- In a highly-available OpenShift Enterprise cluster with external etcd, a master host should have, in addition to the minimum requirements, 1 CPU core and 2.5 GB of memory for every 1000 pods. Therefore, the recommended size of a master host in an OpenShift Enterprise cluster of 2000 pods would be the minimum requirements for a master host (2 CPU cores and 8 GB of RAM) plus 2 CPU cores and 5 GB of RAM.
When planning an environment with multiple masters, a minimum of three etcd hosts and a load balancer between the master hosts are required.
- Node Hosts
- The size of a node host depends on the expected size of its workload. As an OpenShift Enterprise cluster administrator, you will need to calculate the expected workload, then add about 10% for overhead. For production environments, allocate enough resources so that node host failure does not affect your maximum capacity.
Use the above with the following table to plan the maximum loads for nodes and pods:
Host | Sizing Recommendation |
---|---|
Maximum nodes per cluster | 300 |
Maximum pods per node | 110 |
Oversubscribing the physical resources on a node affects resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping.
2.2.2.2. Configuring Core Usage
By default, OpenShift masters and nodes use all available cores in the system they run on. You can choose the number of cores you want OpenShift to use by setting the GOMAXPROCS environment variable.
For example, run the following before starting the server to make OpenShift only run on one core:
# export GOMAXPROCS=1
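The export above only affects processes started from that shell. If you want the limit to persist for the OpenShift services after installation, one option is to add the variable to the services' environment files and restart them. This is only a sketch that assumes the RPM-based service names and sysconfig files used by OSE 3.1 (atomic-openshift-master and atomic-openshift-node); containerized installations use different unit names:
# echo "GOMAXPROCS=1" >> /etc/sysconfig/atomic-openshift-master
# echo "GOMAXPROCS=1" >> /etc/sysconfig/atomic-openshift-node
# systemctl restart atomic-openshift-master atomic-openshift-node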
2.2.2.3. Security Warning
OpenShift runs Docker containers on your hosts, and in some cases, such as build operations and the registry service, it does so using privileged containers. Furthermore, those containers access your host's Docker daemon and perform docker build and docker push operations. As such, you should be aware of the inherent security risks associated with performing docker run operations on arbitrary images, as they effectively have root access.
To address these risks, OpenShift uses security context constraints, which control the actions that pods can perform and what they have the ability to access.
2.2.3. Environment Requirements
The following must be set up in your environment before OpenShift can be installed.
2.2.3.1. DNS
A wildcard for a DNS zone must ultimately resolve to the IP address of the OpenShift router.
For example, create a wildcard DNS entry for cloudapps, or something similar, that has a low TTL and points to the public IP address of the host where the router will be deployed:
*.cloudapps.example.com. 300 IN A 192.168.133.2
In almost all cases, when referencing VMs you must use host names, and the host names that you use must match the output of the hostname -f command on each node.
In your /etc/resolv.conf file on each node host, ensure that the DNS server that has the wildcard entry is not listed as a nameserver or that the wildcard domain is not listed in the search list. Otherwise, containers managed by OpenShift may fail to resolve host names properly.
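To confirm that the wildcard record behaves as expected, you can query a couple of arbitrary names under the wildcard domain from a node host (the names below are arbitrary; dig is provided by the bind-utils package installed later during Host Preparation). Each query should return the router host's public IP, 192.168.133.2 in the example above:
# dig +short test.cloudapps.example.com
# dig +short foo.cloudapps.example.com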
2.2.3.2. Network Access
A shared network must exist between the master and node hosts. If you plan to configure multiple masters for high-availability using the advanced installation method, you must also select an IP to be configured as your virtual IP (VIP) during the installation process. The IP that you select must be routable between all of your nodes, and if you configure it using a FQDN, it should resolve on all nodes.
Required Ports
OpenShift infrastructure components communicate with each other using ports, which are communication endpoints that are identifiable for specific processes or services. Ensure the following ports required by OpenShift are open between hosts, for example if you have a firewall in your environment. Some ports are optional depending on your configuration and usage.
Node to Node
Port | Protocol | Purpose |
---|---|---|
4789 | UDP | Required for SDN communication between pods on separate hosts. |
Nodes to Master
Port | Protocol | Purpose |
---|---|---|
53 or 8053 | TCP/UDP | Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured. |
4789 | UDP | Required for SDN communication between pods on separate hosts. |
443 or 8443 | TCP | Required for node hosts to communicate to the master API, for the node hosts to post back status, to receive tasks, and so on. |
Master to Node
Port | Protocol | Purpose |
---|---|---|
4789 | UDP | Required for SDN communication between pods on separate hosts. |
10250 | TCP | The master proxies to node hosts via the Kubelet for oc commands. |
In the following table, (L) indicates the marked port is also used in loopback mode, enabling the master to communicate with itself.
In a single-master cluster:
- Ports marked with (L) must be open.
- Ports not marked with (L) need not be open.
In a multiple-master cluster, all the listed ports must be open.
Master to Master
Port | Protocol | Purpose |
---|---|---|
53 (L) or 8053 (L) | TCP/UDP | Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations of 3.2 or later use 8053 by default so that dnsmasq may be configured. |
2049 (L) | TCP/UDP | Required when provisioning an NFS host as part of the installer. |
2379 | TCP | Used for standalone etcd (clustered) to accept changes in state. |
2380 | TCP | etcd requires this port be open between masters for leader election and peering connections when using standalone etcd (clustered). |
4001 (L) | TCP | Used for embedded etcd (non-clustered) to accept changes in state. |
4789 (L) | UDP | Required for SDN communication between pods on separate hosts. |
External to Load Balancer
Port | Protocol | Purpose |
---|---|---|
9000 | TCP | If you choose the native HA method, optional to allow access to the HAProxy statistics page. |
External to Master
Port | Protocol | Purpose |
---|---|---|
443 or 8443 | TCP | Required for node hosts to communicate to the master API, for node hosts to post back status, to receive tasks, and so on. |
IaaS Deployments
Port | Protocol | Purpose |
---|---|---|
22 | TCP | Required for SSH by the installer or system administrator. |
53 or 8053 | TCP/UDP | Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured. Only required to be internally open on master hosts. |
80 or 443 | TCP | For HTTP/HTTPS use for the router. Required to be externally open on node hosts, especially on nodes running the router. |
1936 | TCP | For router statistics use. Required to be open when running the template router to access statistics, and can be open externally or internally to connections depending on whether you want the statistics to be exposed publicly. |
4001 | TCP | For embedded etcd (non-clustered) use. Only required to be internally open on the master host. 4001 is for server-client connections. |
2379 and 2380 | TCP | For standalone etcd use. Only required to be internally open on the master host. 2379 is for server-client connections. 2380 is for server-server connections, and is only required if you have clustered etcd. |
4789 | UDP | For VxLAN use (OpenShift Enterprise SDN). Required only internally on node hosts. |
8443 | TCP | For use by the OpenShift Enterprise web console, shared with the API server. |
10250 | TCP | For use by the Kubelet. Required to be externally open on nodes. |
24224 | TCP/UDP | For use by Fluentd. Required to be open on master hosts for internal connections to node hosts. |
Notes
- In the above examples, port 4789 is used for User Datagram Protocol (UDP).
- When deployments are using the SDN, the pod network is accessed via a service proxy, unless it is accessing the registry from the same node the registry is deployed on.
- OpenShift internal DNS cannot be received over SDN. Depending on the detected values of openshift_facts, or if the openshift_ip and openshift_public_ip values are overridden, it will be the computed value of openshift_ip. For non-cloud deployments, this will default to the IP address associated with the default route on the master host. For cloud deployments, it will default to the IP address associated with the first internal interface as defined by the cloud metadata.
- The master host uses port 10250 to reach the nodes and does not go over SDN. It depends on the target host of the deployment and uses the computed values of openshift_hostname and openshift_public_hostname.
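If your hosts run a host-level firewall, the relevant ports must be allowed before installation. The Ansible-based installer manages many firewall rules itself, so treat the following only as a minimal sketch for manually opening the SDN and Kubelet ports from the tables above on a node host (iptables-services is installed during Host Preparation):
# iptables -A INPUT -p udp --dport 4789 -j ACCEPT
# iptables -A INPUT -p tcp --dport 10250 -j ACCEPT
# service iptables save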
2.2.3.3. Git Access
You must have either Internet access and a GitHub account, or read and write access to an internal, HTTP-based Git server.
2.2.3.4. Persistent Storage
The Kubernetes persistent volume framework allows you to provision an OpenShift cluster with persistent storage using networked storage available in your environment. This can be done after completing the initial OpenShift installation depending on your application needs, giving users a way to request those resources without having any knowledge of the underlying infrastructure.
The Installation and Configuration Guide provides instructions for cluster administrators on provisioning an OpenShift cluster with persistent storage using NFS, GlusterFS, Ceph RBD, OpenStack Cinder, AWS Elastic Block Store (EBS), GCE Persistent Disks, and iSCSI.
2.2.3.5. SELinux
Security-Enhanced Linux (SELinux) must be enabled on all of the servers before installing OpenShift or the installer will fail. Also, configure SELINUXTYPE=targeted in the /etc/selinux/config file:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
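You can verify the active mode and the configured values on each host before proceeding:
# getenforce
Enforcing
# grep '^SELINUX' /etc/selinux/config
SELINUX=enforcing
SELINUXTYPE=targeted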
2.2.3.6. Cloud Provider Considerations
Set up the Security Group
When installing on AWS or OpenStack, ensure that you set up the appropriate security groups. The following are some ports that you must have in your security groups, without which the installation will fail. You may need more depending on the cluster configuration you want to install. To adjust your security groups accordingly, see Required Ports for more information.
All OpenShift Hosts | tcp/22 from host running the installer/Ansible |
etcd Security Group | tcp/2379 from masters, tcp/2380 from etcd hosts |
Master Security Group | tcp/8443 from 0.0.0.0/0, tcp/53 and udp/53 from all OpenShift hosts, tcp/8053 and udp/8053 from all OpenShift hosts |
Node Security Group | tcp/10250 from masters, udp/4789 from nodes |
Infrastructure Nodes (ones that can host the OpenShift router) | tcp/443 from 0.0.0.0/0, tcp/80 from 0.0.0.0/0 |
If configuring ELBs for load balancing the masters and/or routers, you also need to configure Ingress and Egress security groups for the ELBs appropriately.
Override Detected IP Addresses and Host Names
Some deployments require that the user override the detected host names and IP addresses for the hosts. To see the default values, run the openshift_facts playbook:
# ansible-playbook playbooks/byo/openshift_facts.yml
Now, verify the detected common settings. If they are not what you expect them to be, you can override them.
The Advanced Installation topic discusses the available Ansible variables in greater detail.
If openshift_hostname is set to a value other than the metadata-provided private-dns-name value, the native cloud integration for those providers will no longer work.
In AWS, situations that require overriding the variables include:
- The user is installing in a VPC that is not configured for both DNS host names and DNS resolution.
- The user has multiple network interfaces configured and wants to use one other than the default.
If setting openshift_hostname to something other than the metadata-provided private-dns-name value, the native cloud integration for those providers will no longer work.
For EC2 hosts in particular, they must be deployed in a VPC that has both DNS host names and DNS resolution enabled, and openshift_hostname should not be overridden.
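If you do need to override a detected value, one approach (a sketch; the host name and address shown are placeholders) is to set the corresponding variables on that host's entry in the /etc/ansible/hosts inventory used by the advanced installation:
[masters]
ec2-52-6-179-239.compute-1.amazonaws.com openshift_public_hostname=ose3-master.public.example.com openshift_public_ip=24.222.0.1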
Post-Installation Configuration for Cloud Providers
Following the installation process, you can configure OpenShift for AWS, OpenStack, or GCE.
2.2.4. Host Preparation
Before installing OpenShift, you must first prepare each host as described in the following sections.
2.2.4.1. Software Prerequisites
Installing an Operating System
A base installation of RHEL 7.1 or later or RHEL Atomic Host 7.1.6 or later is required for master and node hosts. See the Red Hat Enterprise Linux 7 and RHEL Atomic Host installation documentation for the respective installation instructions, if required.
Registering the Hosts
Each host must be registered using Red Hat Subscription Manager (RHSM) and have an active OpenShift Enterprise subscription attached to access the required packages.
On each host, register with RHSM:
# subscription-manager register --username=<user_name> --password=<password>
List the available subscriptions:
# subscription-manager list --available
In the output for the previous command, find the pool ID for an OpenShift Enterprise subscription and attach it:
# subscription-manager attach --pool=<pool_id>
Note: When finding the pool ID, the related subscription name might include either "OpenShift Enterprise" or "OpenShift Container Platform", due to the product name change introduced with version 3.3.
If you plan to configure multiple masters with the advanced installation using the pacemaker HA method, you must also attach a subscription for High Availability Add-on for Red Hat Enterprise Linux:
# subscription-manager attach --pool=<pool_id_for_rhel_ha>
Note: The High Availability Add-on for Red Hat Enterprise Linux subscription is provided separately from the OpenShift Enterprise subscription.
Disable all repositories and enable only the required ones:
# subscription-manager repos --disable="*"
# subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.1-rpms"
If you plan to use the pacemaker HA method, enable the following repository as well:
# subscription-manager repos \
    --enable="rhel-ha-for-rhel-7-server-rpms"
Managing Packages
For RHEL 7 systems:
Install the following base packages:
# yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion
Update the system to the latest packages:
# yum update
Install the following package, which provides OpenShift utilities and pulls in other tools required by the quick and advanced installation methods, such as Ansible and related configuration files:
# yum install atomic-openshift-utils
Install the following *-excluder packages on each RHEL 7 system. They help ensure that your systems stay on the correct versions of the atomic-openshift and docker packages when you are not trying to upgrade, according to the OpenShift Enterprise version:
# yum install atomic-openshift-excluder atomic-openshift-docker-excluder
The *-excluder packages add entries to the exclude directive in the host's /etc/yum.conf file when installed. Run the following command on each host to remove the atomic-openshift packages from the list for the duration of the installation:
# atomic-openshift-excluder unexclude
For RHEL Atomic Host 7 systems:
Ensure the host is up to date by upgrading to the latest Atomic tree if one is available:
# atomic host upgrade
After the upgrade is completed and prepared for the next boot, reboot the host:
# systemctl reboot
Installing Docker
At this point, you should install Docker on all master and node hosts. This allows you to configure your Docker storage options before installing OpenShift.
Docker 1.9 is currently not supported due to performance issues. See the Red Hat Knowledgebase for details, including steps on how to configure yum so that later Docker versions are not installed via yum update. Otherwise, running yum update after installing OpenShift Enterprise will upgrade Docker and put your cluster in an unsupported configuration.
For RHEL 7 systems, install Docker 1.8.
Note: Docker should already be installed, configured, and running by default on RHEL Atomic Host 7 systems.
The atomic-openshift-docker-excluder package that was installed in Software Prerequisites should ensure that the correct version of Docker is installed in this step:
# yum install docker
After the package installation is complete, verify that version 1.8.2 was installed:
# docker version
Edit the /etc/sysconfig/docker file and add --insecure-registry 172.30.0.0/16 to the OPTIONS parameter. For example:
OPTIONS='--selinux-enabled --insecure-registry 172.30.0.0/16'
The --insecure-registry option instructs the Docker daemon to trust any Docker registry on the indicated subnet, rather than requiring a certificate.
Important: 172.30.0.0/16 is the default value of the servicesSubnet variable in the master-config.yaml file. If this has changed, then the --insecure-registry value in the above step should be adjusted to match, as it is indicating the subnet for the registry to use. Note that the openshift_master_portal_net variable can be set in the Ansible inventory file and used during the advanced installation method to modify the servicesSubnet variable.
Note: After the initial OpenShift installation is complete, you can choose to secure the integrated Docker registry, which involves adjusting the --insecure-registry option accordingly.
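The Docker daemon only reads /etc/sysconfig/docker at startup, so if Docker is already running, restart it for the new OPTIONS value to take effect (the storage configuration in the next section restarts Docker in any case), then confirm the setting:
# systemctl restart docker
# grep OPTIONS /etc/sysconfig/docker
OPTIONS='--selinux-enabled --insecure-registry 172.30.0.0/16'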
2.2.4.2. Configuring Docker Storage
Docker containers and the images they are created from are stored in Docker’s storage back end. This storage is ephemeral and separate from any persistent storage allocated to meet the needs of your applications.
For RHEL Atomic Host
The default storage back end for Docker on RHEL Atomic Host is a thin pool logical volume, which is supported for production environments. You must ensure that enough space is allocated for this volume per the Docker storage requirements mentioned in System Requirements.
If you do not have enough allocated, see Managing Storage with Docker Formatted Containers for details on using docker-storage-setup and basic instructions on storage management in RHEL Atomic Host.
For RHEL
The default storage back end for Docker on RHEL 7 is a thin pool on loopback devices, which is not supported for production use and only appropriate for proof of concept environments. For production environments, you must create a thin pool logical volume and re-configure Docker to use that volume.
You can use the docker-storage-setup script included with Docker to create a thin pool device and configure Docker’s storage driver. This can be done after installing Docker and should be done before creating images or containers. The script reads configuration options from the /etc/sysconfig/docker-storage-setup file and supports three options for creating the logical volume:
- Option A) Use an additional block device.
- Option B) Use an existing, specified volume group.
- Option C) Use the remaining free space from the volume group where your root file system is located.
Option A is the most robust option, however it requires adding an additional block device to your host before configuring Docker storage. Options B and C both require leaving free space available when provisioning your host.
Create the docker-pool volume using one of the following three options:
Option A) Use an additional block device.
In /etc/sysconfig/docker-storage-setup, set DEVS to the path of the block device you wish to use. Set VG to the volume group name you wish to create; docker-vg is a reasonable choice. For example:
# cat <<EOF > /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdc
VG=docker-vg
EOF
Then run docker-storage-setup and review the output to ensure the docker-pool volume was created:
# docker-storage-setup
Checking that no-one is using this disk right now ...
OK
Disk /dev/vdc: 31207 cylinders, 16 heads, 63 sectors/track
sfdisk:  /dev/vdc: unrecognized partition table type
Old situation:
sfdisk: No partitions found
New situation:
Units: sectors of 512 bytes, counting from 0
   Device Boot    Start       End   #sectors  Id  System
/dev/vdc1          2048  31457279   31455232  8e  Linux LVM
/dev/vdc2             0         -          0   0  Empty
/dev/vdc3             0         -          0   0  Empty
/dev/vdc4             0         -          0   0  Empty
Warning: partition 1 does not start at a cylinder boundary
Warning: partition 1 does not end at a cylinder boundary
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table
Re-reading the partition table ...
If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
  Physical volume "/dev/vdc1" successfully created
  Volume group "docker-vg" successfully created
  Rounding up size to full physical extent 16.00 MiB
  Logical volume "docker-poolmeta" created.
  Logical volume "docker-pool" created.
  WARNING: Converting logical volume docker-vg/docker-pool and docker-vg/docker-poolmeta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted docker-vg/docker-pool to thin pool.
  Logical volume "docker-pool" changed.
Option B) Use an existing, specified volume group.
In /etc/sysconfig/docker-storage-setup, set VG to the desired volume group. For example:
# cat <<EOF > /etc/sysconfig/docker-storage-setup
VG=docker-vg
EOF
Then run docker-storage-setup and review the output to ensure the docker-pool volume was created:
# docker-storage-setup
  Rounding up size to full physical extent 16.00 MiB
  Logical volume "docker-poolmeta" created.
  Logical volume "docker-pool" created.
  WARNING: Converting logical volume docker-vg/docker-pool and docker-vg/docker-poolmeta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted docker-vg/docker-pool to thin pool.
  Logical volume "docker-pool" changed.
Option C) Use the remaining free space from the volume group where your root file system is located.
Verify that the volume group where your root file system resides has the desired free space, then run docker-storage-setup and review the output to ensure the docker-pool volume was created:
# docker-storage-setup
  Rounding up size to full physical extent 32.00 MiB
  Logical volume "docker-poolmeta" created.
  Logical volume "docker-pool" created.
  WARNING: Converting logical volume rhel/docker-pool and rhel/docker-poolmeta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted rhel/docker-pool to thin pool.
  Logical volume "docker-pool" changed.
Verify your configuration. You should have a dm.thinpooldev value in the /etc/sysconfig/docker-storage file and a docker-pool logical volume:
# cat /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS=--storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/docker--vg-docker--pool
# lvs
  LV          VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker-pool rhel twi-a-t--- 9.29g             0.00   0.12
Important: Before using Docker or OpenShift, verify that the docker-pool logical volume is large enough to meet your needs. The docker-pool volume should be 60% of the available volume group and will grow to fill the volume group via LVM monitoring.
Check if Docker is running:
# systemctl is-active docker
If Docker has not yet been started on the host, enable and start the service:
# systemctl enable docker
# systemctl start docker
If Docker is already running, re-initialize Docker:
Warning: This will destroy any Docker containers or images currently on the host.
# systemctl stop docker
# rm -rf /var/lib/docker/*
# systemctl restart docker
If there is any content in /var/lib/docker/, it must be deleted. Files will be present if Docker has been used prior to the installation of OpenShift.
Reconfiguring Docker Storage
Should you need to reconfigure Docker storage after having created the docker-pool, you should first remove the docker-pool logical volume. If you are using a dedicated volume group, you should also remove the volume group and any associated physical volumes before reconfiguring docker-storage-setup according to the instructions above.
See Logical Volume Manager Administration for more detailed information on LVM management.
Managing Docker Container Logs
Sometimes a container’s log file (the /var/lib/docker/containers/<hash>/<hash>-json.log file on the node where the container is running) can increase to a problematic size. You can manage this by configuring Docker’s json-file logging driver to restrict the size and number of log files.
Option | Purpose |
---|---|
--log-opt max-size | Sets the size at which a new log file is created. |
--log-opt max-file | Sets the maximum number of log files to keep. |
For example, to set the maximum file size to 1MB and always keep the last three log files, edit the /etc/sysconfig/docker file to configure max-size=1M and max-file=3:
OPTIONS='--insecure-registry=172.30.0.0/16 --selinux-enabled --log-opt max-size=1M --log-opt max-file=3'
Next, restart the Docker service:
# systemctl restart docker
Viewing Available Container Logs
Container logs are stored in the /var/lib/docker/containers/<hash>/ directory on the node where the container is running. For example:
# ls -lh /var/lib/docker/containers/f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8/
total 2.6M
-rw-r--r--. 1 root root 5.6K Nov 24 00:12 config.json
-rw-r--r--. 1 root root 649K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log
-rw-r--r--. 1 root root 977K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log.1
-rw-r--r--. 1 root root 977K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log.2
-rw-r--r--. 1 root root 1.3K Nov 24 00:12 hostconfig.json
drwx------. 2 root root    6 Nov 24 00:12 secrets
See Docker’s documentation for additional information on how to Configure Logging Drivers.
2.2.5. Ensuring Host Access
The quick and advanced installation methods require a user that has access to all hosts. If you want to run the installer as a non-root user, passwordless sudo rights must be configured on each destination host.
For example, you can generate an SSH key on the host where you will invoke the installation process:
# ssh-keygen
Do not use a password.
An easy way to distribute your SSH keys is by using a bash loop:
# for host in master.example.com \
    node1.example.com \
    node2.example.com; \
    do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \
    done
Modify the host names in the above command according to your configuration.
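To confirm that passwordless access works from the installation host, you can run a similar loop; each iteration should print the remote host name without prompting for a password:
# for host in master.example.com \
    node1.example.com \
    node2.example.com; \
    do ssh $host hostname; \
    done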
2.2.6. Setting Global Proxy Values
The OpenShift Enterprise installer uses the proxy settings in the /etc/environment file.
Ensure the following domain suffixes and IP addresses are listed in the no_proxy parameter in the /etc/environment file:
- Master and node host names (domain suffix).
- Other internal host names (domain suffix).
- Etcd IP addresses (must be IP addresses and not host names, as etcd access is done by IP address).
- Docker registry IP address.
- Kubernetes IP address, by default 172.30.0.1. Must be the value set in the openshift_portal_net parameter in the Ansible inventory file, by default /etc/ansible/hosts.
- Kubernetes internal domain suffix: cluster.local.
- Kubernetes internal domain suffix: .svc.
The following example assumes http_proxy and https_proxy values are set:
no_proxy=.internal.example.com,10.0.0.1,10.0.0.2,10.0.0.3,.cluster.local,.svc,localhost,127.0.0.1,172.30.0.1
Because no_proxy does not support CIDR, you can use domain suffixes.
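To confirm what the installer will pick up on each host, you can inspect the proxy-related entries directly:
# grep -i proxy /etc/environment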
2.2.7. What’s Next?
If you are interested in installing OpenShift using the containerized method (optional for RHEL but required for RHEL Atomic Host), see RPM vs Containerized to ensure that you understand the differences between these methods.
When you are ready to proceed, you can install OpenShift Enterprise using the quick installation or advanced installation method.
2.3. RPM vs Containerized
2.3.1. Overview
The default method for installing OpenShift on Red Hat Enterprise Linux (RHEL) uses RPMs. Alternatively, you can use the containerized method, which deploys containerized OpenShift master and node components. When targeting a RHEL Atomic Host system, the containerized method is the only available option, and is automatically selected for you based on the detection of the /run/ostree-booted file.
You can easily deploy environments mixing containerized and RPM based installations. For the advanced installation method, you can set the Ansible variable containerized=true in an inventory file on a cluster-wide or per host basis. For the quick installation method, you can choose between the RPM or containerized method on a per host basis during the interactive installation, or set the values manually in an installation configuration file.
Containerized installations are supported starting in OpenShift Enterprise 3.1.1. When installing an environment with multiple masters, the load balancer cannot be deployed by the installation process as a container. See Advanced Installation for load balancer requirements using either the native HA or Pacemaker methods.
The following sections detail the differences between the RPM and containerized methods.
2.3.2. Required Images
Containerized installations make use of the following images:
- openshift3/ose
- openshift3/node
- openshift3/openvswitch
- registry.access.redhat.com/rhel7/etcd
By default, all of the above images are pulled from the Red Hat Registry at registry.access.redhat.com.
If you need to use a private registry to pull these images during the installation, you can specify the registry information ahead of time. For the advanced installation method, you can set the following Ansible variables in your inventory file, as required:
cli_docker_additional_registries=<registry_hostname> cli_docker_insecure_registries=<registry_hostname> cli_docker_blocked_registries=<registry_hostname>
For the quick installation method, you can export the following environment variables on each target host:
# export OO_INSTALL_ADDITIONAL_REGISTRIES=<registry_hostname> # export OO_INSTALL_INSECURE_REGISTRIES=<registry_hostname>
Blocked Docker registries cannot currently be specified using the quick installation method.
The configuration of additional, insecure, and blocked Docker registries occurs at the beginning of the installation process to ensure that these settings are applied before attempting to pull any of the required images.
2.3.3. CLI Wrappers
When using containerized installations, a CLI wrapper script is deployed on each master at /usr/local/bin/openshift. The following set of symbolic links are also provided to ease administrative tasks:
Symbolic Link | Usage |
---|---|
/usr/local/bin/oc | Developer CLI |
/usr/local/bin/oadm | Administrative CLI |
/usr/local/bin/kubectl | Kubernetes CLI |
The wrapper spawns a new container on each invocation, so you may notice it run slightly slower than native CLI operations.
The wrapper scripts mount a limited subset of paths:
- ~/.kube
- /etc/origin/
- /tmp/
Be mindful of this when passing in files to be processed by the oc or oadm commands. You may find it easier to redirect the input, for example:
# oc create -f - < my-file.json
The wrapper is intended only to be used to bootstrap an environment. You should install the CLI tools on another host after you have granted cluster-admin privileges to a user. See Managing Role Bindings and Get Started with the CLI for more information.
2.3.4. Starting and Stopping Containers
The installation process creates relevant systemd units which can be used to start, stop, and poll services using normal systemctl commands. For containerized installations, these unit names match those of an RPM installation, with the exception of the etcd service which is named etcd_container.
This change is necessary as currently RHEL Atomic Host ships with the etcd package installed as part of the operating system, so a containerized version is used for the OpenShift installation instead. The installation process disables the default etcd service. The etcd package is slated to be removed from RHEL Atomic Host in the future.
2.3.5. File Paths
All OpenShift configuration files are placed in the same locations during containerized installation as RPM based installations and will survive os-tree upgrades.
However, the default image stream and template files are installed at /etc/origin/examples/ for containerized installations rather than the standard /usr/share/openshift/examples/, because that directory is read-only on RHEL Atomic Host.
2.3.6. Storage Requirements
RHEL Atomic Host installations normally have a very small root file system. However, the etcd, master, and node containers persist data in the /var/lib/ directory. Ensure that you have enough space on the root file system before installing OpenShift; see the System Requirements section for details.
2.3.7. Open vSwitch SDN Initialization
OpenShift SDN initialization requires that the Docker bridge be reconfigured and that Docker is restarted. This complicates the situation when the node is running within a container. When using the Open vSwitch (OVS) SDN, you will see the node start, reconfigure Docker, restart Docker (which restarts all containers), and finally start successfully.
In this case, the node service may fail to start and be restarted a few times because the master services are also restarted along with Docker. The current implementation uses a workaround which relies on setting the Restart=always parameter in the Docker based systemd units.
2.4. Quick Installation
2.4.1. Overview
The quick installation method allows you to use an interactive CLI utility, the atomic-openshift-installer command, to install OpenShift across a set of hosts. This installer can deploy OpenShift components on targeted hosts by either installing RPMs or running containerized services.
This installation method is provided to make the installation experience easier by interactively gathering the data needed to run on each host. The installer is a self-contained wrapper intended for usage on a Red Hat Enterprise Linux (RHEL) 7 system. While RHEL Atomic Host is supported for running containerized OpenShift services, the installer is provided by an RPM not available by default in RHEL Atomic Host, and must therefore be run from a RHEL 7 system. The host initiating the installation does not need to be intended for inclusion in the OpenShift cluster, but it can be.
In addition to running interactive installations from scratch, the atomic-openshift-installer command can also be run or re-run using a predefined installation configuration file. This file can be used with the installer to:
- run an unattended installation,
- add nodes to an existing cluster,
- upgrade your cluster, or
- reinstall the OpenShift cluster completely.
Alternatively, you can use the advanced installation method for more complex environments.
2.4.2. Before You Begin
The installer allows you to install OpenShift master and node components on a defined set of hosts.
By default, any hosts you designate as masters during the installation process are automatically also configured as nodes so that the masters are configured as part of the OpenShift SDN. The node component on the masters, however, is marked unschedulable, which blocks pods from being scheduled on it. After the installation, you can mark masters schedulable if you want.
Before installing OpenShift, you must first satisfy the prerequisites on your hosts, which includes verifying system and environment requirements and properly installing and configuring Docker. You must also be prepared to provide or validate the following information for each of your targeted hosts during the course of the installation:
- User name on the target host that should run the Ansible-based installation (can be root or non-root)
- Host name
- Whether to install components for master, node, or both
- Whether to use the RPM or containerized method
- Internal and external IP addresses
If you are interested in installing OpenShift using the containerized method (optional for RHEL but required for RHEL Atomic Host), see RPM vs Containerized to ensure that you understand the differences between these methods, then return to this topic to continue.
After following the instructions in the Prerequisites topic and deciding between the RPM and containerized methods, you can continue to running an interactive or unattended installation.
2.4.3. Running an Interactive Installation
Ensure you have read through Before You Begin.
You can start the interactive installation by running:
$ atomic-openshift-installer install
Then follow the on-screen instructions to install a new OpenShift Enterprise cluster.
After it has finished, ensure that you back up the ~/.config/openshift/installer.cfg.yml installation configuration file that is created, as it is required if you later want to re-run the installation, add hosts to the cluster, or upgrade your cluster. Then, verify the installation.
2.4.4. Defining an Installation Configuration File
The installer can use a predefined installation configuration file, which contains information about your installation, individual hosts, and cluster. When running an interactive installation, an installation configuration file based on your answers is created for you in ~/.config/openshift/installer.cfg.yml. The file is created if you are instructed to exit the installation to manually modify the configuration or when the installation completes. You can also create the configuration file manually from scratch to perform an unattended installation.
Example 2.1. Installation Configuration File Specification
version: v1 1
variant: openshift-enterprise 2
variant_version: 3.1 3
ansible_ssh_user: root 4
ansible_log_path: /tmp/ansible.log 5
hosts: 6
- ip: 10.0.0.1 7
  hostname: master-private.example.com 8
  public_ip: 24.222.0.1 9
  public_hostname: master.example.com 10
  master: true 11
  node: true 12
  containerized: true 13
  connect_to: 24.222.0.1 14
- ip: 10.0.0.2
  hostname: node1-private.example.com
  public_ip: 24.222.0.2
  public_hostname: node1.example.com
  node: true
  connect_to: 10.0.0.2
- ip: 10.0.0.3
  hostname: node2-private.example.com
  public_ip: 24.222.0.3
  public_hostname: node2.example.com
  node: true
  connect_to: 10.0.0.3
- 1
- The version of this installation configuration file. As of OpenShift Enterprise (OSE) 3.1, the only valid version here is
v1
. - 2
- The OpenShift variant to install. For OSE, set this to
openshift-enterprise
. - 3
- A valid version of your selected variant. If not specified, this defaults to the newest version for the specified variant. For example:
3.1
or3.0
. - 4
- Defines which user Ansible uses to SSH in to remote systems for gathering facts and for the installation. By default, this is the root user, but you can set it to any user that has sudo privileges.
- 5
- Defines where the Ansible logs are stored. By default, this is the /tmp/ansible.log file.
- 6
- Defines a list of the hosts onto which you want to install the OpenShift master and node components.
- 7 8
- Required. Allows the installer to connect to the system and gather facts before proceeding with the install.
- 9 10
- Required for unattended installations. If these details are not specified, then this information is pulled from the facts gathered by the installer, and you are asked to confirm the details. If undefined for an unattended installation, the installation fails.
- 11 12
- Determines the type of services that are installed. At least one of these must be set to true for the configuration file to be considered valid.
- 13
- If set to true, containerized OpenShift services are run on target master and node hosts instead of installed using RPM packages. If set to false or unset, the default RPM method is used. RHEL Atomic Host requires the containerized method, and is automatically selected for you based on the detection of the /run/ostree-booted file. See RPM vs Containerized for more details. Containerized installations are supported starting in OSE 3.1.1.
- 14
- The IP address that Ansible attempts to connect to when installing, upgrading, or uninstalling the systems. If the configuration file was auto-generated, then this is the value you first enter for the host during that interactive install process.
2.4.5. Running an Unattended Installation
Ensure you have read through Before You Begin.
Unattended installations allow you to define your hosts and cluster configuration in an installation configuration file before running the installer so that you do not have to go through all of the interactive installation questions and answers. It also allows you to resume an interactive installation you may have left unfinished, and quickly get back to where you left off.
To run an unattended installation, first define an installation configuration file at ~/.config/openshift/installer.cfg.yml. Then, run the installer with the -u flag:
$ atomic-openshift-installer -u install
By default in interactive or unattended mode, the installer uses the configuration file located at ~/.config/openshift/installer.cfg.yml if the file exists. If it does not exist, attempting to start an unattended installation fails.
Alternatively, you can specify a different location for the configuration file using the -c option, but doing so will require you to specify the file location every time you run the installation:
$ atomic-openshift-installer -u -c </path/to/file> install
After the unattended installation finishes, ensure that you back up the ~/.config/openshift/installer.cfg.yml file that was used, as it is required if you later want to re-run the installation, add hosts to the cluster, or upgrade your cluster. Then, verify the installation.
2.4.6. Verifying the Installation
After the installation completes:
Verify that the master is started and nodes are registered and reporting in Ready status. On the master host, run the following as root:
# oc get nodes
NAME                 LABELS                                                                 STATUS
master.example.com   kubernetes.io/hostname=master.example.com,region=infra,zone=default   Ready,SchedulingDisabled
node1.example.com    kubernetes.io/hostname=node1.example.com,region=primary,zone=east     Ready
node2.example.com    kubernetes.io/hostname=node2.example.com,region=primary,zone=west     Ready
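You can also check the master API from the command line. The /healthz endpoint returns ok when the API server is healthy; the -k flag skips verification of the default self-signed certificate, and the host name below is a placeholder for your own master:
$ curl -k https://master.example.com:8443/healthz
ok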
To verify that the web console is installed correctly, use the master host name and the console port number to access the console with a web browser.
For example, for a master host with a hostname of master.openshift.com and using the default port of 8443, the web console would be found at:
https://master.openshift.com:8443/console
Now that the install has been verified, run the following command on each master and node host to add the atomic-openshift packages back to the list of yum excludes on the host:
# atomic-openshift-excluder exclude
Then, see What’s Next for the next steps on configuring your OpenShift cluster.
2.4.7. Adding Nodes or Reinstalling the Cluster
You can use the installer to add nodes to your existing cluster, or to reinstall the cluster entirely.
If you installed OpenShift using the installer in either interactive or unattended mode, you can re-run the installation as long as you have an installation configuration file at ~/.config/openshift/installer.cfg.yml (or specify a different location with the -c option).
If you installed using the advanced installation method and therefore do not have an installation configuration file, you can either try creating your own based on your cluster’s current configuration, or see the advanced installation method on how to run the playbook for adding new nodes directly.
To add nodes or reinstall the cluster:
Re-run the installer with the install subcommand in interactive or unattended mode:
$ atomic-openshift-installer [-u] [-c </path/to/file>] install
The installer will detect your installed environment and allow you to either add an additional node or perform a clean install:
Gathering information from hosts...
Installed environment detected.
By default the installer only adds new nodes to an installed environment.
Do you want to (1) only add additional nodes or (2) perform a clean install?:
Choose one of the options and follow the on-screen instructions to complete your desired task.
2.4.8. Uninstalling OpenShift Enterprise
You can uninstall OpenShift Enterprise on all hosts in your cluster using the installer by running:
$ atomic-openshift-installer uninstall
See the advanced installation method for more options.
2.4.9. What’s Next?
Now that you have a working OpenShift Enterprise instance, you can:
- Configure authentication; by default, authentication is set to Deny All.
- Deploy an integrated Docker registry.
- Deploy a router.
2.5. Advanced Installation
2.5.1. Overview
For production environments, a reference configuration implemented using Ansible playbooks is available as the advanced installation method for installing OpenShift hosts. Familiarity with Ansible is assumed; however, you can use this configuration as a reference to create your own implementation using the configuration management tool of your choosing.
While RHEL Atomic Host is supported for running containerized OpenShift services, the advanced installation method utilizes Ansible, which is not available in RHEL Atomic Host, and must therefore be run from a RHEL 7 system. The host initiating the installation does not need to be intended for inclusion in the OpenShift cluster, but it can be.
Alternatively, you can use the quick installation method if you prefer an interactive installation experience.
Running Ansible playbooks with the --tags or --check options is not supported by Red Hat.
2.5.2. Before You Begin
Before installing OpenShift, you must first see the Prerequisites topic to prepare your hosts, which includes verifying system and environment requirements per component type and properly installing and configuring Docker. It also includes installing Ansible version 1.8.4 or later, as the advanced installation method is based on Ansible playbooks and as such requires directly invoking Ansible.
If you are interested in installing OpenShift using the containerized method (optional for RHEL but required for RHEL Atomic Host), see RPM vs Containerized to ensure that you understand the differences between these methods, then return to this topic to continue.
After following the instructions in the Prerequisites topic and deciding between the RPM and containerized methods, you can continue in this topic to Configuring Ansible.
2.5.3. Configuring Ansible
The /etc/ansible/hosts file is Ansible’s inventory file for the playbook to use during the installation. The inventory file describes the configuration for your OpenShift cluster. You must replace the default contents of the file with your desired configuration.
The following sections describe commonly-used variables to set in your inventory file during an advanced installation, followed by example inventory files you can use as a starting point for your installation. The examples describe various environment topographies, including using multiple masters for high availability. You can choose an example that matches your requirements, modify it to match your own environment, and use it as your inventory file when running the advanced installation.
2.5.3.1. Configuring Host Variables
To assign environment variables to hosts during the Ansible installation, indicate the desired variables in the /etc/ansible/hosts file after the host entry in the [masters] or [nodes] sections. For example:
[masters]
ec2-52-6-179-239.compute-1.amazonaws.com openshift_public_hostname=ose3-master.public.example.com
The following table describes variables for use with the Ansible installer that can be assigned to individual host entries:
Variable | Purpose |
---|---|
openshift_hostname | This variable overrides the internal cluster host name for the system. Use this when the system’s default IP address does not resolve to the system host name. |
openshift_public_hostname | This variable overrides the system’s public host name. Use this for cloud installations, or for hosts on networks using a network address translation (NAT). |
openshift_ip | This variable overrides the cluster internal IP address for the system. Use this when using an interface that is not configured with the default route. |
openshift_public_ip | This variable overrides the system’s public IP address. Use this for cloud installations, or for hosts on networks using a network address translation (NAT). |
containerized | If set to true, containerized OpenShift services are run on target master and node hosts instead of installed using RPM packages. If set to false or unset, the default RPM method is used. RHEL Atomic Host requires the containerized method, and is automatically selected for you based on the detection of the /run/ostree-booted file. See RPM vs Containerized for more details. Containerized installations are supported starting in OSE 3.1.1. |
openshift_node_labels | This variable adds labels to nodes during installation. See Configuring Node Host Labels for more details. |
openshift_node_kubelet_args | This variable is used to configure kubeletArguments on nodes, such as arguments used in container and image garbage collection and per-node resource reservations. |
openshift_docker_options | This variable configures additional Docker options within /etc/sysconfig/docker, such as options used in Managing Docker Container Logs. Example usage: "--log-driver json-file --log-opt max-size=1M --log-opt max-file=3". |
2.5.3.2. Configuring Cluster Variables
To assign environment variables during the Ansible install that apply more globally to your OpenShift cluster overall, indicate the desired variables in the /etc/ansible/hosts file on separate, single lines within the [OSEv3:vars] section. For example:
[OSEv3:vars]
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
osm_default_subdomain=apps.test.example.com
The following table describes variables for use with the Ansible installer that can be assigned cluster-wide:
Variable | Purpose |
---|---|
ansible_ssh_user | This variable sets the SSH user for the installer to use and defaults to root. This user should allow SSH-based authentication without requiring a password. If using SSH key-based authentication, then the key should be managed by an SSH agent. |
ansible_become | If ansible_ssh_user is not root, this variable must be set to true and the user must be configured for passwordless sudo. |
containerized | If set to true, containerized OpenShift services are run on all target master and node hosts in the cluster instead of installed using RPM packages. If set to false or unset, the default RPM method is used. RHEL Atomic Host requires the containerized method, and is automatically selected for you based on the detection of the /run/ostree-booted file. See RPM vs Containerized for more details. Containerized installations are supported starting in OSE 3.1.1. |
openshift_master_cluster_hostname | This variable overrides the host name for the cluster, which defaults to the host name of the master. |
openshift_master_cluster_public_hostname | This variable overrides the public host name for the cluster, which defaults to the host name of the master. If you use an external load balancer, specify the address of the external load balancer. For example: openshift_master_cluster_public_hostname=openshift-ansible.public.example.com |
openshift_master_cluster_method | Optional. This variable defines the HA method when deploying multiple masters. Can be either native or pacemaker. |
openshift_master_cluster_password, openshift_master_cluster_vip, openshift_master_cluster_public_vip | These variables are only required when using the pacemaker HA method. For pacemaker, they set the cluster password and the internal and public virtual IPs (VIPs) used by the clustered masters. |
openshift_rolling_restart_mode | This variable enables rolling restarts of HA masters (i.e., masters are taken down one at a time) when running the upgrade playbook directly. It defaults to services, which performs rolling service restarts on masters; it can be set to system to instead perform full system restarts. |
os_sdn_network_plugin_name | This variable configures which OpenShift SDN plug-in to use for the pod network, which defaults to redhat/openshift-ovs-subnet for the standard SDN plug-in. Set the variable to redhat/openshift-ovs-multitenant to use the multitenant plug-in. |
openshift_master_identity_providers | This variable overrides the identity provider, which defaults to Deny All. |
openshift_master_named_certificates, openshift_master_overwrite_named_certificates | These variables are used to configure custom certificates which are deployed as part of the installation. See Configuring Custom Certificates for more information. |
openshift_master_session_name, openshift_master_session_max_seconds, openshift_master_session_auth_secrets, openshift_master_session_encryption_secrets | These variables override defaults for session options in the OAuth configuration. See Configuring Session Options for more information. |
openshift_master_portal_net | This variable configures the subnet in which services will be created within the OpenShift Enterprise SDN. This network block should be private and must not conflict with any existing network blocks in your infrastructure to which pods, nodes, or the master may require access, or the installation will fail. Defaults to 172.30.0.0/16, and cannot be re-configured after deployment. If changing from the default, avoid 172.16.0.0/16, which the docker0 network bridge uses by default, or modify the docker0 network. |
openshift_router_selector | Default node selector for automatically deploying router pods. See Configuring Node Host Labels for details. |
openshift_registry_selector | Default node selector for automatically deploying registry pods. See Configuring Node Host Labels for details. |
osm_default_subdomain | This variable overrides the default subdomain to use for exposed routes. |
osm_default_node_selector | This variable overrides the node selector that projects will use by default when placing pods. |
osm_cluster_network_cidr | This variable overrides the SDN cluster network CIDR block. This is the network from which pod IPs are assigned. This network block should be a private block and must not conflict with existing network blocks in your infrastructure to which pods, nodes, or the master may require access. Defaults to 10.1.0.0/16 and cannot be arbitrarily re-configured after deployment, although certain changes to it can be made in the SDN master configuration. |
osm_host_subnet_length | This variable specifies the size of the per host subnet allocated for pod IPs by OpenShift SDN. Defaults to 8 which means that a subnet of size /24 is allocated to each host; for example, given the default 10.1.0.0/16 cluster network, this will allocate 10.1.0.0/24, 10.1.1.0/24, 10.1.2.0/24, and so on. This cannot be re-configured after deployment. |
2.5.3.3. Configuring Node Host Labels
You can assign labels to node hosts during the Ansible install by configuring the /etc/ansible/hosts file. Labels are useful for determining the placement of pods onto nodes using the scheduler. Other than region=infra (discussed below), the actual label names and values are arbitrary and can be assigned however you see fit per your cluster’s requirements.
To assign labels to a node host during an Ansible install, use the openshift_node_labels variable with the desired labels added to the desired node host entry in the [nodes] section. In the following example, labels are set for a region called primary and a zone called east:
[nodes] node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
The openshift_router_selector and openshift_registry_selector Ansible settings are set to region=infra by default:
# default selectors for router and registry services
# openshift_router_selector='region=infra'
# openshift_registry_selector='region=infra'
The default router and registry will be automatically deployed if nodes exist that match the selector settings above. For example:
[nodes]
node1.example.com openshift_node_labels="{'region':'infra','zone':'default'}"
2.5.3.4. Marking Masters as Unschedulable Nodes
Any hosts you designate as masters during the installation process should also be configured as nodes by adding them to the [nodes] section so that the masters are configured as part of the OpenShift SDN.
However, in order to ensure that your masters are not burdened with running pods, you can make them unschedulable by adding the openshift_schedulable=false
option to any node that is also a master. For example:
[nodes]
master.example.com openshift_node_labels="{'region':'infra','zone':'default'}" openshift_schedulable=false
2.5.3.5. Configuring Session Options
Session options in the OAuth configuration are configurable in the inventory file. By default, Ansible populates a sessionSecretsFile
with generated authentication and encryption secrets so that sessions generated by one master can be decoded by the others. The default location is /etc/origin/master/session-secrets.yaml, and this file will only be re-created if deleted on all masters.
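To confirm that the same generated secrets are in place on each master, you can inspect the file on every master host, for example:

# cat /etc/origin/master/session-secrets.yaml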
You can set the session name and maximum number of seconds with openshift_master_session_name
and openshift_master_session_max_seconds
:
openshift_master_session_name=ssn
openshift_master_session_max_seconds=3600
If provided, openshift_master_session_auth_secrets and openshift_master_session_encryption_secrets must be of equal length.
For openshift_master_session_auth_secrets
, used to authenticate sessions using HMAC, it is recommended to use secrets with 32 or 64 bytes:
openshift_master_session_auth_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
For openshift_master_session_encryption_secrets
, used to encrypt sessions, secrets must be 16, 24, or 32 characters long, to select AES-128, AES-192, or AES-256:
openshift_master_session_encryption_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
2.5.3.6. Configuring Custom Certificates
Custom serving certificates for the public host names of the OpenShift API and web console can be deployed during an advanced installation and are configurable in the inventory file.
Custom certificates should only be configured for the host name associated with the publicMasterURL
which can be set using openshift_master_cluster_public_hostname
. Using a custom serving certificate for the host name associated with the masterURL
(openshift_master_cluster_hostname
) will result in TLS errors as infrastructure components will attempt to contact the master API using the internal masterURL
host.
Certificate and key file paths can be configured using the openshift_master_named_certificates
cluster variable:
openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key"}]
File paths must be local to the system where Ansible will be run. Certificates are copied to master hosts and are deployed within the /etc/origin/master/named_certificates/ directory.
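After the installation completes, you can confirm that the certificates were deployed by listing that directory on a master host, for example:

# ls /etc/origin/master/named_certificates/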
Ansible detects a certificate’s Common Name
and Subject Alternative Names
. Detected names can be overridden by providing the "names"
key when setting openshift_master_named_certificates
:
openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "names": ["public-master-host.com"]}]
Certificates configured using openshift_master_named_certificates
are cached on masters, meaning that each additional Ansible run with a different set of certificates results in all previously deployed certificates remaining in place on master hosts and within the master configuration file.
If you would like openshift_master_named_certificates
to be overwritten with the provided value (or no value), specify the openshift_master_overwrite_named_certificates
cluster variable:
openshift_master_overwrite_named_certificates=true
For a more complete example, consider the following cluster variables in an inventory file:
openshift_master_cluster_method=native
openshift_master_cluster_hostname=lb.openshift.com
openshift_master_cluster_public_hostname=custom.openshift.com
To overwrite the certificates on a subsequent Ansible run, you could set the following:
openshift_master_named_certificates=[{"certfile": "/root/STAR.openshift.com.crt", "keyfile": "/root/STAR.openshift.com.key", "names": ["custom.openshift.com"]}]
openshift_master_overwrite_named_certificates=true
2.5.4. Single Master Examples
You can configure an environment with a single master and multiple nodes, and either a single embedded etcd or multiple external etcd hosts.
Moving from a single master cluster to multiple masters after installation is not supported.
Single Master and Multiple Nodes
The following table describes an example environment for a single master (with embedded etcd) and two nodes:
Host Name | Infrastructure Component to Install |
---|---|
master.example.com | Master and node |
node1.example.com | Node |
node2.example.com |
You can see these example hosts present in the [masters] and [nodes] sections of the following example inventory file:
Example 2.2. Single Master and Multiple Nodes Inventory File
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_sudo must be set to true
#ansible_sudo=true

deployment_type=openshift-enterprise

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
master.example.com

# host group for nodes, includes region info
[nodes]
master.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
Single Master, Multiple etcd, and Multiple Nodes
The following table describes an example environment for a single master, three etcd hosts, and two nodes:
Host Name | Infrastructure Component to Install |
---|---|
master.example.com | Master and node |
etcd1.example.com | etcd |
etcd2.example.com | |
etcd3.example.com | |
node1.example.com | Node |
node2.example.com |
When specifying multiple etcd hosts, external etcd is installed and configured. Clustering of OpenShift’s embedded etcd is not supported.
You can see these example hosts present in the [masters], [nodes], and [etcd] sections of the following example inventory file:
Example 2.3. Single Master, Multiple etcd, and Multiple Nodes Inventory File
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
master.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# host group for nodes, includes region info
[nodes]
master.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
2.5.5. Multiple Masters Examples
You can configure an environment with multiple masters, multiple etcd hosts, and multiple nodes. Configuring multiple masters for high availability (HA) ensures that the cluster has no single point of failure.
Moving from a single master cluster to multiple masters after installation is not supported.
When configuring multiple masters, the advanced installation supports two high availability (HA) methods:
Method | Description
---|---|
native | Leverages the native HA master capabilities built into OpenShift and can be combined with any load balancing solution. If a host is defined in the [lb] section of the inventory file, Ansible installs and configures HAProxy automatically as the load balancing solution. If no host is defined, it is assumed you have pre-configured a load balancing solution of your choice to balance the master API (port 8443) on all master hosts.
pacemaker | Configures Pacemaker as the load balancer for multiple masters. Requires a High Availability Add-on for Red Hat Enterprise Linux subscription, which is provided separately from the OpenShift Enterprise subscription.
For more on these methods and the high availability master architecture, see Kubernetes Infrastructure.
To configure multiple masters, choose one of the above HA methods, and refer to the relevant example section that follows.
Multiple Masters Using Native HA
The following describes an example environment for three masters, one HAProxy load balancer, three etcd hosts, and two nodes using the native
HA method:
Host Name | Infrastructure Component to Install |
---|---|
master1.example.com | Master (clustered using native HA) and node |
master2.example.com | |
master3.example.com | |
lb.example.com | HAProxy to load balance API master endpoints |
etcd1.example.com | etcd |
etcd2.example.com | |
etcd3.example.com | |
node1.example.com | Node |
node2.example.com |
When specifying multiple etcd hosts, external etcd is installed and configured. Clustering of OpenShift’s embedded etcd is not supported.
You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the following example inventory file:
Example 2.4. Multiple Masters Using HAProxy Inventory File
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined, the installer assumes that a load balancer has
# been preconfigured. For installation, the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-cluster.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# override the default controller lease ttl
#osm_controller_lease_ttl=30

# enable ntp on masters to ensure proper failover
openshift_clock_enabled=true

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
Note the following when using the native
HA method:
-
The advanced installation method does not currently support multiple HAProxy load balancers in an active-passive setup. See the Load Balancer Administration documentation for post-installation amendments, or use the
pacemaker
method if you require this capability. -
In an HAProxy setup, controller manager servers run as standalone processes. They elect their active leader with a lease stored in etcd. The lease expires after 30 seconds by default. If a failure happens on an active controller server, it will take up to this number of seconds to elect another leader. The interval can be configured with the osm_controller_lease_ttl variable, as in the example that follows.
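For example, to shorten the failover window to roughly 10 seconds, you could set the following in the [OSEv3:vars] section of the inventory file (10 is an illustrative value, not a recommendation):

osm_controller_lease_ttl=10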
Multiple Masters Using Pacemaker
The following describes an example environment for three masters, three etcd hosts, and two nodes using the pacemaker
HA method:
Host Name | Infrastructure Component to Install |
---|---|
master1.example.com | Master (clustered using Pacemaker) and node |
master2.example.com | |
master3.example.com | |
etcd1.example.com | etcd |
etcd2.example.com | |
etcd3.example.com | |
node1.example.com | Node |
node2.example.com |
When specifying multiple etcd hosts, external etcd is installed and configured. Clustering of OpenShift’s embedded etcd is not supported.
You can see these example hosts present in the [masters], [nodes], and [etcd] sections of the following example inventory file:
Example 2.5. Multiple Masters Using Pacemaker Inventory File
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Pacemaker high availability cluster method.
# Pacemaker HA environment must be able to self provision the
# configured VIP. For installation openshift_master_cluster_hostname
# must resolve to the configured VIP.
openshift_master_cluster_method=pacemaker
openshift_master_cluster_password=openshift_cluster
openshift_master_cluster_vip=192.168.133.25
openshift_master_cluster_public_vip=192.168.133.25
openshift_master_cluster_hostname=openshift-cluster.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# override the default controller lease ttl
#osm_controller_lease_ttl=30

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
Note the following when using this configuration:
- Installing multiple masters with Pacemaker requires that you configure a fencing device after running the installer.
-
When specifying multiple masters, the installer handles creating and starting the HA cluster. If during that process the
pcs status
command indicates that an HA cluster already exists, the installer skips HA cluster configuration.
2.5.6. Running the Advanced Installation
After you have configured Ansible by defining an inventory file in /etc/ansible/hosts, you can run the advanced installation using the following playbook:
# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
If for any reason the installation fails, before re-running the installer, see Known Issues to check for any specific instructions or workarounds.
The installer caches playbook configuration values for 10 minutes, by default. If you change any system, network, or inventory configuration, and then re-run the installer within that 10 minute period, the new values are not used, and the previous values are used instead. You can delete the contents of the cache, which is defined by the fact_caching_connection
value in the /etc/ansible/ansible.cfg file.
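For example, you could locate the configured cache location and remove its contents. This is a sketch that assumes a local, file-based fact cache; check the value reported by the grep before deleting anything:

# grep fact_caching_connection /etc/ansible/ansible.cfg
# rm -rf <fact_caching_connection_value>/*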
Due to a known issue, after running the installation, if NFS volumes are provisioned for any component, the following directories might be created whether their components are being deployed to NFS volumes or not:
- /exports/logging-es
- /exports/logging-es-ops/
- /exports/metrics/
- /exports/prometheus
- /exports/prometheus-alertbuffer/
- /exports/prometheus-alertmanager/
You can delete these directories after installation, as needed.
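For example, to remove all of the directories listed above in one step:

# rm -rf /exports/logging-es /exports/logging-es-ops /exports/metrics \
    /exports/prometheus /exports/prometheus-alertbuffer /exports/prometheus-alertmanager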
2.5.7. Configuring Fencing
If you installed OpenShift using a configuration for multiple masters with Pacemaker as a load balancer, you must configure a fencing device. See Fencing: Configuring STONITH in the High Availability Add-on for Red Hat Enterprise Linux documentation for instructions, then continue to Verifying the Installation.
2.5.8. Verifying the Installation
After the installation completes:
Verify that the master is started and nodes are registered and reporting in Ready status. On the master host, run the following as root:
# oc get nodes
NAME                 LABELS                                                                 STATUS
master.example.com   kubernetes.io/hostname=master.example.com,region=infra,zone=default   Ready,SchedulingDisabled
node1.example.com    kubernetes.io/hostname=node1.example.com,region=primary,zone=east     Ready
node2.example.com    kubernetes.io/hostname=node2.example.com,region=primary,zone=west     Ready
To verify that the web console is installed correctly, use the master host name and the console port number to access the console with a web browser.
For example, for a master host with a hostname of
master.openshift.com
and using the default port of8443
, the web console would be found at:https://master.openshift.com:8443/console
Now that the install has been verified, run the following command on each master and node host to add the atomic-openshift packages back to the list of yum excludes on the host:
# atomic-openshift-excluder exclude
Multiple etcd Hosts
If you installed multiple etcd hosts:
On an etcd host, verify the etcd cluster health, substituting the FQDNs of your etcd hosts in the following:
# etcdctl -C \
    https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
    --ca-file=/etc/origin/master/master.etcd-ca.crt \
    --cert-file=/etc/origin/master/master.etcd-client.crt \
    --key-file=/etc/origin/master/master.etcd-client.key cluster-health
Also verify the member list is correct:
# etcdctl -C \
    https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
    --ca-file=/etc/origin/master/master.etcd-ca.crt \
    --cert-file=/etc/origin/master/master.etcd-client.crt \
    --key-file=/etc/origin/master/master.etcd-client.key member list
Multiple Masters Using Pacemaker
If you installed multiple masters using Pacemaker as a load balancer:
On a master host, determine which host is currently running as the active master:
# pcs status
After determining the active master, put the specified host into standby mode:
# pcs cluster standby <host1_name>
Verify the master is now running on another host:
# pcs status
After verifying the master is running on another node, re-enable the host on standby for normal operation by running:
# pcs cluster unstandby <host1_name>
Red Hat recommends that you also verify your installation by consulting the High Availability Add-on for Red Hat Enterprise Linux documentation.
Multiple Masters Using HAProxy
If you installed multiple masters using HAProxy as a load balancer, browse to the following URL according to your [lb] section definition and check HAProxy’s status:
http://<lb_hostname>:9000
You can verify your installation by consulting the HAProxy Configuration documentation.
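You can also check the status page from the command line. For example, with the lb.example.com host used in the earlier inventory (this assumes the statistics page is served at the root path on port 9000, as configured by the installer):

$ curl http://lb.example.com:9000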
2.5.9. Adding Nodes to an Existing Cluster
After your cluster is installed, you can install additional nodes (including masters) and add them to your cluster by running the scaleup.yml playbook. This playbook queries the master, generates and distributes new certificates for the new nodes, then runs the configuration playbooks on the new nodes only.
This process is similar to re-running the installer in the quick installation method to add nodes, however you have more configuration options available when using the advanced method and running the playbooks directly.
You must have an existing inventory file (for example, /etc/ansible/hosts) that is representative of your current cluster configuration in order to run the scaleup.yml playbook. If you previously used the atomic-openshift-installer
command to run your installation, you can check ~/.config/openshift/.ansible/hosts for the last inventory file that the installer generated and use or modify that as needed as your inventory file. You must then specify the file location with -i
when calling ansible-playbook
later.
To add nodes to an existing cluster:
Ensure you have the latest playbooks by updating the atomic-openshift-utils package:
# yum update atomic-openshift-utils
Edit your /etc/ansible/hosts file and add
new_nodes
to the [OSEv3:children] section:

[OSEv3:children]
masters
nodes
new_nodes
Then, create a [new_nodes] section much like the existing [nodes] section, specifying host information for any new nodes you want to add. For example:
[nodes]
master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"

[new_nodes]
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
See Configuring Host Variables for more options.
Now run the scaleup.yml playbook. If your inventory file is located somewhere other than the default /etc/ansible/hosts, specify the location with the -i option.

For additional nodes:
# ansible-playbook [-i /path/to/file] \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-node/scaleup.yml
For additional masters:
# ansible-playbook [-i /path/to/file] \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-master/scaleup.yml
- After the playbook completes successfully, verify the installation.
Finally, move any hosts you had defined in the [new_nodes] section up into the [nodes] section (but leave the [new_nodes] section definition itself in place) so that subsequent runs using this inventory file are aware of the nodes but do not handle them as new nodes. For example:
[nodes]
master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"

[new_nodes]
2.5.10. Uninstalling OpenShift Enterprise
You can uninstall OpenShift Enterprise hosts in your cluster by running the uninstall.yml playbook. This playbook deletes OpenShift Enterprise content installed by Ansible, including:
- Configuration
- Containers
- Default templates and image streams
- Images
- RPM packages
The playbook will delete content for any hosts defined in the inventory file that you specify when running the playbook. If you want to uninstall OpenShift Enterprise across all hosts in your cluster, run the playbook using the inventory file that you used when you initially installed OpenShift Enterprise, or the one you used most recently:
# ansible-playbook [-i /path/to/file] \
    /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
2.5.10.1. Uninstalling Nodes
You can also uninstall node components from specific hosts using the uninstall.yml playbook while leaving the remaining hosts and cluster alone:
This method should only be used when attempting to uninstall specific node hosts and not for specific masters or etcd hosts, which would require further configuration changes within the cluster.
- First follow the steps in Deleting Nodes to remove the node object from the cluster, then continue with the remaining steps in this procedure.
Create a different inventory file that only references those hosts. For example, to only delete content from one node:
[OSEv3:children]
nodes

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise

[nodes]
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
Specify that new inventory file using the
-i
option when running the uninstall.yml playbook:

# ansible-playbook -i /path/to/new/file \
    /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
When the playbook completes, all OpenShift Enterprise content should be removed from any specified hosts.
2.5.11. Known Issues
The following are known issues for specified installation configurations.
Multiple Masters
- On failover, it is possible for the controller manager to overcorrect, which causes the system to run more pods than what was intended. However, this is a transient event and the system does correct itself over time. See https://github.com/GoogleCloudPlatform/kubernetes/issues/10030 for details.
On failure of the Ansible installer, you must start from a clean operating system installation. If you are using virtual machines, start from a fresh image. If you are using bare metal machines:
Run the following on a master host with Pacemaker:
# pcs cluster destroy --all
Then, run the following on all node hosts:
# yum -y remove openshift openshift-* etcd docker
# rm -rf /etc/origin /var/lib/openshift /etc/etcd \
    /var/lib/etcd /etc/sysconfig/atomic-openshift* /etc/sysconfig/docker* \
    /root/.kube/config /etc/ansible/facts.d /usr/share/openshift
2.5.12. What’s Next?
Now that you have a working OpenShift instance, you can:
- Configure authentication; by default, authentication is set to Deny All.
- Deploy an integrated Docker registry.
- Deploy a router.
2.6. Disconnected Installation
2.6.1. Overview
Frequently, portions of a datacenter may not have access to the Internet, even via proxy servers. Installing OpenShift Enterprise in these environments is considered a disconnected installation.
An OpenShift Enterprise disconnected installation differs from a regular installation in two primary ways:
- The OpenShift Enterprise software channels and repositories are not available via Red Hat’s content distribution network.
- OpenShift Enterprise uses several containerized components. Normally, these images are pulled directly from Red Hat’s Docker registry. In a disconnected environment, this is not possible.
A disconnected installation ensures the OpenShift Enterprise software is made available to the relevant servers, then follows the same installation process as a standard connected installation. This topic additionally details how to manually download the Docker images and transport them onto the relevant servers.
Once installed, in order to use OpenShift Enterprise, you will need source code in a source control repository (for example, Git). This topic assumes that an internal Git repository is available that can host source code and this repository is accessible from the OpenShift Enterprise nodes. Installing the source control repository is outside the scope of this document.
Also, when building applications in OpenShift Enterprise, your build may have some external dependencies, such as a Maven Repository or Gem files for Ruby applications. For this reason, and because they might require certain tags, many of the Quickstart templates offered by OpenShift Enterprise may not work on a disconnected environment. However, while Red Hat Docker images try to reach out to external repositories by default, you can configure OpenShift Enterprise to use your own internal repositories. For the purposes of this document, we assume that such internal repositories already exist and are accessible from the OpenShift Enterprise nodes hosts. Installing such repositories is outside the scope of this document.
You can also have a Red Hat Satellite server that provides access to Red Hat content via an intranet or LAN. For environments with Satellite, you can synchronize the OpenShift Enterprise software onto the Satellite for use with the OpenShift Enterprise servers.
Red Hat Satellite 6.1 also introduces the ability to act as a Docker registry, and it can be used to host the OpenShift Enterprise containerized components. Doing so is outside of the scope of this document.
2.6.2. Prerequisites
This document assumes that you understand OpenShift’s overall architecture and that you have already planned out what the topology of your environment will look like.
2.6.3. Required Software and Components
In order to pull down the required software repositories and Docker images, you will need a Red Hat Enterprise Linux (RHEL) 7 server with access to the Internet and at least 100GB of additional free space. All steps in this section should be performed on the Internet-connected server as the root system user.
2.6.3.1. Syncing Repositories
Before you sync with the required repositories, you may need to import the appropriate GPG key:
# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
If the key is not imported, the indicated package is deleted after syncing the repository.
To sync the required repositories:
Register the server with the Red Hat Customer Portal. You must use the login and password associated with the account that has access to the OpenShift Enterprise subscriptions:
# subscription-manager register
Attach to a subscription that provides OpenShift Enterprise channels. You can find the list of available subscriptions using:
# subscription-manager list --available
Then, find the pool ID for the subscription that provides OpenShift Enterprise, and attach it:
# subscription-manager attach --pool=<pool_id>
# subscription-manager repos --disable="*"
# subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.1-rpms"
The yum-utils package provides the reposync utility, which lets you mirror yum repositories, and createrepo can create a usable yum repository from a directory:

# yum -y install yum-utils createrepo docker git
You will need up to 110GB of free space in order to sync the software. Depending on how restrictive your organization's policies are, you could re-connect this server to the disconnected LAN and use it as the repository server, or you could use USB-connected storage and transport the software to another server that will act as the repository server. This topic covers both options.
Make a path to where you want to sync the software (either locally or on your USB or other device):
# mkdir -p </path/to/repos>
Sync the packages and create the repository for each of them. You will need to modify the command for the appropriate path you created above:
# for repo in \
    rhel-7-server-rpms rhel-7-server-extras-rpms \
    rhel-7-server-ose-3.1-rpms
do
    reposync --gpgcheck -lm --repoid=${repo} --download_path=</path/to/repos/>
    createrepo -v </path/to/repos/>${repo} -o </path/to/repos/>${repo}
done
2.6.3.2. Syncing Images
To sync the Docker images:
Start the Docker daemon:
# systemctl start docker
Pull all of the required OpenShift Enterprise containerized components:
# docker pull registry.access.redhat.com/openshift3/ose-haproxy-router:v3.1.1.11
# docker pull registry.access.redhat.com/openshift3/ose-deployer:v3.1.1.11
# docker pull registry.access.redhat.com/openshift3/ose-sti-builder:v3.1.1.11
# docker pull registry.access.redhat.com/openshift3/ose-docker-builder:v3.1.1.11
# docker pull registry.access.redhat.com/openshift3/ose-pod:v3.1.1.11
# docker pull registry.access.redhat.com/openshift3/ose-docker-registry:v3.1.1.11
Pull all of the required OpenShift Enterprise containerized components for the additional centralized log aggregation and metrics aggregation components:
# docker pull registry.access.redhat.com/openshift3/logging-deployment
# docker pull registry.access.redhat.com/openshift3/logging-elasticsearch
# docker pull registry.access.redhat.com/openshift3/logging-kibana
# docker pull registry.access.redhat.com/openshift3/logging-fluentd
# docker pull registry.access.redhat.com/openshift3/logging-auth-proxy
# docker pull registry.access.redhat.com/openshift3/metrics-deployer
# docker pull registry.access.redhat.com/openshift3/metrics-hawkular-metrics
# docker pull registry.access.redhat.com/openshift3/metrics-cassandra
# docker pull registry.access.redhat.com/openshift3/metrics-heapster
Pull the Red Hat-certified Source-to-Image (S2I) builder images that you intend to use in your OpenShift environment. You can pull the following images:
- jboss-amq-62
- jboss-datagrid65-openshift
- jboss-decisionserver62-openshift
- jboss-eap64-openshift
- jboss-eap70-openshift
- jboss-webserver30-tomcat7-openshift
- jboss-webserver30-tomcat8-openshift
- mongodb
- mysql
- nodejs
- perl
- php
- postgresql
- python
- redhat-sso70-openshift
- ruby
Make sure to indicate the correct tag specifying the desired version number. For example, to pull both the previous and latest version of the Tomcat image:
# docker pull \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:latest
# docker pull \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:1.1
2.6.3.3. Preparing Images for Export
Docker images can be exported from a system by first saving them to a tarball and then transporting them:
Make and change into a repository home directory:
# mkdir </path/to/repos/images>
# cd </path/to/repos/images>
Export the OpenShift Enterprise containerized components:
# docker save -o ose3-images.tar \
    registry.access.redhat.com/openshift3/ose-haproxy-router \
    registry.access.redhat.com/openshift3/ose-deployer \
    registry.access.redhat.com/openshift3/ose-sti-builder \
    registry.access.redhat.com/openshift3/ose-docker-builder \
    registry.access.redhat.com/openshift3/ose-pod \
    registry.access.redhat.com/openshift3/ose-docker-registry
If you synchronized the metrics and log aggregation images, export:
# docker save -o ose3-logging-metrics-images.tar \
    registry.access.redhat.com/openshift3/logging-deployment \
    registry.access.redhat.com/openshift3/logging-elasticsearch \
    registry.access.redhat.com/openshift3/logging-kibana \
    registry.access.redhat.com/openshift3/logging-fluentd \
    registry.access.redhat.com/openshift3/logging-auth-proxy \
    registry.access.redhat.com/openshift3/metrics-deployer \
    registry.access.redhat.com/openshift3/metrics-hawkular-metrics \
    registry.access.redhat.com/openshift3/metrics-cassandra \
    registry.access.redhat.com/openshift3/metrics-heapster
Export the S2I builder images that you synced in the previous section. For example, if you synced only the Tomcat image:
# docker save -o ose3-builder-images.tar \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:latest \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:1.1
2.6.4. Repository Server
During the installation (and for later updates, should you so choose), you will need a webserver to host the repositories. RHEL 7 can provide the Apache webserver.
Option 1: Re-configuring as a Web server
If you can re-connect the server where you synchronized the software and images to your LAN, then you can simply install Apache on the server:
# yum install httpd
Skip to Placing the Software.
Option 2: Building a Repository Server
If you need to build a separate server to act as the repository server, install a new RHEL 7 system with at least 110GB of space. During installation of this repository server, make sure you select the Basic Web Server option.
2.6.4.1. Placing the Software
If necessary, attach the external storage, and then copy the repository files into Apache's root folder. Note that the copy step below (cp -a) should be substituted with move (mv) if you are repurposing the server you used to sync:

# cp -a /path/to/repos /var/www/html/
# chmod -R +r /var/www/html/repos
# restorecon -vR /var/www/html
Add the firewall rules:
# firewall-cmd --permanent --add-service=http
# firewall-cmd --reload
Enable and start Apache for the changes to take effect:
# systemctl enable httpd
# systemctl start httpd
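Optionally, from another machine on the LAN, confirm that the repositories are being served. This check assumes directory indexes are enabled for /var/www/html, which is the Apache default on RHEL 7:

$ curl http://<server_IP>/repos/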
2.6.5. OpenShift Enterprise Systems
2.6.5.1. Building Your Hosts
At this point you can perform the initial creation of the hosts that will be part of the OpenShift Enterprise environment. It is recommended to use the latest version of RHEL 7 and to perform a minimal installation. You will also want to pay attention to the other OpenShift Enterprise-specific prerequisites.
Once the hosts are initially built, the repositories can be set up.
2.6.5.2. Connecting the Repositories
On all of the relevant systems that will need OpenShift Enterprise software components, create the required repository definitions. Place the following text in the /etc/yum.repos.d/ose.repo file, replacing <server_IP>
with the IP or host name of the Apache server hosting the software repositories:
[rhel-7-server-rpms]
name=rhel-7-server-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-rpms
enabled=1
gpgcheck=0

[rhel-7-server-extras-rpms]
name=rhel-7-server-extras-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-extras-rpms
enabled=1
gpgcheck=0

[rhel-7-server-ose-3.1-rpms]
name=rhel-7-server-ose-3.1-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-ose-3.1-rpms
enabled=1
gpgcheck=0
2.6.5.3. Host Preparation
At this point, the systems are ready to continue to be prepared following the OpenShift Enterprise documentation.
Skip the section titled Registering the Hosts and start with Managing Packages.
2.6.6. Installing OpenShift Enterprise
2.6.6.1. Importing OpenShift Enterprise Containerized Components
To import the relevant components, securely copy the images from the connected host to the individual OpenShift Enterprise hosts:
# scp /var/www/html/repos/images/ose3-images.tar root@<openshift_host_name>:
# ssh root@<openshift_host_name> "docker load -i ose3-images.tar"
If you prefer, you could instead use wget on each OpenShift Enterprise host to fetch the tar file, and then run the docker load command locally; a sketch of this approach follows. Perform the same steps for the metrics and logging images, if you synchronized them.
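For example, assuming the repos directory (including its images subdirectory) was copied under the Apache document root as shown earlier, you could run the following on an OpenShift Enterprise host (the URL path is an assumption based on that layout):

# wget http://<server_IP>/repos/images/ose3-images.tar
# docker load -i ose3-images.tar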
On the host that will act as an OpenShift Enterprise master, copy and import the builder images:
# scp /var/www/html/repos/images/ose3-builder-images.tar root@<openshift_master_host_name>:
# ssh root@<openshift_master_host_name> "docker load -i ose3-builder-images.tar"
2.6.6.2. Running the OpenShift Enterprise Installer
You can now choose to follow the quick or advanced OpenShift Enterprise installation instructions in the documentation.
2.6.6.3. Creating the Internal Docker Registry
You now need to create the internal Docker registry.
2.6.7. Post-Installation Changes
In one of the previous steps, the S2I images were imported into the Docker daemon running on one of the OpenShift Enterprise master hosts. In a connected installation, these images would be pulled from Red Hat’s registry on demand. Since the Internet is not available to do this, the images must be made available in another Docker registry.
OpenShift Enterprise provides an internal registry for storing the images that are built as a result of the S2I process, but it can also be used to hold the S2I builder images. The following steps assume you did not customize the service IP subnet (172.30.0.0/16) or the Docker registry port (5000).
2.6.7.1. Re-tagging S2I Builder Images
On the master host where you imported the S2I builder images, obtain the service address of your Docker registry that you installed on the master:
# export REGISTRY=$(oc get service docker-registry -t '{{.spec.clusterIP}}{{"\n"}}')
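You can echo the variable to confirm that it was set; the output is the cluster IP of the docker-registry service (the address below is only an example):

# echo $REGISTRY
172.30.69.44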
Next, tag all of the builder images that you synced and exported before pushing them into the OpenShift Enterprise Docker registry. For example, if you synced and exported only the Tomcat image:
# docker tag \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:1.1 \
    $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:1.1
# docker tag \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:latest \
    $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:1.2
# docker tag \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:latest \
    $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:latest
2.6.7.2. Creating an Administrative User
Pushing the Docker images into OpenShift Enterprise's Docker registry requires a user with cluster-admin privileges. Because the default OpenShift system administrator does not have a standard authorization token, it cannot be used to log in to the Docker registry.
To create an administrative user:
Create a new user account in the authentication system you are using with OpenShift Enterprise. For example, if you are using local htpasswd-based authentication:

# htpasswd -b /etc/openshift/openshift-passwd <admin_username> <password>
The external authentication system now has a user account, but a user must log in to OpenShift Enterprise before an account is created in the internal database. Log in to OpenShift Enterprise for this account to be created. This assumes you are using the self-signed certificates generated by OpenShift Enterprise during the installation:
# oc login --certificate-authority=/etc/origin/master/ca.crt \
    -u <admin_username> https://<openshift_master_host>:8443
Get the user’s authentication token:
# MYTOKEN=$(oc whoami -t)
# echo $MYTOKEN
iwo7hc4XilD2KOLL4V1O55ExH2VlPmLD-W2-JOd6Fko
2.6.7.3. Modifying the Security Policies
Using
oc login
switches to the new user. Switch back to the OpenShift Enterprise system administrator in order to make policy changes:

# oc login -u system:admin
In order to push images into the OpenShift Enterprise Docker registry, an account must have the
image-builder
security role. Add this to your OpenShift Enterprise administrative user:

# oadm policy add-role-to-user system:image-builder <admin_username>
Next, add the administrative role to the user in the openshift project. This allows the administrative user to edit the openshift project, and, in this case, push the Docker images:
# oadm policy add-role-to-user admin <admin_username> -n openshift
2.6.7.4. Editing the Image Stream Definitions
The openshift project is where all of the image streams for builder images are created by the installer. They are loaded by the installer from the /usr/share/openshift/examples directory. Change all of the definitions by deleting the image streams which had been loaded into OpenShift Enterprise’s database, then re-create them:
Delete the existing image streams:
# oc delete is -n openshift --all
Make a backup of the files in /usr/share/openshift/examples/ if you desire. Next, edit the file image-streams-rhel7.json in the /usr/share/openshift/examples/image-streams folder. You will find an image stream section for each of the builder images. Edit the
spec
stanza to point to your internal Docker registry. For example, change:
"spec": { "dockerImageRepository": "registry.access.redhat.com/rhscl/mongodb-26-rhel7",
to:
"spec": { "dockerImageRepository": "172.30.69.44:5000/openshift/mongodb-26-rhel7",
In the example above, the repository name was changed from rhscl to openshift. Make this change regardless of whether the original repository is rhscl, openshift3, or another directory; every definition should have the following format:
<registry_ip>:5000/openshift/<image_name>
Repeat this change for every image stream in the file. Ensure you use the correct IP address that you determined earlier. When you are finished, save and exit. Repeat the same process for the JBoss image streams in the /usr/share/openshift/examples/xpaas-streams/jboss-image-streams.json file.
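If you prefer not to edit every entry by hand, a sed substitution can rewrite a whole file at once. This is a sketch that assumes every dockerImageRepository in the file should point at your internal registry; back up the files first and substitute the registry service IP you determined earlier (172.30.69.44 is the example value used above):

# sed -i 's|registry.access.redhat.com/[^/"]*|172.30.69.44:5000/openshift|g' \
    /usr/share/openshift/examples/image-streams/image-streams-rhel7.json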
Load the updated image stream definitions:
# oc create -f /usr/share/openshift/examples/image-streams/image-streams-rhel7.json -n openshift
# oc create -f /usr/share/openshift/examples/xpaas-streams/jboss-image-streams.json -n openshift
2.6.7.5. Loading the Docker Images
At this point the system is ready to load the Docker images.
Log in to the Docker registry using the token and registry service IP obtained earlier:
# docker login -u adminuser -e adminuser@abc.com \
    -p $MYTOKEN $REGISTRY:5000
Push the Docker images:
# docker push $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:1.1
# docker push $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:1.2
# docker push $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:latest
Verify that all the image streams now have the tags populated:
# oc get imagestreams -n openshift
NAME                                  DOCKER REPO                                                        TAGS                          UPDATED
jboss-webserver30-tomcat7-openshift   $REGISTRY/jboss-webserver-3/webserver30-jboss-tomcat7-openshift   1.1,1.1-2,1.1-6 + 2 more...   2 weeks ago
...
2.6.8. Installing a Router
At this point, the OpenShift Enterprise environment is almost ready for use. It is likely that you will want to install and configure a router.
2.7. Deploying a Docker Registry
2.7.1. Overview
OpenShift can build Docker images from your source code, deploy them, and manage their lifecycle. To enable this, OpenShift provides an internal, integrated Docker registry that can be deployed in your OpenShift environment to locally manage images.
2.7.2. Deploying the Registry
To deploy the integrated Docker registry, use the oadm registry
command as a user with cluster administrator privileges. For example:
$ oadm registry --config=/etc/origin/master/admin.kubeconfig \
    --credentials=/etc/origin/master/openshift-registry.kubeconfig \
    --images='registry.access.redhat.com/openshift3/ose-${component}:${version}'
This creates a service and a deployment configuration, both called docker-registry. Once deployed successfully, a pod is created with a name similar to docker-registry-1-cpty9.
Use --selector
to deploy the registry to any node(s) that match a specified node label:
$ oadm registry <registry_name> --replicas=<number> --selector=<label> \
    --service-account=registry
For example, if you want to create a registry named registry
and have it placed on a node labeled with region=infra
:
$ oadm registry registry --replicas=1 --selector='region=infra' \
    --service-account=registry
To see a full list of options that you can specify when creating the registry:
$ oadm registry --help
2.7.2.1. Storage for the Registry
The registry stores Docker images and metadata. If you simply deploy a pod with the registry, it uses an ephemeral volume that is destroyed if the pod exits. Any images anyone has built or pushed into the registry would disappear.
2.7.2.1.1. Production Use
For production use, attach a remote volume or define and use the persistent storage method of your choice.
For example, to use an existing persistent volume claim:
$ oc volume deploymentconfigs/docker-registry --add --name=registry-storage -t pvc \
    --claim-name=<pvc_name> --overwrite
Or, to attach an existing NFS volume to the registry:
$ oc volume deploymentconfigs/docker-registry \
    --add --overwrite --name=registry-storage --mount-path=/registry \
    --source='{"nfs": { "server": "<fqdn>", "path": "/path/to/export"}}'
See Known Issues if using a scaled registry with a shared NFS volume.
2.7.2.1.2. Non-Production Use
For non-production use, you can use the --mount-host=<path>
option to specify a directory for the registry to use for persistent storage. The registry volume is then created as a host-mount at the specified <path>
.
The --mount-host
option mounts a directory from the node on which the registry container lives. If you scale up the docker-registry deployment configuration, it is possible that your registry pods and containers will run on different nodes, which can result in two or more registry containers, each with its own local storage. This will lead to unpredictable behavior, as subsequent requests to pull the same image repeatedly may not always succeed, depending on which container the request ultimately goes to.
The --mount-host
option requires that the registry container run in privileged mode. This is automatically enabled when you specify --mount-host
. However, not all pods are allowed to run privileged containers by default. If you still want to use this option, create the registry and specify that it use the registry service account that was created during installation:
$ oadm registry --service-account=registry \
    --config=/etc/origin/master/admin.kubeconfig \
    --credentials=/etc/origin/master/openshift-registry.kubeconfig \
    --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' \
    --mount-host=<path>
The Docker registry pod runs as user 1001. This user must be able to write to the host directory. You may need to change directory ownership to user ID 1001 with this command:
$ sudo chown 1001:root <path>
2.7.2.2. Maintaining the Registry IP Address
OpenShift refers to the integrated registry by its service IP address, so if you decide to delete and recreate the docker-registry service, you can ensure a completely transparent transition by arranging to re-use the old IP address in the new service. If a new IP address cannot be avoided, you can minimize cluster disruption by rebooting only the masters.
- Re-using the Address
- To re-use the IP address, you must save the IP address of the old docker-registry service prior to deleting it, and arrange to replace the newly assigned IP address with the saved one in the new docker-registry service.
Make a note of the
ClusterIP
for the service:$ oc get svc/docker-registry -o yaml | grep clusterIP:
Delete the service:
$ oc delete svc/docker-registry dc/docker-registry
Create the registry definition in registry.yaml, replacing
<options>
with, for example, those used in step 3 of the instructions in the Non-Production Use section:$ oadm registry <options> -o yaml > registry.yaml
-
Edit registry.yaml, find the
Service
there, and change itsClusterIP
to the address noted in step 1. Create the registry using the modified registry.yaml:
$ oc create -f registry.yaml
- Rebooting the Masters
If you are unable to re-use the IP address, any operation that uses a pull specification that includes the old IP address will fail. To minimize cluster disruption, you must reboot the masters:
# systemctl restart atomic-openshift-master
This ensures that the old registry URL, which includes the old IP address, is cleared from the cache.
Note: We recommend against rebooting the entire cluster because that incurs unnecessary downtime for pods and does not actually clear the cache.
2.7.3. Viewing Logs
To view the logs for the Docker registry, run the oc logs command, indicating the desired pod:
$ oc logs docker-registry-1-da73t
2015-05-01T19:48:36.300593110Z time="2015-05-01T19:48:36Z" level=info msg="version=v2.0.0+unknown"
2015-05-01T19:48:36.303294724Z time="2015-05-01T19:48:36Z" level=info msg="redis not configured" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002
2015-05-01T19:48:36.303422845Z time="2015-05-01T19:48:36Z" level=info msg="using inmemory layerinfo cache" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002
2015-05-01T19:48:36.303433991Z time="2015-05-01T19:48:36Z" level=info msg="Using OpenShift Auth handler"
2015-05-01T19:48:36.303439084Z time="2015-05-01T19:48:36Z" level=info msg="listening on :5000" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002
2.7.4. File Storage
Tag and image metadata is stored in OpenShift, but the registry stores layer and signature data in a volume that is mounted into the registry container at /registry. As oc exec
does not work on privileged containers, to view a registry’s contents you must manually SSH into the node housing the registry pod’s container, then run docker exec
on the container itself:
List the current pods to find the pod name of your Docker registry:
# oc get pods
Then, use
oc describe
to find the host name for the node running the container:# oc describe pod <pod_name>
Log into the desired node:
# ssh node.example.com
List the running containers on the node host and identify the container ID for the Docker registry:
# docker ps | grep ose-docker-registry
List the registry contents using the
docker exec
command:# docker exec -it 4c01db0b339c find /registry /registry/docker /registry/docker/registry /registry/docker/registry/v2 /registry/docker/registry/v2/blobs 1 /registry/docker/registry/v2/blobs/sha256 /registry/docker/registry/v2/blobs/sha256/ed /registry/docker/registry/v2/blobs/sha256/ed/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810 /registry/docker/registry/v2/blobs/sha256/ed/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810/data 2 /registry/docker/registry/v2/blobs/sha256/a3 /registry/docker/registry/v2/blobs/sha256/a3/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 /registry/docker/registry/v2/blobs/sha256/a3/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4/data /registry/docker/registry/v2/blobs/sha256/f7 /registry/docker/registry/v2/blobs/sha256/f7/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845 /registry/docker/registry/v2/blobs/sha256/f7/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845/data /registry/docker/registry/v2/repositories 3 /registry/docker/registry/v2/repositories/p1 /registry/docker/registry/v2/repositories/p1/pause 4 /registry/docker/registry/v2/repositories/p1/pause/_manifests /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256 /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures 5 /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures/sha256 /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures/sha256/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810 /registry/docker/registry/v2/repositories/p1/pause/_manifests/revisions/sha256/e9a2ac6418981897b399d3709f1b4a6d2723cd38a4909215ce2752a5c068b1cf/signatures/sha256/ede17b139a271d6b1331ca3d83c648c24f92cece5f89d95ac6c34ce751111810/link 6 /registry/docker/registry/v2/repositories/p1/pause/_uploads 7 /registry/docker/registry/v2/repositories/p1/pause/_layers 8 /registry/docker/registry/v2/repositories/p1/pause/_layers/sha256 /registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4 /registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4/link 9 /registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845 /registry/docker/registry/v2/repositories/p1/pause/_layers/sha256/f72a00a23f01987b42cb26f259582bb33502bdb0fcf5011e03c60577c4284845/link
- 1
- This directory stores all layers and signatures as blobs.
- 2
- This file contains the blob’s contents.
- 3
- This directory stores all the image repositories.
- 4
- This directory is for a single image repository p1/pause.
- 5
- This directory contains signatures for a particular image manifest revision.
- 6
- This file contains a reference back to a blob (which contains the signature data).
- 7
- This directory contains any layers that are currently being uploaded and staged for the given repository.
- 8
- This directory contains links to all the layers this repository references.
- 9
- This file contains a reference to a specific layer that has been linked into this repository via an image.
2.7.5. Accessing the Registry Directly
For advanced usage, you can access the registry directly to invoke docker
commands. This allows you to push images to or pull them from the integrated registry directly using operations like docker push
or docker pull
. To do so, you must be logged in to the registry using the docker login
command. The operations you can perform depend on your user permissions, as described in the following sections.
2.7.5.1. User Prerequisites
To access the registry directly, the user that you use must satisfy the following, depending on your intended usage:
For any direct access, you must have a regular user for your preferred identity provider; create one if it does not already exist. A regular user can generate the access token required for logging in to the registry. System users, such as system:admin, cannot obtain access tokens and, therefore, cannot access the registry directly.
For example, if you are using
HTPASSWD
authentication, you can create one using the following command:
# htpasswd /etc/origin/openshift-htpasswd <user_name>
The user must have the system:registry role. To add this role:
# oadm policy add-role-to-user system:registry <user_name>
The user must have the admin role for the project associated with the Docker operation. For example, if accessing images in the global openshift project:
$ oadm policy add-role-to-user admin <user_name> -n openshift
For writing or pushing images, for example when using the
docker push
command, the user must have the system:image-builder role. To add this role:
$ oadm policy add-role-to-user system:image-builder <user_name>
For more information on user permissions, see Managing Role Bindings.
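Taken together, the role assignments above might look like the following minimal sketch for a hypothetical user named registry-user and a hypothetical project named myproject (both names are placeholders, not part of the original steps):
# htpasswd /etc/origin/openshift-htpasswd registry-user
$ oadm policy add-role-to-user system:registry registry-user
$ oadm policy add-role-to-user admin registry-user -n myproject
$ oadm policy add-role-to-user system:image-builder registry-user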
2.7.5.2. Logging in to the Registry
Ensure your user satisfies the prerequisites for accessing the registry directly.
To log in to the registry directly:
Ensure you are logged in to OpenShift as a regular user:
$ oc login
Get your access token:
$ oc whoami -t
Log in to the Docker registry:
$ docker login -u <username> -e <any_email_address> \ -p <token_value> <registry_ip>:<port>
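Putting the steps together, a login against the registry service IP used later in this topic (172.30.124.220:5000) might look like the following sketch; the user name and email address remain placeholders:
$ oc login
$ token=$(oc whoami -t)
$ docker login -u <username> -e <any_email_address> -p $token 172.30.124.220:5000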
2.7.5.3. Pushing and Pulling Images
After logging in to the registry, you can perform docker pull
and docker push
operations against your registry.
You can pull arbitrary images, but if you have the system:registry role added, you can only push images to the registry in your project.
In the following examples, we use:
Component | Value
---|---
<registry_ip> | 172.30.124.220
<port> | 5000
<project> | openshift
<image> | busybox
<tag> | omitted (defaults to latest)
Pull an arbitrary image:
$ docker pull docker.io/busybox
Tag the new image with the form
<registry_ip>:<port>/<project>/<image>
. The project name must appear in this pull specification for OpenShift to correctly place and later access the image in the registry.
$ docker tag docker.io/busybox 172.30.124.220:5000/openshift/busybox
Note: Your regular user must have the system:image-builder role for the specified project, which allows the user to write or push an image. Otherwise, the
docker push
in the next step will fail. To test, you can create a new project to push the busybox image.
Push the newly-tagged image to your registry:
$ docker push 172.30.124.220:5000/openshift/busybox
...
cf2616975b4a: Image successfully pushed
Digest: sha256:3662dd821983bc4326bee12caec61367e7fb6f6a3ee547cbaff98f77403cab55
2.7.6. Securing the Registry
Optionally, you can secure the registry so that it serves traffic via TLS:
- Deploy the registry.
Fetch the service IP and port of the registry:
$ oc get svc/docker-registry
NAME              LABELS                    SELECTOR                  IP(S)            PORT(S)
docker-registry   docker-registry=default   docker-registry=default   172.30.124.220   5000/TCP
You can use an existing server certificate, or create a key and server certificate valid for specified IPs and host names, signed by a specified CA. To create a server certificate for the registry service IP and the docker-registry.default.svc.cluster.local host name:
$ oadm ca create-server-cert \ --signer-cert=/etc/origin/master/ca.crt \ --signer-key=/etc/origin/master/ca.key \ --signer-serial=/etc/origin/master/ca.serial.txt \ --hostnames='docker-registry.default.svc.cluster.local,172.30.124.220' \ --cert=/etc/secrets/registry.crt \ --key=/etc/secrets/registry.key
Create the secret for the registry certificates:
$ oc secrets new registry-secret \ /etc/secrets/registry.crt \ /etc/secrets/registry.key
Add the secret to the registry pod’s service accounts (including the default service account):
$ oc secrets add serviceaccounts/registry secrets/registry-secret
$ oc secrets add serviceaccounts/default secrets/registry-secret
Add the secret volume to the registry deployment configuration:
$ oc volume dc/docker-registry --add --type=secret \ --secret-name=registry-secret -m /etc/secrets
Enable TLS by adding the following environment variables to the registry deployment configuration:
$ oc env dc/docker-registry \ REGISTRY_HTTP_TLS_CERTIFICATE=/etc/secrets/registry.crt \ REGISTRY_HTTP_TLS_KEY=/etc/secrets/registry.key
See more details on overriding registry options.
Update the scheme used for the registry’s liveness probe from HTTP to HTTPS:
$ oc patch dc/docker-registry --api-version=v1 -p '{"spec": {"template": {"spec": {"containers":[{ "name":"registry", "livenessProbe": {"httpGet": {"scheme":"HTTPS"}} }]}}}}'
Validate the registry is running in TLS mode. Wait until the docker-registry pod status changes to
Running
and verify the Docker logs for the registry container. You should find an entry for listening on :5000, tls.
$ oc get pods
POD                       IP           CONTAINER(S)   IMAGE(S)   HOST                           LABELS                                                                                  STATUS    CREATED    MESSAGE
docker-registry-1-da73t   172.17.0.1                             openshiftdev.local/127.0.0.1   deployment=docker-registry-4,deploymentconfig=docker-registry,docker-registry=default   Running   38 hours
$ oc log docker-registry-1-da73t | grep tls
time="2015-05-27T05:05:53Z" level=info msg="listening on :5000, tls" instance.id=deeba528-c478-41f5-b751-dc48e4935fc2
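As an additional check (not part of the original procedure), you can verify the TLS handshake against the registry service IP with openssl, assuming the CA certificate is available at /etc/origin/master/ca.crt on the host you run it from; a verify return code of 0 (ok) indicates the certificate chain is trusted:
$ openssl s_client -connect 172.30.124.220:5000 -CAfile /etc/origin/master/ca.crt < /dev/null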
Copy the CA certificate to the Docker certificates directory. This must be done on all nodes in the cluster:
$ dcertsdir=/etc/docker/certs.d
$ destdir_addr=$dcertsdir/172.30.124.220:5000
$ destdir_name=$dcertsdir/docker-registry.default.svc.cluster.local:5000
$ sudo mkdir -p $destdir_addr $destdir_name
$ sudo cp ca.crt $destdir_addr 1
$ sudo cp ca.crt $destdir_name
- 1
- The ca.crt file is a copy of /etc/origin/master/ca.crt on the master.
Remove the
--insecure-registry
option only for this particular registry in the /etc/sysconfig/docker file. Then, reload the daemon and restart the docker service to reflect this configuration change:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
Validate the
docker
client connection. Running docker push
to the registry or docker pull
from the registry should succeed. Make sure you have logged in to the registry.
$ docker tag|push <registry/image> <internal_registry/project/image>
For example:
$ docker pull busybox
$ docker tag docker.io/busybox 172.30.124.220:5000/openshift/busybox
$ docker push 172.30.124.220:5000/openshift/busybox
...
cf2616975b4a: Image successfully pushed
Digest: sha256:3662dd821983bc4326bee12caec61367e7fb6f6a3ee547cbaff98f77403cab55
2.7.7. Advanced: Overriding the Registry Configuration
You can override the integrated registry’s default configuration, found by default at /config.yml in a running registry’s container, with your own custom configuration. See the upstream registry documentation’s Registry Configuration Reference for the full list of available options.
To enable managing the registry configuration file directly, it is recommended that the configuration file be mounted as a secret volume:
- Deploy the registry.
Edit the registry configuration file locally as needed. The initial YAML file deployed on the registry is provided below:
version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    layerinfo: inmemory
  filesystem:
    rootdirectory: /registry
  delete:
    enabled: true
auth:
  openshift:
    realm: openshift
middleware:
  repository:
    - name: openshift
      options:
        pullthrough: true
See Registry Configuration Reference for more options.
Create a new secret called registry-config from your custom registry configuration file you edited locally:
$ oc secrets new registry-config config.yml=</path/to/custom/registry/config.yml>
Add the registry-config secret as a volume to the registry’s deployment configuration to mount the custom configuration file at /etc/docker/registry/:
$ oc volume dc/docker-registry --add --type=secret \ --secret-name=registry-config -m /etc/docker/registry/
Update the registry to reference the configuration path from the previous step by adding the following environment variable to the registry’s deployment configuration:
$ oc env dc/docker-registry \ REGISTRY_CONFIGURATION_PATH=/etc/docker/registry/config.yml
This may be performed as an iterative process to achieve the desired configuration. For example, during troubleshooting, the configuration may be temporarily updated to put it in debug mode.
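For example, once troubleshooting is complete, you might lower the verbosity in your local config.yml before recreating the secret. A minimal sketch of the changed stanza (only the log level differs from the default shown above):
log:
  level: info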
To update an existing configuration:
This procedure will overwrite the currently deployed registry configuration.
- Edit the local registry configuration file, config.yml.
Delete the registry-config secret:
$ oc delete secret registry-config
Recreate the secret to reference the updated configuration file from the first step:
$ oc secrets new registry-config config.yml=</path/to/custom/registry/config.yml>
Redeploy the registry to read the updated configuration:
$ oc deploy docker-registry --latest
It is recommended that configuration files be maintained in a source control repository.
2.7.8. Whitelisting Docker Registries
You can specify a whitelist of docker registries, allowing you to curate a set of images and templates that are available for download by OpenShift users. This curated set can be placed in one or more docker registries, and then added to the whitelist. When using a whitelist, only the specified registries are accessible within OpenShift, and all other registries are denied access by default.
To configure a whitelist:
Edit the /etc/sysconfig/docker file to block all registries:
BLOCK_REGISTRY='--block-registry=all'
You may need to uncomment the
BLOCK_REGISTRY
line.
In the same file, add registries to which you want to allow access:
ADD_REGISTRY='--add-registry=<registry1> --add-registry=<registry2>'
Example 2.6. Allowing Access to Registries
ADD_REGISTRY='--add-registry=registry.access.redhat.com'
This example would restrict access to images available on the Red Hat Customer Portal.
Once the whitelist is configured, if a user tries to pull from a docker registry that is not on the whitelist, they will receive an error message stating that this registry is not allowed.
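Putting both settings together, the relevant lines in /etc/sysconfig/docker might look like the following sketch. Whether your nodes also need the integrated registry (172.30.124.220:5000 in the earlier examples) on the whitelist depends on your environment, so treat that entry as an assumption to verify:
BLOCK_REGISTRY='--block-registry=all'
ADD_REGISTRY='--add-registry=registry.access.redhat.com --add-registry=172.30.124.220:5000'
After editing /etc/sysconfig/docker, restart the docker service on each node for the change to take effect.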
2.7.9. Exposing the Registry
To expose your internal registry externally, it is recommended that you run a secure registry. To expose the registry you must first have deployed a router.
- Deploy the registry.
- Secure the registry.
- Deploy a router.
Create your passthrough route with
oc create -f <filename>.json
. The passthrough route will point to the registry service that you have created.
apiVersion: v1
kind: Route
metadata:
  name: registry
spec:
  host: <host> 1
  to:
    kind: Service
    name: docker-registry 2
  tls:
    termination: passthrough 3
Note: Passthrough is currently the only type of route supported for exposing the secure registry.
Next, you must trust the certificates being used for the registry on your host system. The certificates referenced were created when you secured your registry.
$ sudo mkdir -p /etc/docker/certs.d/<host> $ sudo cp <ca certificate file> /etc/docker/certs.d/<host> $ sudo systemctl restart docker
Log in to the registry using the information from securing the registry. However, this time point to the host name used in the route rather than your service IP. You should now be able to tag and push images using the route host.
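For example, the login against the route host (rather than the service IP) might look like the following sketch, reusing the placeholder values from the earlier login step:
$ docker login -u <username> -e <any_email_address> -p $(oc whoami -t) <host>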
$ oc get imagestreams -n test
NAME      DOCKER REPO   TAGS      UPDATED
$ docker pull busybox
$ docker tag busybox <host>/test/busybox
$ docker push <host>/test/busybox
The push refers to a repository [<host>/test/busybox] (len: 1)
8c2e06607696: Image already exists
6ce2e90b0bc7: Image successfully pushed
cf2616975b4a: Image successfully pushed
Digest: sha256:6c7e676d76921031532d7d9c0394d0da7c2906f4cb4c049904c4031147d8ca31
$ docker pull <host>/test/busybox
latest: Pulling from <host>/test/busybox
cf2616975b4a: Already exists
6ce2e90b0bc7: Already exists
8c2e06607696: Already exists
Digest: sha256:6c7e676d76921031532d7d9c0394d0da7c2906f4cb4c049904c4031147d8ca31
Status: Image is up to date for <host>/test/busybox:latest
$ oc get imagestreams -n test
NAME      DOCKER REPO                       TAGS      UPDATED
busybox   172.30.11.215:5000/test/busybox   latest    2 seconds ago
Note: Your image streams will have the IP address and port of the registry service, not the route name and port. See
oc get imagestreams
for details.
Note: In the
<host>/test/busybox
example above, test
refers to the project name.
2.7.10. Known Issues
The following are the known issues when deploying or using the integrated registry.
Image Push Errors with Scaled Registry Using Shared NFS Volume
When using a scaled registry with a shared NFS volume, you may see one of the following errors during the push of an image:
-
digest invalid: provided digest did not match uploaded content
-
blob upload unknown
-
blob upload invalid
These errors are returned by the internal registry service when Docker attempts to push the image. The cause lies in the synchronization of file attributes across nodes. Factors such as NFS client-side caching, network latency, and layer size can all contribute to potential errors that might occur when pushing an image using the default round-robin load balancing configuration.
You can perform the following steps to minimize the probability of such a failure:
Ensure that the
sessionAffinity
of your docker-registry service is set to ClientIP:
$ oc get svc/docker-registry --template='{{.spec.sessionAffinity}}'
This should return
ClientIP
, which is the default in recent OpenShift Enterprise versions. If not, change it:
$ oc get -o yaml svc/docker-registry | \
    sed 's/\(sessionAffinity:\s*\).*/\1ClientIP/' | \
    oc replace -f -
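Alternatively (not shown in the original steps), you could apply the same change with oc patch, avoiding the sed round-trip; a minimal sketch:
$ oc patch svc/docker-registry --api-version=v1 -p '{"spec":{"sessionAffinity":"ClientIP"}}'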
-
Ensure that the NFS export line of your registry volume on your NFS server has the
no_wdelay
options listed. See Export Settings in the Persistent Storage Using NFS topic for details.
2.7.11. What’s Next?
After you have a registry deployed, you can:
- Configure authentication; by default, authentication is set to Deny All.
- Deploy a router.
2.8. Deploying a Router
2.8.1. Overview
The OpenShift router is the ingress point for all external traffic destined for services in your OpenShift installation. OpenShift provides and supports the following two router plug-ins:
- The HAProxy template router is the default plug-in. It uses the openshift3/ose-haproxy-router image to run an HAProxy instance alongside the template router plug-in inside a container on OpenShift. It currently supports HTTP(S) traffic and TLS-enabled traffic via SNI. The router’s container listens on the host network interface, unlike most containers that listen only on private IPs. The router proxies external requests for route names to the IPs of actual pods identified by the service associated with the route.
- The F5 router integrates with an existing F5 BIG-IP® system in your environment to synchronize routes. F5 BIG-IP® version 11.4 or newer is required in order to have the F5 iControl REST API.
The F5 router plug-in is available starting in OpenShift Enterprise 3.0.2.
2.8.2. The Router Service Account
Before deploying an OpenShift cluster, you must have a service account for the router. Starting in OpenShift Enterprise 3.1, a router service account is automatically created during a quick or advanced installation (previously, this required manual creation). This service account has permissions to a security context constraint (SCC) that allows it to specify host ports.
2.8.3. Deploying the Default HAProxy Router
The oadm router
command is provided with the administrator CLI to simplify the tasks of setting up routers in a new installation. Just about every form of communication between OpenShift components is secured by TLS and uses various certificates and authentication methods. Use the --credentials
option to specify what credentials the router should use to contact the master.
Routers attach directly to ports 80 and 443 on all interfaces on a host. Restrict routers to hosts where ports 80 and 443 are available and not being consumed by another service, and set this up using node selectors and the scheduler configuration. For example, you can achieve this by dedicating infrastructure nodes to run services such as routers.
It is recommended to use distinct openshift-router credentials with your router. The credentials can be provided using the --credentials
flag to the oadm router
command. Alternatively, the default cluster administrator credentials can be used from the $KUBECONFIG
environment variable.
$ oadm router --dry-run --service-account=router \
--credentials='/etc/origin/master/openshift-router.kubeconfig' 1
Router pods created using oadm router
have default resource requests that a node must satisfy for the router pod to be deployed. In an effort to increase the reliability of infrastructure components, the default resource requests are used to increase the QoS tier of the router pods above pods without resource requests. The default values represent the observed minimum resources required to deploy a basic router. You can edit them in the router's deployment configuration, and you may want to increase them based on the load of the router.
The default router service account, named router, is automatically created during quick and advanced installations. To verify that this account already exists:
$ oadm router --dry-run \ --credentials='/etc/origin/master/openshift-router.kubeconfig' \ --service-account=router
To see what the default router would look like if created:
$ oadm router -o yaml \ --credentials='/etc/origin/master/openshift-router.kubeconfig' \ --service-account=router
To create a router if it does not exist:
$ oadm router <router_name> --replicas=<number> \ --credentials='/etc/origin/master/openshift-router.kubeconfig' \ --service-account=router
Multiple instances are created on different hosts according to the scheduler policy.
To use a different router image and view the router configuration that would be used:
$ oadm router <router_name> -o <format> --images=<image> \ --credentials='/etc/origin/master/openshift-router.kubeconfig' \ --service-account=router
For example:
$ oadm router region-west -o yaml --images=myrepo/somerouter:mytag \ --credentials='/etc/origin/master/openshift-router.kubeconfig' \ --service-account=router
To deploy the router to any node(s) that match a specified node label:
$ oadm router <router_name> --replicas=<number> --selector=<label> \ --service-account=router
For example, if you want to create a router named router
and have it placed on a node labeled with region=infra
:
$ oadm router router --replicas=1 --selector='region=infra' \ --service-account=router
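If the target node does not yet carry that label, add it first. A minimal sketch, assuming a node named node1.example.com (a placeholder):
$ oc label node node1.example.com region=infra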
2.8.3.1. High Availability
You can set up a highly-available router on your OpenShift Enterprise cluster using IP failover.
2.8.3.2. Customizing the Default Routing Subdomain
You can customize the suffix used as the default routing subdomain for your environment using the master configuration file (the /etc/origin/master/master-config.yaml file by default). The following example shows how you can set the configured suffix to v3.openshift.test:
Example 2.7. Master Configuration Snippet
routingConfig:
  subdomain: v3.openshift.test
This change requires a restart of the master if it is running.
With the OpenShift master(s) running the above configuration, the generated host name for a route named myroute added to the namespace mynamespace would be:
Example 2.8. Generated Host Name
myroute-mynamespace.v3.openshift.test
2.8.3.3. Using Wildcard Certificates
A TLS-enabled route that does not include a certificate uses the router’s default certificate instead. In most cases, this certificate should be provided by a trusted certificate authority, but for convenience you can use the OpenShift CA to create the certificate. For example:
$ CA=/etc/origin/master
$ oadm ca create-server-cert --signer-cert=$CA/ca.crt \
    --signer-key=$CA/ca.key --signer-serial=$CA/ca.serial.txt \
    --hostnames='*.cloudapps.example.com' \
    --cert=cloudapps.crt --key=cloudapps.key
The router expects the certificate and key to be in PEM format in a single file:
$ cat cloudapps.crt cloudapps.key $CA/ca.crt > cloudapps.router.pem
From there you can use the --default-cert
flag:
$ oadm router --default-cert=cloudapps.router.pem --service-account=router \ --credentials=${ROUTER_KUBECONFIG:-"$KUBECONFIG"}
Browsers only consider wildcards valid for subdomains one level deep. So in this example, the certificate would be valid for a.cloudapps.example.com but not for a.b.cloudapps.example.com.
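As an optional sanity check (not part of the original procedure), you can confirm the wildcard subject and validity dates of the generated certificate before handing it to the router:
$ openssl x509 -in cloudapps.crt -noout -subject -dates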
2.8.3.4. Using Secured Routes
Currently, password-protected key files are not supported. HAProxy prompts for a password upon starting and does not have a way to automate this process. To remove a passphrase from a key file, you can run:
# openssl rsa -in <passwordProtectedKey.key> -out <new.key>
Here is an example of how to use a secure edge terminated route with TLS termination occurring on the router before traffic is proxied to the destination. The secure edge terminated route specifies the TLS certificate and key information. The TLS certificate is served by the router front end.
First, start up a router instance:
# oadm router --replicas=1 --service-account=router \ --credentials=${ROUTER_KUBECONFIG:-"$KUBECONFIG"}
Next, create a private key, certificate signing request (CSR), and certificate for the edge secured route. The instructions for doing this are specific to your certificate authority and provider. For a simple self-signed certificate for a domain named www.example.test
, see the example shown below:
# sudo openssl genrsa -out example-test.key 2048
#
# sudo openssl req -new -key example-test.key -out example-test.csr \
    -subj "/C=US/ST=CA/L=Mountain View/O=OS3/OU=Eng/CN=www.example.test"
#
# sudo openssl x509 -req -days 366 -in example-test.csr \
    -signkey example-test.key -out example-test.crt
Generate a route configuration file using the above certificate and key. Make sure to replace the service name my-service
with the name of your service.
# servicename="my-service" # echo " apiVersion: v1 kind: Route metadata: name: secured-edge-route spec: host: www.example.test to: kind: Service name: $servicename tls: termination: edge key: | $(openssl rsa -in example-test.key | sed 's/^/ /') certificate: | $(openssl x509 -in example-test.crt | sed 's/^/ /') " > example-test-route.yaml
Finally add the route to OpenShift (and the router) via:
# oc create -f example-test-route.yaml
Make sure your DNS entry for www.example.test
points to your router instance(s); the route to your domain should then be available. The example below uses curl along with a local resolver to simulate the DNS lookup:
# routerip="4.1.1.1"  # replace with the IP address of one of your router instances.
# curl -k --resolve www.example.test:443:$routerip https://www.example.test/
2.8.3.5. Using the Container Network Stack
The OpenShift router runs inside a Docker container and the default behavior is to use the network stack of the host (i.e., the node where the router container runs). This default behavior benefits performance because network traffic from remote clients does not need to take multiple hops through user space to reach the target service and container.
Additionally, this default behavior enables the router to get the actual source IP address of the remote connection rather than getting the node’s IP address. This is useful for defining ingress rules based on the originating IP, supporting sticky sessions, and monitoring traffic, among other uses.
This host network behavior is controlled by the --host-network
router command line option, and the default behavior is the equivalent of using --host-network=true
. If you wish to run the router with the container network stack, use the --host-network=false
option when creating the router. For example:
$ oadm router \ --credentials='/etc/origin/master/openshift-router.kubeconfig' \ --service-account=router \ --host-network=false
Internally, this means the router container must publish ports 80 and 443 in order for the external network to communicate with the router.
Running with the container network stack means that the router sees the source IP address of a connection to be the NATed IP address of the node, rather than the actual remote IP address.
On OpenShift clusters using multi-tenant network isolation, routers on a non-default namespace with the --host-network=false
option will load all routes in the cluster, but routes across the namespaces will not be reachable due to network isolation. With the --host-network=true
option, the router bypasses the container network and can access any pod in the cluster. If isolation is needed in this case, then do not add routes across the namespaces.
2.8.3.6. Exposing Router metrics
Using the --metrics-image
and --expose-metrics
options, you can configure the OpenShift Enterprise router to run a sidecar container that exposes or publishes router metrics for consumption by external metrics collection and aggregation systems (e.g. Prometheus, statsd).
Depending on your router implementation, the image is appropriately set up and the metrics sidecar container is started when the router is deployed. For example, the HAProxy-based router implementation defaults to using the prom/haproxy-exporter
image to run as a sidecar container, which can then be used as a metrics datasource by the Prometheus server.
The --metrics-image
option overrides the defaults for HAProxy-based router implementations and, in the case of custom implementations, specifies the image to use for a custom metrics exporter or publisher.
Grab the HAProxy Prometheus exporter image from the Docker registry:
$ sudo docker pull prom/haproxy-exporter
Create the OpenShift Enterprise router:
$ oadm router \ --credentials='/etc/origin/master/openshift-router.kubeconfig' \ --service-account=router --expose-metrics
Or, optionally, use the
--metrics-image
option to override the HAProxy defaults:
$ oadm router \ --credentials='/etc/origin/master/openshift-router.kubeconfig' \ --service-account=router --expose-metrics \ --metrics-image=prom/haproxy-exporter
Once the haproxy-exporter containers (and your HAProxy router) have started, point Prometheus to the sidecar container on port 9101 on the node where the haproxy-exporter container is running:
$ haproxy_exporter_ip="<enter-ip-address-or-hostname>"
$ cat > haproxy-scraper.yml <<CFGEOF
---
global:
  scrape_interval: "60s"
  scrape_timeout: "10s"
  # external_labels:
  #   source: openshift-router

scrape_configs:
  - job_name: "haproxy"
    target_groups:
      - targets:
        - "${haproxy_exporter_ip}:9101"
CFGEOF

$ # And start prometheus as you would normally using the above config file.
$ echo "  - Example: prometheus -config.file=haproxy-scraper.yml "
$ echo "    or you can start it as a container on OpenShift!"
$ echo "  - Once the prometheus server is up, view the OpenShift HAProxy "
$ echo "    router metrics at: http://<ip>:9090/consoles/haproxy.html "
2.8.4. Deploying a Customized HAProxy Router
The HAProxy router is based on a golang template that generates the HAProxy configuration file from a list of routes. If you want a customized template router to meet your needs, you can customize the template file, build a new Docker image, and run a customized router.
One common case for this might be implementing new features within the application back ends. For example, it might be desirable in a highly-available setup to use stick-tables that synchronize between peers. The router plug-in provides all the facilities necessary to make this customization.
You can obtain a new haproxy-config.template file from the latest router image by running:
# docker run --rm --interactive=true --tty --entrypoint=cat \ registry.access.redhat.com/openshift3/ose-haproxy-router:v3.0.2.0 haproxy-config.template
Save this content to a file for use as the basis of your customized template.
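The rebuild described in Rebuilding Your Router below can be as small as a Dockerfile layered on the stock router image. The following is a minimal sketch, assuming your customized template is saved locally as haproxy-config.template, that the image reads the template from its working directory (as the relative path in the command above implies), and that myrepo/my-haproxy-router:mytag is a placeholder for your own repository and tag:
$ cat > Dockerfile <<'EOF'
FROM registry.access.redhat.com/openshift3/ose-haproxy-router:v3.0.2.0
ADD haproxy-config.template haproxy-config.template
EOF
$ docker build -t myrepo/my-haproxy-router:mytag .
$ docker push myrepo/my-haproxy-router:mytag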
2.8.4.1. Using Stick Tables
The following example customization can be used in a highly-available routing setup to use stick-tables that synchronize between peers.
Adding a Peer Section
In order to synchronize stick-tables amongst peers, you must define a peers section in your HAProxy configuration. This section determines how HAProxy will identify and connect to peers. The plug-in provides data to the template under the .PeerEndpoints
variable to allow you to easily identify members of the router service. You may add a peer section to the haproxy-config.template file inside the router image by adding:
{{ if (len .PeerEndpoints) gt 0 }}
peers openshift_peers
  {{ range $endpointID, $endpoint := .PeerEndpoints }}
  peer {{$endpoint.TargetName}} {{$endpoint.IP}}:1937
  {{ end }}
{{ end }}
Changing the Reload Script
When using stick-tables, you have the option of telling HAProxy what it should consider the name of the local host in the peer section. When creating endpoints, the plug-in attempts to set the TargetName
to the value of the endpoint’s TargetRef.Name
. If TargetRef
is not set, it will set the TargetName
to the IP address. The TargetRef.Name
corresponds with the Kubernetes host name, therefore you can add the -L
option to the reload-haproxy
script to identify the local host in the peer section.
peer_name=$HOSTNAME 1
if [ -n "$old_pid" ]; then
/usr/sbin/haproxy -f $config_file -p $pid_file -L $peer_name -sf $old_pid
else
/usr/sbin/haproxy -f $config_file -p $pid_file -L $peer_name
fi
- 1
- Must match an endpoint target name that is used in the peer section.
Modifying Back Ends
Finally, to use the stick-tables within back ends, you can modify the HAProxy configuration to use the stick-tables and peer set. The following is an example of changing the existing back end for TCP connections to use stick-tables:
{{ if eq $cfg.TLSTermination "passthrough" }}
backend be_tcp_{{$cfgIdx}}
  balance leastconn
  timeout check 5000ms
  stick-table type ip size 1m expire 5m{{ if (len $.PeerEndpoints) gt 0 }} peers openshift_peers {{ end }}
  stick on src
  {{ range $endpointID, $endpoint := $serviceUnit.EndpointTable }}
  server {{$endpointID}} {{$endpoint.IP}}:{{$endpoint.Port}} check inter 5000ms
  {{ end }}
{{ end }}
After this modification, you can rebuild your router.
2.8.4.2. Rebuilding Your Router
After you have made any desired modifications to the template, such as the example stick-tables customization, you must rebuild your router for your changes to go into effect:
- Rebuild the Docker image to include your customized template.
- Push the resulting image to your repository.
Create the router specifying your new image, either:
- in the pod’s object definition directly, or
-
by adding the
--images=<repo>/<image>:<tag>
flag to theoadm router
command when creating a highly-available routing service.
2.8.5. Deploying the F5 Router
The F5 router plug-in is available starting in OpenShift Enterprise 3.0.2.
The F5 router plug-in is provided as a Docker image and runs as a pod, just like the default HAProxy router. Deploying the F5 router is done similarly, using the oadm router
command but providing additional flags (or environment variables) to specify the following parameters for the F5 BIG-IP® host:
Flag | Description
---|---
--type=f5-router | Specifies that an F5 router should be launched (the default --type is haproxy-router).
--external-host | Specifies the F5 BIG-IP® host's management interface's host name or IP address.
--external-host-username | Specifies the F5 BIG-IP® user name (typically admin).
--external-host-password | Specifies the F5 BIG-IP® password.
--external-host-http-vserver | Specifies the name of the F5 virtual server for HTTP connections.
--external-host-https-vserver | Specifies the name of the F5 virtual server for HTTPS connections.
--external-host-private-key | Specifies the path to the SSH private key file for the F5 BIG-IP® host. Required to upload and delete key and certificate files for routes.
--external-host-insecure | A Boolean flag that indicates that the F5 router should skip strict certificate verification with the F5 BIG-IP® host.
As with the HAProxy router, the oadm router
command creates the service and deployment configuration objects, and thus the replication controllers and pod(s) in which the F5 router itself runs. The replication controller restarts the F5 router in case of crashes. Because the F5 router is only watching routes and endpoints and configuring F5 BIG-IP® accordingly, running the F5 router in this way along with an appropriately configured F5 BIG-IP® deployment should satisfy high-availability requirements.
To deploy the F5 router:
- First, establish a tunnel using a ramp node, which allows for the routing of traffic to pods through the OpenShift SDN.
Run the
oadm router
command with the appropriate flags. For example:
$ oadm router \ --type=f5-router \ --external-host=10.0.0.2 \ --external-host-username=admin \ --external-host-password=mypassword \ --external-host-http-vserver=ose-vserver \ --external-host-https-vserver=https-ose-vserver \ --external-host-private-key=/path/to/key \ --credentials='/etc/origin/master/openshift-router.kubeconfig' \1 --service-account=router
- 1
--credentials
is the path to the CLI configuration file for the openshift-router. It is recommended to use an openshift-router-specific profile with appropriate permissions.
2.8.6. What’s Next?
If you deployed an HAProxy router, you can learn more about monitoring the router.
If you have not yet done so, you can:
- Configure authentication; by default, authentication is set to Deny All.
- Deploy an integrated Docker registry.