
Chapter 2. Installing a Cluster


2.1. Planning

2.1.1. Initial Planning

For production environments, several factors influence installation. Consider the following questions as you read through the documentation:

  • Which installation method do you want to use? The Installation Methods section provides some information about the quick and advanced installation methods.
  • How many hosts do you require in the cluster? The Environment Scenarios section provides multiple examples of Single Master and Multiple Master configurations.
  • How many pods are required in your cluster? The Sizing Considerations section provides limits for nodes and pods so you can calculate how large your environment needs to be.
  • Is high availability required? High availability is recommended for fault tolerance. In this situation, you might aim to use the Multiple Masters Using Native HA example as a basis for your environment.
  • Which installation type do you want to use: RPM or containerized? Both installations provide a working OpenShift Container Platform environment, but you might have a preference for a particular method of installing, managing, and updating your services.
  • Which identity provider do you use for authentication? If you already use a supported identity provider, it is a best practice to configure OpenShift Container Platform to use that identity provider during advanced installation.
  • Is my installation supported if integrating with other technologies? See the OpenShift Container Platform Tested Integrations for a list of tested integrations.

2.1.2. Installation Methods

Both the quick and advanced installation methods are supported for development and production environments. If you want to quickly get OpenShift Container Platform up and running to try out for the first time, use the quick installer and let the interactive CLI guide you through the configuration options relevant to your environment.

For the most control over your cluster’s configuration, you can use the advanced installation method. This method is particularly suited if you are already familiar with Ansible. However, following along with the OpenShift Container Platform documentation should equip you with enough information to reliably deploy your cluster and continue to manage its configuration post-deployment using the provided Ansible playbooks directly.

If you install initially using the quick installer, you can always further tweak your cluster’s configuration and adjust the number of hosts in the cluster using the same installer tool. If you later want to switch to the advanced method, you can create an inventory file for your configuration and carry on that way.

2.1.3. Sizing Considerations

Determine how many nodes and pods you require for your OpenShift Container Platform cluster. Cluster scalability correlates to the number of pods in a cluster environment. That number influences the other numbers in your setup.

The following table provides the maximum sizing limits for nodes and pods:

Type                        Maximum

Maximum nodes per cluster   1,000
Maximum pods per cluster    120,000
Maximum pods per node       250
Maximum pods per core       10

Important

Oversubscribing the physical resources on a node affects resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping.

Determine how many pods are expected to fit per node:

Maximum Pods per Cluster / Expected Pods per Node = Total Number of Nodes

Example Scenario

If you want to scope your cluster for 2200 pods per cluster, you would need at least 9 nodes, assuming that there are 250 maximum pods per node:

2200 / 250 = 8.8

If you increase the number of nodes to 20, then the pod distribution changes to 110 pods per node:

2200 / 20 = 110

2.1.4. Environment Scenarios

This section outlines different examples of scenarios for your OpenShift Container Platform environment. Use these scenarios as a basis for planning your own OpenShift Container Platform cluster, based on your sizing needs.

Note

Moving from a single master cluster to multiple masters after installation is not supported.

For information on updating labels, see Updating Labels on Nodes.

2.1.4.1. Single Master and Node on One System

OpenShift Container Platform can be installed on a single system for a development environment only. An all-in-one environment is not considered a production environment.

2.1.4.2. Single Master and Multiple Nodes

The following table describes an example environment for a single master (with embedded etcd) and two nodes:

Host Name            Infrastructure Component to Install

master.example.com   Master and node
node1.example.com    Node
node2.example.com    Node

2.1.4.3. Single Master, Multiple etcd, and Multiple Nodes

The following table describes an example environment for a single master, three etcd hosts, and two nodes:

Host Name            Infrastructure Component to Install

master.example.com   Master and node
etcd1.example.com    etcd
etcd2.example.com    etcd
etcd3.example.com    etcd
node1.example.com    Node
node2.example.com    Node

Note

When specifying multiple etcd hosts, external etcd is installed and configured. Clustering of OpenShift Container Platform’s embedded etcd is not supported.

2.1.4.4. Multiple Masters Using Native HA

The following describes an example environment for three masters, one HAProxy load balancer, three etcd hosts, and two nodes using the native HA method:

Host Name             Infrastructure Component to Install

master1.example.com   Master (clustered using native HA) and node
master2.example.com   Master (clustered using native HA) and node
master3.example.com   Master (clustered using native HA) and node
lb.example.com        HAProxy to load balance API master endpoints
etcd1.example.com     etcd
etcd2.example.com     etcd
etcd3.example.com     etcd
node1.example.com     Node
node2.example.com     Node

Note

When specifying multiple etcd hosts, external etcd is installed and configured. Clustering of OpenShift Container Platform’s embedded etcd is not supported.

2.1.4.5. Stand-alone Registry

You can also install OpenShift Container Platform to act as a stand-alone registry using the OpenShift Container Platform’s integrated registry. See Installing a Stand-alone Registry for details on this scenario.

2.1.5. RPM vs Containerized

An RPM installation installs all services through package management and configures services to run within the same user space, while a containerized installation installs services using container images and runs separate services in individual containers.

See the Installing on Containerized Hosts topic for more details on configuring your installation to use containerized services.

2.2. Prerequisites

2.2.1. System Requirements

The following sections identify the hardware specifications and system-level requirements of all hosts within your OpenShift Container Platform environment.

2.2.1.1. Red Hat Subscriptions

You must have an active OpenShift Container Platform subscription on your Red Hat account to proceed. If you do not, contact your sales representative for more information.

Important

OpenShift Container Platform 3.5 requires Docker 1.12.

2.2.1.2. Minimum Hardware Requirements

The system requirements vary per host type:

Masters

  • Physical or virtual system, or an instance running on a public or private IaaS.
  • Base OS: RHEL 7.3 with the "Minimal" installation option and the latest packages from the Extras channel, or RHEL Atomic Host 7.3.2 or later. RHEL 7.2 is also supported using Docker 1.12 and its dependencies.
  • 2 vCPU.
  • Minimum 16 GB RAM.
  • Minimum 40 GB hard disk space for the file system containing /var/.

Nodes

  • Physical or virtual system, or an instance running on a public or private IaaS.
  • Base OS: RHEL 7.3 or later with "Minimal" installation option, or RHEL Atomic Host 7.3.2 or later. RHEL 7.2 is also supported using Docker 1.12 and its dependencies.
  • NetworkManager 1.0 or later.
  • 1 vCPU.
  • Minimum 8 GB RAM.
  • Minimum 15 GB hard disk space for the file system containing /var/.
  • An additional minimum 15 GB unallocated space to be used for Docker’s storage back end; see Configuring Docker Storage.

Separate etcd Nodes

  • Minimum 20 GB hard disk space for etcd data.
  • Consult Hardware Recommendations to properly size your etcd nodes.
  • Currently, OpenShift Container Platform stores image, build, and deployment metadata in etcd. You must periodically prune old resources. If you are planning to leverage a large number of images/builds/deployments, place etcd on machines with large amounts of memory and fast SSD drives.
Important

OpenShift Container Platform only supports servers with the x86_64 architecture.

Note

Meeting the /var/ file system sizing requirements in RHEL Atomic Host requires making changes to the default configuration. See Managing Storage in Red Hat Enterprise Linux Atomic Host for instructions on configuring this during or after installation.

2.2.1.3. Production Level Hardware Requirements

Test or sample environments function with the minimum requirements. For production environments, the following recommendations apply:

Master Hosts
In a highly available OpenShift Container Platform cluster with a separate etcd cluster, a master host should have, in addition to the minimum requirements in the table above, 1 CPU core and 1.5 GB of memory for each 1000 pods. Therefore, the recommended size of a master host in an OpenShift Container Platform cluster of 2000 pods would be the minimum requirements of 2 CPU cores and 16 GB of RAM, plus 2 CPU cores and 3 GB of RAM, totaling 4 CPU cores and 19 GB of RAM.

When planning an environment with multiple masters, a minimum of three etcd hosts and a load-balancer between the master hosts are required.

The OpenShift Container Platform master caches deserialized versions of resources aggressively to ease CPU load. However, in smaller clusters of less than 1000 pods, this cache can waste a lot of memory for negligible CPU load reduction. The default cache size is 50,000 entries, which, depending on the size of your resources, can grow to occupy 1 to 2 GB of memory. This cache size can be reduced using the following setting in the /etc/origin/master/master-config.yaml file:

kubernetesMasterConfig:
  apiServerArguments:
    deserialization-cache-size:
    - "1000"
Node Hosts
The size of a node host depends on the expected size of its workload. As an OpenShift Container Platform cluster administrator, you will need to calculate the expected workload, then add about 10 percent for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity.

Use the above with the following table to plan the maximum loads for nodes and pods:

Host                        Sizing Recommendation

Maximum nodes per cluster   1,000
Maximum pods per cluster    120,000
Maximum pods per node       250
Maximum pods per core       10

Important

Oversubscribing the physical resources on a node affects resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping.

2.2.1.4. Configuring Core Usage

By default, OpenShift Container Platform masters and nodes use all available cores in the system they run on. You can choose the number of cores you want OpenShift Container Platform to use by setting the GOMAXPROCS environment variable.

For example, run the following before starting the server to make OpenShift Container Platform only run on one core:

# export GOMAXPROCS=1

2.2.1.5. SELinux

Security-Enhanced Linux (SELinux) must be enabled on all of the servers before installing OpenShift Container Platform or the installer will fail. Also, configure SELINUXTYPE=targeted in the /etc/selinux/config file:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
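
You can verify the SELinux state on each host before running the installer; getenforce and sestatus are the standard SELinux utilities on RHEL, and a correctly configured host reports output similar to the following:

# getenforce
Enforcing
# sestatus | grep -i 'loaded policy'
Loaded policy name:             targeted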

2.2.1.6. NTP

You must enable Network Time Protocol (NTP) to prevent masters and nodes in the cluster from going out of sync. Set openshift_clock_enabled to true in the Ansible playbook to enable NTP on masters and nodes in the cluster during Ansible installation.

# openshift_clock_enabled=true
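
For the advanced installation, that variable belongs with your other cluster-wide settings in the Ansible inventory file (by default /etc/ansible/hosts); a minimal sketch, assuming the standard [OSEv3:vars] section:

[OSEv3:vars]
openshift_clock_enabled=true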

2.2.1.7. Security Warning

OpenShift Container Platform runs containers on your hosts, and in some cases, such as build operations and the registry service, it does so using privileged containers. Furthermore, those containers access your host’s Docker daemon and perform docker build and docker push operations. As such, you should be aware of the inherent security risks associated with performing docker run operations on arbitrary images as they effectively have root access.

For more information, see these articles:

To address these risks, OpenShift Container Platform uses security context constraints that control the actions that pods can perform and what they have the ability to access.

2.2.2. Environment Requirements

The following section defines the requirements of the environment containing your OpenShift Container Platform configuration. This includes networking considerations and access to external services, such as Git repository access, storage, and cloud infrastructure providers.

2.2.2.1. DNS

OpenShift Container Platform requires a fully functional DNS server in the environment. This is ideally a separate host running DNS software and can provide name resolution to hosts and containers running on the platform.

Important

Adding entries into the /etc/hosts file on each host is not enough. This file is not copied into containers running on the platform.

Key components of OpenShift Container Platform run themselves inside of containers and use the following process for name resolution:

  1. By default, containers receive their DNS configuration file (/etc/resolv.conf) from their host.
  2. OpenShift Container Platform then inserts one DNS value into the pods (above the node’s nameserver values). That value is defined in the /etc/origin/node/node-config.yaml file by the dnsIP parameter, which by default is set to the address of the host node because the host is using dnsmasq.
  3. If the dnsIP parameter is omitted from the node-config.yaml file, then the value defaults to the kubernetes service IP, which is the first nameserver in the pod’s /etc/resolv.conf file.

As of OpenShift Container Platform 3.2, dnsmasq is automatically configured on all masters and nodes. The pods use the nodes as their DNS, and the nodes forward the requests. By default, dnsmasq is configured on the nodes to listen on port 53, therefore the nodes cannot run any other type of DNS application.

Note

NetworkManager is required on the nodes in order to populate dnsmasq with the DNS IP addresses. DNS does not work properly when the network interface for OpenShift Container Platform has NM_CONTROLLED=no.

dnsmasq must be enabled (openshift_use_dnsmasq=true) or the installation will fail and critical features will not function.

The following is an example set of DNS records for the Single Master and Multiple Nodes scenario:

master    A   10.64.33.100
node1     A   10.64.33.101
node2     A   10.64.33.102
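
If your DNS server is BIND, for example, the same records might appear as the following fragment of an example.com zone file (a sketch only; the SOA and NS records are omitted):

$ORIGIN example.com.
master    IN  A   10.64.33.100
node1     IN  A   10.64.33.101
node2     IN  A   10.64.33.102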

If you do not have a properly functioning DNS environment, you could experience failure with:

  • Product installation via the reference Ansible-based scripts
  • Deployment of the infrastructure containers (registry, routers)
  • Access to the OpenShift Container Platform web console, because it is not accessible via IP address alone
2.2.2.1.1. Configuring Hosts to Use DNS

Make sure each host in your environment is configured to resolve hostnames from your DNS server. The configuration for hosts' DNS resolution depends on whether DHCP is enabled. If DHCP is:

  • Disabled, then configure your network interface to be static, and add DNS nameservers to NetworkManager (a sample nmcli command follows this list).
  • Enabled, then the NetworkManager dispatch script automatically configures DNS based on the DHCP configuration. Optionally, you can add a value to dnsIP in the node-config.yaml file to prepend the pod’s resolv.conf file. The second nameserver is then defined by the host’s first nameserver. By default, this will be the IP address of the node host.

    Note

    For most configurations, do not set the openshift_dns_ip option during the advanced installation of OpenShift Container Platform (using Ansible), because this option overrides the default IP address set by dnsIP.

    Instead, allow the installer to configure each node to use dnsmasq and forward requests to SkyDNS or the external DNS provider. If you do set the openshift_dns_ip option, then it should be set either with a DNS IP that queries SkyDNS first, or to the SkyDNS service or endpoint IP (the Kubernetes service IP).
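
For the DHCP-disabled case above, one way to add DNS nameservers to NetworkManager is with nmcli. This is a sketch only: the connection name eth0, the host address, and the gateway are placeholders for your environment, and 10.64.33.1 is the DNS server address used elsewhere in this section:

# nmcli connection modify eth0 ipv4.method manual \
    ipv4.addresses 10.64.33.100/24 ipv4.gateway 10.64.33.254 \
    ipv4.dns 10.64.33.1 ipv4.dns-search example.com
# nmcli connection up eth0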

To verify that hosts can be resolved by your DNS server:

  1. Check the contents of /etc/resolv.conf:

    $ cat /etc/resolv.conf
    # Generated by NetworkManager
    search example.com
    nameserver 10.64.33.1
    # nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh

    In this example, 10.64.33.1 is the address of our DNS server.

  2. Test that the DNS servers listed in /etc/resolv.conf are able to resolve host names to the IP addresses of all masters and nodes in your OpenShift Container Platform environment:

    $ dig <node_hostname> @<IP_address> +short

    For example:

    $ dig master.example.com @10.64.33.1 +short
    10.64.33.100
    $ dig node1.example.com @10.64.33.1 +short
    10.64.33.101
2.2.2.1.2. Disabling dnsmasq

If you want to disable dnsmasq (for example, if your /etc/resolv.conf is managed by a configuration tool other than NetworkManager), then set openshift_use_dnsmasq to false in the Ansible playbook.

However, certain containers do not properly move to the next nameserver when the first issues SERVFAIL. Red Hat Enterprise Linux (RHEL)-based containers do not suffer from this, but certain versions of uclibc and musl do.

2.2.2.1.3. Configuring a DNS Wildcard

Optionally, configure a wildcard for the router to use, so that you do not need to update your DNS configuration when new routes are added.

A wildcard for a DNS zone must ultimately resolve to the IP address of the OpenShift Container Platform router.

For example, create a wildcard DNS entry for cloudapps that has a low time-to-live value (TTL) and points to the public IP address of the host where the router will be deployed:

*.cloudapps.example.com. 300 IN  A 192.168.133.2
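
You can spot-check the wildcard with dig in the same way host records are verified earlier in this topic; any name under the wildcard domain (the application host name below is hypothetical) should resolve to the router host's IP:

$ dig myapp.cloudapps.example.com @10.64.33.1 +short
192.168.133.2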

In almost all cases, when referencing VMs you must use host names, and the host names that you use must match the output of the hostname -f command on each node.

Warning

In your /etc/resolv.conf file on each node host, ensure that the DNS server that has the wildcard entry is not listed as a nameserver or that the wildcard domain is not listed in the search list. Otherwise, containers managed by OpenShift Container Platform may fail to resolve host names properly.

2.2.2.2. Network Access

A shared network must exist between the master and node hosts. If you plan to configure multiple masters for high-availability using the advanced installation method, you must also select an IP to be configured as your virtual IP (VIP) during the installation process. The IP that you select must be routable between all of your nodes, and if you configure using a FQDN it should resolve on all nodes.

2.2.2.2.1. NetworkManager

NetworkManager, a program for providing detection and configuration for systems to automatically connect to the network, is required. DNS does not work properly when the network interface for OpenShift Container Platform has NM_CONTROLLED=no.

2.2.2.2.2. Required Ports

The OpenShift Container Platform installation automatically creates a set of internal firewall rules on each host using iptables. However, if your network configuration uses an external firewall, such as a hardware-based firewall, you must ensure infrastructure components can communicate with each other through specific ports that act as communication endpoints for certain processes or services.

Note

While iptables is the default firewall, firewalld is recommended for new installations. You can enable firewalld by setting os_firewall_use_firewalld=true in the Ansible inventory file.
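
Like the other cluster-wide toggles, this is set in the [OSEv3:vars] section of the inventory; for example:

[OSEv3:vars]
os_firewall_use_firewalld=true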

Ensure the following ports required by OpenShift Container Platform are open on your network and configured to allow access between hosts. Some ports are optional depending on your configuration and usage.

Table 2.1. Node to Node

Port   Protocol   Purpose

4789   UDP        Required for SDN communication between pods on separate hosts.

Table 2.2. Nodes to Master

Port          Protocol   Purpose

53 or 8053    TCP/UDP    Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured.
4789          UDP        Required for SDN communication between pods on separate hosts.
443 or 8443   TCP        Required for node hosts to communicate to the master API, for the node hosts to post back status, to receive tasks, and so on.

Table 2.3. Master to Node

Port    Protocol   Purpose

4789    UDP        Required for SDN communication between pods on separate hosts.
10250   TCP        The master proxies to node hosts via the Kubelet for oc commands.

Note

In the following table, (L) indicates the marked port is also used in loopback mode, enabling the master to communicate with itself.

In a single-master cluster:

  • Ports marked with (L) must be open.
  • Ports not marked with (L) need not be open.

In a multiple-master cluster, all the listed ports must be open.

Table 2.4. Master to Master

Port                 Protocol   Purpose

53 (L) or 8053 (L)   TCP/UDP    Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured.
2049 (L)             TCP/UDP    Required when provisioning an NFS host as part of the installer.
2379                 TCP        Used for standalone etcd (clustered) to accept changes in state.
2380                 TCP        etcd requires this port be open between masters for leader election and peering connections when using standalone etcd (clustered).
4001 (L)             TCP        Used for embedded etcd (non-clustered) to accept changes in state.
4789 (L)             UDP        Required for SDN communication between pods on separate hosts.

Table 2.5. External to Load Balancer

Port   Protocol   Purpose

9000   TCP        If you choose the native HA method, optional to allow access to the HAProxy statistics page.

Table 2.6. External to Master

Port          Protocol   Purpose

443 or 8443   TCP        Required for node hosts to communicate to the master API, for node hosts to post back status, to receive tasks, and so on.

Table 2.7. IaaS Deployments

Port            Protocol   Purpose

22              TCP        Required for SSH by the installer or system administrator.
53 or 8053      TCP/UDP    Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured. Only required to be internally open on master hosts.
80 or 443       TCP        For HTTP/HTTPS use for the router. Required to be externally open on node hosts, especially on nodes running the router.
1936            TCP        (Optional) Required to be open when running the template router to access statistics. Can be open externally or internally to connections depending on whether you want the statistics to be exposed publicly. Can require extra configuration to open. See the Notes section below for more information.
4001            TCP        For embedded etcd (non-clustered) use. Only required to be internally open on the master host. 4001 is for server-client connections.
2379 and 2380   TCP        For standalone etcd use. Only required to be internally open on the master host. 2379 is for server-client connections. 2380 is for server-server connections, and is only required if you have clustered etcd.
4789            UDP        For VxLAN use (OpenShift SDN). Required only internally on node hosts.
8443            TCP        For use by the OpenShift Container Platform web console, shared with the API server.
10250           TCP        For use by the Kubelet. Required to be externally open on nodes.

Notes

  • In the above examples, port 4789 is used for User Datagram Protocol (UDP).
  • When deployments are using the SDN, the pod network is accessed via a service proxy, unless it is accessing the registry from the same node the registry is deployed on.
  • OpenShift Container Platform internal DNS cannot be received over SDN. Depending on the detected values of openshift_facts, or if the openshift_ip and openshift_public_ip values are overridden, it will be the computed value of openshift_ip. For non-cloud deployments, this will default to the IP address associated with the default route on the master host. For cloud deployments, it will default to the IP address associated with the first internal interface as defined by the cloud metadata.
  • The master host uses port 10250 to reach the nodes and does not go over SDN. It depends on the target host of the deployment and uses the computed values of openshift_hostname and openshift_public_hostname.
  • Port 1936 can still be inaccessible due to your iptables rules. Use the following to configure iptables to open port 1936:

    # iptables -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp \
        --dport 1936 -j ACCEPT
Table 2.8. Aggregated Logging

Port   Protocol   Purpose

9200   TCP        For Elasticsearch API use. Required to be internally open on any infrastructure nodes so Kibana is able to retrieve logs for display. It can be externally opened for direct access to Elasticsearch by means of a route. The route can be created using oc expose.
9300   TCP        For Elasticsearch inter-cluster use. Required to be internally open on any infrastructure node so the members of the Elasticsearch cluster may communicate with each other.

2.2.2.3. Persistent Storage

The Kubernetes persistent volume framework allows you to provision an OpenShift Container Platform cluster with persistent storage using networked storage available in your environment. This can be done after completing the initial OpenShift Container Platform installation depending on your application needs, giving users a way to request those resources without having any knowledge of the underlying infrastructure.

The Installation and Configuration Guide provides instructions for cluster administrators on provisioning an OpenShift Container Platform cluster with persistent storage using NFS, GlusterFS, Ceph RBD, OpenStack Cinder, AWS Elastic Block Store (EBS), GCE Persistent Disks, and iSCSI.

2.2.2.4. Cloud Provider Considerations

There are certain aspects to take into consideration if installing OpenShift Container Platform on a cloud provider.

2.2.2.4.1. Overriding Detected IP Addresses and Host Names

Some deployments require that the user override the detected host names and IP addresses for the hosts. To see the default values, run the openshift_facts playbook:

# ansible-playbook playbooks/byo/openshift_facts.yml
Important

For Amazon Web Services, see the Overriding Detected IP Addresses and Host Names section.

Now, verify the detected common settings. If they are not what you expect them to be, you can override them.

The Advanced Installation topic discusses the available Ansible variables in greater detail.

Variable            Usage

hostname

  • Should resolve to the internal IP from the instances themselves.
  • Overridden by openshift_hostname.

ip

  • Should be the internal IP of the instance.
  • Overridden by openshift_ip.

public_hostname

  • Should resolve to the external IP from hosts outside of the cloud.
  • Overridden by openshift_public_hostname.

public_ip

  • Should be the externally accessible IP associated with the instance.
  • Overridden by openshift_public_ip.

use_openshift_sdn

  • Should be true unless the cloud is GCE.
  • Overridden by openshift_use_openshift_sdn.
Warning

If openshift_hostname is set to a value other than the metadata-provided private-dns-name value, the native cloud integration for those providers will no longer work.
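
In an advanced installation, these overrides are set per host in the inventory file. The sketch below reuses the example addresses from the installation configuration file later in this chapter and assumes the standard [masters] host group; adjust the values for your environment and keep the warning above in mind for cloud-integrated deployments:

[masters]
master.example.com openshift_hostname=master-private.example.com openshift_ip=10.0.0.1 openshift_public_hostname=master.example.com openshift_public_ip=24.222.0.1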

2.2.2.4.2. Post-Installation Configuration for Cloud Providers

Following the installation process, you can configure OpenShift Container Platform for AWS, OpenStack, or GCE.

2.3. Host Preparation

2.3.1. Setting PATH

The PATH for the root user on each host must contain the following directories:

  • /bin
  • /sbin
  • /usr/bin
  • /usr/sbin

These should all be included by default in a fresh RHEL 7.x installation.

2.3.2. Operating System Requirements

A base installation of RHEL 7.3 (with the latest packages from the Extras channel) or RHEL Atomic Host 7.3.2 or later is required for master and node hosts. RHEL 7.2 is also supported using Docker 1.12 and its dependencies. See the following documentation for the respective installation instructions, if required:

2.3.3. Host Registration

Each host must be registered using Red Hat Subscription Manager (RHSM) and have an active OpenShift Container Platform subscription attached to access the required packages.

  1. On each host, register with RHSM:

    # subscription-manager register --username=<user_name> --password=<password>
  2. List the available subscriptions:

    # subscription-manager list --available --matches '*OpenShift*'
  3. In the output for the previous command, find the pool ID for an OpenShift Container Platform subscription and attach it:

    # subscription-manager attach --pool=<pool_id>
  4. Disable all yum repositories:

    1. Disable all the enabled RHSM repositories:

      # subscription-manager repos --disable="*"
    2. List the remaining yum repositories and note their names under repo id, if any:

      # yum repolist
    3. Use yum-config-manager to disable the remaining yum repositories:

      # yum-config-manager --disable <repo_id>

      Alternatively, disable all repositories:

      # yum-config-manager --disable \*

      Note that this could take a few minutes if you have a large number of available repositories.

  5. Enable only the repositories required by OpenShift Container Platform 3.5:

    # subscription-manager repos \
        --enable="rhel-7-server-rpms" \
        --enable="rhel-7-server-extras-rpms" \
        --enable="rhel-7-server-ose-3.5-rpms" \
        --enable="rhel-7-fast-datapath-rpms"

2.3.4. Installing Base Packages

For RHEL 7 systems:

  1. Install the following base packages:

    # yum install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct
  2. Update the system to the latest packages:

    # yum update
  3. Install the following package, which provides OpenShift Container Platform utilities and pulls in other tools required by the quick and advanced installation methods, such as Ansible and related configuration files:

    # yum install atomic-openshift-utils
  4. Install the following *-excluder packages on each RHEL 7 system, which helps ensure your systems stay on the correct versions of atomic-openshift and docker packages when you are not trying to upgrade, according to the OpenShift Container Platform version:

    # yum install atomic-openshift-excluder atomic-openshift-docker-excluder
  5. The *-excluder packages add entries to the exclude directive in the host’s /etc/yum.conf file when installed. Run the following command on each host to remove the atomic-openshift packages from the list for the duration of the installation.

    # atomic-openshift-excluder unexclude

For RHEL Atomic Host 7 systems:

  1. Ensure the host is up to date by upgrading to the latest Atomic tree if one is available:

    # atomic host upgrade
  2. After the upgrade is completed and prepared for the next boot, reboot the host:

    # systemctl reboot

2.3.5. Installing Docker

At this point, you should install Docker on all master and node hosts. This allows you to configure your Docker storage options before installing OpenShift Container Platform.

For RHEL 7 systems, install Docker 1.12:

Note

On RHEL Atomic Host 7 systems, Docker should already be installed, configured, and running by default.

The atomic-openshift-docker-excluder package that was installed in Installing Base Packages should ensure that the correct version of Docker is installed in this step:

# yum install docker

After the package installation is complete, verify that version 1.12 was installed:

# docker version
Note

The Advanced Installation method automatically changes /etc/sysconfig/docker.

The --insecure-registry option instructs the Docker daemon to trust any Docker registry on the indicated subnet, rather than requiring a certificate.
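
If you are setting this by hand rather than letting the installer modify /etc/sysconfig/docker for you, the option is added to the OPTIONS line in that file. A minimal sketch, using the default services subnet referenced in the note below (your existing OPTIONS values may differ):

OPTIONS='--selinux-enabled --insecure-registry=172.30.0.0/16'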

Important

172.30.0.0/16 is the default value of the servicesSubnet variable in the master-config.yaml file. If this has changed, then the --insecure-registry value in the above step should be adjusted to match, as it is indicating the subnet for the registry to use. Note that the openshift_portal_net variable can be set in the Ansible inventory file and used during the advanced installation method to modify the servicesSubnet variable.

Note

After the initial OpenShift Container Platform installation is complete, you can choose to secure the integrated Docker registry, which involves adjusting the --insecure-registry option accordingly.

2.3.6. Configuring Docker Storage

Containers and the images they are created from are stored in Docker’s storage back end. This storage is ephemeral and separate from any persistent storage allocated to meet the needs of your applications.

For RHEL Atomic Host

The default storage back end for Docker on RHEL Atomic Host is a thin pool logical volume, which is supported for production environments. You must ensure that enough space is allocated for this volume per the Docker storage requirements mentioned in System Requirements.

If you do not have enough allocated, see Managing Storage with Docker Formatted Containers for details on using docker-storage-setup and basic instructions on storage management in RHEL Atomic Host.

For RHEL

The default storage back end for Docker on RHEL 7 is a thin pool on loopback devices, which is not supported for production use and only appropriate for proof of concept environments. For production environments, you must create a thin pool logical volume and re-configure Docker to use that volume.

You can use the docker-storage-setup script included with Docker to create a thin pool device and configure Docker’s storage driver. This can be done after installing Docker and should be done before creating images or containers. The script reads configuration options from the /etc/sysconfig/docker-storage-setup file and supports three options for creating the logical volume:

  • Option A) Use an additional block device.
  • Option B) Use an existing, specified volume group.
  • Option C) Use the remaining free space from the volume group where your root file system is located.

Option A is the most robust option, however it requires adding an additional block device to your host before configuring Docker storage. Options B and C both require leaving free space available when provisioning your host. Option C is known to cause issues with some applications, for example Red Hat Mobile Application Platform (RHMAP).

  1. Create the docker-pool volume using one of the following three options:

    • Option A) Use an additional block device.

      In /etc/sysconfig/docker-storage-setup, set DEVS to the path of the block device you wish to use. Set VG to the volume group name you wish to create; docker-vg is a reasonable choice. For example:

      # cat <<EOF > /etc/sysconfig/docker-storage-setup
      DEVS=/dev/vdc
      VG=docker-vg
      EOF

      Then run docker-storage-setup and review the output to ensure the docker-pool volume was created:

      # docker-storage-setup
      Checking that no-one is using this disk right now ...
      OK
      
      Disk /dev/vdc: 31207 cylinders, 16 heads, 63 sectors/track
      sfdisk:  /dev/vdc: unrecognized partition table type
      
      Old situation:
      sfdisk: No partitions found
      
      New situation:
      Units: sectors of 512 bytes, counting from 0
      
         Device Boot    Start       End   #sectors  Id  System
      /dev/vdc1          2048  31457279   31455232  8e  Linux LVM
      /dev/vdc2             0         -          0   0  Empty
      /dev/vdc3             0         -          0   0  Empty
      /dev/vdc4             0         -          0   0  Empty
      Warning: partition 1 does not start at a cylinder boundary
      Warning: partition 1 does not end at a cylinder boundary
      Warning: no primary partition is marked bootable (active)
      This does not matter for LILO, but the DOS MBR will not boot this disk.
      Successfully wrote the new partition table
      
      Re-reading the partition table ...
      
      If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
      to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
      (See fdisk(8).)
        Physical volume "/dev/vdc1" successfully created
        Volume group "docker-vg" successfully created
        Rounding up size to full physical extent 16.00 MiB
        Logical volume "docker-poolmeta" created.
        Logical volume "docker-pool" created.
        WARNING: Converting logical volume docker-vg/docker-pool and docker-vg/docker-poolmeta to pool's data and metadata volumes.
        THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
        Converted docker-vg/docker-pool to thin pool.
        Logical volume "docker-pool" changed.
    • Option B) Use an existing, specified volume group.

      In /etc/sysconfig/docker-storage-setup, set VG to the desired volume group. For example:

      # cat <<EOF > /etc/sysconfig/docker-storage-setup
      VG=docker-vg
      EOF

      Then run docker-storage-setup and review the output to ensure the docker-pool volume was created:

      # docker-storage-setup
        Rounding up size to full physical extent 16.00 MiB
        Logical volume "docker-poolmeta" created.
        Logical volume "docker-pool" created.
        WARNING: Converting logical volume docker-vg/docker-pool and docker-vg/docker-poolmeta to pool's data and metadata volumes.
        THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
        Converted docker-vg/docker-pool to thin pool.
        Logical volume "docker-pool" changed.
    • Option C) Use the remaining free space from the volume group where your root file system is located.

      Verify that the volume group where your root file system resides has the desired free space, then run docker-storage-setup and review the output to ensure the docker-pool volume was created:

      # docker-storage-setup
        Rounding up size to full physical extent 32.00 MiB
        Logical volume "docker-poolmeta" created.
        Logical volume "docker-pool" created.
        WARNING: Converting logical volume rhel/docker-pool and rhel/docker-poolmeta to pool's data and metadata volumes.
        THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
        Converted rhel/docker-pool to thin pool.
        Logical volume "docker-pool" changed.
  2. Verify your configuration. You should have a dm.thinpooldev value in the /etc/sysconfig/docker-storage file and a docker-pool logical volume:

    # cat /etc/sysconfig/docker-storage
    DOCKER_STORAGE_OPTIONS=--storage-opt dm.fs=xfs --storage-opt
    dm.thinpooldev=/dev/mapper/docker--vg-docker--pool
    
    # lvs
      LV          VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      docker-pool rhel twi-a-t---  9.29g             0.00   0.12
    Important

    Before using Docker or OpenShift Container Platform, verify that the docker-pool logical volume is large enough to meet your needs. The docker-pool volume should be 60% of the available volume group and will grow to fill the volume group via LVM monitoring.

  3. Check if Docker is running:

    # systemctl is-active docker
  4. If Docker has not yet been started on the host, enable and start the service:

    # systemctl enable docker
    # systemctl start docker

    If Docker is already running, re-initialize Docker:

    Warning

    This will destroy any containers or images currently on the host.

    # systemctl stop docker
    # rm -rf /var/lib/docker/*
    # systemctl restart docker

    If there is any content in /var/lib/docker/, it must be deleted. Files will be present if Docker has been used prior to the installation of OpenShift Container Platform.

2.3.6.1. Reconfiguring Docker Storage

Should you need to reconfigure Docker storage after having created the docker-pool, you should first remove the docker-pool logical volume. If you are using a dedicated volume group, you should also remove the volume group and any associated physical volumes before reconfiguring docker-storage-setup according to the instructions above.

See Logical Volume Manager Administration for more detailed information on LVM management.

2.3.6.2. Managing Container Logs

Sometimes a container’s log file (the /var/lib/docker/containers/<hash>/<hash>-json.log file on the node where the container is running) can increase to a problematic size. You can manage this by configuring Docker’s json-file logging driver to restrict the size and number of log files.

Important

Aggregated logging is only supported using the journald driver in Docker. See Updating Fluentd’s Log Source After a Docker Log Driver Update for more information.

Option               Purpose

--log-opt max-size   Sets the size at which a new log file is created.
--log-opt max-file   Sets the maximum number of log files that are kept for a container.

For example, to set the maximum file size to 1MB and always keep the last three log files, edit the /etc/sysconfig/docker file to configure max-size=1M and max-file=3:

OPTIONS='--insecure-registry=172.30.0.0/16 --selinux-enabled --log-opt max-size=1M --log-opt max-file=3'

Next, restart the Docker service:

# systemctl restart docker

2.3.6.3. Viewing Available Container Logs

Container logs are stored in the /var/lib/docker/containers/<hash>/ directory on the node where the container is running. For example:

# ls -lh /var/lib/docker/containers/f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8/
total 2.6M
-rw-r--r--. 1 root root 5.6K Nov 24 00:12 config.json
-rw-r--r--. 1 root root 649K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log
-rw-r--r--. 1 root root 977K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log.1
-rw-r--r--. 1 root root 977K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log.2
-rw-r--r--. 1 root root 1.3K Nov 24 00:12 hostconfig.json
drwx------. 2 root root    6 Nov 24 00:12 secrets

See Docker’s documentation for additional information on how to configure logging drivers.

2.3.6.4. Blocking Local Volume Usage

When a volume is provisioned using the VOLUME instruction in a Dockerfile or using the docker run -v <volumename> command, a host’s storage space is used. Using this storage can lead to an unexpected out of space issue and could bring down the host.

In OpenShift Container Platform, users trying to run their own images risk filling the entire storage space on a node host. One solution to this issue is to prevent users from running images with volumes. This way, the only storage a user has access to can be limited, and the cluster administrator can assign storage quota.

Using docker-novolume-plugin solves this issue by disallowing starting a container with local volumes defined. In particular, the plug-in blocks docker run commands that contain:

  • The --volumes-from option
  • Images that have VOLUME(s) defined
  • References to existing volumes that were provisioned with the docker volume command

The plug-in does not block references to bind mounts.

To enable docker-novolume-plugin, perform the following steps on each node host:

  1. Install the docker-novolume-plugin package:

    $ yum install docker-novolume-plugin
  2. Enable and start the docker-novolume-plugin service:

    $ systemctl enable docker-novolume-plugin
    $ systemctl start docker-novolume-plugin
  3. Edit the /etc/sysconfig/docker file and append the following to the OPTIONS list:

    --authorization-plugin=docker-novolume-plugin
  4. Restart the docker service:

    $ systemctl restart docker

After you enable this plug-in, containers with local volumes defined fail to start and show the following error message:

runContainer: API error (500): authorization denied by plugin
docker-novolume-plugin: volumes are not allowed

2.3.7. Ensuring Host Access

The quick and advanced installation methods require a user that has access to all hosts. If you want to run the installer as a non-root user, passwordless sudo rights must be configured on each destination host.

For example, you can generate an SSH key on the host where you will invoke the installation process:

# ssh-keygen

Do not use a password.

An easy way to distribute your SSH keys is by using a bash loop:

# for host in master.example.com \
    node1.example.com \
    node2.example.com; \
    do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \
    done

Modify the host names in the above command according to your configuration.

2.3.8. Setting Global Proxy Values

The OpenShift Container Platform installer uses the proxy settings in the /etc/environment file.

Ensure the following domain suffixes and IP addresses are in the /etc/environment file in the no_proxy parameter:

  • Master and node host names (domain suffix).
  • Other internal host names (domain suffix).
  • Etcd IP addresses (must be IP addresses and not host names, as etcd access is done by IP address).
  • Docker registry IP address.
  • Kubernetes IP address, by default 172.30.0.1. Must be the value set in the openshift_portal_net parameter in the Ansible inventory file, by default /etc/ansible/hosts.
  • Kubernetes internal domain suffix: cluster.local.
  • Kubernetes internal domain suffix: .svc.

The following example assumes http_proxy and https_proxy values are set:

no_proxy=.internal.example.com,10.0.0.1,10.0.0.2,10.0.0.3,.cluster.local,.svc,localhost,127.0.0.1,172.30.0.1
Note

Because no_proxy does not support CIDR notation, you can use domain suffixes.
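
Putting this together, a complete /etc/environment might look like the following sketch; the proxy host name and port are placeholders, and the no_proxy line is the example from above:

http_proxy=http://proxy.example.com:3128/
https_proxy=http://proxy.example.com:3128/
no_proxy=.internal.example.com,10.0.0.1,10.0.0.2,10.0.0.3,.cluster.local,.svc,localhost,127.0.0.1,172.30.0.1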

2.3.9. What’s Next?

If you are interested in installing OpenShift Container Platform using the containerized method (optional for RHEL but required for RHEL Atomic Host), see Installing on Containerized Hosts to prepare your hosts.

When you are ready to proceed, you can install OpenShift Container Platform using the quick installation or advanced installation method.

If you are installing a stand-alone registry, continue with Installing a Stand-alone Registry.

2.4. Installing on Containerized Hosts

2.4.1. Overview

You can opt to install OpenShift Container Platform using the RPM or containerized package method. Either installation method results in a working environment, but the choice depends on your operating system and how you choose to update your hosts.

Important

The default method for installing OpenShift Container Platform on Red Hat Enterprise Linux (RHEL) uses RPMs. When targeting a Red Hat Atomic Host system, the containerized method is the only available option, and is automatically selected for you based on the detection of the /run/ostree-booted file.

When using RPMs, all services are installed and updated via package management from an outside source. These modify a host’s existing configuration within the same user space. With containerized installs, each service ships as a complete, all-in-one container image that carries its own operating system content within the container, and updated, newer containers replace the existing ones on your host. Choosing one method over the other depends on how you choose to update OpenShift Container Platform in the future.

The following table outlines further differences between the RPM and Containerized methods:

                      RPM                        Containerized

Installation Method   Packages via yum           Container images via docker
Service Management    systemd                    docker and systemd units
Operating System      Red Hat Enterprise Linux   Red Hat Enterprise Linux or Red Hat Atomic Host

2.4.2. Install Methods for Containerized Hosts

As with the RPM installation, you can choose between the quick and advanced install methods for the containerized install.

For the quick installation method, you can choose between the RPM or containerized method on a per host basis during the interactive installation, or set the values manually in an installation configuration file.

For the advanced installation method, you can set the Ansible variable containerized=true in an inventory file on a cluster-wide or per host basis.

For the disconnected installation method, to install the etcd container, you can set the Ansible variable osm_etcd_image to be the fully qualified name of the etcd image on your local registry, for example, registry.example.com/rhel7/etcd.
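
For example, a cluster-wide containerized install that pulls etcd from a local registry might look like the following inventory excerpt; the registry host name is the placeholder used above:

[OSEv3:vars]
containerized=true
osm_etcd_image=registry.example.com/rhel7/etcd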

Note

When installing an environment with multiple masters, the load balancer cannot be deployed by the installation process as a container. See Advanced Installation for load balancer requirements using the native HA method.

2.4.3. Required Images

Containerized installations make use of the following images:

  • openshift3/ose
  • openshift3/node
  • openshift3/openvswitch
  • registry.access.redhat.com/rhel7/etcd

By default, all of the above images are pulled from the Red Hat Registry at registry.access.redhat.com.

If you need to use a private registry to pull these images during the installation, you can specify the registry information ahead of time. For the advanced installation method, you can set the following Ansible variables in your inventory file, as required:

cli_docker_additional_registries=<registry_hostname>
cli_docker_insecure_registries=<registry_hostname>
cli_docker_blocked_registries=<registry_hostname>

For the quick installation method, you can export the following environment variables on each target host:

# export OO_INSTALL_ADDITIONAL_REGISTRIES=<registry_hostname>
# export OO_INSTALL_INSECURE_REGISTRIES=<registry_hostname>

Blocked Docker registries cannot currently be specified using the quick installation method.

The configuration of additional, insecure, and blocked Docker registries occurs at the beginning of the installation process to ensure that these settings are applied before attempting to pull any of the required images.

2.4.4. Starting and Stopping Containers

The installation process creates relevant systemd units which can be used to start, stop, and poll services using normal systemctl commands. For containerized installations, these unit names match those of an RPM installation, with the exception of the etcd service which is named etcd_container.

This change is necessary as currently RHEL Atomic Host ships with the etcd package installed as part of the operating system, so a containerized version is used for the OpenShift Container Platform installation instead. The installation process disables the default etcd service. The etcd package is slated to be removed from RHEL Atomic Host in the future.
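
For example, on a containerized host you poll and restart the etcd service under its container unit name, while the other services keep their RPM unit names (the node unit name below assumes the standard atomic-openshift-node service):

# systemctl status etcd_container
# systemctl restart atomic-openshift-node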

2.4.5. File Paths

All OpenShift Container Platform configuration files are placed in the same locations during containerized installation as RPM based installations and will survive os-tree upgrades.

However, the default image stream and template files are installed at /etc/origin/examples/ for containerized installations rather than the standard /usr/share/openshift/examples/, because that directory is read-only on RHEL Atomic Host.

2.4.6. Storage Requirements

RHEL Atomic Host installations normally have a very small root file system. However, the etcd, master, and node containers persist data in the /var/lib/ directory. Ensure that you have enough space on the root file system before installing OpenShift Container Platform. See the System Requirements section for details.

2.4.7. Open vSwitch SDN Initialization

OpenShift SDN initialization requires that the Docker bridge be reconfigured and that Docker is restarted. This complicates the situation when the node is running within a container. When using the Open vSwitch (OVS) SDN, you will see the node start, reconfigure Docker, restart Docker (which restarts all containers), and finally start successfully.

In this case, the node service may fail to start and be restarted a few times, because the master services are also restarted along with Docker. The current implementation uses a workaround which relies on setting the Restart=always parameter in the Docker based systemd units.

2.5. Quick Installation

2.5.1. Overview

The quick installation method allows you to use an interactive CLI utility, the atomic-openshift-installer command, to install OpenShift Container Platform across a set of hosts. This installer can deploy OpenShift Container Platform components on targeted hosts by either installing RPMs or running containerized services.

Important

While RHEL Atomic Host is supported for running containerized OpenShift Container Platform services, the installer is provided by an RPM and not available by default in RHEL Atomic Host. Therefore, it must be run from a Red Hat Enterprise Linux 7 system. The host initiating the installation does not need to be intended for inclusion in the OpenShift Container Platform cluster, but it can be.

This installation method is provided to make the installation experience easier by interactively gathering the data needed to run on each host. The installer is a self-contained wrapper intended for usage on a Red Hat Enterprise Linux (RHEL) 7 system.

In addition to running interactive installations from scratch, the atomic-openshift-installer command can also be run or re-run using a predefined installation configuration file. This file can be used with the installer to:

Alternatively, you can use the advanced installation method for more complex environments.

Note

To install OpenShift Container Platform as a stand-alone registry, see Installing a Stand-alone Registry.

2.5.2. Before You Begin

The installer allows you to install OpenShift Container Platform master and node components on a defined set of hosts.

Note

By default, any hosts you designate as masters during the installation process are automatically also configured as nodes so that the masters are configured as part of the OpenShift Container Platform SDN. The node component on the masters, however, is marked unschedulable, which blocks pods from being scheduled on it. After the installation, you can mark these nodes schedulable if you want.

Before installing OpenShift Container Platform, you must first satisfy the prerequisites on your hosts, which includes verifying system and environment requirements and properly installing and configuring Docker. You must also be prepared to provide or validate the following information for each of your targeted hosts during the course of the installation:

  • User name on the target host that should run the Ansible-based installation (can be root or non-root)
  • Host name
  • Whether to install components for master, node, or both
  • Whether to use the RPM or containerized method
  • Internal and external IP addresses
Important

If you are installing OpenShift Container Platform using the containerized method (optional for RHEL but required for RHEL Atomic Host), see the Installing on Containerized Hosts topic to ensure that you understand the differences between these methods, then return to this topic to continue.

After following the instructions in the Prerequisites topic and deciding between the RPM and containerized methods, you can continue to running an interactive or unattended installation.

2.5.3. Running an Interactive Installation

Note

Ensure you have read through Before You Begin.

You can start the interactive installation by running:

$ atomic-openshift-installer install

Then follow the on-screen instructions to install a new OpenShift Container Platform cluster.

After it has finished, ensure that you back up the ~/.config/openshift/installer.cfg.yml installation configuration file that is created, as it is required if you later want to re-run the installation, add hosts to the cluster, or upgrade your cluster. Then, verify the installation.

2.5.4. Defining an Installation Configuration File

The installer can use a predefined installation configuration file, which contains information about your installation, individual hosts, and cluster. When running an interactive installation, an installation configuration file based on your answers is created for you in ~/.config/openshift/installer.cfg.yml. The file is created if you are instructed to exit the installation to manually modify the configuration or when the installation completes. You can also create the configuration file manually from scratch to perform an unattended installation.

Example 2.1. Installation Configuration File Specification

version: v2 1
variant: openshift-enterprise 2
variant_version: 3.5 3
ansible_log_path: /tmp/ansible.log 4
deployment:
  ansible_ssh_user: root 5
  hosts: 6
  - ip: 10.0.0.1 7
    hostname: master-private.example.com 8
    public_ip: 24.222.0.1 9
    public_hostname: master.example.com 10
    roles: 11
      - master
      - node
    containerized: true 12
    connect_to: 24.222.0.1 13
  - ip: 10.0.0.2
    hostname: node1-private.example.com
    public_ip: 24.222.0.2
    public_hostname: node1.example.com
    node_labels: {'region': 'infra'} 14
    roles:
      - node
    connect_to: 10.0.0.2
  - ip: 10.0.0.3
    hostname: node2-private.example.com
    public_ip: 24.222.0.3
    public_hostname: node2.example.com
    roles:
      - node
    connect_to: 10.0.0.3
  roles: 15
    master:
      <variable_name1>: "<value1>" 16
      <variable_name2>: "<value2>"
    node:
      <variable_name1>: "<value1>" 17
1
The version of this installation configuration file. As of OpenShift Container Platform 3.3, the only valid version here is v2.
2
The OpenShift Container Platform variant to install. For OpenShift Container Platform, set this to openshift-enterprise.
3
A valid version of your selected variant: 3.5, 3.4, 3.3, 3.2, or 3.1. If not specified, this defaults to the latest version for the specified variant.
4
Defines where the Ansible logs are stored. By default, this is the /tmp/ansible.log file.
5
Defines which user Ansible uses to SSH in to remote systems for gathering facts and for the installation. By default, this is the root user, but you can set it to any user that has sudo privileges.
6
Defines a list of the hosts onto which you want to install the OpenShift Container Platform master and node components.
7 8
Required. Allows the installer to connect to the system and gather facts before proceeding with the install.
9 10
Required for unattended installations. If these details are not specified, then this information is pulled from the facts gathered by the installer, and you are asked to confirm the details. If undefined for an unattended installation, the installation fails.
11
Determines the type of services that are installed. Specified as a list.
12
If set to true, containerized OpenShift Container Platform services are run on target master and node hosts instead of installed using RPM packages. If set to false or unset, the default RPM method is used. RHEL Atomic Host requires the containerized method, and is automatically selected for you based on the detection of the /run/ostree-booted file. See Installing on Containerized Hosts for more details.
13
The IP address that Ansible attempts to connect to when installing, upgrading, or uninstalling the systems. If the configuration file was auto-generated, then this is the value you first enter for the host during that interactive install process.
14
Node labels can optionally be set per-host.
15
Defines a dictionary of roles across the deployment.
16 17
Any Ansible variables that should apply only to hosts assigned a specific role can be defined here. For examples, see Configuring Ansible.

2.5.5. Running an Unattended Installation

Note

Ensure you have read through Before You Begin.

Unattended installations allow you to define your hosts and cluster configuration in an installation configuration file before running the installer so that you do not have to go through all of the interactive installation questions and answers. It also allows you to resume an interactive installation you may have left unfinished, and quickly get back to where you left off.

To run an unattended installation, first define an installation configuration file at ~/.config/openshift/installer.cfg.yml. Then, run the installer with the -u flag:

$ atomic-openshift-installer -u install

By default in interactive or unattended mode, the installer uses the configuration file located at ~/.config/openshift/installer.cfg.yml if the file exists. If it does not exist, attempting to start an unattended installation fails.

Alternatively, you can specify a different location for the configuration file using the -c option, but doing so will require you to specify the file location every time you run the installation:

$ atomic-openshift-installer -u -c </path/to/file> install

After the unattended installation finishes, ensure that you back up the ~/.config/openshift/installer.cfg.yml file that was used, as it is required if you later want to re-run the installation, add hosts to the cluster, or upgrade your cluster. Then, verify the installation.

2.5.6. Verifying the Installation

After the installation completes:

  1. If you are using a proxy, you must add the IP address of the etcd endpoints to the openshift_no_proxy cluster variable in your inventory file.

    Note

    If you are not using a proxy, you can skip this step.

    In OpenShift Container Platform 3.4, the master connected to the etcd cluster using the host name of the etcd endpoints. In OpenShift Container Platform 3.5, the master now connects to etcd via IP address.

    When configuring a cluster to use proxy settings (see Configuring Global Proxy Options), this change causes the master-to-etcd connection to be proxied as well, rather than being excluded by host name in each host’s NO_PROXY setting (see Working with HTTP Proxies for more about NO_PROXY).

    1. To work around this issue, add the IP address of the etcd endpoints to the NO_PROXY environment variable in each master host’s /etc/sysconfig/atomic-openshift-master-controllers file. For example:

      NO_PROXY=<ip_address>:<port>

      Use the IP address or host name that the master uses to contact the etcd cluster as the <ip_address>. The <port> should be 2379 if you are using standalone etcd (clustered) or 4001 for embedded etcd (single master, non-clustered etcd). The installer will be updated in a future release to handle this scenario automatically during installation and upgrades (BZ#1466783).

    2. Restart the master service for the changes to take effect:

      # systemctl restart atomic-openshift-master

Then, complete the following steps to verify the installation:

  1. Verify that the master is started and nodes are registered and reporting in Ready status. On the master host, run the following as root:

    # oc get nodes
    
    NAME                        STATUS                     AGE
    master.example.com          Ready,SchedulingDisabled   165d
    node1.example.com           Ready                      165d
    node2.example.com           Ready                      165d
  2. To verify that the web console is installed correctly, use the master host name and the web console port number to access the web console with a web browser.

    For example, for a master host with a host name of master.openshift.com and using the default port of 8443, the web console would be found at https://master.openshift.com:8443/console.

  3. Now that the install has been verified, run the following command on each master and node host to add the atomic-openshift packages back to the list of yum excludes on the host:

    # atomic-openshift-excluder exclude

  4. Then, see What’s Next for the next steps on configuring your OpenShift Container Platform cluster.

2.5.7. Uninstalling OpenShift Container Platform

You can uninstall OpenShift Container Platform from all hosts in your cluster using the installer’s uninstall command. By default, the installer uses the installation configuration file located at ~/.config/openshift/installer.cfg.yml if the file exists:

$ atomic-openshift-installer uninstall

Alternatively, you can specify a different location for the configuration file using the -c option:

$ atomic-openshift-installer -c </path/to/file> uninstall

See the advanced installation method for more options.

2.5.8. What’s Next?

Now that you have a working OpenShift Container Platform instance, you can begin configuring it for your environment.

2.6. Advanced Installation

2.6.1. Overview

A reference configuration implemented using Ansible playbooks is available as the advanced installation method for installing an OpenShift Container Platform cluster. Familiarity with Ansible is assumed; however, you can use this configuration as a reference to create your own implementation using the configuration management tool of your choosing.

Important

While RHEL Atomic Host is supported for running containerized OpenShift Container Platform services, the advanced installation method utilizes Ansible, which is not available in RHEL Atomic Host, and must therefore be run from a RHEL 7 system. The host initiating the installation does not need to be intended for inclusion in the OpenShift Container Platform cluster, but it can be.

Alternatively, you can use the quick installation method if you prefer an interactive installation experience.

Note

To install OpenShift Container Platform as a stand-alone registry, see Installing a Stand-alone Registry.

Important

Running Ansible playbooks with the --tags or --check options is not supported by Red Hat.

2.6.2. Before You Begin

Before installing OpenShift Container Platform, you must first see the Prerequisites and Host Preparation topics to prepare your hosts. This includes verifying system and environment requirements per component type and properly installing and configuring Docker. It also includes installing Ansible version 2.2.0 or later, as the advanced installation method is based on Ansible playbooks and as such requires directly invoking Ansible.
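For example, you can quickly confirm which Ansible version is installed on the host that will run the playbooks:

$ ansible --version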

If you are interested in installing OpenShift Container Platform using the containerized method (optional for RHEL but required for RHEL Atomic Host), see Installing on Containerized Hosts to ensure that you understand the differences between these methods, then return to this topic to continue.

For large-scale installs, including suggestions for optimizing install time, see the Scaling and Performance Guide.

After following the instructions in the Prerequisites topic and deciding between the RPM and containerized methods, you can continue in this topic to Configuring Ansible Inventory Files.

2.6.3. Configuring Ansible Inventory Files

The /etc/ansible/hosts file is Ansible’s inventory file for the playbook used to install OpenShift Container Platform. The inventory file describes the configuration for your OpenShift Container Platform cluster. You must replace the default contents of the file with your desired configuration.
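Before replacing the default contents, you might keep a copy of the original file for reference (the backup file name here is arbitrary):

# cp /etc/ansible/hosts /etc/ansible/hosts.orig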

The following sections describe commonly-used variables to set in your inventory file during an advanced installation, followed by example inventory files you can use as a starting point for your installation.

Many of the Ansible variables described are optional. Accepting the default values should suffice for development environments, but for production environments it is recommended you read through and become familiar with the various options available.

The example inventories describe various environment topologies, including using multiple masters for high availability. You can choose an example that matches your requirements, modify it to match your own environment, and use it as your inventory file when running the advanced installation.

Image Version Policy

Images require a version number policy in order to maintain updates. See the Image Version Tag Policy section in the Architecture Guide for more information.

2.6.3.1. Configuring Cluster Variables

To assign environment variables during the Ansible install that apply globally to your OpenShift Container Platform cluster, indicate the desired variables in the /etc/ansible/hosts file on separate, single lines within the [OSEv3:vars] section. For example:

[OSEv3:vars]

openshift_master_identity_providers=[{'name': 'htpasswd_auth',
'login': 'true', 'challenge': 'true',
'kind': 'HTPasswdPasswordIdentityProvider',
'filename': '/etc/origin/master/htpasswd'}]

openshift_master_default_subdomain=apps.test.example.com

The following table describes variables for use with the Ansible installer that can be assigned cluster-wide:

Table 2.9. Cluster Variables
VariablePurpose

ansible_ssh_user

This variable sets the SSH user for the installer to use and defaults to root. This user should allow SSH-based authentication without requiring a password. If using SSH key-based authentication, then the key should be managed by an SSH agent.

ansible_become

If ansible_ssh_user is not root, this variable must be set to true and the user must be configured for passwordless sudo.

debug_level

This variable sets which INFO messages are logged to the systemd-journald.service. Set one of the following:

  • 0 to log errors and warnings only
  • 2 to log normal information (This is the default level.)
  • 4 to log debugging-level information
  • 6 to log API-level debugging information (request / response)
  • 8 to log body-level API debugging information

For more information on debug log levels, see Configuring Logging Levels.

containerized

If set to true, containerized OpenShift Container Platform services are run on all target master and node hosts in the cluster instead of installed using RPM packages. If set to false or unset, the default RPM method is used. RHEL Atomic Host requires the containerized method, and is automatically selected for you based on the detection of the /run/ostree-booted file. See Installing on Containerized Hosts for more details. Containerized installations are supported starting in OpenShift Container Platform 3.1.1.

openshift_master_admission_plugin_config

This variable sets the parameter and arbitrary JSON values as per the requirement in your inventory hosts file. For example:

openshift_master_admission_plugin_config={"ClusterResourceOverride":{"configuration":{"apiVersion":"v1","kind":"ClusterResourceOverrideConfig","memoryRequestToLimitPercent":"25","cpuRequestToLimitPercent":"25","limitCPUToMemoryPercent":"200"}}}

openshift_master_audit_config

This variable enables API service auditing. See Audit Configuration for more information.

openshift_master_cluster_hostname

This variable overrides the host name for the cluster, which defaults to the host name of the master.

openshift_master_cluster_public_hostname

This variable overrides the public host name for the cluster, which defaults to the host name of the master. If you use an external load balancer, specify the address of the external load balancer.

For example:

openshift_master_cluster_public_hostname=openshift-ansible.public.example.com

openshift_master_cluster_method

Optional. This variable defines the HA method when deploying multiple masters. Supports the native method. See Multiple Masters for more information.

openshift_rolling_restart_mode

This variable enables rolling restarts of HA masters (i.e., masters are taken down one at a time) when running the upgrade playbook directly. It defaults to services, which allows rolling restarts of services on the masters. It can instead be set to system, which enables rolling, full system restarts and also works for single master clusters.

os_sdn_network_plugin_name

This variable configures which OpenShift SDN plug-in to use for the pod network, which defaults to redhat/openshift-ovs-subnet for the standard SDN plug-in. Set the variable to redhat/openshift-ovs-multitenant to use the multitenant plug-in.

openshift_master_identity_providers

This variable sets the identity provider. The default value is Deny All. If you use a supported identity provider, configure OpenShift Container Platform to use it.

openshift_master_named_certificates

These variables are used to configure custom certificates which are deployed as part of the installation. See Configuring Custom Certificates for more information.

openshift_master_overwrite_named_certificates

openshift_hosted_registry_cert_expire_days

Validity of the auto-generated registry certificate in days. Defaults to 730 (2 years).

openshift_ca_cert_expire_days

Validity of the auto-generated CA certificate in days. Defaults to 1825 (5 years).

openshift_node_cert_expire_days

Validity of the auto-generated node certificate in days. Defaults to 730 (2 years).

openshift_master_cert_expire_days

Validity of the auto-generated master certificate in days. Defaults to 730 (2 years).

etcd_ca_default_days

Validity of the auto-generated separate etcd certificates in days. Controls validity for etcd CA, peer, server and client certificates. Defaults to 1825 (5 years).

os_firewall_use_firewalld

Set to true to use firewalld instead of the default iptables. Not available on RHEL Atomic Host. See the Configuring the Firewall section for more information.

openshift_master_session_name

These variables override defaults for session options in the OAuth configuration. See Configuring Session Options for more information.

openshift_master_session_max_seconds

openshift_master_session_auth_secrets

openshift_master_session_encryption_secrets

openshift_portal_net

This variable configures the subnet in which services will be created within the OpenShift Container Platform SDN. This network block should be private and must not conflict with any existing network blocks in your infrastructure to which pods, nodes, or the master may require access, or the installation will fail. Defaults to 172.30.0.0/16, and cannot be re-configured after deployment. If changing from the default, avoid 172.17.0.0/16, which the docker0 network bridge uses by default, or modify the docker0 network.

openshift_master_default_subdomain

This variable overrides the default subdomain to use for exposed routes.

openshift_master_image_policy_config

Sets imagePolicyConfig in the master configuration. See Image Configuration for details.

openshift_node_proxy_mode

This variable specifies the service proxy mode to use: either iptables for the default, pure-iptables implementation, or userspace for the user space proxy.

openshift_router_selector

Default node selector for automatically deploying router pods. See Configuring Node Host Labels for details.

openshift_registry_selector

Default node selector for automatically deploying registry pods. See Configuring Node Host Labels for details.

osm_default_node_selector

This variable overrides the node selector that projects will use by default when placing pods.

osm_cluster_network_cidr

This variable overrides the SDN cluster network CIDR block. This is the network from which pod IPs are assigned. This network block should be a private block and must not conflict with existing network blocks in your infrastructure to which pods, nodes, or the master may require access. Defaults to 10.128.0.0/14 and cannot be arbitrarily re-configured after deployment, although certain changes to it can be made in the SDN master configuration.

osm_host_subnet_length

This variable specifies the size of the per host subnet allocated for pod IPs by OpenShift Container Platform SDN. Defaults to 9 which means that a subnet of size /23 is allocated to each host; for example, given the default 10.128.0.0/14 cluster network, this will allocate 10.128.0.0/23, 10.128.2.0/23, 10.128.4.0/23, and so on. This cannot be re-configured after deployment.

openshift_use_flannel

This variable enables flannel as an alternative networking layer instead of the default SDN. If enabling flannel, disable the default SDN with the openshift_use_openshift_sdn variable. For more information, see Using Flannel.

openshift_docker_additional_registries

OpenShift Container Platform adds the specified additional registry or registries to the docker configuration. These are the registries to search.

openshift_docker_insecure_registries

OpenShift Container Platform adds the specified additional insecure registry or registries to the docker configuration. For any of these registries, secure sockets layer (SSL) is not verified. Also, add these registries to openshift_docker_additional_registries.

openshift_docker_blocked_registries

OpenShift Container Platform adds the specified blocked registry or registries to the docker configuration. Block the listed registries. Setting this to all blocks everything not in the other variables.

openshift_hosted_metrics_public_url

This variable sets the host name for integration with the metrics console by overriding metricsPublicURL in the master configuration for cluster metrics. If you alter this variable, ensure the host name is accessible via your router. See Configuring Cluster Metrics for details.

openshift_image_tag

Use this variable to specify a container image tag to install or configure.

openshift_pkg_version

Use this variable to specify an RPM version to install or configure.

Warning

If you modify the openshift_image_tag or the openshift_pkg_version variables after the cluster is set up, then an upgrade can be triggered, resulting in downtime.

  • If openshift_image_tag is set, its value is used for all hosts in containerized environments, even those that have another version installed.
  • If openshift_pkg_version is set, its value is used for all hosts in RPM-based environments, even those that have another version installed.
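For reference, the following is a minimal sketch of an [OSEv3:vars] section that combines a few of the cluster variables described in this section. All values are illustrative only and must be adapted to your environment:

[OSEv3:vars]
ansible_ssh_user=root
debug_level=2
openshift_master_default_subdomain=apps.example.com
openshift_portal_net=172.30.0.0/16
os_firewall_use_firewalld=True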

2.6.3.2. Configuring Deployment Type

Various defaults used throughout the playbooks and roles used by the installer are based on the deployment type configuration (usually defined in an Ansible inventory file).

Ensure the deployment_type parameter in your inventory file’s [OSEv3:vars] section is set to openshift-enterprise to install the OpenShift Container Platform variant:

[OSEv3:vars]
deployment_type=openshift-enterprise

2.6.3.3. Configuring Host Variables

To assign environment variables to hosts during the Ansible installation, indicate the desired variables in the /etc/ansible/hosts file after the host entry in the [masters] or [nodes] sections. For example:

[masters]
ec2-52-6-179-239.compute-1.amazonaws.com openshift_public_hostname=ose3-master.public.example.com

The following table describes variables for use with the Ansible installer that can be assigned to individual host entries:

Table 2.10. Host Variables
VariablePurpose

openshift_hostname

This variable overrides the internal cluster host name for the system. Use this when the system’s default IP address does not resolve to the system host name.

openshift_public_hostname

This variable overrides the system’s public host name. Use this for cloud installations, or for hosts on networks using a network address translation (NAT).

openshift_ip

This variable overrides the cluster internal IP address for the system. Use this when using an interface that is not configured with the default route. This variable can also be used for etcd.

openshift_public_ip

This variable overrides the system’s public IP address. Use this for cloud installations, or for hosts on networks using a network address translation (NAT).

containerized

If set to true, containerized OpenShift Container Platform services are run on the target master and node hosts instead of installed using RPM packages. If set to false or unset, the default RPM method is used. RHEL Atomic Host requires the containerized method, and is automatically selected for you based on the detection of the /run/ostree-booted file. See Installing on Containerized Hosts for more details. Containerized installations are supported starting in OpenShift Container Platform 3.1.1.

openshift_node_labels

This variable adds labels to nodes during installation. See Configuring Node Host Labels for more details.

openshift_node_kubelet_args

This variable is used to configure kubeletArguments on nodes, such as arguments used in container and image garbage collection, and to specify resources per node. kubeletArguments are key value pairs that are passed directly to the Kubelet that match the Kubelet’s command line arguments. kubeletArguments are not migrated or validated and may become invalid if used. These values override other settings in node configuration which may cause invalid configurations. Example usage: {'image-gc-high-threshold': ['90'],'image-gc-low-threshold': ['80']}.

openshift_hosted_router_selector

Default node selector for automatically deploying router pods. See Configuring Node Host Labels for details.

openshift_registry_selector

Default node selector for automatically deploying registry pods. See Configuring Node Host Labels for details.

openshift_docker_options

This variable configures additional Docker options within /etc/sysconfig/docker, such as options used in Managing Container Logs.

Example usage: "--log-driver json-file --log-opt max-size=1M --log-opt max-file=3".

openshift_schedulable

This variable configures whether the host is marked as a schedulable node, meaning that it is available for placement of new pods. See Configuring Schedulability on Masters.
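As a sketch of how several of these host variables can be combined, the following [nodes] entry sets explicit addresses, a public host name, and labels for a single host. The host name, IP addresses, and label values are illustrative only:

[nodes]
node1.example.com openshift_ip=10.0.0.11 openshift_public_ip=203.0.113.11 openshift_public_hostname=node1.public.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"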

2.6.3.4. Configuring Master API and Console Ports

To configure the default ports used by the master API and web console, configure the following variables in the /etc/ansible/hosts file:

Table 2.11. Master API and Console Ports
VariablePurpose

openshift_master_api_port

This variable sets the port number to access the OpenShift Container Platform API.

openshift_master_console_port

This variable sets the console port number to access the OpenShift Container Platform console with a web browser.

For example:

openshift_master_api_port=3443
openshift_master_console_port=8756

2.6.3.5. Configuring Cluster Pre-install Checks

Pre-install checks are a set of diagnostic tasks that run as part of the openshift_health_checker Ansible role. They run prior to an Ansible installation of OpenShift Container Platform, ensure that required inventory values are set, and identify potential issues on a host that can prevent or interfere with a successful installation.

The following table describes available pre-install checks that will run before every Ansible installation of OpenShift Container Platform:

Table 2.12. Pre-install Checks
Check NamePurpose

memory_availability

This check ensures that a host has the recommended amount of memory for the specific deployment of OpenShift Container Platform. Default values have been derived from the latest installation documentation. A user-defined value for minimum memory requirements may be set by setting the openshift_check_min_host_memory_gb cluster variable in your inventory file.

disk_availability

This check only runs on etcd, master, and node hosts. It ensures that the mount path for an OpenShift Container Platform installation has sufficient disk space remaining. Recommended disk values are taken from the latest installation documentation. A user-defined value for minimum disk space requirements may be set by setting openshift_check_min_host_disk_gb cluster variable in your inventory file.

docker_storage

Only runs on hosts that depend on the docker daemon (nodes and containerized installations). Checks that docker's total usage does not exceed a user-defined limit. If no user-defined limit is set, docker's maximum usage threshold defaults to 90% of the total size available. A user-defined limit for maximum thinpool usage may be set with the max_thinpool_data_usage_percent cluster variable in your inventory file, for example max_thinpool_data_usage_percent=90.

docker_storage_driver

Ensures that the docker daemon is using a storage driver supported by OpenShift Container Platform. If the devicemapper storage driver is being used, the check additionally ensures that a loopback device is not being used.

docker_image_availability

Attempts to ensure that images required by an OpenShift Container Platform installation are available either locally or in at least one of the configured container image registries on the host machine.

openshift_release

Specifies the generic release of OpenShift Container Platform for containerized installations. For RPM installations, set a package_availability value.

package_version

Runs on yum-based systems to determine whether multiple releases of a required OpenShift Container Platform package are available. Having multiple releases of a package available during an enterprise installation of OpenShift suggests that multiple yum repositories are enabled for different releases, which may lead to installation problems. This check is skipped if the openshift_release variable is not defined in the inventory file.

package_availability

Runs prior to non-containerized installations of OpenShift Container Platform. Ensures that RPM packages required for the current installation are available.

package_update

Checks whether a yum update or package installation will succeed, without actually performing it or running yum on the host.

To disable specific pre-install checks, include the variable openshift_disable_check with a comma-delimited list of check names in your inventory file. For example:

openshift_disable_check=memory_availability,disk_availability

A similar set of checks meant to run diagnostics on existing clusters can be found in Additional Diagnostic Checks via Ansible. Another set of checks for checking certificate expiration can be found in Redeploying Certificates.
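For example, the memory, disk, and docker storage thresholds used by these checks can be tuned with the cluster variables described in the table above. The values shown here are illustrative only:

[OSEv3:vars]
openshift_check_min_host_memory_gb=8
openshift_check_min_host_disk_gb=40
max_thinpool_data_usage_percent=90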

2.6.3.6. Configuring System Containers

Important

All system container components are Technology Preview features in OpenShift Container Platform 3.6. They must not be used in production and they are not supported for upgrades to OpenShift Container Platform 3.6. During this phase, they are only meant for use with new cluster installations in non-production environments.

System containers provide a way to containerize services that need to run before the docker daemon is running. They are Docker-formatted containers that are stored and run outside of the traditional docker service. For more details on system container technology, see Running System Containers in the Red Hat Enterprise Linux Atomic Host: Managing Containers documentation.

You can configure your OpenShift Container Platform installation to run certain components as system containers instead of their RPM or standard containerized methods. Currently, the docker and etcd components can be run as system containers in OpenShift Container Platform.

Warning

System containers are currently OS-specific because they require specific versions of atomic and systemd. For example, different system containers are created for RHEL, Fedora, or CentOS. Ensure that the system containers you are using match the OS of the host they will run on. OpenShift Container Platform only supports RHEL and RHEL Atomic as the host OS, so by default system containers built for RHEL are used.

2.6.3.6.1. Running Docker as a System Container
Important

All system container components are Technology Preview features in OpenShift Container Platform 3.6. They must not be used in production and they are not supported for upgrades to OpenShift Container Platform 3.6. During this phase, they are only meant for use with new cluster installations in non-production environments.

The traditional method for using docker in an OpenShift Container Platform cluster is an RPM package installation. For Red Hat Enterprise Linux (RHEL) systems, it must be specifically installed; for RHEL Atomic Host systems, it is provided by default.

However, you can configure your OpenShift Container Platform installation to alternatively run docker on node hosts as a system container. When using the system container method, the container-engine container image and systemd service are used on the host instead of the docker package and service.

To run docker as a system container:

  1. Because the default storage back end for Docker on RHEL 7 is a thin pool on loopback devices, for any RHEL systems you must still configure a thin pool logical volume for docker to use before running the OpenShift Container Platform installation. You can skip these steps for any RHEL Atomic Host systems.

    For any RHEL systems, first perform the Docker storage configuration steps described in the Host Preparation topic.

    After completing the storage configuration steps, you can leave the RPM installed.

  2. Set the following cluster variable to True in your inventory file in the [OSEv3:vars] section:

    openshift_docker_use_system_container=True

When using the system container method, the following inventory variables for docker are ignored:

  • docker_version
  • docker_upgrade

Further, the following inventory variable must not be used:

  • openshift_docker_options

You can also force docker in the system container to use a specific container registry and repository when pulling the container-engine image instead of from the default registry.access.redhat.com/openshift3/. To do so, set the following cluster variable in your inventory file in the [OSEv3:vars] section:

openshift_docker_systemcontainer_image_registry_override="registry.example.com/myrepo/"
2.6.3.6.2. Running etcd as a System Container
Important

All system container components are Technology Preview features in OpenShift Container Platform 3.6. They must not be used in production and they are not supported for upgrades to OpenShift Container Platform 3.6. During this phase, they are only meant for use with new cluster installations in non-production environments.

When using the RPM-based installation method for OpenShift Container Platform, etcd is installed using RPM packages on any RHEL systems. When using the containerized installation method, the rhel7/etcd image is used instead for RHEL or RHEL Atomic Hosts.

However, you can configure your OpenShift Container Platform installation to alternatively run etcd as a system container. Whereas the standard containerized method uses a systemd service named etcd_container, the system container method uses the service name etcd, the same as the RPM-based method. The data directory for etcd using this method is /var/lib/etcd/etcd.etcd/member.

To run etcd as a system container, set the following cluster variable in your inventory file in the [OSEv3:vars] section:

openshift_use_etcd_system_container=True

2.6.3.7. Configuring a Registry Location

If you are using an image registry other than the default at registry.access.redhat.com, specify the desired registry within the /etc/ansible/hosts file. For example:

oreg_url=example.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true
Table 2.13. Registry Variables
VariablePurpose

oreg_url

Set to the alternate image location. Necessary if you are not using the default registry at registry.access.redhat.com.

openshift_examples_modify_imagestreams

Set to true if pointing to a registry other than the default. Modifies the image stream location to the value of oreg_url.

2.6.3.7.1. Configuring Registry Storage

There are several options for enabling registry storage when using the advanced install:

Option A: NFS Host Group

When the following variables are set, an NFS volume is created during an advanced install with the path <nfs_directory>/<volume_name> on the host within the [nfs] host group. For example, the volume path using these options would be /exports/registry:

[OSEv3:vars]

openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
Option B: External NFS Host

To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host. The remote volume path using the following options would be nfs.example.com:/exports/registry.

[OSEv3:vars]

openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_host=nfs.example.com
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
Option C: OpenStack Platform

An OpenStack storage configuration must already exist.

openshift_hosted_registry_storage_kind=openstack
openshift_hosted_registry_storage_access_modes=['ReadWriteOnce']
openshift_hosted_registry_storage_openstack_filesystem=ext4
openshift_hosted_registry_storage_openstack_volumeID=3a650b4f-c8c5-4e0a-8ca5-eaee11f16c57
openshift_hosted_registry_storage_volume_size=10Gi
Option D: AWS or Another S3 Storage Solution

The simple storage solution (S3) bucket must already exist.

#openshift_hosted_registry_storage_kind=object
#openshift_hosted_registry_storage_provider=s3
#openshift_hosted_registry_storage_s3_accesskey=access_key_id
#openshift_hosted_registry_storage_s3_secretkey=secret_access_key
#openshift_hosted_registry_storage_s3_bucket=bucket_name
#openshift_hosted_registry_storage_s3_region=bucket_region
#openshift_hosted_registry_storage_s3_chunksize=26214400
#openshift_hosted_registry_storage_s3_rootdirectory=/registry
#openshift_hosted_registry_pullthrough=true
#openshift_hosted_registry_acceptschema2=true
#openshift_hosted_registry_enforcequota=true

If you are using a different S3 service, such as Minio or ExoScale, also add the region endpoint parameter:

openshift_hosted_registry_storage_s3_regionendpoint=https://myendpoint.example.com/

2.6.3.8. Configuring Global Proxy Options

If your hosts require use of an HTTP or HTTPS proxy in order to connect to external hosts, there are many components that must be configured to use the proxy, including masters, Docker, and builds. Node services connect only to the master API, require no external access, and therefore do not need to be configured to use a proxy.

In order to simplify this configuration, the following Ansible variables can be specified at a cluster or host level to apply these settings uniformly across your environment.

Note

See Configuring Global Build Defaults and Overrides for more information on how the proxy environment is defined for builds.

Table 2.14. Cluster Proxy Variables
VariablePurpose

openshift_http_proxy

This variable specifies the HTTP_PROXY environment variable for masters and the Docker daemon.

openshift_https_proxy

This variable specifies the HTTPS_PROXY environment variable for masters and the Docker daemon.

openshift_no_proxy

This variable is used to set the NO_PROXY environment variable for masters and the Docker daemon. Provide a comma-separated list of host names, domain names, or wildcard host names that do not use the defined proxy. By default, this list is augmented with the list of all defined OpenShift Container Platform host names.

openshift_generate_no_proxy_hosts

This boolean variable specifies whether or not the names of all defined OpenShift hosts and *.cluster.local should be automatically appended to the NO_PROXY list. Defaults to true; set it to false to override this option.

openshift_builddefaults_http_proxy

This variable defines the HTTP_PROXY environment variable inserted into builds using the BuildDefaults admission controller. If you do not define this parameter but define the openshift_http_proxy parameter, the openshift_http_proxy value is used. Set the openshift_builddefaults_http_proxy value to False to disable default http proxy for builds regardless of the openshift_http_proxy value.

openshift_builddefaults_https_proxy

This variable defines the HTTPS_PROXY environment variable inserted into builds using the BuildDefaults admission controller. If you do not define this parameter but define the openshift_http_proxy parameter, the openshift_https_proxy value is used. Set the openshift_builddefaults_https_proxy value to False to disable default https proxy for builds regardless of the openshift_https_proxy value.

openshift_builddefaults_no_proxy

This variable defines the NO_PROXY environment variable inserted into builds using the BuildDefaults admission controller. Set the openshift_builddefaults_no_proxy value to False to disable default no proxy settings for builds regardless of the openshift_no_proxy value.

openshift_builddefaults_git_http_proxy

This variable defines the HTTP proxy used by git clone operations during a build, defined using the BuildDefaults admission controller. Set the openshift_builddefaults_git_http_proxy value to False to disable default http proxy for git clone operations during a build regardless of the openshift_http_proxy value.

openshift_builddefaults_git_https_proxy

This variable defines the HTTPS proxy used by git clone operations during a build, defined using the BuildDefaults admission controller. Set the openshift_builddefaults_git_https_proxy value to False to disable default https proxy for git clone operations during a build regardless of the openshift_https_proxy value.
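As an illustrative sketch, the cluster-level proxy variables described above might be combined as follows. The proxy URL and domain values are placeholders for your environment:

[OSEv3:vars]
openshift_http_proxy=http://proxy.example.com:3128
openshift_https_proxy=http://proxy.example.com:3128
openshift_no_proxy='.internal.example.com'
openshift_generate_no_proxy_hosts=true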

2.6.3.9. Configuring the Firewall

Important

If you are changing the default firewall, ensure that each host in your cluster is using the same firewall type to prevent inconsistencies.

Note

While iptables is the default firewall, firewalld is recommended for new installations.

OpenShift Container Platform uses iptables as the default firewall, but you can configure your cluster to use firewalld during the install process.

Because iptables is the default firewall, OpenShift Container Platform is designed to have it configured automatically. However, iptables rules can break OpenShift Container Platform if not configured correctly. The advantages of firewalld include allowing multiple objects to safely share the firewall rules.

To use firewalld as the firewall for an OpenShift Container Platform installation, add the os_firewall_use_firewalld variable to the list of configuration variables in the Ansible host file at install:

[OSEv3:vars]
os_firewall_use_firewalld=True

2.6.3.10. Configuring Schedulability on Masters

Any hosts you designate as masters during the installation process should also be configured as nodes so that the masters are configured as part of the OpenShift SDN. You must do so by adding entries for these hosts to the [nodes] section:

[nodes]
master.example.com

To ensure that your masters are not burdened with running pods, the installer automatically marks them unschedulable by default, meaning that new pods cannot be placed on those hosts. This is the same as setting the openshift_schedulable=False host variable.

You can manually set a master host to schedulable during installation using the openshift_schedulable=true host variable, though this is not recommended in production environments:

[nodes]
master.example.com openshift_schedulable=true

If you want to change the schedulability of a host post-installation, see Marking Nodes as Unschedulable or Schedulable.

2.6.3.11. Configuring Node Host Labels

You can assign labels to node hosts during the Ansible install by configuring the /etc/ansible/hosts file. Labels are useful for determining the placement of pods onto nodes using the scheduler. Other than region=infra (discussed in Configuring Dedicated Infrastructure Nodes), the actual label names and values are arbitrary and can be assigned however you see fit per your cluster’s requirements.

To assign labels to a node host during an Ansible install, use the openshift_node_labels variable with the desired labels added to the desired node host entry in the [nodes] section. In the following example, labels are set for a region called primary and a zone called east:

[nodes]
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
2.6.3.11.1. Configuring Dedicated Infrastructure Nodes

The openshift_router_selector and openshift_registry_selector Ansible settings determine the label selectors used when placing registry and router pods. They are set to region=infra by default:

# default selectors for router and registry services
# openshift_router_selector='region=infra'
# openshift_registry_selector='region=infra'

The registry and router are only able to run on node hosts with the region=infra label. Ensure that at least one node host in your OpenShift Container Platform environment has the region=infra label. For example:

[nodes]
infra-node1.example.com openshift_node_labels="{'region': 'infra','zone': 'default'}"
Important

If there is not a node in the [nodes] section that matches the selector settings, the default router and registry deployments fail, with their pods remaining in Pending status.

It is recommended for production environments that you maintain dedicated infrastructure nodes where the registry and router pods can run separately from pods used for user applications.

If you do not intend to use OpenShift Container Platform to manage the registry and router, configure the following Ansible settings:

openshift_hosted_manage_registry=false
openshift_hosted_manage_router=false

If you are using an image registry other than the default registry.access.redhat.com, you need to specify the desired registry in the /etc/ansible/hosts file.

As described in Configuring Schedulability on Masters, master hosts are marked unschedulable by default. If you label a master host with region=infra and have no other dedicated infrastructure nodes, you must also explicitly mark these master hosts as schedulable. Otherwise, the registry and router pods cannot be placed anywhere:

[nodes]
master.example.com openshift_node_labels="{'region': 'infra','zone': 'default'}" openshift_schedulable=true

2.6.3.12. Configuring Session Options

Session options in the OAuth configuration are configurable in the inventory file. By default, Ansible populates a sessionSecretsFile with generated authentication and encryption secrets so that sessions generated by one master can be decoded by the others. The default location is /etc/origin/master/session-secrets.yaml, and this file will only be re-created if deleted on all masters.

You can set the session name and maximum number of seconds with openshift_master_session_name and openshift_master_session_max_seconds:

openshift_master_session_name=ssn
openshift_master_session_max_seconds=3600

If provided, openshift_master_session_auth_secrets and openshift_master_session_encryption_secrets must be of equal length.

For openshift_master_session_auth_secrets, used to authenticate sessions using HMAC, it is recommended to use secrets with 32 or 64 bytes:

openshift_master_session_auth_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']

For openshift_master_session_encryption_secrets, used to encrypt sessions, secrets must be 16, 24, or 32 characters long, to select AES-128, AES-192, or AES-256:

openshift_master_session_encryption_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']

2.6.3.13. Configuring Custom Certificates

Custom serving certificates for the public host names of the OpenShift Container Platform API and web console can be deployed during an advanced installation and are configurable in the inventory file.

Note

Custom certificates should only be configured for the host name associated with the publicMasterURL which can be set using openshift_master_cluster_public_hostname. Using a custom serving certificate for the host name associated with the masterURL (openshift_master_cluster_hostname) will result in TLS errors as infrastructure components will attempt to contact the master API using the internal masterURL host.

Certificate and key file paths can be configured using the openshift_master_named_certificates cluster variable:

openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key"}]

File paths must be local to the system where Ansible will be run. Certificates are copied to master hosts and are deployed within the /etc/origin/master/named_certificates/ directory.

Ansible detects a certificate’s Common Name and Subject Alternative Names. Detected names can be overridden by providing the "names" key when setting openshift_master_named_certificates:

openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "names": ["public-master-host.com"]}]

Certificates configured using openshift_master_named_certificates are cached on masters, meaning that each additional Ansible run with a different set of certificates results in all previously deployed certificates remaining in place on master hosts and within the master configuration file.

If you would like openshift_master_named_certificates to be overwritten with the provided value (or no value), specify the openshift_master_overwrite_named_certificates cluster variable:

openshift_master_overwrite_named_certificates=true

For a more complete example, consider the following cluster variables in an inventory file:

openshift_master_cluster_method=native
openshift_master_cluster_hostname=lb-internal.openshift.com
openshift_master_cluster_public_hostname=custom.openshift.com

To overwrite the certificates on a subsequent Ansible run, you could set the following:

openshift_master_named_certificates=[{"certfile": "/root/STAR.openshift.com.crt", "keyfile": "/root/STAR.openshift.com.key", "names": ["custom.openshift.com"]}]
openshift_master_overwrite_named_certificates=true

2.6.3.14. Configuring Certificate Validity

By default, the certificates used by etcd, the master, and the kubelet expire after two to five years. The validity (length in days until they expire) of the auto-generated registry, CA, node, and master certificates can be configured during installation using the following variables (default values shown):

[OSEv3:vars]

openshift_hosted_registry_cert_expire_days=730
openshift_ca_cert_expire_days=1825
openshift_node_cert_expire_days=730
openshift_master_cert_expire_days=730
etcd_ca_default_days=1825

These values are also used when redeploying certificates via Ansible post-installation.

2.6.3.15. Configuring Cluster Metrics

Cluster metrics are not set to automatically deploy by default. Set the following to enable cluster metrics when using the advanced install:

[OSEv3:vars]

openshift_hosted_metrics_deploy=true 1
openshift_hosted_metrics_deployer_prefix=registry.example.com:8888/openshift3/ 2
openshift_hosted_metrics_deployer_version=v3.5 3
1
Enables the metrics deployment.
2
Replace registry.example.com:8888/openshift3/ with the prefix for the component images.
3
Replace with the desired image version.

The OpenShift Container Platform web console uses the data coming from the Hawkular Metrics service to display its graphs. The metrics public URL can be set during cluster installation using the openshift_hosted_metrics_public_url Ansible variable, which defaults to:

https://hawkular-metrics.{{openshift_master_default_subdomain}}/hawkular/metrics

If you alter this variable, ensure the host name is accessible via your router.

Important

In accordance with upstream Kubernetes rules, metrics can be collected only on the default interface of eth0.

Note

You must set an openshift_master_default_subdomain value to deploy metrics.

2.6.3.15.1. Configuring Metrics Storage

The openshift_metrics_cassandra_storage_type variable must be set in order to use persistent storage for metrics. If openshift_metrics_cassandra_storage_type is not set, then cluster metrics data is stored in an emptyDir volume, which will be deleted when the Cassandra pod terminates.

There are three options for enabling cluster metrics storage when using the advanced install:

Option A: NFS Host Group

When the following variables are set, an NFS volume is created during an advanced install with path <nfs_directory>/<volume_name> on the host within the [nfs] host group. For example, the volume path using these options would be /exports/metrics:

[OSEv3:vars]

openshift_hosted_metrics_storage_kind=nfs
openshift_hosted_metrics_storage_access_modes=['ReadWriteOnce']
openshift_hosted_metrics_storage_nfs_directory=/exports
openshift_hosted_metrics_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_metrics_storage_volume_name=metrics
openshift_hosted_metrics_storage_volume_size=10Gi
Option B: External NFS Host

To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host.

[OSEv3:vars]

openshift_hosted_metrics_storage_kind=nfs
openshift_hosted_metrics_storage_access_modes=['ReadWriteOnce']
openshift_hosted_metrics_storage_host=nfs.example.com
openshift_hosted_metrics_storage_nfs_directory=/exports
openshift_hosted_metrics_storage_volume_name=metrics
openshift_hosted_metrics_storage_volume_size=10Gi

The remote volume path using the following options would be nfs.example.com:/exports/metrics.

Option C: Dynamic

Use the following variable if your OpenShift Container Platform environment supports dynamic volume provisioning for your cloud provider:

[OSEv3:vars]

openshift_metrics_cassandra_storage_type=dynamic

2.6.3.16. Configuring Cluster Logging

Cluster logging is not set to automatically deploy by default. Set the following to enable cluster logging when using the advanced installation method:

[OSEv3:vars]

openshift_hosted_logging_deploy=true 1
openshift_hosted_logging_deployer_prefix=registry.example.com:8888/openshift3/ 2
openshift_hosted_logging_deployer_version=v3.5 3
1
Enables the logging stack.
2
Replace registry.example.com:8888/openshift3/ with your desired prefix.
3
Replace with the desired image version.
2.6.3.16.1. Configuring Logging Storage

The openshift_hosted_logging_storage_kind variable must be set in order to use persistent storage for logging. If openshift_hosted_logging_storage_kind is not set, then cluster logging data is stored in an emptyDir volume, which will be deleted when the Elasticsearch pod terminates.

There are three options for enabling cluster logging storage when using the advanced installation; however, for production environments, you should use only block storage. Do not attempt to use NFS storage in production. The NFS options are described below only to show how NFS might be used in non-production environments where block storage is not available.

Option A: NFS Host Group
Note

Do not attempt to use NFS storage in a production environment.

When the following variables are set, an NFS volume is created during an advanced install with path <nfs_directory>/<volume_name> on the host within the [nfs] host group. For example, the volume path using these options would be /exports/logging:

[OSEv3:vars]

openshift_hosted_logging_storage_kind=nfs
openshift_hosted_logging_storage_access_modes=['ReadWriteOnce']
openshift_hosted_logging_storage_nfs_directory=/exports
openshift_hosted_logging_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_logging_storage_volume_name=logging
openshift_hosted_logging_storage_volume_size=10Gi
Option B: External NFS Host
Note

Do not attempt to use NFS storage in a production environment.

To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host.

[OSEv3:vars]

openshift_hosted_logging_storage_kind=nfs
openshift_hosted_logging_storage_access_modes=['ReadWriteOnce']
openshift_hosted_logging_storage_host=nfs.example.com
openshift_hosted_logging_storage_nfs_directory=/exports
openshift_hosted_logging_storage_volume_name=logging
openshift_hosted_logging_storage_volume_size=10Gi

The remote volume path using the following options would be nfs.example.com:/exports/logging.

Option C: Dynamic

Use the following variable if your OpenShift Container Platform environment supports dynamic volume provisioning for your cloud provider:

[OSEv3:vars]

openshift_hosted_logging_storage_kind=dynamic

2.6.4. Example Inventory Files

2.6.4.1. Single Master Examples

You can configure an environment with a single master and multiple nodes, and either a single embedded etcd or multiple external etcd hosts.

Note

Moving from a single master cluster to multiple masters after installation is not supported.

Single Master and Multiple Nodes

The following table describes an example environment for a single master (with embedded etcd) and two nodes:

Host NameInfrastructure Component to Install

master.example.com

Master and node

node1.example.com

Node

node2.example.com

You can see these example hosts present in the [masters] and [nodes] sections of the following example inventory file:

Single Master and Multiple Nodes Inventory File

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true

deployment_type=openshift-enterprise

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
master.example.com

# host group for nodes, includes region info
[nodes]
master.example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
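
Before running the installer, you can optionally confirm that Ansible can reach every host in the inventory over SSH. This is a quick sanity check, not part of the documented procedure; the OSEv3 group covers all of the masters and nodes defined above:

# ansible -i /etc/ansible/hosts OSEv3 -m ping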

Single Master, Multiple etcd, and Multiple Nodes

The following table describes an example environment for a single master, three etcd hosts, and two nodes:

Host Name             Infrastructure Component to Install

master.example.com    Master and node
etcd1.example.com     etcd
etcd2.example.com     etcd
etcd3.example.com     etcd
node1.example.com     Node
node2.example.com     Node

Note

When specifying multiple etcd hosts, stand-alone etcd (non-embedded) is installed and configured. Clustering of OpenShift Container Platform’s embedded etcd is not supported. Stand-alone etcd can also be collocated on master hosts, if desired.

You can see these example hosts present in the [masters], [nodes], and [etcd] sections of the following example inventory file:

Single Master, Multiple etcd, and Multiple Nodes Inventory File

# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
master.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# host group for nodes, includes region info
[nodes]
master.example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.

2.6.4.2. Multiple Masters Examples

You can configure an environment with multiple masters, multiple etcd hosts, and multiple nodes. Configuring multiple masters for high availability (HA) ensures that the cluster has no single point of failure.

Note

Moving from a single master cluster to multiple masters after installation is not supported.

When configuring multiple masters, the advanced installation supports the following high availability (HA) method:

native

Leverages the native HA master capabilities built into OpenShift Container Platform and can be combined with any load balancing solution. If a host is defined in the [lb] section of the inventory file, Ansible installs and configures HAProxy automatically as the load balancing solution. If no host is defined, it is assumed you have pre-configured an external load balancing solution of your choice to balance the master API (port 8443) on all master hosts.

Note

This HAProxy load balancer is intended to demonstrate the API server’s HA mode and is not recommended for production environments. If you are deploying to a cloud provider, Red Hat recommends deploying a cloud-native TCP-based load balancer or taking other steps to provide a highly available load balancer.

For an external load balancing solution, you must have:

  • A pre-created load balancer VIP configured for SSL passthrough.
  • A VIP listening on the port specified by the openshift_master_api_port and openshift_master_console_port values (8443 by default) and proxying back to all master hosts on that port.
  • A domain name for the VIP registered in DNS.

    • The domain name will become the value of both openshift_master_cluster_public_hostname and openshift_master_cluster_hostname in the OpenShift Container Platform installer.

See External Load Balancer Integrations for more information.
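
For illustration only, a minimal set of inventory variables for a pre-configured external load balancer omits the [lb] group and points both cluster host names at the DNS name of the VIP. The host names below are placeholders, and the port values show the defaults that the VIP must listen on:

[OSEv3:vars]
openshift_master_cluster_method=native
# Both names are DNS records that resolve to the external load balancer VIP
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com
# Defaults shown explicitly; the VIP must listen on and proxy this port
openshift_master_api_port=8443
openshift_master_console_port=8443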

Note

For more on the high availability master architecture, see Kubernetes Infrastructure.

Note the following when using the native HA method:

  • The advanced installation method does not currently support multiple HAProxy load balancers in an active-passive setup. See the Load Balancer Administration documentation for post-installation amendments.
  • In an HAProxy setup, controller manager servers run as standalone processes. They elect their active leader with a lease stored in etcd. The lease expires after 30 seconds by default. If a failure happens on the active controller server, it can take up to this number of seconds to elect another leader. The interval can be configured with the osm_controller_lease_ttl variable.

To configure multiple masters, refer to the following section.

Multiple Masters with Multiple etcd

The following describes an example environment for three masters, one HAProxy load balancer, three etcd hosts, and two nodes using the native HA method:

Host Name             Infrastructure Component to Install

master1.example.com   Master (clustered using native HA) and node
master2.example.com   Master (clustered using native HA) and node
master3.example.com   Master (clustered using native HA) and node
lb.example.com        HAProxy to load balance API master endpoints
etcd1.example.com     etcd
etcd2.example.com     etcd
etcd3.example.com     etcd
node1.example.com     Node
node2.example.com     Node

Note

When specifying multiple etcd hosts, stand-alone etcd (non-embedded) is installed and configured. Clustering of OpenShift Container Platform’s embedded etcd is not supported. Stand-alone etcd can also be collocated on master hosts, if desired.

You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the following example inventory file:

Example 2.2. Multiple Masters Using HAProxy Inventory File

# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# apply updated node defaults
openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}

# override the default controller lease ttl
#osm_controller_lease_ttl=30

# enable ntp on masters to ensure proper failover
openshift_clock_enabled=true

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
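
Before running the installer against this inventory, you can optionally confirm that the cluster host names resolve to the load balancer. This check is not part of the documented procedure; dig is provided by the bind-utils package, and the host names match the example variables above:

# dig +short openshift-internal.example.com
# dig +short openshift-cluster.example.com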

Multiple Masters with Master and etcd on the Same Host

The following describes an example environment for three masters with etcd on each host, one HAProxy load balancer, and two nodes using the native HA method:

Host Name             Infrastructure Component to Install

master1.example.com   Master (clustered using native HA) and node, with etcd on each host
master2.example.com   Master (clustered using native HA) and node, with etcd on each host
master3.example.com   Master (clustered using native HA) and node, with etcd on each host
lb.example.com        HAProxy to load balance API master endpoints
node1.example.com     Node
node2.example.com     Node

You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the following example inventory file:

# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# override the default controller lease ttl
#osm_controller_lease_ttl=30

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
master1.example.com
master2.example.com
master3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.

2.6.5. Running the Advanced Installation

After you have finished configuring Ansible by defining your own inventory file in /etc/ansible/hosts or modifying one of the example inventories, follow these steps to run the advanced installation.

Note

Due to a known issue, after running the installation, if NFS volumes are provisioned for any component, the following directories might be created whether their components are being deployed to NFS volumes or not:

  • /exports/logging-es
  • /exports/logging-es-ops/
  • /exports/metrics/
  • /exports/prometheus
  • /exports/prometheus-alertbuffer/
  • /exports/prometheus-alertmanager/

You can delete these directories after installation, as needed.

  1. If you are using a proxy, you must add the IP address of the etcd endpoints to the openshift_no_proxy cluster variable in your inventory file.

    Note

    If you are not using a proxy, you can skip this step.

    In OpenShift Container Platform 3.4, the master connected to the etcd cluster using the host name of the etcd endpoints. In OpenShift Container Platform 3.5, the master now connects to etcd via IP address.

    When configuring a cluster to use proxy settings (see Configuring Global Proxy Options), this change causes the master-to-etcd connection to be proxied as well, rather than being excluded by host name in each host’s NO_PROXY setting (see Working with HTTP Proxies for more about NO_PROXY).

    To work around this issue, set the following:

    openshift_no_proxy=<ip_address>

    Use the IP address or host name that the master uses to contact the etcd cluster as the <ip_address>. If you also need to specify a port, use 2379 if you are using standalone etcd (clustered) or 4001 for embedded etcd (single master, non-clustered etcd). The installer will be updated in a future release to handle this scenario automatically during installation and upgrades (BZ#1466783).
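
    For illustration only, if the three etcd hosts from the earlier examples had the hypothetical addresses 10.0.0.11, 10.0.0.12, and 10.0.0.13, the variable might look like the following:

    openshift_no_proxy='10.0.0.11,10.0.0.12,10.0.0.13'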

    Important

    Do not run OpenShift Ansible playbooks under nohup. Using nohup with the playbooks causes file descriptors to be created and not closed. Therefore, the system can run out of files to open and the playbook will fail.

  2. Run the advanced installation using the following playbook:

    # ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml

    If for any reason the installation fails, before re-running the installer, see Known Issues to check for any specific instructions or workarounds.
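
    If your inventory file is not at the default /etc/ansible/hosts location, you can pass its path with the -i option, as shown for the uninstall playbook later in this topic. The path below is a placeholder:

    # ansible-playbook -i /path/to/inventory \
        /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml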

Warning

The installer caches playbook configuration values for 10 minutes, by default. If you change any system, network, or inventory configuration, and then re-run the installer within that 10 minute period, the new values are not used, and the previous values are used instead. You can delete the contents of the cache, which is defined by the fact_caching_connection value in the /etc/ansible/ansible.cfg file. An example of this file is shown in Recommended Installation Practices.

  3. After the installation succeeds, continue to Verifying the Installation.

2.6.6. Verifying the Installation

After the installation completes:

  1. Verify that the master is started and nodes are registered and reporting in Ready status. On the master host, run the following as root:

    # oc get nodes
    
    NAME                        STATUS                     AGE
    master.example.com          Ready,SchedulingDisabled   165d
    node1.example.com           Ready                      165d
    node2.example.com           Ready                      165d
  2. To verify that the web console is installed correctly, use the master host name and the web console port number to access the web console with a web browser.

    For example, for a master host with a host name of master.openshift.com and using the default port of 8443, the web console would be found at https://master.openshift.com:8443/console.

  3. Now that the install has been verified, run the following command on each master and node host to add the atomic-openshift packages back to the list of yum excludes on the host:

    # atomic-openshift-excluder exclude

Note

The default port for the console is 8443. If this was changed during the installation, the port can be found at openshift_master_console_port in the /etc/ansible/hosts file.
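
In addition to browsing to the web console, you can optionally probe the master’s health endpoint from the command line. This is an extra check, not part of the documented steps; the host name matches the example above, and the endpoint returns ok when the master is healthy:

# curl -k https://master.openshift.com:8443/healthz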

Verifying Multiple etcd Hosts

If you installed multiple etcd hosts:

  1. First, ensure that the etcd package, which provides the etcdctl command, is installed:

    # yum install etcd
  2. On a master host, verify the etcd cluster health, substituting for the FQDNs of your etcd hosts in the following:

    # etcdctl -C \
        https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
        --ca-file=/etc/origin/master/master.etcd-ca.crt \
        --cert-file=/etc/origin/master/master.etcd-client.crt \
        --key-file=/etc/origin/master/master.etcd-client.key cluster-health
  3. Also verify the member list is correct:

    # etcdctl -C \
        https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
        --ca-file=/etc/origin/master/master.etcd-ca.crt \
        --cert-file=/etc/origin/master/master.etcd-client.crt \
        --key-file=/etc/origin/master/master.etcd-client.key member list
Verifying Multiple Masters Using HAProxy

If you installed multiple masters using HAProxy as a load balancer, browse to the following URL according to your [lb] section definition and check HAProxy’s status:

http://<lb_hostname>:9000

You can verify your installation by consulting the HAProxy Configuration documentation.

2.6.7. Optionally Securing Builds

Running docker build is a privileged process, so the container has more access to the node than might be considered acceptable in some multi-tenant environments. If you do not trust your users, you can use a more secure option at the time of installation. Disable Docker builds on the cluster and require that users build images outside of the cluster. See Securing Builds by Strategy for more information on this optional process.

2.6.8. Uninstalling OpenShift Container Platform

You can uninstall OpenShift Container Platform hosts in your cluster by running the uninstall.yml playbook. This playbook deletes OpenShift Container Platform content installed by Ansible, including:

  • Configuration
  • Containers
  • Default templates and image streams
  • Images
  • RPM packages

The playbook deletes content for any hosts defined in the inventory file that you specify when running it. If you want to uninstall OpenShift Container Platform across all hosts in your cluster, run the playbook using the inventory file that you used when you initially installed OpenShift Container Platform, or the one that you ran most recently:

# ansible-playbook [-i /path/to/file] \
    /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml

2.6.8.1. Uninstalling Nodes

You can also uninstall node components from specific hosts using the uninstall.yml playbook while leaving the remaining hosts and cluster alone:

Warning

This method should only be used when attempting to uninstall specific node hosts and not for specific masters or etcd hosts, which would require further configuration changes within the cluster.

  1. First follow the steps in Deleting Nodes to remove the node object from the cluster, then continue with the remaining steps in this procedure.
  2. Create a different inventory file that only references those hosts. For example, to only delete content from one node:

    [OSEv3:children]
    nodes 1
    
    [OSEv3:vars]
    ansible_ssh_user=root
    deployment_type=openshift-enterprise
    
    [nodes]
    node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}" 2
    1
    Only include the sections that pertain to the hosts you are interested in uninstalling.
    2
    Only include hosts that you want to uninstall.
  3. Specify that new inventory file using the -i option when running the uninstall.yml playbook:

    # ansible-playbook -i /path/to/new/file \
        /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml

When the playbook completes, all OpenShift Container Platform content should be removed from any specified hosts.

2.6.9. Known Issues

  • On failover in multiple master clusters, it is possible for the controller manager to overcorrect, which causes the system to run more pods than intended. However, this is a transient event and the system corrects itself over time. See https://github.com/kubernetes/kubernetes/issues/10030 for details.
  • On failure of the Ansible installer, you must start from a clean operating system installation. If you are using virtual machines, start from a fresh image. If you are using bare metal machines, see Uninstalling OpenShift Container Platform for instructions.

2.6.10. What’s Next?

Now that you have a working OpenShift Container Platform instance, you can continue with post-installation configuration of your cluster.

2.7. Disconnected Installation

2.7.1. Overview

Frequently, portions of a datacenter may not have access to the Internet, even via proxy servers. Installing OpenShift Container Platform in these environments is considered a disconnected installation.

An OpenShift Container Platform disconnected installation differs from a regular installation in two primary ways:

  • The OpenShift Container Platform software channels and repositories are not available via Red Hat’s content distribution network.
  • OpenShift Container Platform uses several containerized components. Normally, these images are pulled directly from Red Hat’s Docker registry. In a disconnected environment, this is not possible.

A disconnected installation ensures the OpenShift Container Platform software is made available to the relevant servers, then follows the same installation process as a standard connected installation. This topic additionally details how to manually download the container images and transport them onto the relevant servers.

Once installed, in order to use OpenShift Container Platform, you will need source code in a source control repository (for example, Git). This topic assumes that an internal Git repository is available that can host source code and this repository is accessible from the OpenShift Container Platform nodes. Installing the source control repository is outside the scope of this document.

Also, when building applications in OpenShift Container Platform, your build may have external dependencies, such as a Maven repository or Gem files for Ruby applications. For this reason, and because they might require certain tags, many of the Quickstart templates offered by OpenShift Container Platform may not work in a disconnected environment. However, while Red Hat container images try to reach out to external repositories by default, you can configure OpenShift Container Platform to use your own internal repositories. For the purposes of this document, we assume that such internal repositories already exist and are accessible from the OpenShift Container Platform node hosts. Installing such repositories is outside the scope of this document.

Note

You can also have a Red Hat Satellite server that provides access to Red Hat content via an intranet or LAN. For environments with Satellite, you can synchronize the OpenShift Container Platform software onto the Satellite for use with the OpenShift Container Platform servers.

Red Hat Satellite 6.1 also introduces the ability to act as a Docker registry, and it can be used to host the OpenShift Container Platform containerized components. Doing so is outside of the scope of this document.

2.7.2. Prerequisites

This document assumes that you understand OpenShift Container Platform’s overall architecture and that you have already planned out what the topology of your environment will look like.

2.7.3. Required Software and Components

In order to pull down the required software repositories and container images, you will need a Red Hat Enterprise Linux (RHEL) 7 server with access to the Internet and at least 100GB of additional free space. All steps in this section should be performed on the Internet-connected server as the root system user.

2.7.3.1. Syncing Repositories

Before you sync with the required repositories, you may need to import the appropriate GPG key:

# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

If the key is not imported, the indicated package is deleted after syncing the repository.

To sync the required repositories:

  1. Register the server with the Red Hat Customer Portal. You must use the login and password associated with the account that has access to the OpenShift Container Platform subscriptions:

    # subscription-manager register
  2. Attach to a subscription that provides OpenShift Container Platform channels. You can find the list of available subscriptions using:

    # subscription-manager list --available --matches '*OpenShift*'

    Then, find the pool ID for the subscription that provides OpenShift Container Platform, and attach it:

    # subscription-manager attach --pool=<pool_id>
    # subscription-manager repos --disable="*"
    # subscription-manager repos \
        --enable="rhel-7-server-rpms" \
        --enable="rhel-7-server-extras-rpms" \
        --enable="rhel-7-fast-datapath-rpms" \
        --enable="rhel-7-server-ose-3.5-rpms"
  3. The yum-utils package provides the reposync utility, which lets you mirror yum repositories, and the createrepo package can create a usable yum repository from a directory:

    # yum -y install yum-utils createrepo docker git

    You will need up to 110GB of free space in order to sync the software. Depending on how restrictive your organization’s policies are, you could re-connect this server to the disconnected LAN and use it as the repository server. Alternatively, you could use USB-connected storage and transport the software to another server that will act as the repository server. This topic covers both options.

  4. Make a path to where you want to sync the software (either locally or on your USB or other device):

    # mkdir -p </path/to/repos>
  5. Sync the packages and create the repository for each of them. You will need to modify the command for the appropriate path you created above:

    # for repo in \
    rhel-7-server-rpms \
    rhel-7-server-extras-rpms \
    rhel-7-fast-datapath-rpms \
    rhel-7-server-ose-3.5-rpms
    do
      reposync --gpgcheck -lm --repoid=${repo} --download_path=</path/to/repos>
      createrepo -v </path/to/repos/>${repo} -o </path/to/repos/>${repo}
    done

2.7.3.2. Syncing Images

To sync the container images:

  1. Start the Docker daemon:

    # systemctl start docker
  2. Pull all of the required OpenShift Container Platform containerized components. Replace <tag> with v3.5.5.31.66 for the latest version.

    # docker pull registry.access.redhat.com/openshift3/ose-f5-router:<tag>
    # docker pull registry.access.redhat.com/openshift3/ose-deployer:<tag>
    # docker pull registry.access.redhat.com/openshift3/ose:<tag>
    # docker pull registry.access.redhat.com/openshift3/ose-egress-router:<tag>
    # docker pull registry.access.redhat.com/openshift3/ose-keepalived-ipfailover:<tag>
    # docker pull registry.access.redhat.com/openshift3/openvswitch:<tag>
    # docker pull registry.access.redhat.com/openshift3/ose-recycler:<tag>
    # docker pull registry.access.redhat.com/openshift3/ose-docker-builder:<tag>
    # docker pull registry.access.redhat.com/openshift3/ose-docker-registry:<tag>
    # docker pull registry.access.redhat.com/openshift3/ose-haproxy-router:<tag>
    # docker pull registry.access.redhat.com/openshift3/node:<tag>
    # docker pull registry.access.redhat.com/openshift3/ose-pod:<tag>
    # docker pull registry.access.redhat.com/openshift3/ose-sti-builder:<tag>
    Note

    If you are using NFS, you need the ose-recycler image. Otherwise, the volumes will not recycle, potentially causing errors.

  3. Pull all of the required OpenShift Container Platform containerized components for the additional centralized log aggregation and metrics aggregation components. Replace <tag> with v3.5 for the latest version.

    # docker pull registry.access.redhat.com/openshift3/logging-curator:<tag>
    # docker pull registry.access.redhat.com/openshift3/logging-elasticsearch:<tag>
    # docker pull registry.access.redhat.com/openshift3/logging-kibana:<tag>
    # docker pull registry.access.redhat.com/openshift3/metrics-deployer:<tag>
    # docker pull registry.access.redhat.com/openshift3/metrics-hawkular-openshift-agent:<tag>
    # docker pull registry.access.redhat.com/openshift3/logging-auth-proxy:<tag>
    # docker pull registry.access.redhat.com/openshift3/logging-deployer:<tag>
    # docker pull registry.access.redhat.com/openshift3/logging-fluentd:<tag>
    # docker pull registry.access.redhat.com/openshift3/metrics-cassandra:<tag>
    # docker pull registry.access.redhat.com/openshift3/metrics-hawkular-metrics:<tag>
    # docker pull registry.access.redhat.com/openshift3/metrics-heapster:<tag>
  4. Pull the Red Hat-certified Source-to-Image (S2I) builder images that you intend to use in your OpenShift environment. You can pull the following images:

    # docker pull registry.access.redhat.com/jboss-amq-6/amq63-openshift
    # docker pull registry.access.redhat.com/jboss-datagrid-7/datagrid71-openshift
    # docker pull registry.access.redhat.com/jboss-datagrid-7/datagrid71-client-openshift
    # docker pull registry.access.redhat.com/jboss-datavirt-6/datavirt63-openshift
    # docker pull registry.access.redhat.com/jboss-datavirt-6/datavirt63-driver-openshift
    # docker pull registry.access.redhat.com/jboss-decisionserver-6/decisionserver64-openshift
    # docker pull registry.access.redhat.com/jboss-processserver-6/processserver64-openshift
    # docker pull registry.access.redhat.com/jboss-eap-6/eap64-openshift
    # docker pull registry.access.redhat.com/jboss-eap-7/eap70-openshift
    # docker pull registry.access.redhat.com/jboss-webserver-3/webserver31-tomcat7-openshift
    # docker pull registry.access.redhat.com/jboss-webserver-3/webserver31-tomcat8-openshift
    # docker pull registry.access.redhat.com/openshift3/jenkins-1-rhel7
    # docker pull registry.access.redhat.com/openshift3/jenkins-2-rhel7
    # docker pull registry.access.redhat.com/openshift3/jenkins-slave-base-rhel7
    # docker pull registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7
    # docker pull registry.access.redhat.com/openshift3/jenkins-slave-nodejs-rhel7
    # docker pull registry.access.redhat.com/rhscl/mongodb-32-rhel7
    # docker pull registry.access.redhat.com/rhscl/mysql-57-rhel7
    # docker pull registry.access.redhat.com/rhscl/perl-524-rhel7
    # docker pull registry.access.redhat.com/rhscl/php-56-rhel7
    # docker pull registry.access.redhat.com/rhscl/postgresql-95-rhel7
    # docker pull registry.access.redhat.com/rhscl/python-35-rhel7
    # docker pull registry.access.redhat.com/redhat-sso-7/sso70-openshift
    # docker pull registry.access.redhat.com/rhscl/ruby-24-rhel7
    # docker pull registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift
    # docker pull registry.access.redhat.com/redhat-sso-7/sso71-openshift
    # docker pull registry.access.redhat.com/rhscl/nodejs-4-rhel7
    # docker pull registry.access.redhat.com/rhscl/mariadb-101-rhel7

    Make sure to indicate the correct tag specifying the desired version number. For example, to pull both the previous and latest version of the Tomcat image:

    # docker pull \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:latest
    # docker pull \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:1.1

    See the S2I table in the OpenShift and Atomic Platform Tested Integrations page for details about OpenShift image version compatibility.

  5. If you are using a stand-alone registry or plan to enable the registry console with the integrated registry, you must pull the registry-console image.

    Replace <tag> with 3.5 for the latest version.

    # docker pull registry.access.redhat.com/openshift3/registry-console:<tag>

2.7.3.3. Preparing Images for Export

Container images can be exported from a system by first saving them to a tarball and then transporting them:

  1. Make and change into a repository home directory:

    # mkdir </path/to/repos/images>
    # cd </path/to/repos/images>
  2. Export the OpenShift Container Platform containerized components:

    # docker save -o ose3-images.tar \
        registry.access.redhat.com/openshift3/ose-f5-router \
        registry.access.redhat.com/openshift3/ose-deployer \
        registry.access.redhat.com/openshift3/ose \
        registry.access.redhat.com/openshift3/ose-egress-router \
        registry.access.redhat.com/openshift3/ose-keepalived-ipfailover \
        registry.access.redhat.com/openshift3/openvswitch \
        registry.access.redhat.com/openshift3/ose-recycler \
        registry.access.redhat.com/openshift3/ose-docker-builder \
        registry.access.redhat.com/openshift3/ose-docker-registry \
        registry.access.redhat.com/openshift3/ose-haproxy-router \
        registry.access.redhat.com/openshift3/node \
        registry.access.redhat.com/openshift3/ose-pod \
        registry.access.redhat.com/openshift3/ose-sti-builder \
        registry.access.redhat.com/openshift3/registry-console
  3. If you synchronized the metrics and log aggregation images, export:

    # docker save -o ose3-logging-metrics-images.tar \
        registry.access.redhat.com/openshift3/logging-curator:<tag> \
        registry.access.redhat.com/openshift3/logging-elasticsearch:<tag> \
        registry.access.redhat.com/openshift3/logging-kibana:<tag> \
        registry.access.redhat.com/openshift3/metrics-deployer:<tag> \
        registry.access.redhat.com/openshift3/metrics-hawkular-openshift-agent:<tag> \
        registry.access.redhat.com/openshift3/logging-auth-proxy:<tag> \
        registry.access.redhat.com/openshift3/logging-deployer:<tag> \
        registry.access.redhat.com/openshift3/logging-fluentd:<tag> \
        registry.access.redhat.com/openshift3/metrics-cassandra:<tag> \
        registry.access.redhat.com/openshift3/metrics-hawkular-metrics:<tag> \
        registry.access.redhat.com/openshift3/metrics-heapster:<tag>
  4. Export the S2I builder images that you synced in the previous section. For example, if you synced only the Jenkins and Tomcat images:

    # docker save -o ose3-builder-images.tar \
        registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:latest \
        registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:1.1 \
        registry.access.redhat.com/openshift3/jenkins-1-rhel7 \
        registry.access.redhat.com/openshift3/jenkins-2-rhel7 \
        registry.access.redhat.com/openshift3/jenkins-slave-base-rhel7 \
        registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7 \
        registry.access.redhat.com/openshift3/jenkins-slave-nodejs-rhel7
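
Before transporting the tarballs, you can optionally confirm that they were written and note their sizes. The path is the same placeholder used earlier in this section:

# ls -lh </path/to/repos/images>/*.tar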

2.7.4. Repository Server

During the installation (and for later updates, should you so choose), you will need a webserver to host the repositories. RHEL 7 can provide the Apache webserver.

Option 1: Re-configuring as a Web server

If you can re-connect the server where you synchronized the software and images to your LAN, then you can simply install Apache on the server:

# yum install httpd

Skip to Placing the Software.

Option 2: Building a Repository Server

If you need to build a separate server to act as the repository server, install a new RHEL 7 system with at least 110GB of space. During installation of this repository server, make sure that you select the Basic Web Server option.

2.7.4.1. Placing the Software

  1. If necessary, attach the external storage, and then copy the repository files into Apache’s root folder. Note that the copy step below (cp -a) should be replaced with a move (mv) if you are repurposing the server that you used to sync the software:

    # cp -a /path/to/repos /var/www/html/
    # chmod -R +r /var/www/html/repos
    # restorecon -vR /var/www/html
  2. Add the firewall rules:

    # firewall-cmd --permanent --add-service=http
    # firewall-cmd --reload
  3. Enable and start Apache for the changes to take effect:

    # systemctl enable httpd
    # systemctl start httpd
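
From another host on the LAN, you can optionally confirm that the repositories are being served before you continue. This check is not part of the documented procedure; createrepo generates the repodata/repomd.xml file for each repository, and <server_IP> is the same placeholder used in the repository definitions later in this topic:

# curl http://<server_IP>/repos/rhel-7-server-ose-3.5-rpms/repodata/repomd.xml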

2.7.5. OpenShift Container Platform Systems

2.7.5.1. Building Your Hosts

At this point you can perform the initial creation of the hosts that will be part of the OpenShift Container Platform environment. It is recommended to use the latest version of RHEL 7 and to perform a minimal installation. You will also want to pay attention to the other OpenShift Container Platform-specific prerequisites.

Once the hosts are initially built, the repositories can be set up.

2.7.5.2. Connecting the Repositories

On all of the relevant systems that will need OpenShift Container Platform software components, create the required repository definitions. Place the following text in the /etc/yum.repos.d/ose.repo file, replacing <server_IP> with the IP or host name of the Apache server hosting the software repositories:

[rhel-7-server-rpms]
name=rhel-7-server-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-rpms
enabled=1
gpgcheck=0
[rhel-7-server-extras-rpms]
name=rhel-7-server-extras-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-extras-rpms
enabled=1
gpgcheck=0
[rhel-7-fast-datapath-rpms]
name=rhel-7-fast-datapath-rpms
baseurl=http://<server_IP>/repos/rhel-7-fast-datapath-rpms
enabled=1
gpgcheck=0
[rhel-7-server-ose-3.5-rpms]
name=rhel-7-server-ose-3.5-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-ose-3.5-rpms
enabled=1
gpgcheck=0
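
After saving the file, you can optionally verify that the four repositories defined above are visible to yum on the host:

# yum repolist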

2.7.5.3. Host Preparation

At this point, the systems are ready to continue to be prepared following the OpenShift Container Platform documentation.

Skip the section titled Host Registration and start with Installing Base Packages.

2.7.6. Installing OpenShift Container Platform

2.7.6.1. Importing OpenShift Container Platform Containerized Components

To import the relevant components, securely copy the images from the connected host to the individual OpenShift Container Platform hosts:

# scp /var/www/html/repos/images/ose3-images.tar root@<openshift_host_name>:
# ssh root@<openshift_host_name> "docker load -i ose3-images.tar"

If you prefer, you could use wget on each OpenShift Container Platform host to fetch the tar file, and then run the docker load command locally. Perform the same steps for the metrics and logging images, if you synchronized them.
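
For example, a sketch of that wget-based alternative, run on each OpenShift Container Platform host and using the same <server_IP> placeholder as the repository definitions:

# wget http://<server_IP>/repos/images/ose3-images.tar
# docker load -i ose3-images.tar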

On the host that will act as an OpenShift Container Platform master, copy and import the builder images:

# scp /var/www/html/repos/images/ose3-builder-images.tar root@<openshift_master_host_name>:
# ssh root@<openshift_master_host_name> "docker load -i ose3-builder-images.tar"

2.7.6.2. Running the OpenShift Container Platform Installer

You can now choose to follow the quick or advanced OpenShift Container Platform installation instructions in the documentation.

2.7.6.3. Creating the Internal Docker Registry

You now need to create the internal Docker registry.

If you want to install a stand-alone registry, you must pull the registry-console container image and set deployment_subtype=registry in the inventory file.

2.7.7. Post-Installation Changes

In one of the previous steps, the S2I images were imported into the Docker daemon running on one of the OpenShift Container Platform master hosts. In a connected installation, these images would be pulled from Red Hat’s registry on demand. Since the Internet is not available to do this, the images must be made available in another Docker registry.

OpenShift Container Platform provides an internal registry for storing the images that are built as a result of the S2I process, but it can also be used to hold the S2I builder images. The following steps assume you did not customize the service IP subnet (172.30.0.0/16) or the Docker registry port (5000).

2.7.7.1. Re-tagging S2I Builder Images

  1. On the master host where you imported the S2I builder images, obtain the service address of your Docker registry that you installed on the master:

    # export REGISTRY=$(oc get service docker-registry --template='{{.spec.clusterIP}}{{"\n"}}')
  2. Next, tag all of the builder images that you synced and exported before pushing them into the OpenShift Container Platform Docker registry. For example, if you synced and exported only the Tomcat image:

    # docker tag \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:1.1 \
    $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:1.1
    # docker tag \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:latest \
    $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:1.2
    # docker tag \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:latest \
    $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:latest
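
    You can optionally list the local images to confirm that the new registry-prefixed tags were created alongside the originals:

    # docker images | grep "$REGISTRY:5000/openshift"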

2.7.7.2. Configuring a Registry Location

If you are using an image registry other than the default at registry.access.redhat.com, specify the desired registry within the /etc/ansible/hosts file.

oreg_url=example.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true

Depending on your registry, you may need to configure:

openshift_docker_additional_registries=example.com
openshift_docker_insecure_registries=example.com
Table 2.15. Registry Variables
VariablePurpose

oreg_url

Set to the alternate image location. Necessary if you are not using the default registry at registry.access.redhat.com.

openshift_examples_modify_imagestreams

Set to true if pointing to a registry other than the default. Modifies the image stream location to the value of oreg_url.

openshift_docker_additional_registries

Set openshift_docker_additional_registries to add its value in the add_registry line in /etc/sysconfig/docker. With add_registry, you can add your own registry to be used for Docker search and Docker pull. Use the add_registry option to list a set of registries, each prepended with the --add-registry flag. The first registry added is the first registry searched. For example, add_registry=--add-registry registry.access.redhat.com --add-registry example.com.

openshift_docker_insecure_registries

Set openshift_docker_insecure_registries to add its value in the insecure_registry line in /etc/sysconfig/docker. If you have a registry secured with HTTPS but do not have proper certificates distributed, you can tell Docker not to look for full authorization by adding the registry to the insecure_registry line and uncommenting it. For example, insecure_registry=--insecure-registry example.com.

2.7.7.3. Creating an Administrative User

Pushing the container images into OpenShift Container Platform’s Docker registry requires a user with cluster-admin privileges. Because the default OpenShift Container Platform system administrator does not have a standard authorization token, that account cannot be used to log in to the Docker registry.

To create an administrative user:

  1. Create a new user account in the authentication system you are using with OpenShift Container Platform. For example, if you are using local htpasswd-based authentication:

    # htpasswd -b /etc/openshift/openshift-passwd <admin_username> <password>
  2. The external authentication system now has a user account, but a user must log in to OpenShift Container Platform before an account is created in the internal database. Log in to OpenShift Container Platform for this account to be created. This assumes you are using the self-signed certificates generated by OpenShift Container Platform during the installation:

    # oc login --certificate-authority=/etc/origin/master/ca.crt \
        -u <admin_username> https://<openshift_master_host>:8443
  3. Get the user’s authentication token:

    # MYTOKEN=$(oc whoami -t)
    # echo $MYTOKEN
    iwo7hc4XilD2KOLL4V1O55ExH2VlPmLD-W2-JOd6Fko

2.7.7.4. Modifying the Security Policies

  1. The oc login command in the previous step switched you to the new user. Switch back to the OpenShift Container Platform system administrator in order to make policy changes:

    # oc login -u system:admin
  2. In order to push images into the OpenShift Container Platform Docker registry, an account must have the image-builder security role. Add this to your OpenShift Container Platform administrative user:

    # oc adm policy add-role-to-user system:image-builder <admin_username>
  3. Next, add the administrative role to the user in the openshift project. This allows the administrative user to edit the openshift project, and, in this case, push the container images:

    # oc adm policy add-role-to-user admin <admin_username> -n openshift

2.7.7.5. Editing the Image Stream Definitions

The openshift project is where the installer creates all of the image streams for the builder images. The installer loads them from the /usr/share/openshift/examples directory. Change all of the definitions by deleting the image streams that were loaded into OpenShift Container Platform’s database, then re-creating them:

  1. Delete the existing image streams:

    # oc delete is -n openshift --all
  2. Make a backup of the files in /usr/share/openshift/examples/ if you desire. Next, edit the file image-streams-rhel7.json in the /usr/share/openshift/examples/image-streams folder. You will find an image stream section for each of the builder images. Edit the spec stanza to point to your internal Docker registry.

    For example, change:

    "spec": {
      "dockerImageRepository": "registry.access.redhat.com/rhscl/mongodb-26-rhel7",

    to:

    "spec": {
      "dockerImageRepository": "172.30.69.44:5000/openshift/mongodb-26-rhel7",

    In the above example, the repository namespace was changed from rhscl to openshift. You must make this change regardless of whether the original namespace is rhscl, openshift3, or another directory. Every definition should have the following format:

    <registry_ip>:5000/openshift/<image_name>

    Repeat this change for every image stream in the file. Ensure you use the correct IP address that you determined earlier. When you are finished, save and exit. Repeat the same process for the JBoss image streams in the /usr/share/openshift/examples/xpaas-streams/jboss-image-streams.json file.
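
    As an optional shortcut, and assuming the registry IP from the example above (172.30.69.44), you could make the substitution for the rhscl image streams with sed instead of editing each entry by hand. Repeat with the appropriate source namespace (openshift3, jboss-*, and so on) for the remaining image streams, and review the file afterward:

    # sed -i 's|registry.access.redhat.com/rhscl/|172.30.69.44:5000/openshift/|g' \
        /usr/share/openshift/examples/image-streams/image-streams-rhel7.json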

  3. Load the updated image stream definitions:

    # oc create -f /usr/share/openshift/examples/image-streams/image-streams-rhel7.json -n openshift
    # oc create -f /usr/share/openshift/examples/xpaas-streams/jboss-image-streams.json -n openshift

2.7.7.6. Loading the Container Images

At this point the system is ready to load the container images.

  1. Log in to the Docker registry using the token and registry service IP obtained earlier:

    # docker login -u adminuser -e adminuser@abc.com \
       -p $MYTOKEN $REGISTRY:5000
  2. Push the Docker images:

    # docker push $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:1.1
    # docker push $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:1.2
    # docker push $REGISTRY:5000/openshift/webserver30-tomcat7-openshift:latest
  3. Verify that all the image streams now have the tags populated:

    # oc get imagestreams -n openshift
    NAME                                 DOCKER REPO                                                      TAGS                                     UPDATED
    jboss-webserver30-tomcat7-openshift  $REGISTRY/jboss-webserver-3/webserver30-jboss-tomcat7-openshift  1.1,1.1-2,1.1-6 + 2 more...              2 weeks ago
    ...

2.7.8. Installing a Router

At this point, the OpenShift Container Platform environment is almost ready for use. It is likely that you will want to install and configure a router.

2.8. Installing a Stand-alone Deployment of OpenShift Container Registry

2.8.1. About OpenShift Container Registry

OpenShift Container Platform is a fully-featured enterprise solution that includes an integrated container registry called OpenShift Container Registry (OCR). Alternatively, instead of deploying OpenShift Container Platform as a full PaaS environment for developers, you can install OCR as a stand-alone container registry to run on-premise or in the cloud.

When installing a stand-alone deployment of OCR, a cluster of masters and nodes is still installed, similar to a typical OpenShift Container Platform installation. Then, the container registry is deployed to run on the cluster. This stand-alone deployment option is useful for administrators that want a container registry, but do not require the full OpenShift Container Platform environment that includes the developer-focused web console and application build and deployment tools.

Note

OCR has replaced the upstream Atomic Registry project, which was a different implementation that used a non-Kubernetes deployment method, leveraging systemd and local configuration files to manage services.

Administrators may want to deploy a stand-alone OCR to manage a registry separately, for example one that supports multiple OpenShift Container Platform clusters. A stand-alone OCR also enables administrators to separate their registry to satisfy their own security or compliance requirements.

2.8.2. Minimum Hardware Requirements

Installing a stand-alone OCR has the following hardware requirements:

  • Physical or virtual system, or an instance running on a public or private IaaS.
  • Base OS: RHEL 7.3 with the "Minimal" installation option and the latest packages from the RHEL 7 Extras channel, or RHEL Atomic Host 7.3.2 or later. RHEL 7.2 is also supported using Docker 1.12 and its dependencies.
  • NetworkManager 1.0 or later
  • 2 vCPU.
  • Minimum 16 GB RAM.
  • Minimum 15 GB hard disk space for the file system containing /var/.
  • An additional minimum 15 GB unallocated space to be used for Docker’s storage back end; see Configuring Docker Storage for details.
Important

OpenShift Container Platform only supports servers with x86_64 architecture.

Note

Meeting the /var/ file system sizing requirements in RHEL Atomic Host requires making changes to the default configuration. See Managing Storage in Red Hat Enterprise Linux Atomic Host for instructions on configuring this during or after installation.

2.8.3. Supported System Topologies

The following system topologies are supported for stand-alone OCR:

All-in-one

A single host that includes the master, node, etcd, and registry components.

Multiple Masters (Highly-Available)

Three hosts with all components included on each (master, node, etcd, and registry), with the masters configured for native high-availability.

2.8.4. Host Preparation

Before installing stand-alone OCR, all of the same steps detailed in the Host Preparation topic for installing a full OpenShift Container Platform PaaS must be performed. This includes registering and subscribing the host(s) to the proper repositories, installing or updating certain packages, and setting up Docker and its storage requirements.

Follow the steps in the Host Preparation topic, then continue to Installation Methods.

2.8.5. Installation Methods

To install a stand-alone registry, use either of the standard installation methods (quick or advanced) used to install any variant of OpenShift Container Platform.

2.8.5.1. Quick Installation for Stand-alone OpenShift Container Registry

When using the quick installation method to install stand-alone OCR, start the interactive installation by running:

$ atomic-openshift-installer install

Then follow the on-screen instructions to install a new registry. The installation questions will be largely the same as if you were installing a full OpenShift Container Platform PaaS, but when you reach the following screen:

Which variant would you like to install?


(1) OpenShift Container Platform
(2) Registry

Be sure to choose 2 to follow the registry installation path.

Note

For further usage details on the quick installer in general, see the full topic at Quick Installation.

2.8.5.2. Advanced Installation for Stand-alone OpenShift Container Registry

When using the advanced installation method to install stand-alone OCR, use the same steps for installing a full OpenShift Container Platform PaaS using Ansible described in the full Advanced Installation topic. The main difference is that you must set deployment_subtype=registry in the inventory file within the [OSEv3:vars] section for the playbooks to follow the registry installation path.

See the following example inventory files for the different supported system topologies:

All-in-one Stand-alone OpenShift Container Registry Inventory File

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

openshift_master_default_subdomain=apps.test.example.com

# If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true

deployment_type=openshift-enterprise
deployment_subtype=registry 1

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
registry.example.com

# host group for nodes, includes region info
[nodes]
registry.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true 2

1
Set deployment_subtype=registry to ensure installation of stand-alone OCR and not a full OpenShift Container Platform environment.
2
Set openshift_schedulable=true on the node entry to make the single node schedulable for pod placement.

Multiple Masters (Highly-Available) Stand-alone OpenShift Container Registry Inventory File

# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise
deployment_subtype=registry 1

openshift_master_default_subdomain=apps.test.example.com

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# apply updated node defaults
openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}

# override the default controller lease ttl
#osm_controller_lease_ttl=30

# enable ntp on masters to ensure proper failover
openshift_clock_enabled=true

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"

1
Set deployment_subtype=registry to ensure installation of stand-alone OCR and not a full OpenShift Container Platform environment.

After you have configured Ansible by defining an inventory file in /etc/ansible/hosts, you can run the advanced installation using the following playbook:

# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
Note

For more detailed usage information on the advanced installation method, including a comprehensive list of available Ansible variables, see the full topic at Advanced Installation.
