Chapter 1. Planning your installation
You install OpenShift Container Platform by running a series of Ansible playbooks. As you prepare to install your cluster, you create an inventory file that represents your environment and OpenShift Container Platform cluster configuration. While familiarity with Ansible might make this process easier, it is not required.
You can read more about Ansible and its basic usage in the official documentation.
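For context, the inventory file is a plain-text, INI-style file that groups your hosts and sets cluster-wide variables. The following fragment is only a sketch of its overall shape; the variable values are placeholders, and the complete set of options is covered in the host configuration documentation.

```
# Sketch of the overall shape of an inventory file; values are placeholders.
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
# User with SSH access (and root privileges) on every host.
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

# Fill in the host groups according to the environment scenario you choose;
# see the examples in the Environment Scenarios section.
[masters]
[etcd]
[nodes]
```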
1.1. Initial planning
Before you install your production OpenShift Container Platform cluster, you need answers to the following questions:
- Do your on-premise servers use IBM POWER or x86_64 processors? You can install OpenShift Container Platform on servers that use either type of processor. If you use POWER servers, review the Limitations and Considerations for Installations on IBM POWER.
- How many pods are required in your cluster? The Sizing Considerations section provides limits for nodes and pods so you can calculate how large your environment needs to be.
- How many hosts do you require in the cluster? The Environment Scenarios section provides multiple examples of Single Master and Multiple Master configurations.
- Do you need a high availability cluster? High availability configurations improve fault tolerance. In this situation, you might use the Multiple Masters Using Native HA example to set up your environment.
- Is cluster monitoring required? The monitoring stack requires additional system resources. Note that the monitoring stack is installed by default. See the cluster monitoring documentation for more information.
- Do you want to use Red Hat Enterprise Linux (RHEL) or RHEL Atomic Host as the operating system for your cluster nodes? If you install OpenShift Container Platform on RHEL, you use an RPM-based installation. On RHEL Atomic Host, you use a system container. Both installation types provide a working OpenShift Container Platform environment.
- Which identity provider do you use for authentication? If you already use a supported identity provider, configure OpenShift Container Platform to use that identity provider during installation. See the example inventory variables after this list.
- Is your installation supported if you integrate it with other technologies? See the OpenShift Container Platform Tested Integrations for a list of tested integrations.
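For example, if you plan to reuse an existing identity provider, you can declare it in the inventory before you run the playbooks. The following sketch assumes the HTPasswd identity provider; substitute the provider definition for your environment as described in the configuring authentication documentation.

```
# Sketch: declare an HTPasswd identity provider in the inventory
# (assumed example; adjust to the identity provider you actually use).
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# Optional placeholder user; the value is an htpasswd-style hashed password.
openshift_master_htpasswd_users={'developer': '<hashed_password>'}
```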
1.1.1. Limitations and Considerations for Installations on IBM POWER
As of version 3.10.45, you can install OpenShift Container Platform on IBM POWER servers.
- Your cluster must use only Power nodes and masters. Because of the way that images are tagged, OpenShift Container Platform cannot differentiate between x86 images and Power images.
- Image streams and templates are not installed by default or updated when you upgrade. You can manually install and update the image streams.
- You can install only on on-premise Power servers. You cannot install OpenShift Container Platform on nodes in any cloud provider.
Not all storage providers are supported. You can use only the following storage providers (a sample NFS configuration follows the list):
- GlusterFS
- NFS
- Local storage
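As an illustration, storage is typically declared through inventory variables. The sketch below shows one common case, NFS-backed storage for the integrated registry; the variable names follow the example inventory that ships with the installer playbooks, and the export path and size are placeholders.

```
# Sketch: NFS-backed storage for the integrated registry (placeholder values).
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
```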
1.2. Sizing considerations
Determine how many nodes and pods you require for your OpenShift Container Platform cluster. Cluster scalability correlates to the number of pods in a cluster environment. That number influences the other numbers in your setup. See Cluster Limits for the latest limits for objects in OpenShift Container Platform.
1.3. Environment scenarios
Use these environment scenarios to help plan your OpenShift Container Platform cluster based on your sizing needs.
Moving from a single master cluster to multiple masters after installation is not supported.
In all environments, if your etcd hosts are co-located with master hosts, etcd runs as a static pod on the host. If your etcd hosts are not co-located with master hosts, they run etcd as standalone processes.
If you use RHEL Atomic Host, you can configure etcd on only master hosts.
1.3.1. Single master and node on one system
You can install OpenShift Container Platform on a single system for only a development environment. You cannot use an all-in-one environment as a production environment.
1.3.2. Single master and multiple nodes
The following table describes an example environment for a single master (with etcd installed on the same host) and two nodes:
| Host Name | Infrastructure Component to Install |
| --- | --- |
| master.example.com | Master, etcd, and node |
| node1.example.com | Node |
| node2.example.com | Node |
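An inventory for this layout might define its host groups along the following lines. This is a sketch; the openshift_node_group_name values are the default node group names, which you adjust to your own node configuration.

```
# Sketch: host groups for a single master (with etcd) and two nodes.
[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com openshift_node_group_name='node-config-master'
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
```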
1.3.3. Multiple masters using native HA
The following table describes an example environment for three masters, one HAProxy load balancer, and two nodes using the native HA method. etcd runs as static pods on the master nodes:
Routers and master nodes must be load balanced to have a highly available and fault-tolerant environment. Red Hat recommends the use of an enterprise-grade external load balancer for production environments. This load balancing applies to the masters and to the nodes that run the OpenShift Container Platform routers. Transmission Control Protocol (TCP) layer 4 load balancing, in which the load is spread across IP addresses, is recommended. See External Load Balancer Integrations with OpenShift Enterprise 3 for a reference design, but note that this design is not recommended for production use cases.
| Host Name | Infrastructure Component to Install |
| --- | --- |
| master1.example.com | Master (clustered using native HA) and node and clustered etcd |
| master2.example.com | Master (clustered using native HA) and node and clustered etcd |
| master3.example.com | Master (clustered using native HA) and node and clustered etcd |
| lb.example.com | HAProxy to load balance API master endpoints |
| node1.example.com | Node |
| node2.example.com | Node |
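A sketch of the host groups and HA-related variables for this layout follows. The cluster host names are placeholders, and the lb.example.com host is listed in an [lb] group so that the playbooks configure HAProxy on it.

```
# Sketch: native HA with etcd co-located on the masters and an HAProxy load balancer.
[OSEv3:children]
masters
nodes
etcd
lb

[OSEv3:vars]
openshift_master_cluster_method=native
# Placeholder cluster host names; point these at your load balancer.
openshift_master_cluster_hostname=internal-openshift.example.com
openshift_master_cluster_public_hostname=openshift.example.com

[masters]
master1.example.com
master2.example.com
master3.example.com

[etcd]
master1.example.com
master2.example.com
master3.example.com

[lb]
lb.example.com

[nodes]
master[1:3].example.com openshift_node_group_name='node-config-master'
node[1:2].example.com openshift_node_group_name='node-config-compute'
```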
1.3.4. Multiple Masters Using Native HA with External Clustered etcd
The following table describes an example environment for three masters, one HAProxy load balancer, three external clustered etcd hosts, and two nodes using the native HA method:
| Host Name | Infrastructure Component to Install |
| --- | --- |
| master1.example.com | Master (clustered using native HA) and node |
| master2.example.com | Master (clustered using native HA) and node |
| master3.example.com | Master (clustered using native HA) and node |
| lb.example.com | HAProxy to load balance API master endpoints |
| etcd1.example.com | Clustered etcd |
| etcd2.example.com | Clustered etcd |
| etcd3.example.com | Clustered etcd |
| node1.example.com | Node |
| node2.example.com | Node |
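Compared to the previous layout, the only inventory difference in this sketch is that the [etcd] group lists the dedicated etcd hosts instead of the masters:

```
# Sketch: dedicated, clustered etcd hosts (placeholder host names).
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com
```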
1.3.5. Stand-alone registry
You can also install OpenShift Container Platform to act as a stand-alone registry using the OpenShift Container Platform’s integrated registry. See Installing a Stand-alone Registry for details on this scenario.
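The stand-alone registry scenario is driven by the same playbooks and is selected in the inventory. A minimal sketch, assuming the deployment_subtype variable described in the stand-alone registry documentation:

```
# Sketch: select the stand-alone registry deployment variant.
[OSEv3:vars]
openshift_deployment_type=openshift-enterprise
deployment_subtype=registry
```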
1.4. Installation types for supported operating systems
Starting in OpenShift Container Platform 3.10, if you use RHEL as the underlying OS for a host, the RPM method is used to install OpenShift Container Platform components on that host. If you use RHEL Atomic Host, the system container method is used on that host. Either installation type provides the same functionality for the cluster, but the operating system you use determines how you manage services and host updates.
As of OpenShift Container Platform 3.10, the containerized installation method is no longer supported on Red Hat Enterprise Linux systems.
An RPM installation installs all services through package management and configures services to run in the same user space, while a system container installation installs services using system container images and runs separate services in individual containers.
When using RPMs on RHEL, all services are installed and updated by package management from an outside source. These packages modify a host’s existing configuration in the same user space. With system container installations on RHEL Atomic Host, each component of OpenShift Container Platform is shipped as a container, in a self-contained package, that uses the host’s kernel to run. Updated, newer containers replace any existing ones on your host.
The following table and sections outline further differences between the installation types:
|  | Red Hat Enterprise Linux | RHEL Atomic Host |
| --- | --- | --- |
| Installation Type | RPM-based | System container |
| Delivery Mechanism | RPM packages using yum | System container images using docker |
| Service Management | systemd | docker and systemd units |
1.4.1. Required images for system containers
The system container installation type makes use of the following images:
- openshift3/ose-node
By default, all of the above images are pulled from the Red Hat Registry at registry.redhat.io.
If you need to use a private registry to pull these images during the installation, you can specify the registry information ahead of time. Set the following Ansible variables in your inventory file, as required:
oreg_url='<registry_hostname>/openshift3/ose-${component}:${version}'
openshift_docker_insecure_registries=<registry_hostname>
openshift_docker_blocked_registries=<registry_hostname>
You can also set the openshift_docker_insecure_registries variable to the IP address of the host. 0.0.0.0/0 is not a valid setting.
The default component inherits the image prefix and version from the oreg_url value.
The configuration of additional, insecure, and blocked container registries occurs at the beginning of the installation process to ensure that these settings are applied before attempting to pull any of the required images.
1.4.2. systemd service names
The installation process creates the relevant systemd units, which you can use to start, stop, and poll services with normal systemctl commands. For system container installations, these unit names match those of an RPM installation.
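For example, on a node host the same unit name works on both operating systems; the following sketch uses the node service name from OpenShift Container Platform 3.10:

```
# Check and restart the node service; the unit name is the same for
# RPM-based and system container installations.
systemctl status atomic-openshift-node
systemctl restart atomic-openshift-node
```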
1.4.3. File path locations
All OpenShift Container Platform configuration files are placed in the same locations during a system container installation as during an RPM-based installation, and the files survive os-tree upgrades.
However, the default image stream and template files are installed at /etc/origin/examples/ for Atomic Host installations rather than the standard /usr/share/openshift/examples/ because that directory is read-only on RHEL Atomic Host.
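For example, to list the example image streams and templates on each operating system (a sketch; adjust to your installation):

```
# RHEL Atomic Host installations:
ls /etc/origin/examples/
# RHEL (RPM-based) installations:
ls /usr/share/openshift/examples/
```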
1.4.4. Storage requirements
RHEL Atomic Host installations normally have a very small root file system. However, the etcd, master, and node containers persist data in the /var/lib/ directory. Ensure that you have enough space on the root file system before installing OpenShift Container Platform. See the System Requirements section for details.
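For example, before you install you can confirm how much free space the file system that backs /var/lib/ has:

```
# Check available space on the file system that holds /var/lib/.
df -h /var/lib/
```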