Chapter 2. System and Environment Requirements
2.1. System Requirements
The following sections identify the hardware specifications and system-level requirements of all hosts within your OpenShift Container Platform environment.
2.1.1. Red Hat Subscriptions
You must have an active OpenShift Container Platform subscription on your Red Hat account to proceed. If you do not, contact your sales representative for more information.
2.1.2. Minimum Hardware Requirements
The system requirements vary per host type:
Host Type | Minimum Requirements |
---|---|
Masters | Physical or virtual system with a minimum of 2 vCPU and 16 GB of RAM, plus at least 40 GB of disk space for the file system containing /var/. |
Nodes | Physical or virtual system with a minimum of 1 vCPU and 8 GB of RAM, plus at least 15 GB of disk space for the file system containing /var/ and an additional 15 GB of unallocated space for the container runtime's storage back end. |
External etcd Nodes | A minimum of 20 GB of disk space for etcd data. |
Ansible Controller | The host that you run the Ansible playbook on must have at least 75 MiB of free memory per host in the inventory. |
Meeting the /var/ file system sizing requirements in RHEL Atomic Host requires making changes to the default configuration. See Managing Storage with Docker-formatted Containers for instructions on configuring this during or after installation.
The system’s temporary directory is determined according to the rules defined in the tempfile module in Python’s standard library.
You must configure storage for each system that runs a container daemon. For containerized installations, the masters need storage. Also, by default, the web console runs in containers on the masters, which need storage to run it. Containers run on the nodes, so the nodes always require storage. The size of storage depends on the workload, the number of containers, the size of the running containers, and the containers' storage requirements. Containerized etcd also needs container storage configured.
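For example, on RHEL 7 hosts you can dedicate a block device to container storage before the installation by using docker-storage-setup. The following is a minimal sketch, assuming /dev/vdb is an unused device and docker-vg is a free volume group name on your host:

# cat <<EOF > /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdb
VG=docker-vg
EOF
# docker-storage-setup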
2.1.3. Production Level Hardware Requirements
Test or sample environments function with the minimum requirements. For production environments, the following recommendations apply:
- Master Hosts
- In a highly available OpenShift Container Platform cluster with external etcd, a master host should have, in addition to the minimum requirements in the table above, 1 CPU core and 1.5 GB of memory for each 1000 pods. Therefore, the recommended size of a master host in an OpenShift Container Platform cluster of 2000 pods would be the minimum requirements of 2 CPU cores and 16 GB of RAM, plus 2 CPU cores and 3 GB of RAM, totaling 4 CPU cores and 19 GB of RAM.
A minimum of three etcd hosts and a load-balancer between the master hosts are required.
See Recommended Practices for OpenShift Container Platform Master Hosts for performance guidance.
- Node Hosts
- The size of a node host depends on the expected size of its workload. As an OpenShift Container Platform cluster administrator, you will need to calculate the expected workload, then add about 10 percent for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity.
For more information, see Sizing Considerations and Cluster Limits.
Oversubscribing the physical resources on a node affects resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping.
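For example, one common measure is to disable swap entirely on each node so that the scheduler's memory accounting stays accurate; this sketch assumes no other workload on the host relies on swap:

# swapoff -a

To make the change persistent, also remove or comment out any swap entries in the host's /etc/fstab.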
2.1.4. Storage management
Directory | Notes | Sizing | Expected Growth |
---|---|---|---|
/var/lib/openshift | Used for etcd storage only when in single master mode and etcd is embedded in the atomic-openshift-master process. | Less than 10 GB. | Will grow slowly with the environment. Only storing metadata. |
/var/lib/etcd | Used for etcd storage when in Multi-Master mode or when etcd is made standalone by an administrator. | Less than 20 GB. | Will grow slowly with the environment. Only storing metadata. |
/var/lib/docker | When the run time is docker, this is the mount point. Storage used for active container runtimes (including pods) and storage of local images (not used for registry storage). Mount point should be managed by docker-storage rather than manually. | 50 GB for a node with 16 GB of memory. An additional 20-25 GB for every additional 8 GB of memory. | Growth is limited by the capacity for running containers. |
/var/lib/containers | When the run time is CRI-O, this is the mount point. Storage used for active container runtimes (including pods) and storage of local images (not used for registry storage). | 50 GB for a node with 16 GB of memory. An additional 20-25 GB for every additional 8 GB of memory. | Growth is limited by the capacity for running containers. |
/var/lib/origin/openshift.local.volumes | Ephemeral volume storage for pods. This includes anything external that is mounted into a container at runtime. Includes environment variables, kube secrets, and data volumes not backed by persistent volumes (PVs). | Varies | Minimal if pods requiring storage are using persistent volumes. If using ephemeral storage, this can grow quickly. |
/var/log | Log files for all components. | 10 to 30 GB. | Log files can grow quickly; size can be managed by growing disks or by using log rotation. |
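To compare a host's actual consumption against these sizing guidelines, you can check the mount points and directories directly; a quick sketch (omit any paths that do not exist on your host, depending on the container runtime in use):

# df -h /var
# du -sh /var/lib/etcd /var/lib/docker /var/log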
2.1.5. Red Hat Gluster Storage Hardware Requirements
Any nodes used in a converged mode or independent mode cluster are considered storage nodes. Storage nodes can be grouped into distinct cluster groups, though a single node cannot be in multiple groups. For each group of storage nodes:
- A minimum of three storage nodes per group is required.
- Each storage node must have a minimum of 8 GB of RAM. This is to allow running the Red Hat Gluster Storage pods, as well as other applications and the underlying operating system.
- Each GlusterFS volume also consumes memory on every storage node in its storage cluster, which is about 30 MB. The total amount of RAM should be determined based on how many concurrent volumes are desired or anticipated.
- Each storage node must have at least one raw block device with no present data or metadata. These block devices will be used in their entirety for GlusterFS storage. Make sure the following are not present:
- Partition tables (GPT or MSDOS)
- Filesystems or residual filesystem signatures
- LVM2 signatures of former Volume Groups and Logical Volumes
- LVM2 metadata of LVM2 physical volumes
If in doubt, the following command should clear any of the above:

# wipefs -a <device>
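To inspect a device before wiping it, you can run wipefs without options; it lists any detected signatures without erasing anything:

# wipefs <device>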
It is recommended to plan for two clusters: one dedicated to storage for infrastructure applications (such as an OpenShift Container Registry) and one dedicated to storage for general applications. This would require a total of six storage nodes. This recommendation is made to avoid potential impacts on performance in I/O and volume creation.
2.1.6. SELinux Requirements
Security-Enhanced Linux (SELinux) must be enabled on all of the servers before installing OpenShift Container Platform, or the installer will fail. Also, configure SELINUX=enforcing and SELINUXTYPE=targeted in the /etc/selinux/config file:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
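You can verify the running state before starting the installation:

# getenforce
Enforcing

If the command reports Permissive or Disabled, correct the /etc/selinux/config file as shown above and reboot the host.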
2.1.7. Optional: Configuring Core Usage
By default, OpenShift Container Platform masters and nodes use all available cores in the system they run on. You can choose the number of cores you want OpenShift Container Platform to use by setting the GOMAXPROCS environment variable. See the Go Language documentation for more information, including how the GOMAXPROCS environment variable works.
For example, run the following before starting the server to make OpenShift Container Platform only run on one core:
# export GOMAXPROCS=1
2.1.8. Optional: Using OverlayFS
OverlayFS is a union file system that allows you to overlay one file system on top of another.
As of Red Hat Enterprise Linux 7.4, you have the option to configure your OpenShift Container Platform environment to use OverlayFS. The overlay2 graph driver is fully supported in addition to the older overlay driver. However, Red Hat recommends using overlay2 instead of overlay because of its speed and simple implementation.
Comparing the Overlay Versus Overlay2 Graph Drivers has more information about the overlay and overlay2 drivers.
See the Overlay Graph Driver section of the Atomic Host documentation for instructions on how to enable the overlay2 graph driver for the Docker service.
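For example, on a host where container storage has not yet been configured, a sketch of selecting the driver through docker-storage-setup and confirming the result (changing the driver on a host with existing container storage requires rebuilding that storage):

# echo "STORAGE_DRIVER=overlay2" >> /etc/sysconfig/docker-storage-setup
# docker-storage-setup
# docker info | grep "Storage Driver"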
2.1.9. Security Warning
OpenShift Container Platform runs containers on hosts in the cluster, and in some cases, such as build operations and the registry service, it does so using privileged containers. Furthermore, those containers access the hosts' Docker daemon and perform docker build and docker push operations. As such, cluster administrators should be aware of the inherent security risks associated with performing docker run operations on arbitrary images, as they effectively have root access. This is particularly relevant for docker build operations.
Exposure to harmful containers can be limited by assigning specific builds to nodes so that any exposure is limited to those nodes. To do this, see the Assigning Builds to Specific Nodes section of the Developer Guide. For cluster administrators, see the Configuring Global Build Defaults and Overrides topic.
You can also use security context constraints to control the actions that a pod can perform and what it has the ability to access. For instructions on how to enable images to run with USER in the Dockerfile, see Managing Security Context Constraints (requires a user with cluster-admin privileges).
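For example, a cluster administrator can list the available security context constraints and inspect the default one applied to most pods; a sketch, assuming a user with cluster-admin privileges:

$ oc get scc
$ oc describe scc restricted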
2.2. Environment Requirements
The following section defines the requirements of the environment containing your OpenShift Container Platform configuration. This includes networking considerations and access to external services, such as Git repository access, storage, and cloud infrastructure providers.
2.2.1. DNS Requirements
OpenShift Container Platform requires a fully functional DNS server in the environment. This is ideally a separate host running DNS software and can provide name resolution to hosts and containers running on the platform.
Adding entries into the /etc/hosts file on each host is not enough. This file is not copied into containers running on the platform.
Key components of OpenShift Container Platform run themselves inside of containers and use the following process for name resolution:
- By default, containers receive their DNS configuration file (/etc/resolv.conf) from their host.
- OpenShift Container Platform then sets the pod’s first nameserver to the IP address of the node.
As of OpenShift Container Platform 3.2, dnsmasq is automatically configured on all masters and nodes. The pods use the nodes as their DNS, and the nodes forward the requests. By default, dnsmasq is configured on the nodes to listen on port 53, therefore the nodes cannot run any other type of DNS application.
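To confirm that dnsmasq owns port 53 on a node, and therefore that no other DNS service can run there, you can check the listening sockets:

# ss -tulnp | grep -w ':53'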
NetworkManager, a program for providing detection and configuration for systems to automatically connect to the network, is required on the nodes in order to populate dnsmasq with the DNS IP addresses. NM_CONTROLLED is set to yes by default. If NM_CONTROLLED is set to no, then the NetworkManager dispatch script does not create the relevant origin-upstream-dns.conf dnsmasq file, and you would need to configure dnsmasq manually.
Similarly, if the PEERDNS parameter is set to no in the network script, for example, /etc/sysconfig/network-scripts/ifcfg-em1, then the dnsmasq files are not generated, and the Ansible install will fail. Ensure the PEERDNS setting is set to yes.
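To check both settings on an interface before running the installer (the interface name em1 follows the example above):

$ grep -E 'NM_CONTROLLED|PEERDNS' /etc/sysconfig/network-scripts/ifcfg-em1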
The following is an example set of DNS records:
master1    A   10.64.33.100
master2    A   10.64.33.103
node1      A   10.64.33.101
node2      A   10.64.33.102
If you do not have a properly functioning DNS environment, you could experience failure with:
- Product installation via the reference Ansible-based scripts
- Deployment of the infrastructure containers (registry, routers)
- Access to the OpenShift Container Platform web console, because it is not accessible via IP address alone
2.2.1.1. Configuring Hosts to Use DNS
Make sure each host in your environment is configured to resolve hostnames from your DNS server. The configuration for hosts' DNS resolution depends on whether DHCP is enabled. If DHCP is:
- Disabled, then configure your network interface to be static, and add DNS nameservers to NetworkManager, as shown in the sketch after this list.
- Enabled, then the NetworkManager dispatch script automatically configures DNS based on the DHCP configuration.
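For the DHCP-disabled case, one way to configure a static interface with a DNS nameserver is through nmcli; a sketch, assuming a connection named eth0 and the example addresses used elsewhere in this chapter (substitute your host's static address):

# nmcli con mod eth0 ipv4.method manual ipv4.addresses 10.64.33.100/24 ipv4.dns 10.64.33.1
# nmcli con up eth0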
To verify that hosts can be resolved by your DNS server:
Check the contents of /etc/resolv.conf:
$ cat /etc/resolv.conf
# Generated by NetworkManager
search example.com
nameserver 10.64.33.1
# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
In this example, 10.64.33.1 is the address of our DNS server.
Test that the DNS servers listed in /etc/resolv.conf are able to resolve host names to the IP addresses of all masters and nodes in your OpenShift Container Platform environment:
$ dig <node_hostname> @<IP_address> +short
For example:
$ dig master.example.com @10.64.33.1 +short
10.64.33.100
$ dig node1.example.com @10.64.33.1 +short
10.64.33.101
2.2.1.2. Configuring a DNS Wildcard
Optionally, configure a wildcard for the router to use, so that you do not need to update your DNS configuration when new routes are added.
A wildcard for a DNS zone must ultimately resolve to the IP address of the OpenShift Container Platform router.
For example, create a wildcard DNS entry for cloudapps that has a low time-to-live value (TTL) and points to the public IP address of the host where the router will be deployed:
*.cloudapps.example.com. 300 IN A 192.168.133.2
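To confirm that the wildcard record resolves, query an arbitrary name under the domain; the name foo here is arbitrary, and this assumes the record is served by the DNS server from the earlier example:

$ dig foo.cloudapps.example.com @10.64.33.1 +short
192.168.133.2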
In your /etc/resolv.conf file on each node host, ensure that the DNS server that has the wildcard entry is not listed as a nameserver or that the wildcard domain is not listed in the search list. Otherwise, containers managed by OpenShift Container Platform may fail to resolve host names properly.
2.2.1.3. Configuring Node Host Names
When you set up a cluster that is not integrated with a cloud provider, you must correctly set your nodes' host names. Each node’s host name must be resolvable, and each node must be able to reach each other node.
To confirm that a node can reach another node:
On one node, obtain the host name:
$ hostname
master-1.example.com
On that same node, obtain the fully qualified domain name of the host:
$ hostname -f
master-1.example.com
From a different node, confirm that you can reach the first node:
$ ping master-1.example.com -c 1
PING master-1.example.com (172.16.122.9) 56(84) bytes of data.
64 bytes from master-1.example.com (172.16.122.9): icmp_seq=1 ttl=64 time=0.319 ms

--- master-1.example.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms
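If a node reports the wrong name, you can set it persistently with hostnamectl before re-running the checks above; the host name shown follows the example:

# hostnamectl set-hostname master-1.example.com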
2.2.2. Network Access Requirements
A shared network must exist between the master and node hosts. If you plan to configure multiple masters for high availability using the standard cluster installation process, you must also select an IP address to be configured as your virtual IP (VIP) during the installation process. The IP address that you select must be routable between all of your nodes, and if you configure it using a fully qualified domain name (FQDN), the FQDN must resolve on all nodes.
2.2.2.1. NetworkManager
NetworkManager, a program for providing detection and configuration for systems to automatically connect to the network, is required on the nodes in order to populate dnsmasq with the DNS IP addresses. NM_CONTROLLED is set to yes by default. If NM_CONTROLLED is set to no, then the NetworkManager dispatch script does not create the relevant origin-upstream-dns.conf dnsmasq file, and you would need to configure dnsmasq manually.
2.2.2.2. Configuring firewalld as the firewall
While iptables is the default firewall, firewalld is recommended for new installations. You can enable firewalld by setting os_firewall_use_firewalld=true in the Ansible inventory file:
[OSEv3:vars]
os_firewall_use_firewalld=True
Setting this variable to true opens the required ports and adds rules to the default zone, which ensures that firewalld is configured correctly.
The firewalld default configuration comes with limited configuration options, and these cannot be overridden. For example, while you can set up a storage network with interfaces in multiple zones, the interface that nodes communicate on must be in the default zone.
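After the installation completes, a quick way to review what was opened in the default zone:

# firewall-cmd --list-all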
2.2.2.3. Required Ports
The OpenShift Container Platform installation automatically creates a set of internal firewall rules on each host using iptables. However, if your network configuration uses an external firewall, such as a hardware-based firewall, you must ensure infrastructure components can communicate with each other through specific ports that act as communication endpoints for certain processes or services.
Ensure the following ports required by OpenShift Container Platform are open on your network and configured to allow access between hosts. Some ports are optional depending on your configuration and usage.
Node to Node

Port | Protocol | Purpose |
---|---|---|
4789 | UDP | Required for SDN communication between pods on separate hosts. |

Nodes to Master

Port | Protocol | Purpose |
---|---|---|
53 or 8053 | TCP/UDP | Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured. |
4789 | UDP | Required for SDN communication between pods on separate hosts. |
443 or 8443 | TCP | Required for node hosts to communicate to the master API, for the node hosts to post back status, to receive tasks, and so on. |

Master to Node

Port | Protocol | Purpose |
---|---|---|
4789 | UDP | Required for SDN communication between pods on separate hosts. |
10250 | TCP | The master proxies to node hosts via the Kubelet for oc commands. |
10010 | TCP | If using CRI-O, open this port to allow oc exec and oc rsh operations. |

Master to Master

Port | Protocol | Purpose |
---|---|---|
53 or 8053 | TCP/UDP | Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured. |
2049 | TCP/UDP | Required when provisioning an NFS host as part of the installer. |
2379 | TCP | Used for standalone etcd (clustered) to accept changes in state. |
2380 | TCP | etcd requires this port be open between masters for leader election and peering connections when using standalone etcd (clustered). |
4789 | UDP | Required for SDN communication between pods on separate hosts. |

External to Master

Port | Protocol | Purpose |
---|---|---|
9000 | TCP | If you choose the native HA method, optional to allow access to the HAProxy statistics page. |
443 or 8443 | TCP | Required for node hosts to communicate to the master API, for node hosts to post back status, to receive tasks, and so on. |
8444 | TCP | Port that the controller service listens on. Required to be open for the /metrics and /healthz endpoints. |

IaaS Deployments

Port | Protocol | Purpose |
---|---|---|
22 | TCP | Required for SSH by the installer or system administrator. |
53 or 8053 | TCP/UDP | Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured. Only required to be internally open on master hosts. |
80 or 443 | TCP | For HTTP/HTTPS use for the router. Required to be externally open on node hosts, especially on nodes running the router. |
1936 | TCP | (Optional) Required to be open when running the template router to access statistics. Can be opened externally or internally depending on whether you want the statistics to be public. Can require extra configuration to open. See the Notes section below for more information. |
2379 and 2380 | TCP | For standalone etcd use. Only required to be internally open on the master host. 2379 is for server-client connections. 2380 is for server-server connections, and is only required if you have clustered etcd. |
4789 | UDP | For VxLAN use (OpenShift SDN). Required only internally on node hosts. |
8443 | TCP | For use by the OpenShift Container Platform web console, shared with the API server. |
10250 | TCP | For use by the Kubelet. Required to be externally open on nodes. |
Notes
- In the above examples, port 4789 is used for User Datagram Protocol (UDP).
- When deployments are using the SDN, the pod network is accessed via a service proxy, unless it is accessing the registry from the same node the registry is deployed on.
- OpenShift Container Platform internal DNS cannot be received over SDN. For non-cloud deployments, this will default to the IP address associated with the default route on the master host. For cloud deployments, it will default to the IP address associated with the first internal interface as defined by the cloud metadata.
- The master host uses port 10250 to reach the nodes and does not go over SDN. It depends on the target host of the deployment and uses the computed value of openshift_public_hostname.
- Port 1936 can still be inaccessible due to your iptables rules. Use the following to configure iptables to open port 1936:
# iptables -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp \
    --dport 1936 -j ACCEPT
Aggregated Logging

Port | Protocol | Purpose |
---|---|---|
9200 | TCP | For Elasticsearch API use. Required to be internally open on any infrastructure nodes so Kibana is able to retrieve logs for display. It can be externally opened for direct access to Elasticsearch by means of a route. The route can be created using the oc create route command. |
9300 | TCP | For Elasticsearch inter-cluster use. Required to be internally open on any infrastructure node so the members of the Elasticsearch cluster can communicate with each other. |

Prometheus Monitoring

Port | Protocol | Purpose |
---|---|---|
9090 | TCP | For Prometheus API and web console use. |
9100 | TCP | For the Prometheus Node-Exporter, which exports hardware and operating system metrics. Port 9100 must be open on each OpenShift Container Platform host in order for the Prometheus server to scrape the metrics. |
8443 | TCP | For node hosts to communicate to the master API, for the node hosts to post back status, to receive tasks, and so on. This port must be allowed from masters and infra nodes to any master and node. |
10250 | TCP | For the Kubernetes cAdvisor, a container resource usage and performance analysis agent. This port must be allowed from masters and infra nodes to any master and node. For metrics, the source must be the infra nodes. |
8444 | TCP | Port that the controller service listens on. Port 8444 must be open on each OpenShift Container Platform host. |
1936 | TCP | (Optional) Required to be open when running the template router to access statistics. This port must be allowed from the infra nodes to any infra nodes hosting the routers if Prometheus metrics are enabled on routers. Can be opened externally or internally depending on whether you want the statistics to be public. Can require extra configuration to open. See the Notes section above for more information. |
2.2.3. Persistent Storage
The Kubernetes persistent volume framework allows you to provision an OpenShift Container Platform cluster with persistent storage using networked storage available in your environment. This can be done after completing the initial OpenShift Container Platform installation depending on your application needs, giving users a way to request those resources without having any knowledge of the underlying infrastructure.
The Configuring Clusters guide provides instructions for cluster administrators on provisioning an OpenShift Container Platform cluster with persistent storage using NFS, GlusterFS, Ceph RBD, OpenStack Cinder, AWS Elastic Block Store (EBS), GCE Persistent Disks, and iSCSI.
2.2.4. Cloud Provider Considerations
There are certain aspects to take into consideration if installing OpenShift Container Platform on a cloud provider.
- For Amazon Web Services, see the Permissions and the Configuring a Security Group sections.
- For OpenStack, see the Permissions and the Configuring a Security Group sections.
2.2.4.1. Overriding Detected IP Addresses and Host Names
Some deployments require that the user override the detected host names and IP addresses for the hosts. To see the default values, run the openshift_facts playbook:
# ansible-playbook [-i /path/to/inventory] \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift_facts.yml
For Amazon Web Services, see the Overriding Detected IP Addresses and Host Names section.
Now, verify the detected common settings. If they are not what you expect them to be, you can override them.
The Configuring Your Inventory File topic discusses the available Ansible variables in greater detail.
Variable | Usage |
---|---|
hostname | Resolves to the internal IP address from the instances themselves. |
ip | The internal IP address of the instance. |
public_hostname | Resolves to the external IP address from hosts outside of the cloud. The openshift_public_hostname variable overrides this value. |
public_ip | The externally accessible IP address associated with the instance. The openshift_public_ip variable overrides this value. |
2.2.4.2. Post-Installation Configuration for Cloud Providers
Following the installation process, you can configure OpenShift Container Platform for AWS, OpenStack, or GCE.