Chapter 2. Installing a Cluster
2.1. Planning
2.1.1. Initial Planning
For production environments, several factors influence installation. Consider the following questions as you read through the documentation:
- Which installation method do you want to use? The Installation Methods section provides some information about the quick and advanced installation methods.
- How many pods are required in your cluster? The Sizing Considerations section provides limits for nodes and pods so you can calculate how large your environment needs to be.
- How many hosts do you require in the cluster? The Environment Scenarios section provides multiple examples of Single Master and Multiple Master configurations.
- Is high availability required? High availability is recommended for fault tolerance. In this situation, you might aim to use the Multiple Masters Using Native HA example as a basis for your environment.
- Which installation type do you want to use: RPM or containerized? Both installations provide a working OpenShift Container Platform environment, but you might have a preference for a particular method of installing, managing, and updating your services.
- Which identity provider do you use for authentication? If you already use a supported identity provider, it is a best practice to configure OpenShift Container Platform to use that identity provider during advanced installation.
- Is my installation supported if integrating with other technologies? See the OpenShift Container Platform Tested Integrations for a list of tested integrations.
2.1.2. Installation Methods
As of OpenShift Container Platform 3.9, the quick installation method is deprecated. In a future release, it will be removed completely. In addition, using the quick installer to upgrade from version 3.7 to 3.9 is not supported.
Both the quick and advanced installation methods are supported for development and production environments. If you want to quickly get OpenShift Container Platform up and running to try out for the first time, use the quick installer and let the interactive CLI guide you through the configuration options relevant to your environment.
For the most control over your cluster’s configuration, you can use the advanced installation method. This method is particularly suited if you are already familiar with Ansible. However, following along with the OpenShift Container Platform documentation should equip you with enough information to reliably deploy your cluster and continue to manage its configuration post-deployment using the provided Ansible playbooks directly.
If you install initially using the quick installer, you can always further tweak your cluster’s configuration and adjust the number of hosts in the cluster using the same installer tool. If you want to switch to the advanced method later, you can create an inventory file for your configuration and continue managing the cluster with the provided Ansible playbooks from that point on.
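For reference, a minimal advanced-installation inventory for a single-master cluster might look like the following sketch; the host names are placeholders and the variables shown are only the most common ones:
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com openshift_schedulable=false
node1.example.com openshift_node_labels="{'region': 'primary'}"
node2.example.com openshift_node_labels="{'region': 'primary'}"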
2.1.3. Sizing Considerations
Determine how many nodes and pods you require for your OpenShift Container Platform cluster. Cluster scalability correlates to the number of pods in a cluster environment. That number influences the other numbers in your setup. See Cluster Limits for the latest limits for objects in OpenShift Container Platform.
2.1.4. Environment Scenarios
This section outlines different examples of scenarios for your OpenShift Container Platform environment. Use these scenarios as a basis for planning your own OpenShift Container Platform cluster, based on your sizing needs.
Moving from a single master cluster to multiple masters after installation is not supported.
For information on updating labels, see Updating Labels on Nodes.
2.1.4.1. Single Master and Node on One System
OpenShift Container Platform can be installed on a single system for a development environment only. An all-in-one environment is not considered a production environment.
2.1.4.2. Single Master and Multiple Nodes
The following table describes an example environment for a single master (with etcd installed on the same host) and two nodes:
Host Name | Infrastructure Component to Install |
---|---|
master.example.com | Master, etcd, and node |
node1.example.com | Node |
node2.example.com | Node |
2.1.4.3. Single Master, Multiple etcd, and Multiple Nodes
The following table describes an example environment for a single master, three etcd hosts, and two nodes:
Host Name | Infrastructure Component to Install |
---|---|
master.example.com | Master and node |
etcd1.example.com | etcd |
etcd2.example.com | etcd |
etcd3.example.com | etcd |
node1.example.com | Node |
node2.example.com | Node |
2.1.4.4. Multiple Masters Using Native HA with Co-located Clustered etcd
The following table describes an example environment for three masters with co-located clustered etcd, one HAProxy load balancer, and two nodes using the native HA method:
Host Name | Infrastructure Component to Install |
---|---|
master1.example.com | Master (clustered using native HA) and node and clustered etcd |
master2.example.com | Master (clustered using native HA) and node and clustered etcd |
master3.example.com | Master (clustered using native HA) and node and clustered etcd |
lb.example.com | HAProxy to load balance API master endpoints |
node1.example.com | Node |
node2.example.com | Node |
2.1.4.5. Multiple Masters Using Native HA with External Clustered etcd
The following table describes an example environment for three masters, one HAProxy load balancer, three external clustered etcd hosts, and two nodes using the native HA method:
Host Name | Infrastructure Component to Install |
---|---|
master1.example.com | Master (clustered using native HA) and node |
master2.example.com | Master (clustered using native HA) and node |
master3.example.com | Master (clustered using native HA) and node |
lb.example.com | HAProxy to load balance API master endpoints |
etcd1.example.com | Clustered etcd |
etcd2.example.com | Clustered etcd |
etcd3.example.com | Clustered etcd |
node1.example.com | Node |
node2.example.com | Node |
2.1.4.6. Stand-alone Registry
You can also install OpenShift Container Platform to act as a stand-alone registry using the OpenShift Container Platform’s integrated registry. See Installing a Stand-alone Registry for details on this scenario.
2.1.5. RPM Versus Containerized
An RPM installation installs all services through package management and configures services to run within the same user space, while a containerized installation installs services using container images and runs separate services in individual containers.
See the Installing on Containerized Hosts topic for more details on configuring your installation to use containerized services.
2.2. Prerequisites
2.2.1. System Requirements
The following sections identify the hardware specifications and system-level requirements of all hosts within your OpenShift Container Platform environment.
2.2.1.1. Red Hat Subscriptions
You must have an active OpenShift Container Platform subscription on your Red Hat account to proceed. If you do not, contact your sales representative for more information.
2.2.1.2. Minimum Hardware Requirements
The system requirements vary per host type:
Host Type | Minimum Hardware Requirements |
---|---|
Masters | Physical or virtual system, or an instance running on a public or private IaaS. Base OS: RHEL 7.3 or later with the latest packages from the Extras channel, or RHEL Atomic Host 7.4.2 or later. 2 vCPU and a minimum of 16 GB RAM. Minimum 40 GB hard disk space for the file system containing /var/. |
Nodes | Physical or virtual system, or an instance running on a public or private IaaS. Base OS: RHEL 7.3 or later with the latest packages from the Extras channel, or RHEL Atomic Host 7.4.2 or later. NetworkManager 1.0 or later. 1 vCPU and a minimum of 8 GB RAM. Minimum 15 GB hard disk space for the file system containing /var/, plus additional unallocated space for the container storage back end (see Configuring Docker Storage). |
External etcd Nodes | Minimum 20 GB hard disk space for etcd data. |
Ansible Controller | The host that you run the Ansible playbook on must have at least 75MiB of free memory per host in the inventory. |
Meeting the /var/ file system sizing requirements in RHEL Atomic Host requires making changes to the default configuration. See Managing Storage with Docker-formatted Containers for instructions on configuring this during or after installation.
The system’s temporary directory is determined according to the rules defined in the tempfile module in Python’s standard library.
OpenShift Container Platform only supports servers with x86_64 architecture.
You must configure storage for each system that runs a container daemon. For containerized installations, you need storage on masters. Also, by default, the web console is run in containers on masters, and storage is needed on masters to run the web console. Containers are run on nodes, so storage is always required on the nodes. The size of storage depends on workload, number of containers, the size of the containers being run, and the containers' storage requirements. Containerized etcd also needs container storage configured.
2.2.1.3. Production Level Hardware Requirements
Test or sample environments function with the minimum requirements. For production environments, the following recommendations apply:
- Master Hosts
- In a highly available OpenShift Container Platform cluster with external etcd, a master host should have, in addition to the minimum requirements in the table above, 1 CPU core and 1.5 GB of memory for each 1000 pods. Therefore, the recommended size of a master host in an OpenShift Container Platform cluster of 2000 pods would be the minimum requirements of 2 CPU cores and 16 GB of RAM, plus 2 CPU cores and 3 GB of RAM, totaling 4 CPU cores and 19 GB of RAM.
A minimum of three etcd hosts and a load-balancer between the master hosts are required.
See Recommended Practices for OpenShift Container Platform Master Hosts for performance guidance.
- Node Hosts
- The size of a node host depends on the expected size of its workload. As an OpenShift Container Platform cluster administrator, you will need to calculate the expected workload, then add about 10 percent for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity.
For more information, see Sizing Considerations and Cluster Limits.
Oversubscribing the physical resources on a node affects resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping.
2.2.1.4. Storage management
Directory | Notes | Sizing | Expected Growth |
---|---|---|---|
/var/lib/openshift | Used for etcd storage only when in single master mode and etcd is embedded in the atomic-openshift-master process. | Less than 10GB. | Will grow slowly with the environment. Only storing metadata. |
/var/lib/etcd | Used for etcd storage when in Multi-Master mode or when etcd is made standalone by an administrator. | Less than 20 GB. | Will grow slowly with the environment. Only storing metadata. |
/var/lib/docker | When the runtime is docker, this is the mount point. Storage used for active container runtimes (including pods) and storage of local images (not used for registry storage). Mount point should be managed by docker-storage rather than manually. | 50 GB for a Node with 16 GB memory. Additional 20-25 GB for every additional 8 GB of memory. | Growth is limited by the capacity for running containers. |
/var/lib/containers | When the runtime is CRI-O, this is the mount point. Storage used for active container runtimes (including pods) and storage of local images (not used for registry storage). | 50 GB for a Node with 16 GB memory. Additional 20-25 GB for every additional 8 GB of memory. | Growth is limited by the capacity for running containers. |
/var/lib/origin/openshift.local.volumes | Ephemeral volume storage for pods. This includes anything external that is mounted into a container at runtime. Includes environment variables, kube secrets, and data volumes not backed by persistent storage PVs. | Varies | Minimal if pods requiring storage are using persistent volumes. If using ephemeral storage, this can grow quickly. |
/var/log | Log files for all components. | 10 to 30 GB. | Log files can grow quickly; size can be managed by growing disks or managed using log rotate. |
2.2.1.5. Red Hat Gluster Storage Hardware Requirements
Any nodes used in a Container-Native Storage or Container-Ready Storage cluster are considered storage nodes. Storage nodes can be grouped into distinct cluster groups, though a single node cannot be in multiple groups. For each group of storage nodes:
- A minimum of three storage nodes per group is required.
- Each storage node must have a minimum of 8 GB of RAM. This is to allow running the Red Hat Gluster Storage pods, as well as other applications and the underlying operating system.
- Each GlusterFS volume also consumes memory on every storage node in its storage cluster, which is about 30 MB. The total amount of RAM should be determined based on how many concurrent volumes are desired or anticipated. For example, 300 concurrent volumes would consume roughly 9 GB of additional RAM on each storage node in that cluster.
- Each storage node must have at least one raw block device with no present data or metadata. These block devices will be used in their entirety for GlusterFS storage. Make sure the following are not present:
- Partition tables (GPT or MSDOS)
- Filesystems or residual filesystem signatures
- LVM2 signatures of former Volume Groups and Logical Volumes
- LVM2 metadata of LVM2 physical volumes
If in doubt, wipefs -a <device> should clear any of the above.
It is recommended to plan for two clusters: one dedicated to storage for infrastructure applications (such as an OpenShift Container Registry) and one dedicated to storage for general applications. This would require a total of six storage nodes. This recommendation is made to avoid potential impacts on performance in I/O and volume creation.
2.2.1.6. Optional: Configuring Core Usage
By default, OpenShift Container Platform masters and nodes use all available cores in the system they run on. You can choose the number of cores you want OpenShift Container Platform to use by setting the GOMAXPROCS environment variable. See the Go Language documentation for more information, including how the GOMAXPROCS environment variable works.
For example, run the following before starting the server to make OpenShift Container Platform only run on one core:
# export GOMAXPROCS=1
2.2.1.7. SELinux
Security-Enhanced Linux (SELinux) must be enabled on all of the servers before installing OpenShift Container Platform or the installer will fail. Also, configure SELINUX=enforcing and SELINUXTYPE=targeted in the /etc/selinux/config file:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
2.2.1.8. Red Hat Gluster Storage
To access GlusterFS volumes, the mount.glusterfs command must be available on all schedulable nodes. For RPM-based systems, the glusterfs-fuse package must be installed:
# yum install glusterfs-fuse
This package comes installed on every RHEL system. However, it is recommended to update to the latest available version from Red Hat Gluster Storage. To do this, the following RPM repository must be enabled:
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms
If glusterfs-fuse is already installed on the nodes, ensure that the latest version is installed:
# yum update glusterfs-fuse
Optional: Using OverlayFS
OverlayFS is a union file system that allows you to overlay one file system on top of another.
As of Red Hat Enterprise Linux 7.4, you have the option to configure your OpenShift Container Platform environment to use OverlayFS. The overlay2 graph driver is fully supported in addition to the older overlay driver. However, Red Hat recommends using overlay2 instead of overlay because of its speed and simple implementation.
Comparing the Overlay Versus Overlay2 Graph Drivers has more information about the overlay and overlay2 drivers.
See the Overlay Graph Driver section of the Atomic Host documentation for instructions on how to enable the overlay2 graph driver for the Docker service.
2.2.1.9. Security Warning
OpenShift Container Platform runs containers on hosts in the cluster, and in some cases, such as build operations and the registry service, it does so using privileged containers. Furthermore, those containers access the hosts' Docker daemon and perform docker build and docker push operations. As such, cluster administrators should be aware of the inherent security risks associated with performing docker run operations on arbitrary images, as they effectively have root access. This is particularly relevant for docker build operations.
Exposure to harmful containers can be limited by assigning specific builds to nodes so that any exposure is limited to those nodes. To do this, see the Assigning Builds to Specific Nodes section of the Developer Guide. For cluster administrators, see the Configuring Global Build Defaults and Overrides section of the Installation and Configuration Guide.
You can also use security context constraints to control the actions that a pod can perform and what it has the ability to access. For instructions on how to enable images to run with USER in the Dockerfile, see Managing Security Context Constraints (requires a user with cluster-admin privileges).
2.2.2. Environment Requirements
The following section defines the requirements of the environment containing your OpenShift Container Platform configuration. This includes networking considerations and access to external services, such as Git repository access, storage, and cloud infrastructure providers.
2.2.2.1. DNS
OpenShift Container Platform requires a fully functional DNS server in the environment. This is ideally a separate host running DNS software and can provide name resolution to hosts and containers running on the platform.
Adding entries into the /etc/hosts file on each host is not enough. This file is not copied into containers running on the platform.
Key components of OpenShift Container Platform run themselves inside of containers and use the following process for name resolution:
- By default, containers receive their DNS configuration file (/etc/resolv.conf) from their host.
- OpenShift Container Platform then inserts one DNS value into the pods (above the node’s nameserver values). That value is defined in the /etc/origin/node/node-config.yaml file by the dnsIP parameter, which by default is set to the address of the host node because the host is using dnsmasq.
- If the dnsIP parameter is omitted from the node-config.yaml file, then the value defaults to the kubernetes service IP, which is the first nameserver in the pod’s /etc/resolv.conf file.
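For illustration, the relevant portion of a node-config.yaml file might look like the following sketch; the IP address is a placeholder for the node's own address:
dnsDomain: cluster.local
dnsIP: 10.64.33.101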
As of OpenShift Container Platform 3.2, dnsmasq is automatically configured on all masters and nodes. The pods use the nodes as their DNS, and the nodes forward the requests. By default, dnsmasq is configured on the nodes to listen on port 53, therefore the nodes cannot run any other type of DNS application.
NetworkManager, a program for providing detection and configuration for systems to automatically connect to the network, is required on the nodes in order to populate dnsmasq with the DNS IP addresses.
NM_CONTROLLED is set to yes by default. If NM_CONTROLLED is set to no, then the NetworkManager dispatch script does not create the relevant origin-upstream-dns.conf dnsmasq file, and you would need to configure dnsmasq manually.
Similarly, if the PEERDNS parameter is set to no in the network script, for example, /etc/sysconfig/network-scripts/ifcfg-em1, then the dnsmasq files are not generated, and the Ansible install will fail. Ensure the PEERDNS setting is set to yes.
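For example, an interface configuration file such as /etc/sysconfig/network-scripts/ifcfg-em1 that lets the dispatch script generate the dnsmasq configuration would contain settings similar to the following sketch; only NM_CONTROLLED and PEERDNS are the settings discussed here, the remaining values are illustrative:
DEVICE=em1
BOOTPROTO=dhcp
ONBOOT=yes
NM_CONTROLLED=yes
PEERDNS=yes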
The following is an example set of DNS records:
master1    A    10.64.33.100
master2    A    10.64.33.103
node1      A    10.64.33.101
node2      A    10.64.33.102
If you do not have a properly functioning DNS environment, you could experience failure with:
- Product installation via the reference Ansible-based scripts
- Deployment of the infrastructure containers (registry, routers)
- Access to the OpenShift Container Platform web console, because it is not accessible via IP address alone
2.2.2.1.1. Configuring Hosts to Use DNS
Make sure each host in your environment is configured to resolve hostnames from your DNS server. The configuration for hosts' DNS resolution depends on whether DHCP is enabled. If DHCP is:
- Disabled, then configure your network interface to be static, and add DNS nameservers to NetworkManager (see the example after this list).
- Enabled, then the NetworkManager dispatch script automatically configures DNS based on the DHCP configuration. Optionally, you can add a value to dnsIP in the node-config.yaml file to prepend the pod’s resolv.conf file. The second nameserver is then defined by the host’s first nameserver. By default, this will be the IP address of the node host.
Note: For most configurations, do not set the openshift_dns_ip option during the advanced installation of OpenShift Container Platform (using Ansible), because this option overrides the default IP address set by dnsIP. Instead, allow the installer to configure each node to use dnsmasq and forward requests to the external DNS provider or SkyDNS, the internal DNS service for cluster-wide DNS resolution of internal hostnames for services and pods. If you do set the openshift_dns_ip option, then it should be set either with a DNS IP that queries SkyDNS first, or to the SkyDNS service or endpoint IP (the Kubernetes service IP).
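For example, on a statically configured host you might add the DNS server and search domain to the NetworkManager connection with nmcli; the connection name and addresses below are illustrative:
# nmcli con mod eth0 ipv4.dns 10.64.33.1
# nmcli con mod eth0 ipv4.dns-search example.com
# nmcli con up eth0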
To verify that hosts can be resolved by your DNS server:
Check the contents of /etc/resolv.conf:
$ cat /etc/resolv.conf
# Generated by NetworkManager
search example.com
nameserver 10.64.33.1
# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
In this example, 10.64.33.1 is the address of our DNS server.
Test that the DNS servers listed in /etc/resolv.conf are able to resolve host names to the IP addresses of all masters and nodes in your OpenShift Container Platform environment:
$ dig <node_hostname> @<IP_address> +short
For example:
$ dig master.example.com @10.64.33.1 +short
10.64.33.100
$ dig node1.example.com @10.64.33.1 +short
10.64.33.101
2.2.2.1.2. Configuring a DNS Wildcard
Optionally, configure a wildcard for the router to use, so that you do not need to update your DNS configuration when new routes are added.
A wildcard for a DNS zone must ultimately resolve to the IP address of the OpenShift Container Platform router.
For example, create a wildcard DNS entry for cloudapps that has a low time-to-live value (TTL) and points to the public IP address of the host where the router will be deployed:
*.cloudapps.example.com. 300 IN A 192.168.133.2
In almost all cases, when referencing VMs you must use host names, and the host names that you use must match the output of the hostname -f command on each node.
In your /etc/resolv.conf file on each node host, ensure that the DNS server that has the wildcard entry is not listed as a nameserver or that the wildcard domain is not listed in the search list. Otherwise, containers managed by OpenShift Container Platform may fail to resolve host names properly.
2.2.2.2. Network Access
A shared network must exist between the master and node hosts. If you plan to configure multiple masters for high-availability using the advanced installation method, you must also select an IP to be configured as your virtual IP (VIP) during the installation process. The IP that you select must be routable between all of your nodes, and if you configure using a FQDN it should resolve on all nodes.
2.2.2.2.1. NetworkManager
NetworkManager, a program for providing detection and configuration for systems to automatically connect to the network, is required on the nodes in order to populate dnsmasq with the DNS IP addresses.
NM_CONTROLLED is set to yes by default. If NM_CONTROLLED is set to no, then the NetworkManager dispatch script does not create the relevant origin-upstream-dns.conf dnsmasq file, and you would need to configure dnsmasq manually.
2.2.2.2.2. Configuring firewalld as the firewall
While iptables is the default firewall, firewalld is recommended for new installations. You can enable firewalld by setting os_firewall_use_firewalld=true in the Ansible inventory file:
[OSEv3:vars]
os_firewall_use_firewalld=True
Setting this variable to true opens the required ports and adds rules to the default zone, which ensures that firewalld is configured correctly.
The firewalld default configuration offers limited configuration options, and its rules cannot be overridden. For example, while you can set up a storage network with interfaces in multiple zones, the interface that nodes communicate on must be in the default zone.
2.2.2.2.3. Required Ports
The OpenShift Container Platform installation automatically creates a set of internal firewall rules on each host using iptables. However, if your network configuration uses an external firewall, such as a hardware-based firewall, you must ensure infrastructure components can communicate with each other through specific ports that act as communication endpoints for certain processes or services.
Ensure the following ports required by OpenShift Container Platform are open on your network and configured to allow access between hosts. Some ports are optional depending on your configuration and usage.
Node to Node
Port | Protocol | Purpose |
---|---|---|
4789 | UDP | Required for SDN communication between pods on separate hosts. |
Nodes to Master
Port | Protocol | Purpose |
---|---|---|
53 or 8053 | TCP/UDP | Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured. |
4789 | UDP | Required for SDN communication between pods on separate hosts. |
443 or 8443 | TCP | Required for node hosts to communicate to the master API, for the node hosts to post back status, to receive tasks, and so on. |
Master to Node
Port | Protocol | Purpose |
---|---|---|
4789 | UDP | Required for SDN communication between pods on separate hosts. |
10250 | TCP | The master proxies to node hosts via the Kubelet for oc commands. |
10010 | TCP | If using CRI-O, open this port to allow oc exec and oc rsh operations. |
Master to Master
Port | Protocol | Purpose |
---|---|---|
53 or 8053 | TCP/UDP | Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured. |
2049 | TCP/UDP | Required when provisioning an NFS host as part of the installer. |
2379 | TCP | Used for standalone etcd (clustered) to accept changes in state. |
2380 | TCP | etcd requires this port be open between masters for leader election and peering connections when using standalone etcd (clustered). |
4789 | UDP | Required for SDN communication between pods on separate hosts. |
External to Load Balancer
Port | Protocol | Purpose |
---|---|---|
9000 | TCP | If you choose the native HA method, optional to allow access to the HAProxy statistics page. |
External to Master
Port | Protocol | Purpose |
---|---|---|
443 or 8443 | TCP | Required for node hosts to communicate to the master API, for node hosts to post back status, to receive tasks, and so on. |
8444 | TCP | Port that the controller service listens on. |
IaaS Deployments
Port | Protocol | Purpose |
---|---|---|
22 | TCP | Required for SSH by the installer or system administrator. |
53 or 8053 | TCP/UDP | Required for DNS resolution of cluster services (SkyDNS). Installations prior to 3.2 or environments upgraded to 3.2 use port 53. New installations will use 8053 by default so that dnsmasq may be configured. Only required to be internally open on master hosts. |
80 or 443 | TCP | For HTTP/HTTPS use for the router. Required to be externally open on node hosts, especially on nodes running the router. |
1936 | TCP | (Optional) Required to be open when running the template router to access statistics. Can be open externally or internally to connections depending on if you want the statistics to be expressed publicly. Can require extra configuration to open. See the Notes section below for more information. |
2379 and 2380 | TCP | For standalone etcd use. Only required to be internally open on the master host. 2379 is for server-client connections. 2380 is for server-server connections, and is only required if you have clustered etcd. |
4789 | UDP | For VxLAN use (OpenShift SDN). Required only internally on node hosts. |
8443 | TCP | For use by the OpenShift Container Platform web console, shared with the API server. |
10250 | TCP | For use by the Kubelet. Required to be externally open on nodes. |
Notes
- In the above examples, port 4789 is used for User Datagram Protocol (UDP).
- When deployments are using the SDN, the pod network is accessed via a service proxy, unless it is accessing the registry from the same node the registry is deployed on.
- OpenShift Container Platform internal DNS cannot be received over SDN. Depending on the detected values of openshift_facts, or if the openshift_ip and openshift_public_ip values are overridden, it will be the computed value of openshift_ip. For non-cloud deployments, this will default to the IP address associated with the default route on the master host. For cloud deployments, it will default to the IP address associated with the first internal interface as defined by the cloud metadata.
- The master host uses port 10250 to reach the nodes and does not go over SDN. It depends on the target host of the deployment and uses the computed values of openshift_hostname and openshift_public_hostname.
- Port 1936 can still be inaccessible due to your iptables rules. Use the following to configure iptables to open port 1936:
# iptables -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp \
    --dport 1936 -j ACCEPT
Aggregated Logging
Port | Protocol | Purpose |
---|---|---|
9200 | TCP | For Elasticsearch API use. Required to be internally open on any infrastructure nodes so Kibana is able to retrieve logs for display. It can be externally open for direct access to Elasticsearch by means of a route. The route can be created using oc expose. |
9300 | TCP | For Elasticsearch inter-cluster use. Required to be internally open on any infrastructure node so the members of the Elasticsearch cluster can communicate with each other. |
Cluster Monitoring
Port | Protocol | Purpose |
---|---|---|
9090 | TCP | For Prometheus API and web console use. |
9100 | TCP | For the Prometheus Node-Exporter, which exports hardware and operating system metrics. Port 9100 needs to be open on each OpenShift Container Platform host in order for the Prometheus server to scrape the metrics. |
8443 | TCP | For node hosts to communicate to the master API, for the node hosts to post back status, to receive tasks, and so on. This port needs to be allowed from masters and infra nodes to any master and node. |
10250 | TCP | For the Kubernetes cAdvisor, a container resource usage and performance analysis agent. This port must be allowed from masters and infra nodes to any master and node. For metrics, the source must be the infra nodes. |
8444 | TCP | Port that the controller service listens on. Port 8444 needs to be open on each OpenShift Container Platform host. |
1936 | TCP | (Optional) Required to be open when running the template router to access statistics. This port must be allowed from the infra nodes to any infra nodes hosting the routers if Prometheus metrics are enabled on routers. Can be open externally or internally to connections depending on if you want the statistics to be expressed publicly. Can require extra configuration to open. See the Notes section above for more information. |
2.2.2.3. Persistent Storage
The Kubernetes persistent volume framework allows you to provision an OpenShift Container Platform cluster with persistent storage using networked storage available in your environment. This can be done after completing the initial OpenShift Container Platform installation depending on your application needs, giving users a way to request those resources without having any knowledge of the underlying infrastructure.
The Installation and Configuration Guide provides instructions for cluster administrators on provisioning an OpenShift Container Platform cluster with persistent storage using NFS, GlusterFS, Ceph RBD, OpenStack Cinder, AWS Elastic Block Store (EBS), GCE Persistent Disks, and iSCSI.
2.2.2.4. Cloud Provider Considerations
There are certain aspects to take into consideration if installing OpenShift Container Platform on a cloud provider.
- For Amazon Web Services, see the Permissions and the Configuring a Security Group sections.
- For OpenStack, see the Permissions and the Configuring a Security Group sections.
2.2.2.4.1. Overriding Detected IP Addresses and Host Names
Some deployments require that the user override the detected host names and IP addresses for the hosts. To see the default values, run the openshift_facts playbook:
# ansible-playbook [-i /path/to/inventory] \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift_facts.yml
For Amazon Web Services, see the Overriding Detected IP Addresses and Host Names section.
Now, verify the detected common settings. If they are not what you expect them to be, you can override them.
The Advanced Installation topic discusses the available Ansible variables in greater detail.
Variable | Usage |
---|---|
openshift_hostname | Overrides the internal host name detected for the system. |
openshift_ip | Overrides the internal IP address detected for the system. |
openshift_public_hostname | Overrides the public host name detected for the system. Set this when the system is accessed externally using a different host name. |
openshift_public_ip | Overrides the public IP address detected for the system. |
If openshift_hostname is set to a value other than the metadata-provided private-dns-name value, the native cloud integration for those providers will no longer work.
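As an illustration, the overrides can be set per host in the Ansible inventory; the host name and addresses below are placeholders:
[nodes]
node1.example.com openshift_ip=10.64.33.101 openshift_hostname=node1.example.com openshift_public_ip=192.168.133.101 openshift_public_hostname=node1.public.example.com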
2.2.2.4.2. Post-Installation Configuration for Cloud Providers
Following the installation process, you can configure OpenShift Container Platform for AWS, OpenStack, or GCE.
2.3. Host Preparation
2.3.1. Setting PATH
The PATH for the root user on each host must contain the following directories:
- /bin
- /sbin
- /usr/bin
- /usr/sbin
These should all be included by default in a fresh RHEL 7.x installation.
2.3.2. Operating System Requirements
A base installation of RHEL 7.3 or later (with the latest packages from the Extras channel) or RHEL Atomic Host 7.4.2 or later is required for master and node hosts. See the respective RHEL or RHEL Atomic Host documentation for installation instructions, if required.
2.3.3. Host Registration
Each host must be registered using Red Hat Subscription Manager (RHSM) and have an active OpenShift Container Platform subscription attached to access the required packages.
On each host, register with RHSM:
# subscription-manager register --username=<user_name> --password=<password>
Pull the latest subscription data from RHSM:
# subscription-manager refresh
List the available subscriptions:
# subscription-manager list --available --matches '*OpenShift*'
In the output for the previous command, find the pool ID for an OpenShift Container Platform subscription and attach it:
# subscription-manager attach --pool=<pool_id>
Disable all yum repositories:
Disable all the enabled RHSM repositories:
# subscription-manager repos --disable="*"
List the remaining yum repositories and note their names under repo id, if any:
# yum repolist
Use yum-config-manager to disable the remaining yum repositories:
# yum-config-manager --disable <repo_id>
Alternatively, disable all repositories:
# yum-config-manager --disable \*
Note that this could take a few minutes if you have a large number of available repositories.
Enable only the repositories required by OpenShift Container Platform 3.9:
# subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.9-rpms" \
    --enable="rhel-7-fast-datapath-rpms" \
    --enable="rhel-7-server-ansible-2.4-rpms"
Note: The addition of the rhel-7-server-ansible-2.4-rpms repository is a new requirement as of OpenShift Container Platform 3.9.
2.3.4. Installing Base Packages
Once you have a working inventory file, you can use /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml to install container runtimes in their default configuration. If you require customization to the container runtime, follow the guidance in this topic.
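For example, assuming an inventory file at /etc/ansible/hosts, the prerequisites playbook can be run as follows:
# ansible-playbook -i /etc/ansible/hosts \
    /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml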
For RHEL 7 systems:
Install the following base packages:
# yum install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct
Update the system to the latest packages:
# yum update
# systemctl reboot
If you plan to use the RPM-based installer to run an advanced installation, you can skip this step. However, if you plan to use the containerized installer:
Install the atomic package:
# yum install atomic
- Skip to Installing Docker.
Install the following package, which provides RPM-based OpenShift Container Platform installer utilities and pulls in other tools required by the quick and advanced installation methods, such as Ansible and related configuration files:
# yum install atomic-openshift-utils
For RHEL Atomic Host 7 systems:
Ensure the host is up to date by upgrading to the latest Atomic tree if one is available:
# atomic host upgrade
After the upgrade is completed and prepared for the next boot, reboot the host:
# systemctl reboot
2.3.5. Installing Docker
At this point, you should install Docker on all master and node hosts. This allows you to configure your Docker storage options before installing OpenShift Container Platform.
On RHEL Atomic Host 7 systems, Docker should already be installed, configured, and running by default.
For RHEL 7 systems, install Docker 1.13:
# yum install docker-1.13.1
After the package installation is complete, verify that version 1.13 was installed:
# rpm -V docker-1.13.1
# docker version
The Advanced Installation method automatically changes /etc/sysconfig/docker.
2.3.6. Configuring Docker Storage
Containers and the images they are created from are stored in Docker’s storage back end. This storage is ephemeral and separate from any persistent storage allocated to meet the needs of your applications. With ephemeral storage, container-saved data is lost when the container is removed. With persistent storage, container-saved data remains if the container is removed.
You must configure storage for each system that runs a container daemon. For containerized installations, you need storage on masters. Also, by default, the web console is run in containers on masters, and storage is needed on masters to run the web console. Containers are run on nodes, so storage is always required on the nodes. The size of storage depends on workload, number of containers, the size of the containers being run, and the containers' storage requirements. Containerized etcd also needs container storage configured.
Once you have a working inventory file, you can use /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml to install container runtimes in their default configuration. If you require customization to the container runtime, follow the guidance in this topic.
For RHEL Atomic Host
The default storage back end for Docker on RHEL Atomic Host is a thin pool logical volume, which is supported for production environments. You must ensure that enough space is allocated for this volume per the Docker storage requirements mentioned in System Requirements.
If you do not have enough allocated, see Managing Storage with Docker Formatted Containers for details on using docker-storage-setup and basic instructions on storage management in RHEL Atomic Host.
For RHEL
The default storage back end for Docker on RHEL 7 is a thin pool on loopback devices, which is not supported for production use and only appropriate for proof of concept environments. For production environments, you must create a thin pool logical volume and re-configure Docker to use that volume.
Docker stores images and containers in a graph driver, which is a pluggable storage technology, such as DeviceMapper, OverlayFS, and Btrfs. Each has advantages and disadvantages. For example, OverlayFS is faster than DeviceMapper at starting and stopping containers, but is not Portable Operating System Interface for Unix (POSIX) compliant because of the architectural limitations of a union file system and is not supported prior to Red Hat Enterprise Linux 7.2. See the Red Hat Enterprise Linux release notes for information on using OverlayFS with your version of RHEL.
For more information on the benefits and limitations of DeviceMapper and OverlayFS, see Choosing a Graph Driver.
2.3.6.1. Configuring OverlayFS
OverlayFS is a type of union file system. It allows you to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified.
Comparing the Overlay Versus Overlay2 Graph Drivers has more information about the overlay and overlay2 drivers.
For information on enabling the OverlayFS storage driver for the Docker service, see the Red Hat Enterprise Linux Atomic Host documentation.
2.3.6.2. Configuring Thin Pool Storage
You can use the docker-storage-setup script included with Docker to create a thin pool device and configure Docker’s storage driver. This can be done after installing Docker and should be done before creating images or containers. The script reads configuration options from the /etc/sysconfig/docker-storage-setup file and supports three options for creating the logical volume:
- Option A) Use an additional block device.
- Option B) Use an existing, specified volume group.
- Option C) Use the remaining free space from the volume group where your root file system is located.
Option A is the most robust option; however, it requires adding an additional block device to your host before configuring Docker storage. Options B and C both require leaving free space available when provisioning your host. Option C is known to cause issues with some applications, for example, Red Hat Mobile Application Platform (RHMAP).
Create the docker-pool volume using one of the following three options:
Option A) Use an additional block device.
In /etc/sysconfig/docker-storage-setup, set DEVS to the path of the block device you wish to use. Set VG to the volume group name you wish to create; docker-vg is a reasonable choice. For example:
# cat <<EOF > /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdc
VG=docker-vg
EOF
Then run docker-storage-setup and review the output to ensure the docker-pool volume was created:
# docker-storage-setup                                                               [5/1868]
0
Checking that no-one is using this disk right now ...
OK
Disk /dev/vdc: 31207 cylinders, 16 heads, 63 sectors/track
sfdisk:  /dev/vdc: unrecognized partition table type
Old situation:
sfdisk: No partitions found
New situation:
Units: sectors of 512 bytes, counting from 0
   Device Boot    Start       End   #sectors  Id  System
/dev/vdc1          2048  31457279   31455232  8e  Linux LVM
/dev/vdc2             0         -          0   0  Empty
/dev/vdc3             0         -          0   0  Empty
/dev/vdc4             0         -          0   0  Empty
Warning: partition 1 does not start at a cylinder boundary
Warning: partition 1 does not end at a cylinder boundary
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table
Re-reading the partition table ...
If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
  Physical volume "/dev/vdc1" successfully created
  Volume group "docker-vg" successfully created
  Rounding up size to full physical extent 16.00 MiB
  Logical volume "docker-poolmeta" created.
  Logical volume "docker-pool" created.
  WARNING: Converting logical volume docker-vg/docker-pool and docker-vg/docker-poolmeta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted docker-vg/docker-pool to thin pool.
  Logical volume "docker-pool" changed.
Option B) Use an existing, specified volume group.
In /etc/sysconfig/docker-storage-setup, set VG to the desired volume group. For example:
# cat <<EOF > /etc/sysconfig/docker-storage-setup
VG=docker-vg
EOF
Then run docker-storage-setup and review the output to ensure the docker-pool volume was created:
# docker-storage-setup
  Rounding up size to full physical extent 16.00 MiB
  Logical volume "docker-poolmeta" created.
  Logical volume "docker-pool" created.
  WARNING: Converting logical volume docker-vg/docker-pool and docker-vg/docker-poolmeta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted docker-vg/docker-pool to thin pool.
  Logical volume "docker-pool" changed.
Option C) Use the remaining free space from the volume group where your root file system is located.
Verify that the volume group where your root file system resides has the desired free space, then run docker-storage-setup and review the output to ensure the docker-pool volume was created:
# docker-storage-setup
  Rounding up size to full physical extent 32.00 MiB
  Logical volume "docker-poolmeta" created.
  Logical volume "docker-pool" created.
  WARNING: Converting logical volume rhel/docker-pool and rhel/docker-poolmeta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted rhel/docker-pool to thin pool.
  Logical volume "docker-pool" changed.
Verify your configuration. You should have a dm.thinpooldev value in the /etc/sysconfig/docker-storage file and a docker-pool logical volume:
# cat /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/rhel-docker--pool --storage-opt dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true "
# lvs
  LV          VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker-pool rhel twi-a-t--- 9.29g              0.00   0.12
Important: Before using Docker or OpenShift Container Platform, verify that the docker-pool logical volume is large enough to meet your needs. The docker-pool volume should be 60% of the available volume group and will grow to fill the volume group via LVM monitoring.
If Docker has not yet been started on the host, enable and start the service, then verify it is running:
# systemctl enable docker
# systemctl start docker
# systemctl is-active docker
If Docker is already running, re-initialize Docker:
Warning: This will destroy any containers or images currently on the host.
# systemctl stop docker
# rm -rf /var/lib/docker/*
# systemctl restart docker
If there is any content in /var/lib/docker/, it must be deleted. Files will be present if Docker has been used prior to the installation of OpenShift Container Platform.
2.3.6.3. Reconfiguring Docker Storage
Should you need to reconfigure Docker storage after having created the docker-pool, you should first remove the docker-pool logical volume. If you are using a dedicated volume group, you should also remove the volume group and any associated physical volumes before reconfiguring docker-storage-setup according to the instructions above.
See Logical Volume Manager Administration for more detailed information on LVM management.
2.3.6.4. Enabling Image Signature Support
OpenShift Container Platform is capable of cryptographically verifying that images are from trusted sources. The Container Security Guide provides a high-level description of how image signing works.
You can configure image signature verification using the atomic command line interface (CLI), version 1.12.5 or greater. The atomic CLI is pre-installed on RHEL Atomic Host systems.
For more on the atomic CLI, see the Atomic CLI documentation.
Install the atomic package if it is not installed on the host system:
$ yum install atomic
The atomic trust sub-command manages trust configuration. The default configuration is to whitelist all registries. This means no signature verification is configured.
$ atomic trust show
* (default)                         accept
A reasonable configuration might be to whitelist a particular registry or namespace, blacklist (reject) untrusted registries, and require signature verification on a vendor registry. The following set of commands performs this example configuration:
Example Atomic Trust Configuration
$ atomic trust add --type insecureAcceptAnything 172.30.1.1:5000
$ atomic trust add --sigstoretype atomic \
  --pubkeys pub@example.com \
  172.30.1.1:5000/production
$ atomic trust add --sigstoretype atomic \
  --pubkeys /etc/pki/example.com.pub \
  172.30.1.1:5000/production
$ atomic trust add --sigstoretype web \
  --sigstore https://access.redhat.com/webassets/docker/content/sigstore \
  --pubkeys /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release \
  registry.access.redhat.com
# atomic trust show
* (default)                         accept
172.30.1.1:5000                     accept
172.30.1.1:5000/production          signed security@example.com
registry.access.redhat.com          signed security@redhat.com,security@redhat.com
When all the signed sources are verified, nodes may be further hardened with a global reject default:
$ atomic trust default reject
$ atomic trust show
* (default)                         reject
172.30.1.1:5000                     accept
172.30.1.1:5000/production          signed security@example.com
registry.access.redhat.com          signed security@redhat.com,security@redhat.com
Use the atomic man page (man atomic-trust) for additional examples.
The following files and directories comprise the trust configuration of a host:
- /etc/containers/registries.d/*
- /etc/containers/policy.json
The trust configuration may be managed directly on each node or the generated files managed on a separate host and distributed to the appropriate nodes using Ansible, for example. See the Container Image Signing Integration Guide for an example of automating file distribution with Ansible.
2.3.6.5. Managing Container Logs
Sometimes a container’s log file (the /var/lib/docker/containers/<hash>/<hash>-json.log file on the node where the container is running) can increase to a problematic size. You can manage this by configuring Docker’s json-file logging driver to restrict the size and number of log files.
Option | Purpose |
---|---|
--log-opt max-size | Sets the size at which a new log file is created. |
--log-opt max-file | Sets the maximum number of log files to be kept per host. |
For example, to set the maximum file size to 1MB and always keep the last three log files, edit the /etc/sysconfig/docker file to configure max-size=1M and max-file=3, ensuring that the values maintain the single quotation mark formatting:
OPTIONS='--insecure-registry=172.30.0.0/16 --selinux-enabled --log-opt max-size=1M --log-opt max-file=3'
Next, restart the Docker service:
# systemctl restart docker
2.3.6.6. Viewing Available Container Logs
Container logs are stored in the /var/lib/docker/containers/<hash>/ directory on the node where the container is running. For example:
# ls -lh /var/lib/docker/containers/f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8/
total 2.6M
-rw-r--r--. 1 root root 5.6K Nov 24 00:12 config.json
-rw-r--r--. 1 root root 649K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log
-rw-r--r--. 1 root root 977K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log.1
-rw-r--r--. 1 root root 977K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log.2
-rw-r--r--. 1 root root 1.3K Nov 24 00:12 hostconfig.json
drwx------. 2 root root    6 Nov 24 00:12 secrets
See Docker’s documentation for additional information on how to configure logging drivers.
2.3.6.7. Blocking Local Volume Usage
When a volume is provisioned using the VOLUME instruction in a Dockerfile or using the docker run -v <volumename> command, a host’s storage space is used. Using this storage can lead to an unexpected out of space issue and could bring down the host.
In OpenShift Container Platform, users trying to run their own images risk filling the entire storage space on a node host. One solution to this issue is to prevent users from running images with volumes. This way, the only storage a user has access to can be limited, and the cluster administrator can assign storage quota.
Using docker-novolume-plugin solves this issue by disallowing starting a container with local volumes defined. In particular, the plug-in blocks docker run commands that contain:
- The --volumes-from option
- Images that have VOLUME(s) defined
- References to existing volumes that were provisioned with the docker volume command
The plug-in does not block references to bind mounts.
To enable docker-novolume-plugin, perform the following steps on each node host:
Install the docker-novolume-plugin package:
$ yum install docker-novolume-plugin
Enable and start the docker-novolume-plugin service:
$ systemctl enable docker-novolume-plugin
$ systemctl start docker-novolume-plugin
Edit the /etc/sysconfig/docker file and append the following to the OPTIONS list:
--authorization-plugin=docker-novolume-plugin
Restart the docker service:
$ systemctl restart docker
After you enable this plug-in, containers with local volumes defined fail to start and show the following error message:
runContainer: API error (500): authorization denied by plugin docker-novolume-plugin: volumes are not allowed
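For illustration, the first command below is denied because it defines a local volume, while the second uses a bind mount and is still permitted; the image name and paths are placeholders:
# docker run -v /data registry.access.redhat.com/rhel7 true
# docker run -v /srv/data:/data registry.access.redhat.com/rhel7 true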
2.3.7. Ensuring Host Access
The quick and advanced installation methods require a user that has access to all hosts. If you want to run the installer as a non-root user, passwordless sudo rights must be configured on each destination host.
For example, you can generate an SSH key on the host where you will invoke the installation process:
# ssh-keygen
Do not use a password.
An easy way to distribute your SSH keys is by using a bash loop:
# for host in master.example.com \
    node1.example.com \
    node2.example.com; \
    do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \
    done
Modify the host names in the above command according to your configuration.
After you run the bash loop, confirm that you can access each host that is listed in the loop through SSH.
2.3.8. Setting Proxy Overrides
If the /etc/environment file on your nodes contains either an http_proxy or https_proxy value, you must also set a no_proxy value in that file to allow open communication between OpenShift Container Platform components.
The no_proxy parameter in the /etc/environment file is not the same value as the global proxy values that you set in your inventory file. The global proxy values configure specific OpenShift Container Platform services with your proxy settings. See Configuring Global Proxy Options for details.
If the /etc/environment file contains proxy values, define the following values in the no_proxy parameter of that file on each node:
- Master and node host names or their domain suffix.
- Other internal host names or their domain suffix.
- Etcd IP addresses. You must provide IP addresses and not host names because etcd access is controlled by IP address.
- Kubernetes IP address, by default 172.30.0.1. Must be the value set in the openshift_portal_net parameter in your inventory file.
- Kubernetes internal domain suffix, cluster.local.
- Kubernetes internal domain suffix, .svc.
Because no_proxy does not support CIDR, you can use domain suffixes.
If you use either an http_proxy or https_proxy value, your no_proxy parameter value resembles the following example:
no_proxy=.internal.example.com,10.0.0.1,10.0.0.2,10.0.0.3,.cluster.local,.svc,localhost,127.0.0.1,172.30.0.1
2.3.9. What’s Next?
If you are interested in installing OpenShift Container Platform using the containerized method (optional for RHEL but required for RHEL Atomic Host), see Installing on Containerized Hosts to prepare your hosts.
When you are ready to proceed, you can install OpenShift Container Platform using the quick installation or advanced installation method.
As of OpenShift Container Platform 3.9, the quick installation method is deprecated. In a future release, it will be removed completely. In addition, using the quick installer to upgrade from version 3.7 to 3.9 is not supported.
If you are installing a stand-alone registry, continue with Installing a Stand-alone Registry.
2.4. Installing on Containerized Hosts
2.4.1. RPM Versus Containerized Installation
You can opt to install OpenShift Container Platform using the RPM or containerized package method. Either installation method results in a working environment, but the choice comes from the operating system and how you choose to update your hosts.
The default method for installing OpenShift Container Platform on Red Hat Enterprise Linux (RHEL) uses RPMs. When targeting a Red Hat Atomic Host system, the containerized method is the only available option, and is automatically selected for you based on the detection of the /run/ostree-booted file.
When using RPMs, all services are installed and updated by package management from an outside source. These modify a host’s existing configuration within the same user space. Alternatively, with containerized installs, each component of OpenShift Container Platform is shipped as a container (in a self-contained package) and leverages the host’s kernel to start and run. Any updated, newer containers replace any existing ones on your host. Choosing one method over the other depends on how you choose to update OpenShift Container Platform in the future.
The following table outlines further differences between the RPM and Containerized methods:
RPM | Containerized | |
---|---|---|
Installation Method | Packages via yum | Container images via docker |
Service Management | systemd | docker and systemd units |
Operating System | Red Hat Enterprise Linux | Red Hat Enterprise Linux or Red Hat Atomic Host |
2.4.2. Install Methods for Containerized Hosts
As with the RPM installation, you can choose between the quick and advanced install methods for the containerized install.
For the quick installation method, you can choose between the RPM or containerized method on a per host basis during the interactive installation, or set the values manually in an installation configuration file.
For the advanced installation method, you can set the Ansible variable containerized=true
in an inventory file on a cluster-wide or per host basis.
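For example, a minimal sketch of how the variable can be set either cluster-wide or per host in an inventory file (host names are illustrative):
# Cluster-wide, in the [OSEv3:vars] section:
containerized=true

# Or per host, on the host entry in the [nodes] or [masters] section:
atomic-node.example.com containerized=true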
2.4.3. Required Images
Containerized installations make use of the following images:
- openshift3/ose
- openshift3/node
- openshift3/openvswitch
- registry.access.redhat.com/rhel7/etcd
By default, all of the above images are pulled from the Red Hat Registry at registry.access.redhat.com.
If you need to use a private registry to pull these images during the installation, you can specify the registry information ahead of time. For the advanced installation method, you can set the following Ansible variables in your inventory file, as required:
openshift_docker_additional_registries=<registry_hostname>
openshift_docker_insecure_registries=<registry_hostname>
openshift_docker_blocked_registries=<registry_hostname>
For the quick installation method, you can export the following environment variables on each target host:
# export OO_INSTALL_ADDITIONAL_REGISTRIES=<registry_hostname>
# export OO_INSTALL_INSECURE_REGISTRIES=<registry_hostname>
Blocked Docker registries cannot currently be specified using the quick installation method.
The configuration of additional, insecure, and blocked Docker registries occurs at the beginning of the installation process to ensure that these settings are applied before attempting to pull any of the required images.
2.4.4. Starting and Stopping Containers
The installation process creates relevant systemd units which can be used to start, stop, and poll services using normal systemctl commands. For containerized installations, these unit names match those of an RPM installation, with the exception of the etcd service which is named etcd_container.
This change is necessary as currently RHEL Atomic Host ships with the etcd package installed as part of the operating system, so a containerized version is used for the OpenShift Container Platform installation instead. The installation process disables the default etcd service.
The etcd package is slated to be removed from RHEL Atomic Host in the future.
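For illustration, containerized services are managed with the usual systemctl commands; the unit names below are typical for OpenShift Container Platform 3.x node and etcd services, but verify the exact names on your hosts:
# systemctl status atomic-openshift-node
# systemctl restart atomic-openshift-node
# systemctl status etcd_container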
2.4.5. File Paths
All OpenShift Container Platform configuration files are placed in the same locations during containerized installation as RPM based installations and will survive os-tree upgrades.
However, the default image stream and template files are installed at /etc/origin/examples/ for containerized installations rather than the standard /usr/share/openshift/examples/, because that directory is read-only on RHEL Atomic Host.
2.4.6. Storage Requirements
RHEL Atomic Host installations normally have a very small root file system. However, the etcd, master, and node containers persist data in the /var/lib/ directory. Ensure that you have enough space on the root file system before installing OpenShift Container Platform. See the System Requirements section for details.
2.4.7. Open vSwitch SDN Initialization
OpenShift SDN initialization requires that the Docker bridge be reconfigured and that Docker is restarted. This complicates the situation when the node is running within a container. When using the Open vSwitch (OVS) SDN, you will see the node start, reconfigure Docker, restart Docker (which restarts all containers), and finally start successfully.
In this case, the node service may fail to start and be restarted a few times, because the master services are also restarted along with Docker. The current implementation uses a workaround which relies on setting the Restart=always
parameter in the Docker based systemd units.
2.5. Quick Installation
2.5.1. Overview
As of OpenShift Container Platform 3.9, the quick installation method is deprecated. In a future release, it will be removed completely. In addition, using the quick installer to upgrade from version 3.7 to 3.9 is not supported. The advanced installation method will continue to be supported for new installations and cluster upgrades.
The quick installation method allows you to use an interactive CLI utility, the atomic-openshift-installer
command, to install OpenShift Container Platform across a set of hosts. This installer can deploy OpenShift Container Platform components on targeted hosts by either installing RPMs or running containerized services.
While RHEL Atomic Host is supported for running containerized OpenShift Container Platform services, the installer is provided by an RPM and not available by default in RHEL Atomic Host. Therefore, it must be run from a Red Hat Enterprise Linux 7 system. The host initiating the installation does not need to be intended for inclusion in the OpenShift Container Platform cluster, but it can be.
This installation method is provided to make the installation experience easier by interactively gathering the data needed to run on each host. The installer is a self-contained wrapper intended for usage on a Red Hat Enterprise Linux (RHEL) 7 system.
In addition to running interactive installations from scratch, the atomic-openshift-installer
command can also be run or re-run using a predefined installation configuration file. This file can be used with the installer to:
- run an unattended installation,
- add nodes to an existing cluster,
- upgrade your cluster, or
- reinstall the OpenShift Container Platform cluster completely.
To install OpenShift Container Platform as a stand-alone registry, see Installing a Stand-alone Registry.
2.5.2. Before You Begin
The installer allows you to install OpenShift Container Platform master and node components on a defined set of hosts.
By default, any hosts you designate as masters during the installation process are automatically also configured as nodes so that the masters are configured as part of the OpenShift Container Platform SDN.
See the OpenShift Container Platform 3.9 Release Notes for information on related notable technical changes.
Before installing OpenShift Container Platform, you must first satisfy the prerequisites on your hosts, which includes verifying system and environment requirements and properly installing and configuring Docker. You must also be prepared to provide or validate the following information for each of your targeted hosts during the course of the installation:
- User name on the target host that should run the Ansible-based installation (can be root or non-root)
- Host name
- Whether to install components for master, node, or both
- Whether to use the RPM or containerized method
- Internal and external IP addresses
If you are installing OpenShift Container Platform using the containerized method (optional for RHEL but required for RHEL Atomic Host), see the Installing on Containerized Hosts topic to ensure that you understand the differences between these methods, then return to this topic to continue.
After following the instructions in the Prerequisites topic and deciding between the RPM and containerized methods, you can continue to running an interactive or unattended installation.
2.5.3. Running an Interactive Installation
Ensure you have read through Before You Begin.
You can start the interactive installation by running:
$ atomic-openshift-installer install
Then follow the on-screen instructions to install a new OpenShift Container Platform cluster.
After it has finished, ensure that you back up the ~/.config/openshift/installer.cfg.yml installation configuration file that is created, as it is required if you later want to re-run the installation, add hosts to the cluster, or upgrade your cluster. Then, verify the installation.
2.5.4. Defining an Installation Configuration File
The installer can use a predefined installation configuration file, which contains information about your installation, individual hosts, and cluster. When running an interactive installation, an installation configuration file based on your answers is created for you in ~/.config/openshift/installer.cfg.yml. The file is created if you are instructed to exit the installation to manually modify the configuration or when the installation completes. You can also create the configuration file manually from scratch to perform an unattended installation.
Installation Configuration File Specification
version: v2 1
variant: openshift-enterprise 2
variant_version: 3.9 3
ansible_log_path: /tmp/ansible.log 4
deployment:
  ansible_ssh_user: root 5
  hosts: 6
  - ip: 10.0.0.1 7
    hostname: master-private.example.com 8
    public_ip: 24.222.0.1 9
    public_hostname: master.example.com 10
    roles: 11
      - master
      - node
    containerized: true 12
    connect_to: 24.222.0.1 13
  - ip: 10.0.0.2
    hostname: node1-private.example.com
    public_ip: 24.222.0.2
    public_hostname: node1.example.com
    node_labels: {'region': 'infra'} 14
    roles:
      - node
    connect_to: 10.0.0.2
  - ip: 10.0.0.3
    hostname: node2-private.example.com
    public_ip: 24.222.0.3
    public_hostname: node2.example.com
    roles:
      - node
    connect_to: 10.0.0.3
  roles: 15
    master:
      <variable_name1>: "<value1>" 16
      <variable_name2>: "<value2>"
    node:
      <variable_name1>: "<value1>" 17
- 1
- The version of this installation configuration file. As of OpenShift Container Platform 3.3, the only valid version here is v2.
- 2
- The OpenShift Container Platform variant to install. For OpenShift Container Platform, set this to openshift-enterprise.
- 3
- A valid version of your selected variant: 3.9, 3.7, 3.6, 3.5, 3.4, 3.3, 3.2, or 3.1. If not specified, this defaults to the latest version for the specified variant.
- 4
- Defines where the Ansible logs are stored. By default, this is the /tmp/ansible.log file.
- 5
- Defines which user Ansible uses to SSH in to remote systems for gathering facts and for the installation. By default, this is the root user, but you can set it to any user that has sudo privileges.
- 6
- Defines a list of the hosts onto which you want to install the OpenShift Container Platform master and node components.
- 7 8
- Required. Allows the installer to connect to the system and gather facts before proceeding with the install.
- 9 10
- Required for unattended installations. If these details are not specified, then this information is pulled from the facts gathered by the installer, and you are asked to confirm the details. If undefined for an unattended installation, the installation fails.
- 11
- Determines the type of services that are installed. Specified as a list.
- 12
- If set to true, containerized OpenShift Container Platform services are run on target master and node hosts instead of installed using RPM packages. If set to false or unset, the default RPM method is used. RHEL Atomic Host requires the containerized method, and is automatically selected for you based on the detection of the /run/ostree-booted file. See Installing on Containerized Hosts for more details.
- 13
- The IP address that Ansible attempts to connect to when installing, upgrading, or uninstalling the systems. If the configuration file was auto-generated, then this is the value you first enter for the host during that interactive install process.
- 14
- Node labels can optionally be set per-host.
- 15
- Defines a dictionary of roles across the deployment.
- 16 17
- Any ansible variables that should only be applied to hosts assigned a role can be defined. For examples, see Configuring Ansible.
2.5.5. Running an Unattended Installation
Ensure you have read through Before You Begin.
Unattended installations allow you to define your hosts and cluster configuration in an installation configuration file before running the installer so that you do not have to go through all of the interactive installation questions and answers. It also allows you to resume an interactive installation you may have left unfinished, and quickly get back to where you left off.
To run an unattended installation, first define an installation configuration file at ~/.config/openshift/installer.cfg.yml. Then, run the installer with the -u
flag:
$ atomic-openshift-installer -u install
By default in interactive or unattended mode, the installer uses the configuration file located at ~/.config/openshift/installer.cfg.yml if the file exists. If it does not exist, attempting to start an unattended installation fails.
Alternatively, you can specify a different location for the configuration file using the -c
option, but doing so will require you to specify the file location every time you run the installation:
$ atomic-openshift-installer -u -c </path/to/file> install
After the unattended installation finishes, ensure that you back up the ~/.config/openshift/installer.cfg.yml file that was used, as it is required if you later want to re-run the installation, add hosts to the cluster, or upgrade your cluster. Then, verify the installation.
2.5.6. Verifying the Installation
Verify that the master is started and nodes are registered and reporting in Ready status. On the master host, run the following as root:
# oc get nodes
NAME                 STATUS    ROLES     AGE       VERSION
master.example.com   Ready     master    7h        v1.9.1+a0ce1bc657
node1.example.com    Ready     compute   7h        v1.9.1+a0ce1bc657
node2.example.com    Ready     compute   7h        v1.9.1+a0ce1bc657
To verify that the web console is installed correctly, use the master host name and the web console port number to access the web console with a web browser.
For example, for a master host with a host name of master.openshift.com and using the default port of 8443, the web console would be found at https://master.openshift.com:8443/console.
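As an additional command-line check, you can query the master's health endpoint; this sketch assumes the example host name and default port above (the -k flag skips certificate verification for the default self-signed certificate):
$ curl -k https://master.openshift.com:8443/healthz
ok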
Then, see What’s Next for the next steps on configuring your OpenShift Container Platform cluster.
2.5.7. Uninstalling OpenShift Container Platform
You can uninstall OpenShift Container Platform from all hosts in your cluster using the installer’s uninstall
command. By default, the installer uses the installation configuration file located at ~/.config/openshift/installer.cfg.yml if the file exists:
$ atomic-openshift-installer uninstall
Alternatively, you can specify a different location for the configuration file using the -c
option:
$ atomic-openshift-installer -c </path/to/file> uninstall
See the advanced installation method for more options.
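For reference, the advanced installation method performs the equivalent uninstall with the uninstall playbook shipped with openshift-ansible; the playbook path below is the usual location for an RPM installation of the installer and may differ in your environment:
# ansible-playbook -i /path/to/inventory /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml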
2.5.8. What’s Next?
Now that you have a working OpenShift Container Platform instance, you can:
- Configure authentication; by default, authentication is set to Deny All.
- Configure the automatically-deployed integrated Docker registry.
- Configure the automatically-deployed router.
2.6. Advanced Installation
2.6.1. Overview
A reference configuration implemented using Ansible playbooks is available as the advanced installation method for installing an OpenShift Container Platform cluster. Familiarity with Ansible is assumed; however, you can use this configuration as a reference to create your own implementation using the configuration management tool of your choosing.
While RHEL Atomic Host is supported for running containerized OpenShift Container Platform services, the advanced installation method utilizes Ansible, which is not available in RHEL Atomic Host. The RPM-based installer must therefore be run from a RHEL 7 system. The host initiating the installation does not need to be intended for inclusion in the OpenShift Container Platform cluster, but it can be. Alternatively, a containerized version of the installer is available as a system container, which can be run from a RHEL Atomic Host system.
To install OpenShift Container Platform as a stand-alone registry, see Installing a Stand-alone Registry.
Running Ansible playbooks with the --tags
or --check
options is not supported by Red Hat.
2.6.2. Before You Begin
Before installing OpenShift Container Platform, you must first see the Prerequisites and Host Preparation topics to prepare your hosts. This includes verifying system and environment requirements per component type and properly installing and configuring Docker. It also includes installing Ansible version 2.4, as the advanced installation method is based on Ansible playbooks and as such requires directly invoking Ansible.
If you are interested in installing OpenShift Container Platform using the containerized method (optional for RHEL but required for RHEL Atomic Host), see Installing on Containerized Hosts to ensure that you understand the differences between these methods, then return to this topic to continue.
For large-scale installs, including suggestions for optimizing install time, see the Scaling and Performance Guide.
After following the instructions in the Prerequisites topic and deciding between the RPM and containerized methods, you can continue in this topic to Configuring Ansible Inventory Files.
2.6.3. Configuring Ansible Inventory Files
The /etc/ansible/hosts file is Ansible’s inventory file for the playbook used to install OpenShift Container Platform. The inventory file describes the configuration for your OpenShift Container Platform cluster. You must replace the default contents of the file with your desired configuration.
The following sections describe commonly-used variables to set in your inventory file during an advanced installation, followed by example inventory files you can use as a starting point for your installation.
Many of the Ansible variables described are optional. Accepting the default values should suffice for development environments, but for production environments, it is recommended you read through and become familiar with the various options available.
The example inventories describe various environment topologies, including using multiple masters for high availability. You can choose an example that matches your requirements, modify it to match your own environment, and use it as your inventory file when running the advanced installation.
Image Version Policy
Images require a version number policy in order to maintain updates. See the Image Version Tag Policy section in the Architecture Guide for more information.
2.6.3.1. Configuring Cluster Variables
To assign environment variables during the Ansible install that apply more globally to your OpenShift Container Platform cluster overall, indicate the desired variables in the /etc/ansible/hosts file on separate, single lines within the [OSEv3:vars] section. For example:
[OSEv3:vars]
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_default_subdomain=apps.test.example.com
If a parameter value in the Ansible inventory file contains special characters, such as #, { or }, you must double-escape the value (that is, enclose the value in both single and double quotation marks). For example, to use mypasswordwith###hashsigns as a value for the variable openshift_cloudprovider_openstack_password, declare it as openshift_cloudprovider_openstack_password='"mypasswordwith###hashsigns"' in the Ansible host inventory file.
The following tables describe variables for use with the Ansible installer that can be assigned cluster-wide:
Variable | Purpose |
---|---|
|
This variable sets the SSH user for the installer to use and defaults to |
|
If |
|
This variable sets which INFO messages are logged to the
For more information on debug log levels, see Configuring Logging Levels. |
|
If set to |
|
Whether to enable Network Time Protocol (NTP) on cluster nodes. Important: To prevent masters and nodes in the cluster from going out of sync, do not change the default value of this parameter. |
| This variable sets the parameter and arbitrary JSON values as per the requirement in your inventory hosts file. For example: openshift_master_admission_plugin_config={"ClusterResourceOverride":{"configuration":{"apiVersion":"v1","kind":"ClusterResourceOverrideConfig","memoryRequestToLimitPercent":"25","cpuRequestToLimitPercent":"25","limitCPUToMemoryPercent":"200"}}} |
| This variable enables API service auditing. See Audit Configuration for more information. |
| This variable overrides the host name for the cluster, which defaults to the host name of the master. |
| This variable overrides the public host name for the cluster, which defaults to the host name of the master. If you use an external load balancer, specify the address of the external load balancer. For example: openshift_master_cluster_public_hostname=openshift-ansible.public.example.com |
|
Optional. This variable defines the HA method when deploying multiple masters. Supports the |
|
This variable enables rolling restarts of HA masters (i.e., masters are taken down one at a time) when running the upgrade playbook directly. It defaults to |
| This variable sets the identity provider. The default value is Deny All. If you use a supported identity provider, configure OpenShift Container Platform to use it. |
| These variables are used to configure custom certificates which are deployed as part of the installation. See Configuring Custom Certificates for more information. |
| |
| Provide the location of the custom certificates for the hosted router. |
|
Validity of the auto-generated registry certificate in days. Defaults to |
|
Validity of the auto-generated CA certificate in days. Defaults to |
|
Validity of the auto-generated node certificate in days. Defaults to |
|
Validity of the auto-generated master certificate in days. Defaults to |
|
Validity of the auto-generated external etcd certificates in days. Controls validity for etcd CA, peer, server and client certificates. Defaults to |
|
Set to |
| These variables override defaults for session options in the OAuth configuration. See Configuring Session Options for more information. |
| |
| |
| |
|
This variable configures |
|
Sets |
| Default node selector for automatically deploying router pods. See Configuring Node Host Labels for details. |
| Default node selector for automatically deploying registry pods. See Configuring Node Host Labels for details. |
| This variable enables the template service broker by specifying one or more namespaces whose templates will be served by the broker. |
|
Default node selector for automatically deploying Ansible service broker pods, defaults |
|
Default node selector for automatically deploying template service broker pods, defaults |
|
This variable overrides the node selector that projects will use by default when placing pods, which is defined by the |
|
OpenShift Container Platform adds the specified additional registry or registries to the docker configuration. These are the registries to search. If the registry requires access to a port other than For example: openshift_docker_additional_registries=example.com:443 |
|
OpenShift Container Platform adds the specified additional insecure registry or registries to the docker configuration. For any of these registries, secure sockets layer (SSL) is not verified. Also, add these registries to |
|
OpenShift Container Platform adds the specified blocked registry or registries to the docker configuration. Block the listed registries. Setting this to |
|
This variable sets the host name for integration with the metrics console by overriding |
| This variable is a cluster identifier unique to the AWS Availability Zone. Using this avoids potential issues in Amazon Web Service (AWS) with multiple zones or multiple clusters. See Labeling Clusters for AWS for details. |
| Use this variable to specify a container image tag to install or configure. |
| Use this variable to specify an RPM version to install or configure. |
If you modify the openshift_image_tag
or the openshift_pkg_version
variables after the cluster is set up, then an upgrade can be triggered, resulting in downtime.
- If openshift_image_tag is set, its value is used for all hosts in containerized environments, even those that have another version installed.
- If openshift_pkg_version is set, its value is used for all hosts in RPM-based environments, even those that have another version installed.
Variable | Purpose |
---|---|
| This variable overrides the default subdomain to use for exposed routes. |
|
This variable configures which OpenShift SDN plug-in to use for the pod network, which defaults to |
|
This variable overrides the SDN cluster network CIDR block. This is the network from which pod IPs are assigned. This network block should be a private block and must not conflict with existing network blocks in your infrastructure to which pods, nodes, or the master may require access. Defaults to |
|
This variable configures the subnet in which services will be created within the OpenShift Container Platform SDN. This network block should be private and must not conflict with any existing network blocks in your infrastructure to which pods, nodes, or the master may require access to, or the installation will fail. Defaults to |
|
This variable specifies the size of the per host subnet allocated for pod IPs by OpenShift Container Platform SDN. Defaults to |
|
This variable specifies the service proxy mode to use: either |
|
This variable enables flannel as an alternative networking layer instead of the default SDN. If enabling flannel, disable the default SDN with the |
|
Set to |
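As a hedged example that combines several of the cluster-wide settings described above, the snippet below uses variable names confirmed elsewhere in this guide (openshift_master_cluster_public_hostname, openshift_master_default_subdomain, openshift_portal_net); osm_cluster_network_cidr and os_sdn_network_plugin_name are the names commonly used for the SDN CIDR and plug-in settings in OpenShift Container Platform 3.x inventories, so verify them against your version's documentation:
[OSEv3:vars]
openshift_master_cluster_public_hostname=openshift-ansible.public.example.com
openshift_master_default_subdomain=apps.test.example.com
openshift_portal_net=172.30.0.0/16
osm_cluster_network_cidr=10.128.0.0/14
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'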
2.6.3.2. Configuring Deployment Type
Various defaults used throughout the playbooks and roles used by the installer are based on the deployment type configuration (usually defined in an Ansible inventory file).
Ensure the openshift_deployment_type
parameter in your inventory file’s [OSEv3:vars]
section is set to openshift-enterprise
to install the OpenShift Container Platform variant:
[OSEv3:vars]
openshift_deployment_type=openshift-enterprise
2.6.3.3. Configuring Host Variables
To assign environment variables to hosts during the Ansible installation, indicate the desired variables in the /etc/ansible/hosts file after the host entry in the [masters] or [nodes] sections. For example:
[masters]
ec2-52-6-179-239.compute-1.amazonaws.com openshift_public_hostname=ose3-master.public.example.com
The following table describes variables for use with the Ansible installer that can be assigned to individual host entries:
Variable | Purpose |
---|---|
| This variable overrides the internal cluster host name for the system. Use this when the system’s default IP address does not resolve to the system host name. |
| This variable overrides the system’s public host name. Use this for cloud installations, or for hosts on networks using a network address translation (NAT). |
|
This variable overrides the cluster internal IP address for the system. Use this when using an interface that is not configured with the default route. |
| This variable overrides the system’s public IP address. Use this for cloud installations, or for hosts on networks using a network address translation (NAT). |
| If set to true, containerized OpenShift Container Platform services are run on the target master and node hosts instead of installed using RPM packages. If set to false or unset, the default RPM method is used. RHEL Atomic Host requires the containerized method, and is automatically selected for you based on the detection of the /run/ostree-booted file. See Installing on Containerized Hosts for more details. Containerized installations are supported starting in OpenShift Container Platform 3.1.1. |
| This variable adds labels to nodes during installation. See Configuring Node Host Labels for more details. |
|
This variable is used to configure |
|
This variable configures additional
To configure the log file, edit the OPTIONS='--log-driver json-file --insecure-registry=172.30.0.0/16 --selinux-enabled --log-opt max-size=1M --log-opt max-file=3'
Do not use when running |
| This variable configures whether the host is marked as a schedulable node, meaning that it is available for placement of new pods. See Configuring Schedulability on Masters. |
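As an illustration, host-level variables are appended directly to the host entry; openshift_ip and openshift_public_ip are the names commonly used for the internal and public IP overrides described above, so treat them as assumptions and verify them for your version (host names and addresses are illustrative):
[nodes]
node1.example.com openshift_ip=10.0.0.2 openshift_public_ip=24.222.0.2 openshift_public_hostname=node1.public.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"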
2.6.3.4. Configuring Project Parameters
To configure the default project settings, configure the following variables in the /etc/ansible/hosts file:
Parameter | Description | Type | Default Value |
---|---|---|---|
| The string presented to a user if they are unable to request a project via the projectrequest API endpoint. | String | null |
| The template to use for creating projects in response to a projectrequest. If you do not specify a value, the default template is used. |
String with the format | null |
|
Defines the range of MCS categories to assign to namespaces. If this value is changed after startup, new projects might receive labels that are already allocated to other projects. The prefix can be any valid SELinux set of terms, including user, role, and type. However, leaving the prefix at its default allows the server to set them automatically. For example, |
String with the format |
|
| Defines the number of labels to reserve per project. | Integer |
|
|
Defines the total set of Unix user IDs (UIDs) automatically allocated to projects and the size of the block that each namespace gets. For example, |
String in the format |
|
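For illustration only, the following sketch uses the osm_* parameter names commonly associated with these project settings in OpenShift Container Platform 3.x inventories; treat them as assumptions and verify them against your version's documentation:
[OSEv3:vars]
osm_project_request_message='To request a project, contact your system administrator.'
osm_project_request_template='default/project-request'
osm_mcs_labels_per_project=5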
2.6.3.5. Configuring Master API Port
To configure the default ports used by the master API, configure the following variables in the /etc/ansible/hosts file:
Variable | Purpose |
---|---|
| This variable sets the port number to access the OpenShift Container Platform API. |
For example:
openshift_master_api_port=3443
The web console port setting (openshift_master_console_port
) must match the API server port (openshift_master_api_port
).
2.6.3.6. Configuring Cluster Pre-install Checks
Pre-install checks are a set of diagnostic tasks that run as part of the openshift_health_checker Ansible role. They run prior to an Ansible installation of OpenShift Container Platform, ensure that required inventory values are set, and identify potential issues on a host that can prevent or interfere with a successful installation.
The following table describes available pre-install checks that will run before every Ansible installation of OpenShift Container Platform:
Check Name | Purpose |
---|---|
|
This check ensures that a host has the recommended amount of memory for the specific deployment of OpenShift Container Platform. Default values have been derived from the latest installation documentation. A user-defined value for minimum memory requirements may be set by setting the |
|
This check only runs on etcd, master, and node hosts. It ensures that the mount path for an OpenShift Container Platform installation has sufficient disk space remaining. Recommended disk values are taken from the latest installation documentation. A user-defined value for minimum disk space requirements may be set by setting |
|
Only runs on hosts that depend on the docker daemon (nodes and containerized installations). Checks that docker's total usage does not exceed a user-defined limit. If no user-defined limit is set, docker's maximum usage threshold defaults to 90% of the total size available. The threshold limit for total percent usage can be set with a variable in your inventory file: |
|
Ensures that the docker daemon is using a storage driver supported by OpenShift Container Platform. If the |
| Attempts to ensure that images required by an OpenShift Container Platform installation are available either locally or in at least one of the configured container image registries on the host machine. |
|
Specifies the generic release of OpenShift Container Platform for containerized installations. For RPM installations, set a |
|
Runs on |
| Runs prior to non-containerized installations of OpenShift Container Platform. Ensures that RPM packages required for the current installation are available. |
|
Checks whether a |
To disable specific pre-install checks, include the variable openshift_disable_check
with a comma-delimited list of check names in your inventory file. For example:
openshift_disable_check=memory_availability,disk_availability
A similar set of health checks meant to run for diagnostics on existing clusters can be found in Ansible-based Health Checks. Another set of checks for checking certificate expiration can be found in Redeploying Certificates.
2.6.3.7. Configuring System Containers
System containers provide a way to containerize services that need to run before the docker daemon is running. They are Docker-formatted containers that use the host's atomic and systemd tooling rather than the docker daemon.
System containers are therefore stored and run outside of the traditional docker
service. For more details on system container technology, see Running System Containers in the Red Hat Enterprise Linux Atomic Host: Managing Containers documentation.
You can configure your OpenShift Container Platform installation to run certain components as system containers instead of their RPM or standard containerized methods. Currently, the docker
and etcd components can be run as system containers in OpenShift Container Platform.
System containers are currently OS-specific because they require specific versions of atomic
and systemd. For example, different system containers are created for RHEL, Fedora, or CentOS. Ensure that the system containers you are using match the OS of the host they will run on. OpenShift Container Platform only supports RHEL and RHEL Atomic as the host OS, so by default system containers built for RHEL are used.
2.6.3.7.1. Running Docker as a System Container
The traditional method for using docker
in an OpenShift Container Platform cluster is an RPM package installation. For Red Hat Enterprise Linux (RHEL) systems, it must be specifically installed; for RHEL Atomic Host systems, it is provided by default.
However, you can configure your OpenShift Container Platform installation to alternatively run docker
on node hosts as a system container. When using the system container method, the container-engine
container image and systemd service is used on the host instead of the docker
package and service.
To run docker
as a system container:
Because the default storage back end for Docker on RHEL 7 is a thin pool on loopback devices, for any RHEL systems you must still configure a thin pool logical volume for
docker
to use before running the OpenShift Container Platform installation. You can skip these steps for any RHEL Atomic Host systems.For any RHEL systems, perform the steps described in the following sections:
After completing the storage configuration steps, you can leave the RPM installed.
Set the following cluster variable to True in your inventory file in the [OSEv3:vars] section:
openshift_docker_use_system_container=True
When using the system container method, the following inventory variables for docker
are ignored:
- docker_version
- docker_upgrade
Further, the following inventory variable must not be used:
- openshift_docker_options
You can also force docker in the system container to use a specific container registry and repository when pulling the container-engine image, instead of pulling from the default registry.access.redhat.com/openshift3/. To do so, set the following cluster variable in your inventory file in the [OSEv3:vars] section:
openshift_docker_systemcontainer_image_override="<registry>/<user>/<image>:<tag>"
2.6.3.7.2. Running etcd as a System Container
When using the RPM-based installation method for OpenShift Container Platform, etcd is installed using RPM packages on any RHEL systems. When using the containerized installation method, the rhel7/etcd
image is used instead for RHEL or RHEL Atomic Hosts.
However, you can configure your OpenShift Container Platform installation to alternatively run etcd as a system container. Whereas the standard containerized method uses a systemd service named etcd_container
, the system container method uses the service name etcd, the same as the RPM-based method. The data directory for etcd using this method is /var/lib/etcd.
To run etcd as a system container, set the following cluster variable in your inventory file in the [OSEv3:vars]
section:
openshift_use_etcd_system_container=True
2.6.3.8. Configuring a Registry Location
If you are using an image registry other than the default at registry.access.redhat.com
, specify the desired registry within the /etc/ansible/hosts file.
oreg_url=example.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true
Variable | Purpose |
---|---|
|
Set to the alternate image location. Necessary if you are not using the default registry at |
|
Set to |
|
Specify the additional registry or registries. If the registry required to access the registry is other than |
|
Specify the URL and path to namespace where the registry-console image is located. Note that the value for this must end in |
| Specify the prefix for the web console images. |
| Specify the prefix for the service catalog component image. |
| Specify the prefix for the ansible service broker component image. |
| Specify the prefix for the template service broker component image. |
| A setting used if you are using CRI-O with an alternative CRI-O system container image from another registry. |
For example:
oreg_url=example.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true
openshift_docker_additional_registries=example.com:443
openshift_crio_systemcontainer_image_override=<registry>/<repo>/<image>:<tag>
openshift_cockpit_deployer_prefix='registry.example.com/openshift3/'
openshift_web_console_prefix='registry.example.com/openshift3/ose-'
openshift_service_catalog_image_prefix='registry.example.com/openshift3/ose-'
ansible_service_broker_image_prefix='registry.example.com/openshift3/ose-'
template_service_broker_prefix='registry.example.com/openshift3/ose-'
2.6.3.9. Configuring a Registry Route
To allow users to push and pull images to the internal Docker registry from outside of the OpenShift Container Platform cluster, configure the registry route in the /etc/ansible/hosts file. By default, the registry route is docker-registry-default.router.default.svc.cluster.local.
Variable | Purpose |
---|---|
|
Set to the value of the desired registry route. The route contains either a name that resolves to an infrastructure node where a router manages communication or the subdomain that you set as the default application subdomain wildcard value. For example, if you set the |
| Set the paths to the registry certificates. If you do not provide values for the certificate locations, certificates are generated. You can define locations for the following certificates:
|
| Set to one of the following values:
|
For example:
openshift_hosted_registry_routehost=<path>
openshift_hosted_registry_routetermination=reencrypt
openshift_hosted_registry_routecertificates= "{'certfile': '<path>/org-cert.pem', 'keyfile': '<path>/org-privkey.pem', 'cafile': '<path>/org-chain.pem'}"
2.6.3.10. Configuring the Registry Console
If you are using a Cockpit registry console image other than the default or require a specific version of the console, specify the desired registry within the /etc/ansible/hosts file:
openshift_cockpit_deployer_prefix=<registry_name>/<namespace>/
openshift_cockpit_deployer_version=<cockpit_image_tag>
Variable | Purpose |
---|---|
| Specify the URL and path to the directory where the image is located. |
| Specify the Cockpit image version. |
For example: If your image is at registry.example.com/openshift3/registry-console
and you require version 3.9.3, enter:
openshift_cockpit_deployer_prefix='registry.example.com/openshift3/'
openshift_cockpit_deployer_version='3.9.3'
2.6.3.11. Configuring Router Sharding
Router sharding support is enabled by supplying the correct data to the inventory. The variable openshift_hosted_routers
holds the data, which is in the form of a list. If no data is passed, then a default router is created. There are multiple combinations of router sharding. The following example supports routers on separate nodes:
openshift_hosted_routers=[{'name': 'router1', 'certificate': {'certfile': '/path/to/certificate/abc.crt', 'keyfile': '/path/to/certificate/abc.key', 'cafile': '/path/to/certificate/ca.crt'}, 'replicas': 1, 'serviceaccount': 'router', 'namespace': 'default', 'stats_port': 1936, 'edits': [], 'images': 'openshift3/ose-${component}:${version}', 'selector': 'type=router1', 'ports': ['80:80', '443:443']}, {'name': 'router2', 'certificate': {'certfile': '/path/to/certificate/xyz.crt', 'keyfile': '/path/to/certificate/xyz.key', 'cafile': '/path/to/certificate/ca.crt'}, 'replicas': 1, 'serviceaccount': 'router', 'namespace': 'default', 'stats_port': 1936, 'edits': [{'action': 'append', 'key': 'spec.template.spec.containers[0].env', 'value': {'name': 'ROUTE_LABELS', 'value': 'route=external'}}], 'images': 'openshift3/ose-${component}:${version}', 'selector': 'type=router2', 'ports': ['80:80', '443:443']}]
2.6.3.12. Configuring Red Hat Gluster Storage Persistent Storage
Red Hat Gluster Storage can be configured to provide persistent storage and dynamic provisioning for OpenShift Container Platform. It can be used both containerized within OpenShift Container Platform (Container-Native Storage) and non-containerized on its own nodes (Container-Ready Storage).
Additional information and examples, including the ones below, can be found at Persistent Storage Using Red Hat Gluster Storage.
2.6.3.12.1. Configuring Container-Native Storage
See Container-Native Storage Considerations for specific host preparations and prerequisites.
In your inventory file, add glusterfs in the [OSEv3:children] section to enable the [glusterfs] group:
[OSEv3:children]
masters
nodes
glusterfs
Add a [glusterfs] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:
<hostname_or_ip> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs]
node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
Add the hosts listed under [glusterfs] to the [nodes] group:
[nodes]
...
node11.example.com openshift_schedulable=True
node12.example.com openshift_schedulable=True
node13.example.com openshift_schedulable=True
2.6.3.12.2. Configuring Container-Ready Storage
In your inventory file, add glusterfs in the [OSEv3:children] section to enable the [glusterfs] group:
[OSEv3:children]
masters
nodes
glusterfs
Include the following variables in the [OSEv3:vars] section, adjusting them as needed for your configuration:
[OSEv3:vars]
...
openshift_storage_glusterfs_is_native=false
openshift_storage_glusterfs_storageclass=true
openshift_storage_glusterfs_heketi_is_native=true
openshift_storage_glusterfs_heketi_executor=ssh
openshift_storage_glusterfs_heketi_ssh_port=22
openshift_storage_glusterfs_heketi_ssh_user=root
openshift_storage_glusterfs_heketi_ssh_sudo=false
openshift_storage_glusterfs_heketi_ssh_keyfile="/root/.ssh/id_rsa"
Add a [glusterfs] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Also, set glusterfs_ip to the IP address of the node. Specifying the variable takes the form:
<hostname_or_ip> glusterfs_ip=<ip_address> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs]
gluster1.example.com glusterfs_ip=192.168.10.11 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
gluster2.example.com glusterfs_ip=192.168.10.12 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
gluster3.example.com glusterfs_ip=192.168.10.13 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
2.6.3.13. Configuring an OpenShift Container Registry
An integrated OpenShift Container Registry can be deployed using the advanced installer.
2.6.3.13.1. Configuring Registry Storage
If no registry storage options are used, the default OpenShift Container Registry is ephemeral and all data will be lost when the pod no longer exists. There are several options for enabling registry storage when using the advanced installer:
Option A: NFS Host Group
The use of NFS for registry storage is not recommended in OpenShift Container Platform.
When the following variables are set, an NFS volume is created during an advanced install with the path <nfs_directory>/<volume_name> on the host within the [nfs]
host group. For example, the volume path using these options would be /exports/registry:
[OSEv3:vars]
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
Option B: External NFS Host
The use of NFS for registry storage is not recommended in OpenShift Container Platform.
To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host. The remote volume path using the following options would be nfs.example.com:/exports/registry.
[OSEv3:vars]
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_host=nfs.example.com
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=10Gi
Option C: OpenStack Platform
An OpenStack storage configuration must already exist.
[OSEv3:vars]
openshift_hosted_registry_storage_kind=openstack
openshift_hosted_registry_storage_access_modes=['ReadWriteOnce']
openshift_hosted_registry_storage_openstack_filesystem=ext4
openshift_hosted_registry_storage_openstack_volumeID=3a650b4f-c8c5-4e0a-8ca5-eaee11f16c57
openshift_hosted_registry_storage_volume_size=10Gi
Option D: AWS or Another S3 Storage Solution
The simple storage solution (S3) bucket must already exist.
[OSEv3:vars]
#openshift_hosted_registry_storage_kind=object
#openshift_hosted_registry_storage_provider=s3
#openshift_hosted_registry_storage_s3_accesskey=access_key_id
#openshift_hosted_registry_storage_s3_secretkey=secret_access_key
#openshift_hosted_registry_storage_s3_bucket=bucket_name
#openshift_hosted_registry_storage_s3_region=bucket_region
#openshift_hosted_registry_storage_s3_chunksize=26214400
#openshift_hosted_registry_storage_s3_rootdirectory=/registry
#openshift_hosted_registry_pullthrough=true
#openshift_hosted_registry_acceptschema2=true
#openshift_hosted_registry_enforcequota=true
If you are using a different S3 service, such as Minio or ExoScale, also add the region endpoint parameter:
openshift_hosted_registry_storage_s3_regionendpoint=https://myendpoint.example.com/
Option E: Container-Native Storage
Similar to configuring Container-Native Storage, Red Hat Gluster Storage can be configured to provide storage for an OpenShift Container Registry during the initial installation of the cluster to offer redundant and reliable storage for the registry.
See Container-Native Storage Considerations for specific host preparations and prerequisites.
In your inventory file, set the following variable under [OSEv3:vars]:
[OSEv3:vars]
...
openshift_hosted_registry_storage_kind=glusterfs
Add glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs_registry] group:
[OSEv3:children]
masters
nodes
glusterfs_registry
Add a [glusterfs_registry] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:
<hostname_or_ip> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'
For example:
[glusterfs_registry]
node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
Add the hosts listed under [glusterfs_registry] to the [nodes] group:
[nodes]
...
node11.example.com openshift_schedulable=True
node12.example.com openshift_schedulable=True
node13.example.com openshift_schedulable=True
Option F: Google Cloud Storage (GCS) bucket on Google Compute Engine (GCE)
A GCS bucket must already exist.
[OSEv3:vars]
openshift_hosted_registry_storage_provider=gcs
openshift_hosted_registry_storage_gcs_bucket=bucket01
openshift_hosted_registry_storage_gcs_keyfile=test.key
openshift_hosted_registry_storage_gcs_rootdirectory=/registry
2.6.3.14. Configuring Global Proxy Options
If your hosts require use of an HTTP or HTTPS proxy in order to connect to external hosts, there are many components that must be configured to use the proxy, including masters, Docker, and builds. Node services connect only to the master API, which requires no external access, so they do not need to be configured to use a proxy.
In order to simplify this configuration, the following Ansible variables can be specified at a cluster or host level to apply these settings uniformly across your environment.
See Configuring Global Build Defaults and Overrides for more information on how the proxy environment is defined for builds.
Variable | Purpose |
---|---|
|
This variable specifies the |
|
This variable specifies the |
|
This variable is used to set the The host names that do not use the defined proxy include:
|
|
This boolean variable specifies whether or not the names of all defined OpenShift hosts and |
|
This variable defines the |
|
This variable defines the |
|
This variable defines the |
|
This variable defines the HTTP proxy used by |
|
This variable defines the HTTPS proxy used by |
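As a hedged example of cluster-wide proxy settings, the snippet below uses the variable names commonly associated with the settings described above (openshift_http_proxy, openshift_https_proxy, openshift_no_proxy, openshift_generate_no_proxy_hosts); verify the names for your version, and note that the proxy host and domain suffixes are illustrative:
[OSEv3:vars]
openshift_http_proxy=http://proxy.example.com:3128
openshift_https_proxy=http://proxy.example.com:3128
openshift_no_proxy='.internal.example.com,.svc,.cluster.local'
openshift_generate_no_proxy_hosts=True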
2.6.3.15. Configuring the Firewall
- If you are changing the default firewall, ensure that each host in your cluster is using the same firewall type to prevent inconsistencies.
- Do not use firewalld with OpenShift Container Platform installed on Atomic Host. firewalld is not supported on Atomic Host.
While iptables is the default firewall, firewalld is recommended for new installations.
OpenShift Container Platform uses iptables as the default firewall, but you can configure your cluster to use firewalld during the install process.
Because iptables is the default firewall, OpenShift Container Platform is designed to have it configured automatically. However, iptables rules can break OpenShift Container Platform if not configured correctly. The advantages of firewalld include allowing multiple objects to safely share the firewall rules.
To use firewalld as the firewall for an OpenShift Container Platform installation, add the os_firewall_use_firewalld
variable to the list of configuration variables in the Ansible host file at install:
[OSEv3:vars]
os_firewall_use_firewalld=True 1
- 1
- Setting this variable to true opens the required ports and adds rules to the default zone, ensuring that firewalld is configured correctly.
Using the firewalld default configuration comes with limited configuration options, and cannot be overridden. For example, while you can set up a storage network with interfaces in multiple zones, the interface that nodes communicate on must be in the default zone.
2.6.3.16. Configuring Schedulability on Masters
Any hosts you designate as masters during the installation process should also be configured as nodes so that the masters are configured as part of the OpenShift SDN. You must do so by adding entries for these hosts to the [nodes]
section:
[nodes]
master.example.com
In previous versions of OpenShift Container Platform, master hosts were marked as unschedulable nodes by default by the installer, meaning that new pods could not be placed on the hosts. Starting with OpenShift Container Platform 3.9, however, masters are marked schedulable automatically during installation. This change is mainly so that the web console, which used to run as part of the master itself, can instead be run as a pod deployed to the master.
If you want to change the schedulability of a host post-installation, see Marking Nodes as Unschedulable or Schedulable.
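For reference, the post-installation procedure mentioned above typically uses the oc adm manage-node command; the host name below is illustrative:
# oc adm manage-node node1.example.com --schedulable=false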
2.6.3.17. Configuring Node Host Labels
You can assign labels to node hosts during the Ansible install by configuring the /etc/ansible/hosts file. Labels are useful for determining the placement of pods onto nodes using the scheduler. Other than region=infra
(referred to as dedicated infrastructure nodes and discussed further in Configuring Dedicated Infrastructure Nodes), the actual label names and values are arbitrary and can be assigned however you see fit per your cluster’s requirements.
To assign labels to a node host during an Ansible install, use the openshift_node_labels
variable with the desired labels added to the desired node host entry in the [nodes]
section. In the following example, labels are set for a region called primary
and a zone called east
:
[nodes]
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
Starting in OpenShift Container Platform 3.9, masters are marked as schedulable nodes by default. As a result, the default node selector, which is defined in the master configuration file’s projectConfig.defaultNodeSelector field and determines which nodes projects use by default when placing pods, was previously left blank but is now set during cluster installations. It is set to node-role.kubernetes.io/compute=true unless overridden using the osm_default_node_selector Ansible variable.
In addition, whether osm_default_node_selector
is set or not, the following automatic labeling occurs for hosts defined in your inventory file during installation:
- Non-master, non-dedicated infrastructure node hosts (for example, the node1.example.com host shown above) are labeled with node-role.kubernetes.io/compute=true.
- Master nodes are labeled node-role.kubernetes.io/master=true.
This ensures that the default node selector has available nodes to choose from when determining pod placement.
If you accept the default node selector of node-role.kubernetes.io/compute=true during installation, ensure that the non-master nodes defined in your cluster are not all dedicated infrastructure nodes. In that scenario, application pods would fail to deploy because no nodes with the node-role.kubernetes.io/compute=true label would be available to match the default node selector when scheduling pods for projects.
See Setting the Cluster-wide Default Node Selector for steps on adjusting this setting post-installation if needed.
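For example, to override the default node selector at installation time, set the osm_default_node_selector variable mentioned above in the [OSEv3:vars] section of your inventory file (the label value here is illustrative):
[OSEv3:vars]
osm_default_node_selector='region=primary'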
2.6.3.17.1. Configuring Dedicated Infrastructure Nodes
It is recommended for production environments that you maintain dedicated infrastructure nodes where the registry and router pods can run separately from pods used for user applications.
The openshift_router_selector
and openshift_registry_selector
Ansible settings determine the label selectors used when placing registry and router pods. They are set to region=infra
by default:
# default selectors for router and registry services
# openshift_router_selector='region=infra'
# openshift_registry_selector='region=infra'
The registry and router are only able to run on node hosts with the region=infra
label, which are then considered dedicated infrastructure nodes. Ensure that at least one node host in your OpenShift Container Platform environment has the region=infra
label. For example:
[nodes]
infra-node1.example.com openshift_node_labels="{'region': 'infra','zone': 'default'}"
If there is not a node in the [nodes] section that matches the selector settings, the default router and registry deployments will fail and remain in Pending status.
If you do not intend to use OpenShift Container Platform to manage the registry and router, configure the following Ansible settings:
openshift_hosted_manage_registry=false
openshift_hosted_manage_router=false
If you are using an image registry other than the default registry.access.redhat.com
, you need to specify the desired registry in the /etc/ansible/hosts file.
As described in Configuring Schedulability on Masters, master hosts are marked schedulable by default. If you label a master host with region=infra
and have no other dedicated infrastructure nodes, the master hosts must also be marked as schedulable. Otherwise, the registry and router pods cannot be placed anywhere:
[nodes]
master.example.com openshift_node_labels="{'region': 'infra','zone': 'default'}" openshift_schedulable=true
2.6.3.18. Configuring Session Options
Session options in the OAuth configuration are configurable in the inventory file. By default, Ansible populates a sessionSecretsFile with generated authentication and encryption secrets so that sessions generated by one master can be decoded by the others. The default location is /etc/origin/master/session-secrets.yaml, and this file is only re-created if it is deleted on all masters.
You can set the session name and maximum number of seconds with openshift_master_session_name and openshift_master_session_max_seconds:
openshift_master_session_name=ssn
openshift_master_session_max_seconds=3600
If provided, openshift_master_session_auth_secrets and openshift_master_session_encryption_secrets must be of equal length.
For openshift_master_session_auth_secrets, which is used to authenticate sessions using HMAC, it is recommended to use secrets that are 32 or 64 bytes long:
openshift_master_session_auth_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
For openshift_master_session_encryption_secrets, which is used to encrypt sessions, secrets must be 16, 24, or 32 characters long to select AES-128, AES-192, or AES-256:
openshift_master_session_encryption_secrets=['DONT+USE+THIS+SECRET+b4NV+pmZNSO']
2.6.3.19. Configuring Custom Certificates
Custom serving certificates for the public host names of the OpenShift Container Platform API and web console can be deployed during an advanced installation and are configurable in the inventory file.
Custom certificates should be configured only for the host name associated with the publicMasterURL, which can be set using openshift_master_cluster_public_hostname. Using a custom serving certificate for the host name associated with the masterURL (openshift_master_cluster_hostname) results in TLS errors, because infrastructure components attempt to contact the master API using the internal masterURL host.
Certificate and key file paths can be configured using the openshift_master_named_certificates cluster variable:
openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "cafile": "/path/to/custom-ca1.crt"}]
File paths must be local to the system where Ansible will be run. Certificates are copied to master hosts and are deployed within the /etc/origin/master/named_certificates/ directory.
Ansible detects a certificate's Common Name and Subject Alternative Names. Detected names can be overridden by providing the "names" key when setting openshift_master_named_certificates:
openshift_master_named_certificates=[{"certfile": "/path/to/custom1.crt", "keyfile": "/path/to/custom1.key", "names": ["public-master-host.com"], "cafile": "/path/to/custom-ca1.crt"}]
Certificates configured using openshift_master_named_certificates are cached on masters, meaning that each additional Ansible run with a different set of certificates results in all previously deployed certificates remaining in place on master hosts and within the master configuration file.
If you want openshift_master_named_certificates to be overwritten with the provided value (or no value), specify the openshift_master_overwrite_named_certificates cluster variable:
openshift_master_overwrite_named_certificates=true
For a more complete example, consider the following cluster variables in an inventory file:
openshift_master_cluster_method=native
openshift_master_cluster_hostname=lb-internal.openshift.com
openshift_master_cluster_public_hostname=custom.openshift.com
To overwrite the certificates on a subsequent Ansible run, you could set the following:
openshift_master_named_certificates=[{"certfile": "/root/STAR.openshift.com.crt", "keyfile": "/root/STAR.openshift.com.key", "names": ["custom.openshift.com"]}]
openshift_master_overwrite_named_certificates=true
2.6.3.20. Configuring Certificate Validity
By default, the certificates that govern etcd, the master, and the kubelet expire after two to five years. The validity (length in days until expiration) of the auto-generated registry, CA, node, and master certificates can be configured during installation using the following variables (default values shown):
[OSEv3:vars]

openshift_hosted_registry_cert_expire_days=730
openshift_ca_cert_expire_days=1825
openshift_node_cert_expire_days=730
openshift_master_cert_expire_days=730
etcd_ca_default_days=1825
These values are also used when redeploying certificates via Ansible post-installation.
2.6.3.21. Configuring Cluster Metrics
Cluster metrics are not set to automatically deploy. Set the following to enable cluster metrics when using the advanced installation method:
[OSEv3:vars]

openshift_metrics_install_metrics=true
The metrics public URL can be set during cluster installation using the openshift_metrics_hawkular_hostname Ansible variable, which defaults to:
https://hawkular-metrics.{{openshift_master_default_subdomain}}/hawkular/metrics
If you alter this variable, ensure the host name is accessible via your router.
openshift_metrics_hawkular_hostname=hawkular-metrics.{{openshift_master_default_subdomain}}
In accordance with upstream Kubernetes rules, metrics can be collected only on the default interface of eth0.
You must set an openshift_master_default_subdomain value to deploy metrics.
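Putting these settings together, a minimal inventory fragment for enabling metrics (the subdomain shown is illustrative) might look like:
[OSEv3:vars]
openshift_master_default_subdomain=apps.example.com
openshift_metrics_install_metrics=true
openshift_metrics_hawkular_hostname=hawkular-metrics.apps.example.com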
2.6.3.21.1. Configuring Metrics Storage
The openshift_metrics_cassandra_storage_type variable must be set in order to use persistent storage for metrics. If openshift_metrics_cassandra_storage_type is not set, then cluster metrics data is stored in an emptyDir volume, which will be deleted when the Cassandra pod terminates.
There are three options for enabling cluster metrics storage when using the advanced install:
Option A: Dynamic
If your OpenShift Container Platform environment supports dynamic volume provisioning for your cloud provider, use the following variable:
[OSEv3:vars]

openshift_metrics_cassandra_storage_type=dynamic
If there are multiple default dynamically provisioned volume types, such as gluster-storage and glusterfs-storage-block, you can specify the provisioned volume type by variable. For example, openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-storage-block.
Check Volume Configuration for more information on using DynamicProvisioningEnabled to enable or disable dynamic provisioning.
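For example, a sketch of the dynamic option with an explicit storage class follows; the openshift_metrics_cassandra_pvc_size variable is shown only for illustration, so confirm the exact PVC size variable for your openshift-ansible version:
[OSEv3:vars]
openshift_metrics_cassandra_storage_type=dynamic
openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-storage-block
# Illustrative; verify the PVC size variable name for your installer version
openshift_metrics_cassandra_pvc_size=10Gi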
Option B: NFS Host Group
The use of NFS for metrics storage is not recommended in OpenShift Container Platform.
When the following variables are set, an NFS volume is created during an advanced install with the path <nfs_directory>/<volume_name> on the host within the [nfs] host group. For example, the volume path using these options would be /exports/metrics:
[OSEv3:vars]

openshift_metrics_storage_kind=nfs
openshift_metrics_storage_access_modes=['ReadWriteOnce']
openshift_metrics_storage_nfs_directory=/exports
openshift_metrics_storage_nfs_options='*(rw,root_squash)'
openshift_metrics_storage_volume_name=metrics
openshift_metrics_storage_volume_size=10Gi
Option C: External NFS Host
The use of NFS for metrics storage is not recommended in OpenShift Container Platform.
To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host.
[OSEv3:vars]

openshift_metrics_storage_kind=nfs
openshift_metrics_storage_access_modes=['ReadWriteOnce']
openshift_metrics_storage_host=nfs.example.com
openshift_metrics_storage_nfs_directory=/exports
openshift_metrics_storage_volume_name=metrics
openshift_metrics_storage_volume_size=10Gi
The remote volume path using the following options would be nfs.example.com:/exports/metrics.
Upgrading or Installing OpenShift Container Platform with NFS
During testing, Red Hat has seen issues with NFS (on RHEL) when it is used as a storage back end for the registry. Because of this, we do not recommend NFS (on RHEL) as a storage back end for the registry.
Other NFS implementations in the marketplace might not have the issues that Red Hat testing found. Please contact the individual NFS implementation vendor for more information on any testing they may have performed.
2.6.3.22. Configuring Cluster Logging
Cluster logging is not set to automatically deploy by default. Set the following to enable cluster logging when using the advanced installation method:
[OSEv3:vars] openshift_logging_install_logging=true
2.6.3.22.1. Configuring Logging Storage
The openshift_logging_es_pvc_dynamic variable must be set in order to use persistent storage for logging. If openshift_logging_es_pvc_dynamic is not set, then cluster logging data is stored in an emptyDir volume, which will be deleted when the Elasticsearch pod terminates.
There are three options for enabling cluster logging storage when using the advanced install:
Option A: Dynamic
If your OpenShift Container Platform environment supports dynamic volume provisioning for your cloud provider, use the following variable:
[OSEv3:vars]

openshift_logging_es_pvc_dynamic=true
If there are multiple default dynamically provisioned volume types, such as gluster-storage and glusterfs-storage-block, you can specify the provisioned volume type by variable. For example, openshift_logging_es_pvc_storage_class_name=glusterfs-storage-block.
Check Volume Configuration for more information on using DynamicProvisioningEnabled to enable or disable dynamic provisioning.
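As with metrics, a sketch of the dynamic logging option with an explicit storage class follows; the openshift_logging_es_pvc_size variable is shown only for illustration, so confirm the exact variable name for your openshift-ansible version:
[OSEv3:vars]
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_storage_class_name=glusterfs-storage-block
# Illustrative; verify the PVC size variable name for your installer version
openshift_logging_es_pvc_size=10Gi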
Option B: NFS Host Group
The use of NFS for logging storage is not recommended in OpenShift Container Platform.
When the following variables are set, an NFS volume is created during an advanced install with the path <nfs_directory>/<volume_name> on the host within the [nfs] host group. For example, the volume path using these options would be /exports/logging:
[OSEv3:vars]

openshift_logging_storage_kind=nfs
openshift_logging_storage_access_modes=['ReadWriteOnce']
openshift_logging_storage_nfs_directory=/exports
openshift_logging_storage_nfs_options='*(rw,root_squash)'
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=10Gi
Option C: External NFS Host
The use of NFS for logging storage is not recommended in OpenShift Container Platform.
To use an external NFS volume, one must already exist with a path of <nfs_directory>/<volume_name> on the storage host.
[OSEv3:vars]

openshift_logging_storage_kind=nfs
openshift_logging_storage_access_modes=['ReadWriteOnce']
openshift_logging_storage_host=nfs.example.com
openshift_logging_storage_nfs_directory=/exports
openshift_logging_storage_volume_name=logging
openshift_logging_storage_volume_size=10Gi
The remote volume path using the following options would be nfs.example.com:/exports/logging.
Upgrading or Installing OpenShift Container Platform with NFS
During testing, Red Hat has seen issues with NFS (on RHEL) when it is used as a storage back end for the registry. Because of this, we do not recommend NFS (on RHEL) as a storage back end for the registry.
Other NFS implementations in the marketplace might not have the issues that Red Hat testing found. Please contact the individual NFS implementation vendor for more information on any testing they may have performed.
2.6.3.23. Customizing Service Catalog Options
The service catalog is enabled by default during installation. Enabling the service broker allows you to register service brokers with the catalog. When the service catalog is enabled, the OpenShift Ansible broker and template service broker are both installed as well; see Configuring the OpenShift Ansible Broker and Configuring the Template Service Broker for more information. If you disable the service catalog, the OpenShift Ansible broker and template service broker are not installed.
To disable automatic deployment of the service catalog, set the following cluster variable in your inventory file:
openshift_enable_service_catalog=false
If you use your own registry, you must add:
- openshift_service_catalog_image_prefix: When pulling the service catalog image, force the use of a specific prefix (for example, registry). You must provide the full registry name up to the image name.
- openshift_service_catalog_image_version: When pulling the service catalog image, force the use of a specific image version.
For example:
openshift_service_catalog_image="docker-registry.default.example.com/openshift/ose-service-catalog:${version}"
openshift_service_catalog_image_prefix="docker-registry-default.example.com/openshift/ose-"
openshift_service_catalog_image_version="v3.9.30"
template_service_broker_selector={"role":"infra"}
When the service catalog is enabled, the OpenShift Ansible broker and template service broker are both enabled as well; see Configuring the OpenShift Ansible Broker and Configuring the Template Service Broker for more information.
2.6.3.23.1. Configuring the OpenShift Ansible Broker
The OpenShift Ansible broker (OAB) is enabled by default during installation.
If you do not want to install the OAB, set the ansible_service_broker_install parameter value to false in the inventory file:
ansible_service_broker_install=false
2.6.3.23.1.1. Configuring Persistent Storage for the OpenShift Ansible Broker
The OAB deploys its own etcd instance, separate from the etcd used by the rest of the OpenShift Container Platform cluster. The OAB's etcd instance requires separate storage using persistent volumes (PVs) to function. If no PV is available, etcd waits until the PV can be satisfied. The OAB application enters a CrashLoop state until its etcd instance is available.
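To see whether the broker is waiting on storage, you can check its pods; the project name shown here is the broker's default namespace in this release, so adjust it if your deployment uses a different one:
# oc get pods -n openshift-ansible-service-broker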
You can use the installer with the following variables to configure persistent storage for the OAB using NFS.
Variable | Purpose |
---|---|
openshift_hosted_etcd_storage_kind | Storage type to use for the etcd PV. |
openshift_hosted_etcd_storage_volume_name | Name of the etcd PV. |
openshift_hosted_etcd_storage_access_modes | Access modes for the etcd PV. |
openshift_hosted_etcd_storage_volume_size | Size of the etcd PV. |
openshift_hosted_etcd_storage_labels | Labels to use for the etcd PV. |
openshift_hosted_etcd_storage_nfs_options | NFS options to use. |
openshift_hosted_etcd_storage_nfs_directory | Directory for NFS exports. |
Some Ansible playbook bundles (APBs) also require a PV for their own usage in order to deploy. For example, each of the database APBs has two plans: the Development plan uses ephemeral storage and does not require a PV, while the Production plan uses persistent storage and does require a PV.
APB | PV Required? |
---|---|
postgresql-apb | Yes, but only for the Production plan |
mysql-apb | Yes, but only for the Production plan |
mariadb-apb | Yes, but only for the Production plan |
mediawiki-apb | Yes |
To configure persistent storage for the OAB:
In your inventory file, add nfs to the [OSEv3:children] section to enable the [nfs] group:
[OSEv3:children]
masters
nodes
nfs
Add a [nfs] group section and add the host name for the system that will be the NFS host:
[nfs]
master1.example.com
Add the following in the [OSEv3:vars] section:
openshift_hosted_etcd_storage_kind=nfs
openshift_hosted_etcd_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)"
openshift_hosted_etcd_storage_nfs_directory=/opt/osev3-etcd
openshift_hosted_etcd_storage_volume_name=etcd-vol2
openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
openshift_hosted_etcd_storage_volume_size=1G
openshift_hosted_etcd_storage_labels={'storage': 'etcd'}
These settings create a persistent volume that is attached to the OAB’s etcd instance during cluster installation.
2.6.3.23.1.2. Configuring the OpenShift Ansible Broker for Local APB Development
In order to do APB development with the OpenShift Container Registry in conjunction with the OAB, a whitelist of images the OAB can access must be defined. If a whitelist is not defined, the broker will ignore APBs and users will not see any APBs available.
By default, the whitelist is empty so that a user cannot add APB images to the broker without a cluster administrator configuring the broker. To whitelist all images that end in -apb, add the following to the [OSEv3:vars] section of your inventory file:
ansible_service_broker_local_registry_whitelist=['.*-apb$']
2.6.3.23.2. Configuring the Template Service Broker
The template service broker (TSB) is enabled by default during installation.
If you do not want to install the TSB, set the template_service_broker_install parameter value to false:
template_service_broker_install=false
To configure the TSB, one or more projects must be defined as the broker's source namespace(s) for loading templates and image streams into the service catalog. Set the desired projects by modifying the following in your inventory file's [OSEv3:vars] section:
openshift_template_service_broker_namespaces=['openshift','myproject']
By default, the TSB uses the node selector {"region": "infra"} for deploying its pods. You can modify this by setting the desired node selector in your inventory file's [OSEv3:vars] section:
template_service_broker_selector={"region": "infra"}
2.6.3.24. Configuring Web Console Customization
The following Ansible variables set master configuration options for customizing the web console. See Customizing the Web Console for more details on these customization options.
Variable | Purpose |
---|---|
|
Determines whether to install the web console. Can be set to |
|
The prefix for the component images. For example, with |
|
The version for the component images. For example, with |
|
Sets |
|
Sets |
|
Sets |
|
Sets the OAuth template in the master configuration. See Customizing the Login Page for details. Example value: |
|
Sets |
|
Sets |
| Configures the web console to log the user out automatically after a period of inactivity. Must be a whole number greater than or equal to 5, or 0 to disable the feature. Defaults to 0 (disabled). |
|
Boolean value indicating if the cluster is configured for overcommit. When |
2.6.4. Example Inventory Files
2.6.4.1. Single Master Examples
You can configure an environment with a single master and multiple nodes, and either a single etcd host or multiple external etcd hosts.
Moving from a single master cluster to multiple masters after installation is not supported.
Single Master, Single etcd, and Multiple Nodes
The following table describes an example environment for a single master (with a single etcd instance on the same host), two nodes for hosting user applications, and two nodes with the region=infra label for hosting dedicated infrastructure:
Host Name | Infrastructure Component to Install |
---|---|
master.example.com | Master, etcd, and node |
node1.example.com | Node |
node2.example.com | Node |
infra-node1.example.com | Node (with region=infra label) |
infra-node2.example.com | Node (with region=infra label) |
You can see these example hosts present in the [masters], [etcd], and [nodes] sections of the following example inventory file:
Single Master, Single etcd, and Multiple Nodes Inventory File
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true

openshift_deployment_type=openshift-enterprise

oreg_url=example.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
master.example.com

# host group for etcd
[etcd]
master.example.com

# host group for nodes, includes region info
[nodes]
master.example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
See Configuring Node Host Labels to ensure you understand the default node selector requirements and node label considerations beginning in OpenShift Container Platform 3.9.
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
Single Master, Multiple etcd, and Multiple Nodes
The following table describes an example environment for a single master, three etcd hosts, two nodes for hosting user applications, and two nodes with the region=infra label for hosting dedicated infrastructure:
Host Name | Infrastructure Component to Install |
---|---|
master.example.com | Master and node |
etcd1.example.com | etcd |
etcd2.example.com | etcd |
etcd3.example.com | etcd |
node1.example.com | Node |
node2.example.com | Node |
infra-node1.example.com | Node (with region=infra label) |
infra-node2.example.com | Node (with region=infra label) |
You can see these example hosts present in the [masters], [nodes], and [etcd] sections of the following example inventory file:
Single Master, Multiple etcd, and Multiple Nodes Inventory File
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

oreg_url=example.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
master.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# host group for nodes, includes region info
[nodes]
master.example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
See Configuring Node Host Labels to ensure you understand the default node selector requirements and node label considerations beginning in OpenShift Container Platform 3.9.
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
2.6.4.2. Multiple Masters Examples
You can configure an environment with multiple masters, multiple etcd hosts, and multiple nodes. Configuring multiple masters for high availability (HA) ensures that the cluster has no single point of failure.
Moving from a single master cluster to multiple masters after installation is not supported.
When configuring multiple masters, the advanced installation supports the native high availability (HA) method. This method leverages the native HA master capabilities built into OpenShift Container Platform and can be combined with any load balancing solution.
If a host is defined in the [lb] section of the inventory file, Ansible installs and configures HAProxy automatically as the load balancing solution. If no host is defined, it is assumed you have pre-configured an external load balancing solution of your choice to balance the master API (port 8443) on all master hosts.
This HAProxy load balancer is intended to demonstrate the API server's HA mode and is not recommended for production environments. If you are deploying to a cloud provider, Red Hat recommends deploying a cloud-native TCP-based load balancer or taking other steps to provide a highly available load balancer.
For an external load balancing solution, you must have:
- A pre-created load balancer virtual IP (VIP) configured for SSL passthrough.
- A VIP listening on the port specified by the openshift_master_api_port value (8443 by default) and proxying back to all master hosts on that port.
- A domain name for the VIP registered in DNS. The domain name becomes the value of both openshift_master_cluster_public_hostname and openshift_master_cluster_hostname in the OpenShift Container Platform installer (see the example inventory snippet after this list).
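For illustration, an inventory fragment for a pre-configured external load balancer (the host names shown are placeholders) might set:
[OSEv3:vars]
openshift_master_cluster_method=native
openshift_master_api_port=8443
openshift_master_cluster_hostname=openshift-cluster.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com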
See the External Load Balancer Integrations example in Github for more information. For more on the high availability master architecture, see Kubernetes Infrastructure.
The advanced installation method does not currently support multiple HAProxy load balancers in an active-passive setup. See the Load Balancer Administration documentation for post-installation amendments.
To configure multiple masters, refer to Multiple Masters with Multiple etcd.
Multiple Masters Using Native HA with External Clustered etcd
The following table describes an example environment for three masters using the native HA method, one HAProxy load balancer, three etcd hosts, two nodes for hosting user applications, and two nodes with the region=infra label for hosting dedicated infrastructure:
Host Name | Infrastructure Component to Install |
---|---|
master1.example.com | Master (clustered using native HA) and node |
master2.example.com | Master (clustered using native HA) and node |
master3.example.com | Master (clustered using native HA) and node |
lb.example.com | HAProxy to load balance API master endpoints |
etcd1.example.com | etcd |
etcd2.example.com | etcd |
etcd3.example.com | etcd |
node1.example.com | Node |
node2.example.com | Node |
infra-node1.example.com | Node (with region=infra label) |
infra-node2.example.com | Node (with region=infra label) |
You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the following example inventory file:
Multiple Masters Using HAProxy Inventory File
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

oreg_url=example.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# apply updated node defaults
openshift_node_kubelet_args={'pods-per-core': ['10'], 'max-pods': ['250'], 'image-gc-high-threshold': ['90'], 'image-gc-low-threshold': ['80']}

# enable ntp on masters to ensure proper failover
openshift_clock_enabled=true

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
etcd1.example.com
etcd2.example.com
etcd3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
See Configuring Node Host Labels to ensure you understand the default node selector requirements and node label considerations beginning in OpenShift Container Platform 3.9.
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
Multiple Masters Using Native HA with Co-located Clustered etcd
The following table describes an example environment for three masters using the native HA method (with etcd on each host), one HAProxy load balancer, two nodes for hosting user applications, and two nodes with the region=infra label for hosting dedicated infrastructure:
Host Name | Infrastructure Component to Install |
---|---|
master1.example.com | Master (clustered using native HA) and node, with etcd on each host |
master2.example.com | Master (clustered using native HA) and node, with etcd on each host |
master3.example.com | Master (clustered using native HA) and node, with etcd on each host |
lb.example.com | HAProxy to load balance API master endpoints |
node1.example.com | Node |
node2.example.com | Node |
infra-node1.example.com | Node (with region=infra label) |
infra-node2.example.com | Node (with region=infra label) |
You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the following example inventory file:
# Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
# The lb group lets Ansible configure HAProxy as the load balancing solution.
# Comment lb out if your load balancer is pre-configured.
[OSEv3:children]
masters
nodes
etcd
lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

oreg_url=example.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true

# Uncomment the following to enable htpasswd authentication; defaults to
# DenyAllPasswordIdentityProvider.
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# Native high availability cluster method with optional load balancer.
# If no lb group is defined installer assumes that a load balancer has
# been preconfigured. For installation the value of
# openshift_master_cluster_hostname must resolve to the load balancer
# or to one or all of the masters defined in the inventory if no load
# balancer is present.
openshift_master_cluster_method=native
openshift_master_cluster_hostname=openshift-internal.example.com
openshift_master_cluster_public_hostname=openshift-cluster.example.com

# host group for masters
[masters]
master1.example.com
master2.example.com
master3.example.com

# host group for etcd
[etcd]
master1.example.com
master2.example.com
master3.example.com

# Specify load balancer host
[lb]
lb.example.com

# host group for nodes, includes region info
[nodes]
master[1:3].example.com
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
node2.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
infra-node1.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
infra-node2.example.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
See Configuring Node Host Labels to ensure you understand the default node selector requirements and node label considerations beginning in OpenShift Container Platform 3.9.
To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
2.6.5. Running the Advanced Installation
After you have configured Ansible by defining an inventory file in /etc/ansible/hosts, you run the advanced installation playbook via Ansible.
The installer uses modularized playbooks, so administrators can install specific components as needed. Breaking up the roles and playbooks allows better targeting of ad hoc administration tasks, which increases the level of control during installations and saves time.
The playbooks and their ordering are detailed below in Running Individual Component Playbooks.
Due to a known issue, after running the installation, if NFS volumes are provisioned for any component, the following directories might be created whether their components are being deployed to NFS volumes or not:
- /exports/logging-es
- /exports/logging-es-ops/
- /exports/metrics/
- /exports/prometheus
- /exports/prometheus-alertbuffer/
- /exports/prometheus-alertmanager/
You can delete these directories after installation, as needed.
2.6.5.1. Running the RPM-based Installer
The RPM-based installer uses Ansible installed via RPM packages to run playbooks and configuration files available on the local host.
Do not run OpenShift Ansible playbooks under nohup. Using nohup with the playbooks causes file descriptors to be created but not closed. Therefore, the system can run out of files to open and the playbook fails.
To run the RPM-based installer:
Run the prerequisites.yml playbook. This playbook installs required software packages, if any, and modifies the container runtimes. Unless you need to configure the container runtimes, run this playbook only once, before you deploy a cluster the first time:
# ansible-playbook [-i /path/to/inventory] \ 1
    /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
1. If your inventory file is not in the /etc/ansible/hosts directory, specify -i and the path to the inventory file.
Run the deploy_cluster.yml playbook to initiate the cluster installation:
# ansible-playbook [-i /path/to/inventory] \
    /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
If for any reason the installation fails, before re-running the installer, see Known Issues to check for any specific instructions or workarounds.
The installer caches playbook configuration values for 10 minutes, by default. If you change any system, network, or inventory configuration and then re-run the installer within that 10-minute period, the new values are not used; the previous values are used instead. You can delete the contents of the cache, which is defined by the fact_caching_connection value in the /etc/ansible/ansible.cfg file. An example of this file is shown in Recommended Installation Practices.
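As a sketch of the relevant ansible.cfg settings (the values shown are illustrative; see Recommended Installation Practices for the full example):
[defaults]
# Cache gathered facts as JSON files; deleting this directory clears the cache
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible/facts
# Cache lifetime in seconds (10 minutes)
fact_caching_timeout = 600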
2.6.5.2. Running the Containerized Installer
The openshift3/ose-ansible image is a containerized version of the OpenShift Container Platform installer. This installer image provides the same functionality as the RPM-based installer, but it runs in a containerized environment that provides all of its dependencies rather than being installed directly on the host. The only requirement to use it is the ability to run a container.
2.6.5.2.1. Running the Installer as a System Container
The installer image can be used as a system container. System containers are stored and run outside of the traditional docker service. This enables running the installer image from one of the target hosts without concern for the install restarting docker on the host.
To use the Atomic CLI to run the installer as a run-once system container, perform the following steps as the root user:
Run the prerequisites.yml playbook:
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \ 1
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml \
    --set OPTS="-v" \
    registry.access.redhat.com/openshift3/ose-ansible:v3.9
1. Specify the location on the local host for your inventory file.
This command runs a set of prerequisite tasks by using the inventory file specified and the root user's SSH configuration.
Run the deploy_cluster.yml playbook:
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \ 1
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml \
    --set OPTS="-v" \
    registry.access.redhat.com/openshift3/ose-ansible:v3.9
1. Specify the location on the local host for your inventory file.
This command initiates the cluster installation by using the inventory file specified and the root user's SSH configuration. It logs the output on the terminal and also saves it in the /var/log/ansible.log file. The first time this command is run, the image is imported into OSTree storage (system containers use this rather than docker daemon storage). On subsequent runs, it reuses the stored image.
If for any reason the installation fails, before re-running the installer, see Known Issues to check for any specific instructions or workarounds.
2.6.5.2.2. Running Other Playbooks
You can use the PLAYBOOK_FILE environment variable to specify other playbooks you want to run by using the containerized installer. The default value of PLAYBOOK_FILE is /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml, which is the main cluster installation playbook, but you can set it to the path of another playbook inside the container.
For example, to run the pre-install checks playbook before installation, use the following command:
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/openshift-checks/pre-install.yml \
    --set OPTS="-v" \
    registry.access.redhat.com/openshift3/ose-ansible:v3.9
2.6.5.2.3. Running the Installer as a Docker Container
The installer image can also run as a docker container anywhere that docker can run.
This method must not be used to run the installer on one of the hosts being configured, as the install may restart docker on the host, disrupting the installer container execution.
Although this method and the system container method above use the same image, they run with different entry points and contexts, so runtime parameters are not the same.
At a minimum, when running the installer as a docker container you must provide:
- SSH key(s), so that Ansible can reach your hosts.
- An Ansible inventory file.
- The location of the Ansible playbook to run against that inventory.
Here is an example of how to run an install via docker, which must be run by a non-root user with access to docker:
First, run the prerequisites.yml playbook:
$ docker run -t -u `id -u` \ 1
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \ 2
    -v $HOME/ansible/hosts:/tmp/inventory:Z \ 3
    -e INVENTORY_FILE=/tmp/inventory \ 4
    -e PLAYBOOK_FILE=playbooks/prerequisites.yml \ 5
    -e OPTS="-v" \ 6
    registry.access.redhat.com/openshift3/ose-ansible:v3.9
1. -u `id -u` makes the container run with the same UID as the current user, which allows that user to use the SSH key inside the container (SSH private keys are expected to be readable only by their owner).
2. -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z mounts your SSH key ($HOME/.ssh/id_rsa) under the container user's $HOME/.ssh (/opt/app-root/src is the $HOME of the user in the container). If you mount the SSH key into a non-standard location, you can add an environment variable with -e ANSIBLE_PRIVATE_KEY_FILE=/the/mount/point or set ansible_ssh_private_key_file=/the/mount/point as a variable in the inventory to point Ansible at it. Note that the SSH key is mounted with the :Z flag. This is required so that the container can read the SSH key under its restricted SELinux context. This also means that your original SSH key file will be re-labeled to something like system_u:object_r:container_file_t:s0:c113,c247. For more details about :Z, check the docker-run(1) man page. Keep this in mind when providing these volume mount specifications because this might have unexpected consequences: for example, if you mount (and therefore re-label) your whole $HOME/.ssh directory, it will block the host's sshd from accessing your public keys to log in. For this reason, you may want to use a separate copy of the SSH key (or directory) so that the original file labels remain untouched.
3 4. -v $HOME/ansible/hosts:/tmp/inventory:Z and -e INVENTORY_FILE=/tmp/inventory mount a static Ansible inventory file into the container as /tmp/inventory and set the corresponding environment variable to point at it. As with the SSH key, the inventory file SELinux labels may need to be relabeled by using the :Z flag to allow reading in the container, depending on the existing label (for files in a user $HOME directory this is likely to be needed). So again, you may prefer to copy the inventory to a dedicated location before mounting it. The inventory file can also be downloaded from a web server if you specify the INVENTORY_URL environment variable, or generated dynamically using DYNAMIC_SCRIPT_URL to specify an executable script that provides a dynamic inventory.
5. -e PLAYBOOK_FILE=playbooks/prerequisites.yml specifies the playbook to run (in this example, the prerequisites playbook) as a relative path from the top level directory of openshift-ansible content. The full path from the RPM can also be used, as well as the path to any other playbook file in the container.
6. -e OPTS="-v" supplies arbitrary command line options (in this case, -v to increase verbosity) to the ansible-playbook command that runs inside the container.
Next, run the deploy_cluster.yml playbook to initiate the cluster installation:
$ docker run -t -u `id -u` \
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
    -v $HOME/ansible/hosts:/tmp/inventory:Z \
    -e INVENTORY_FILE=/tmp/inventory \
    -e PLAYBOOK_FILE=playbooks/deploy_cluster.yml \
    -e OPTS="-v" \
    registry.access.redhat.com/openshift3/ose-ansible:v3.9
2.6.5.2.4. Running the Installation Playbook for OpenStack
The OpenStack installation playbook is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.
To install OpenShift Container Platform on an existing OpenStack installation, use the OpenStack playbook. For more information about the playbook, including detailed prerequisites, see the OpenStack Provisioning readme file.
To run the playbook, run the following command:
$ ansible-playbook --user openshift \
    -i openshift-ansible/playbooks/openstack/inventory.py \
    -i inventory \
    openshift-ansible/playbooks/openstack/openshift-cluster/provision_install.yml
2.6.5.3. Running Individual Component Playbooks
The main installation playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml runs a set of individual component playbooks in a specific order, and the installer reports back at the end what phases you have gone through. If the installation fails, you are notified which phase failed along with the errors from the Ansible run.
After you resolve the errors, you can continue installation:
- You can run the remaining individual installation playbooks.
- If you are installing in a new environment, you can run the deploy_cluster.yml playbook again.
If you want to run only the remaining playbooks, start by running the playbook for the phase that failed and then run each of the remaining playbooks in order:
# ansible-playbook [-i /path/to/inventory] <playbook_file_location>
The following table lists the playbooks in the order that they must run:
Playbook Name | File Location |
---|---|
Health Check | /usr/share/ansible/openshift-ansible/playbooks/openshift-checks/pre-install.yml |
etcd Install | /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/config.yml |
NFS Install | /usr/share/ansible/openshift-ansible/playbooks/openshift-nfs/config.yml |
Load Balancer Install | /usr/share/ansible/openshift-ansible/playbooks/openshift-loadbalancer/config.yml |
Master Install | /usr/share/ansible/openshift-ansible/playbooks/openshift-master/config.yml |
Master Additional Install | /usr/share/ansible/openshift-ansible/playbooks/openshift-master/additional_config.yml |
Node Install | /usr/share/ansible/openshift-ansible/playbooks/openshift-node/config.yml |
GlusterFS Install | /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml |
Hosted Install | /usr/share/ansible/openshift-ansible/playbooks/openshift-hosted/config.yml |
Web Console Install | /usr/share/ansible/openshift-ansible/playbooks/openshift-web-console/config.yml |
Metrics Install | /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml |
Logging Install | /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml |
Prometheus Install | /usr/share/ansible/openshift-ansible/playbooks/openshift-prometheus/config.yml |
Service Catalog Install | /usr/share/ansible/openshift-ansible/playbooks/openshift-service-catalog/config.yml |
Management Install | /usr/share/ansible/openshift-ansible/playbooks/openshift-management/config.yml |
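For example, if the installation failed during the metrics phase, you could resume by running the Metrics Install playbook from the table and then each of the remaining playbooks in order:
# ansible-playbook [-i /path/to/inventory] \
    /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml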
2.6.6. Verifying the Installation
After the installation completes:
Verify that the master is started and nodes are registered and reporting in Ready status. On the master host, run the following as root:
# oc get nodes

NAME                      STATUS    ROLES     AGE       VERSION
master.example.com        Ready     master    7h        v1.9.1+a0ce1bc657
node1.example.com         Ready     compute   7h        v1.9.1+a0ce1bc657
node2.example.com         Ready     compute   7h        v1.9.1+a0ce1bc657
To verify that the web console is installed correctly, use the master host name and the web console port number to access the web console with a web browser.
For example, for a master host with a host name of master.openshift.com and using the default port of 8443, the web console would be found at https://master.openshift.com:8443/console.
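As a quick check from the command line (an illustrative command, using the example host name above), you can confirm that the console endpoint responds over HTTPS; expect a success or redirect status:
$ curl -k -I https://master.openshift.com:8443/console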
Verifying Multiple etcd Hosts
If you installed multiple etcd hosts:
First, verify that the etcd package, which provides the etcdctl command, is installed:
# yum install etcd
On a master host, verify the etcd cluster health, substituting for the FQDNs of your etcd hosts in the following:
# etcdctl -C \
    https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
    --ca-file=/etc/origin/master/master.etcd-ca.crt \
    --cert-file=/etc/origin/master/master.etcd-client.crt \
    --key-file=/etc/origin/master/master.etcd-client.key cluster-health
Also verify the member list is correct:
# etcdctl -C \
    https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
    --ca-file=/etc/origin/master/master.etcd-ca.crt \
    --cert-file=/etc/origin/master/master.etcd-client.crt \
    --key-file=/etc/origin/master/master.etcd-client.key member list
Verifying Multiple Masters Using HAProxy
If you installed multiple masters using HAProxy as a load balancer, browse to the following URL according to your [lb] section definition and check HAProxy’s status:
http://<lb_hostname>:9000
You can verify your installation by consulting the HAProxy Configuration documentation.
2.6.7. Optionally Securing Builds
Running docker build is a privileged process, so the container has more access to the node than might be considered acceptable in some multi-tenant environments. If you do not trust your users, you can use a more secure option at the time of installation: disable Docker builds on the cluster and require that users build images outside of the cluster. See Securing Builds by Strategy for more information on this optional process.
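As a sketch of one approach from Securing Builds by Strategy (verify the exact role and group names against that document before using them), a cluster administrator can remove the Docker build strategy from all authenticated users after installation:
# oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated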
2.6.8. Uninstalling OpenShift Container Platform
You can uninstall OpenShift Container Platform hosts in your cluster by running the uninstall.yml playbook. This playbook deletes OpenShift Container Platform content installed by Ansible, including:
- Configuration
- Containers
- Default templates and image streams
- Images
- RPM packages
The playbook will delete content for any hosts defined in the inventory file that you specify when running the playbook.
Before you uninstall your cluster, review the following list of scenarios and make sure that uninstalling is the best option:
- If your installation process failed and you want to continue the process, you can retry the installation. The installation playbooks are designed so that if they fail to install your cluster, you can run them again without needing to uninstall the cluster.
- If you want to restart a failed installation from the beginning, you can uninstall the OpenShift Container Platform hosts in your cluster by running the uninstall.yml playbook, as described in the following section. This playbook only uninstalls the OpenShift Container Platform assets for the most recent version that you installed.
- If you must change the host names or certificate names, you must recreate your certificates before retrying installation by running the uninstall.yml playbook. Running the installation playbooks again will not recreate the certificates.
- If you want to repurpose hosts that you installed OpenShift Container Platform on earlier, such as with a proof-of-concept installation, or want to install a different minor or asynchronous version of OpenShift Container Platform you must reimage the hosts before you use them in a production cluster. After you run the uninstall.yml playbooks, some host assets might remain in an altered state.
If you want to uninstall OpenShift Container Platform across all hosts in your cluster, run the playbook using the inventory file you used when installing OpenShift Container Platform initially or ran most recently:
# ansible-playbook [-i /path/to/file] \
    /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
2.6.8.1. Uninstalling Nodes
You can also uninstall node components from specific hosts using the uninstall.yml playbook while leaving the remaining hosts and cluster alone:
This method should only be used when attempting to uninstall specific node hosts and not for specific masters or etcd hosts, which would require further configuration changes within the cluster.
- First follow the steps in Deleting Nodes to remove the node object from the cluster, then continue with the remaining steps in this procedure.
Create a different inventory file that only references those hosts. For example, to only delete content from one node:
[OSEv3:children]
nodes

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

[nodes]
node3.example.com openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
Specify that new inventory file using the -i option when running the uninstall.yml playbook:
# ansible-playbook -i /path/to/new/file \
    /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
When the playbook completes, all OpenShift Container Platform content should be removed from any specified hosts.
2.6.9. Known Issues
- On failover in multiple master clusters, it is possible for the controller manager to overcorrect, which causes the system to run more pods than what was intended. However, this is a transient event and the system does correct itself over time. See https://github.com/kubernetes/kubernetes/issues/10030 for details.
If the Ansible installer fails, you can still install OpenShift Container Platform:
- If you did not modify the SDN configuration or generate new certificates, run the deploy_cluster.yml playbook again.
- If you modified the SDN configuration, generated new certificates, or the installer fails again, you must either start over with a clean operating system installation or uninstall and install again.
- If you use virtual machines, start from a fresh image or uninstall and install again.
- If you use bare metal machines, uninstall and install again.
- There is a known issue in the initial GA release of OpenShift Container Platform 3.9 that causes the installation and upgrade playbooks to consume more memory than previous releases. The node scale-up and installation Ansible playbooks might have consumed more memory on the control host (the system where you run the playbooks from) than expected due to the use of include_tasks in several places. This issue has been addressed with the release of RHBA-2018:0600; the majority of these instances have now been converted to import_tasks calls, which do not consume as much memory. After this change, memory consumption on the control host should be below 100MiB per host; for large environments (100+ hosts), a control host with at least 16GiB of memory is recommended. (BZ#1558672)
2.6.10. What’s Next?
Now that you have a working OpenShift Container Platform instance, you can:
- Deploy an integrated Docker registry.
- Deploy a router.
2.7. Disconnected Installation
2.7.1. Overview
Frequently, portions of a datacenter may not have access to the Internet, even via proxy servers. Installing OpenShift Container Platform in these environments is considered a disconnected installation.
An OpenShift Container Platform disconnected installation differs from a regular installation in two primary ways:
- The OpenShift Container Platform software channels and repositories are not available via Red Hat’s content distribution network.
- OpenShift Container Platform uses several containerized components. Normally, these images are pulled directly from Red Hat’s Docker registry. In a disconnected environment, this is not possible.
A disconnected installation ensures the OpenShift Container Platform software is made available to the relevant servers, then follows the same installation process as a standard connected installation. This topic additionally details how to manually download the container images and transport them onto the relevant servers.
Once installed, in order to use OpenShift Container Platform, you will need source code in a source control repository (for example, Git). This topic assumes that an internal Git repository is available that can host source code and this repository is accessible from the OpenShift Container Platform nodes. Installing the source control repository is outside the scope of this document.
Also, when building applications in OpenShift Container Platform, your build may have some external dependencies, such as a Maven Repository or Gem files for Ruby applications. For this reason, and because they might require certain tags, many of the Quickstart templates offered by OpenShift Container Platform may not work on a disconnected environment. However, while Red Hat container images try to reach out to external repositories by default, you can configure OpenShift Container Platform to use your own internal repositories. For the purposes of this document, we assume that such internal repositories already exist and are accessible from the OpenShift Container Platform nodes hosts. Installing such repositories is outside the scope of this document.
You can also have a Red Hat Satellite server that provides access to Red Hat content via an intranet or LAN. For environments with Satellite, you can synchronize the OpenShift Container Platform software onto the Satellite for use with the OpenShift Container Platform servers.
Red Hat Satellite 6.1 also introduces the ability to act as a Docker registry, and it can be used to host the OpenShift Container Platform containerized components. Doing so is outside of the scope of this document.
2.7.2. Prerequisites
This document assumes that you understand OpenShift Container Platform’s overall architecture and that you have already planned out what the topology of your environment will look like.
2.7.3. Required Software and Components
In order to pull down the required software repositories and container images, you will need a Red Hat Enterprise Linux (RHEL) 7 server with access to the Internet and at least 100GB of additional free space. All steps in this section should be performed on the Internet-connected server as the root system user.
2.7.3.1. Syncing Repositories
Before you sync with the required repositories, you may need to import the appropriate GPG key:
$ rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
If the key is not imported, the indicated package is deleted after syncing the repository.
To sync the required repositories:
Register the server with the Red Hat Customer Portal. You must use the login and password associated with the account that has access to the OpenShift Container Platform subscriptions:
$ subscription-manager register
Pull the latest subscription data from RHSM:
$ subscription-manager refresh
Attach to a subscription that provides OpenShift Container Platform channels. You can find the list of available subscriptions using:
$ subscription-manager list --available --matches '*OpenShift*'
Then, find the pool ID for the subscription that provides OpenShift Container Platform, and attach it:
$ subscription-manager attach --pool=<pool_id>

$ subscription-manager repos --disable="*"

$ subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-fast-datapath-rpms" \
    --enable="rhel-7-server-ansible-2.4-rpms" \
    --enable="rhel-7-server-ose-3.9-rpms"
The yum-utils package provides the reposync utility, which lets you mirror yum repositories, and createrepo can create a usable yum repository from a directory:
$ sudo yum -y install yum-utils createrepo docker git
You will need up to 110GB of free space in order to sync the software. Depending on how restrictive your organization's policies are, you could re-connect this server to the disconnected LAN and use it as the repository server. Alternatively, you could use USB-connected storage and transport the software to another server that will act as the repository server. This topic covers these options.
Make a path to where you want to sync the software (either locally or on your USB or other device):
$ mkdir -p </path/to/repos>
Sync the packages and create the repository for each of them. You will need to modify the command for the appropriate path you created above:
$ for repo in \
    rhel-7-server-rpms \
    rhel-7-server-extras-rpms \
    rhel-7-fast-datapath-rpms \
    rhel-7-server-ansible-2.4-rpms \
    rhel-7-server-ose-3.9-rpms
do
    reposync --gpgcheck -lm --repoid=${repo} --download_path=/path/to/repos
    createrepo -v </path/to/repos/>${repo} -o </path/to/repos/>${repo}
done
2.7.3.2. Syncing Images
To sync the container images:
Start the Docker daemon:
$ systemctl start docker
If you are performing a containerized install, pull all of the required OpenShift Container Platform host component images. Replace <tag> with v3.9.102 for the latest version.
# docker pull registry.access.redhat.com/rhel7/etcd
# docker pull registry.access.redhat.com/openshift3/ose:<tag>
# docker pull registry.access.redhat.com/openshift3/node:<tag>
# docker pull registry.access.redhat.com/openshift3/openvswitch:<tag>
Pull all of the required OpenShift Container Platform infrastructure component images. Replace <tag> with v3.9.102 for the latest version.
$ docker pull registry.access.redhat.com/openshift3/ose-ansible:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-cluster-capacity:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-deployer:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-docker-builder:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-docker-registry:<tag>
$ docker pull registry.access.redhat.com/openshift3/registry-console:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-egress-http-proxy:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-egress-router:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-f5-router:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-haproxy-router:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-keepalived-ipfailover:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-pod:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-sti-builder:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-template-service-broker:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-web-console:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose:<tag>
$ docker pull registry.access.redhat.com/openshift3/container-engine:<tag>
$ docker pull registry.access.redhat.com/openshift3/node:<tag>
$ docker pull registry.access.redhat.com/openshift3/openvswitch:<tag>
$ docker pull registry.access.redhat.com/rhel7/etcd
Note: If you use NFS, you need the ose-recycler image. Otherwise, the volumes will not recycle, potentially causing errors.
The recycle reclaim policy is deprecated in favor of dynamic provisioning, and it will be removed in future releases.
Pull all of the required OpenShift Container Platform component images for the additional centralized log aggregation and metrics aggregation components. Replace <tag> with v3.9.102 for the latest version.
$ docker pull registry.access.redhat.com/openshift3/logging-auth-proxy:<tag>
$ docker pull registry.access.redhat.com/openshift3/logging-curator:<tag>
$ docker pull registry.access.redhat.com/openshift3/logging-elasticsearch:<tag>
$ docker pull registry.access.redhat.com/openshift3/logging-fluentd:<tag>
$ docker pull registry.access.redhat.com/openshift3/logging-kibana:<tag>
$ docker pull registry.access.redhat.com/openshift3/oauth-proxy:<tag>
$ docker pull registry.access.redhat.com/openshift3/metrics-cassandra:<tag>
$ docker pull registry.access.redhat.com/openshift3/metrics-hawkular-metrics:<tag>
$ docker pull registry.access.redhat.com/openshift3/metrics-hawkular-openshift-agent:<tag>
$ docker pull registry.access.redhat.com/openshift3/metrics-heapster:<tag>
$ docker pull registry.access.redhat.com/openshift3/prometheus:<tag>
$ docker pull registry.access.redhat.com/openshift3/prometheus-alert-buffer:<tag>
$ docker pull registry.access.redhat.com/openshift3/prometheus-alertmanager:<tag>
$ docker pull registry.access.redhat.com/openshift3/prometheus-node-exporter:<tag>
$ docker pull registry.access.redhat.com/cloudforms46/cfme-openshift-postgresql
$ docker pull registry.access.redhat.com/cloudforms46/cfme-openshift-memcached
$ docker pull registry.access.redhat.com/cloudforms46/cfme-openshift-app-ui
$ docker pull registry.access.redhat.com/cloudforms46/cfme-openshift-app
$ docker pull registry.access.redhat.com/cloudforms46/cfme-openshift-embedded-ansible
$ docker pull registry.access.redhat.com/cloudforms46/cfme-openshift-httpd
$ docker pull registry.access.redhat.com/cloudforms46/cfme-httpd-configmap-generator
$ docker pull registry.access.redhat.com/rhgs3/rhgs-server-rhel7
$ docker pull registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7
$ docker pull registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7
$ docker pull registry.access.redhat.com/rhgs3/rhgs-s3-server-rhel7
Important: For Red Hat support, a Container-Native Storage (CNS) subscription is required for rhgs3/ images.
Important: Prometheus on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.
For the service catalog, OpenShift Ansible broker, and template service broker features (as described in Advanced Installation), pull the following images. Replace <tag> with v3.9.102 for the latest version.
$ docker pull registry.access.redhat.com/openshift3/ose-service-catalog:<tag>
$ docker pull registry.access.redhat.com/openshift3/ose-ansible-service-broker:<tag>
$ docker pull registry.access.redhat.com/openshift3/mediawiki-apb:<tag>
$ docker pull registry.access.redhat.com/openshift3/postgresql-apb:<tag>
Pull the Red Hat-certified Source-to-Image (S2I) builder images that you intend to use in your OpenShift environment. You can pull the following images:
$ docker pull registry.access.redhat.com/jboss-amq-6/amq63-openshift
$ docker pull registry.access.redhat.com/jboss-datagrid-7/datagrid71-openshift
$ docker pull registry.access.redhat.com/jboss-datagrid-7/datagrid71-client-openshift
$ docker pull registry.access.redhat.com/jboss-datavirt-6/datavirt63-openshift
$ docker pull registry.access.redhat.com/jboss-datavirt-6/datavirt63-driver-openshift
$ docker pull registry.access.redhat.com/jboss-decisionserver-6/decisionserver64-openshift
$ docker pull registry.access.redhat.com/jboss-processserver-6/processserver64-openshift
$ docker pull registry.access.redhat.com/jboss-eap-6/eap64-openshift
$ docker pull registry.access.redhat.com/jboss-eap-7/eap70-openshift
$ docker pull registry.access.redhat.com/jboss-webserver-3/webserver31-tomcat7-openshift
$ docker pull registry.access.redhat.com/jboss-webserver-3/webserver31-tomcat8-openshift
$ docker pull registry.access.redhat.com/openshift3/jenkins-1-rhel7
$ docker pull registry.access.redhat.com/openshift3/jenkins-2-rhel7
$ docker pull registry.access.redhat.com/openshift3/jenkins-slave-base-rhel7
$ docker pull registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7
$ docker pull registry.access.redhat.com/openshift3/jenkins-slave-nodejs-rhel7
$ docker pull registry.access.redhat.com/rhscl/mongodb-32-rhel7
$ docker pull registry.access.redhat.com/rhscl/mysql-57-rhel7
$ docker pull registry.access.redhat.com/rhscl/perl-524-rhel7
$ docker pull registry.access.redhat.com/rhscl/php-56-rhel7
$ docker pull registry.access.redhat.com/rhscl/postgresql-95-rhel7
$ docker pull registry.access.redhat.com/rhscl/python-35-rhel7
$ docker pull registry.access.redhat.com/redhat-sso-7/sso70-openshift
$ docker pull registry.access.redhat.com/rhscl/ruby-24-rhel7
$ docker pull registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift
$ docker pull registry.access.redhat.com/redhat-sso-7/sso71-openshift
$ docker pull registry.access.redhat.com/rhscl/nodejs-6-rhel7
$ docker pull registry.access.redhat.com/rhscl/mariadb-101-rhel7
Make sure to specify the correct tag for the desired version number. For example, to pull both the previous and latest version of the Tomcat image:
$ docker pull \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:latest
$ docker pull \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:1.1
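Before exporting, you can confirm which images and tags were pulled locally; for example, to list everything pulled from the openshift3 namespace (this assumes a Docker version that supports the reference filter):
# List locally stored images from the openshift3 namespace, including their tags
$ docker images --filter=reference='registry.access.redhat.com/openshift3/*'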
2.7.3.3. Preparing Images for Export
Container images can be exported from a system by first saving them to a tarball and then transporting them:
Make and change into a repository home directory:
$ mkdir </path/to/repos/images>
$ cd </path/to/repos/images>
If you are performing a containerized install, export the OpenShift Container Platform host component images:
# docker save -o ose3-host-images.tar \
    registry.access.redhat.com/rhel7/etcd \
    registry.access.redhat.com/openshift3/ose \
    registry.access.redhat.com/openshift3/node \
    registry.access.redhat.com/openshift3/openvswitch
Export the OpenShift Container Platform infrastructure component images:
$ docker save -o ose3-images.tar \
    registry.access.redhat.com/openshift3/ose-ansible \
    registry.access.redhat.com/openshift3/ose-ansible-service-broker \
    registry.access.redhat.com/openshift3/ose-cluster-capacity \
    registry.access.redhat.com/openshift3/ose-deployer \
    registry.access.redhat.com/openshift3/ose-docker-builder \
    registry.access.redhat.com/openshift3/ose-docker-registry \
    registry.access.redhat.com/openshift3/registry-console \
    registry.access.redhat.com/openshift3/ose-egress-http-proxy \
    registry.access.redhat.com/openshift3/ose-egress-router \
    registry.access.redhat.com/openshift3/ose-f5-router \
    registry.access.redhat.com/openshift3/ose-haproxy-router \
    registry.access.redhat.com/openshift3/ose-keepalived-ipfailover \
    registry.access.redhat.com/openshift3/ose-pod \
    registry.access.redhat.com/openshift3/ose-service-catalog \
    registry.access.redhat.com/openshift3/ose-sti-builder \
    registry.access.redhat.com/openshift3/ose-template-service-broker \
    registry.access.redhat.com/openshift3/ose-web-console \
    registry.access.redhat.com/openshift3/ose \
    registry.access.redhat.com/openshift3/container-engine \
    registry.access.redhat.com/openshift3/node \
    registry.access.redhat.com/openshift3/openvswitch \
    registry.access.redhat.com/openshift3/prometheus \
    registry.access.redhat.com/openshift3/prometheus-alert-buffer \
    registry.access.redhat.com/openshift3/prometheus-alertmanager \
    registry.access.redhat.com/openshift3/prometheus-node-exporter \
    registry.access.redhat.com/openshift3/mediawiki-apb \
    registry.access.redhat.com/openshift3/postgresql-apb \
    registry.access.redhat.com/cloudforms46/cfme-openshift-postgresql \
    registry.access.redhat.com/cloudforms46/cfme-openshift-memcached \
    registry.access.redhat.com/cloudforms46/cfme-openshift-app-ui \
    registry.access.redhat.com/cloudforms46/cfme-openshift-app \
    registry.access.redhat.com/cloudforms46/cfme-openshift-embedded-ansible \
    registry.access.redhat.com/cloudforms46/cfme-openshift-httpd \
    registry.access.redhat.com/cloudforms46/cfme-httpd-configmap-generator \
    registry.access.redhat.com/rhgs3/rhgs-server-rhel7 \
    registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7 \
    registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7 \
    registry.access.redhat.com/rhgs3/rhgs-s3-server-rhel7
Important: For Red Hat support, a CNS subscription is required for rhgs3/ images.
If you synchronized the metrics and log aggregation images, export them:
$ docker save -o ose3-logging-metrics-images.tar \
    registry.access.redhat.com/openshift3/logging-auth-proxy \
    registry.access.redhat.com/openshift3/logging-curator \
    registry.access.redhat.com/openshift3/logging-elasticsearch \
    registry.access.redhat.com/openshift3/logging-fluentd \
    registry.access.redhat.com/openshift3/logging-kibana \
    registry.access.redhat.com/openshift3/metrics-cassandra \
    registry.access.redhat.com/openshift3/metrics-hawkular-metrics \
    registry.access.redhat.com/openshift3/metrics-hawkular-openshift-agent \
    registry.access.redhat.com/openshift3/metrics-heapster
Export the S2I builder images that you synced in the previous section. For example, if you synced only the Jenkins and Tomcat images:
$ docker save -o ose3-builder-images.tar \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:latest \
    registry.access.redhat.com/jboss-webserver-3/webserver30-tomcat7-openshift:1.1 \
    registry.access.redhat.com/openshift3/jenkins-1-rhel7 \
    registry.access.redhat.com/openshift3/jenkins-2-rhel7 \
    registry.access.redhat.com/openshift3/jenkins-slave-base-rhel7 \
    registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7 \
    registry.access.redhat.com/openshift3/jenkins-slave-nodejs-rhel7
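You can quickly verify that a tarball was written correctly before transporting it by listing its contents; for example:
# List the first few entries of the saved image archive to confirm it is not empty or truncated
$ tar -tf ose3-builder-images.tar | head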
2.7.4. Repository Server
During the installation (and for later updates, should you so choose), you need a web server to host the repositories. RHEL 7 can provide the Apache web server.
Option 1: Re-configuring as a Web server
If you can re-connect the server where you synchronized the software and images to your LAN, then you can simply install Apache on the server:
$ sudo yum install httpd
Skip to Placing the Software.
Option 2: Building a Repository Server
If you need to build a separate server to act as the repository server, install a new RHEL 7 system with at least 110GB of space. During the installation of this repository server, make sure you select the Basic Web Server option.
2.7.4.1. Placing the Software
If necessary, attach the external storage, and then copy the repository files into Apache’s root folder. Note that the copy step below (cp -a) should be replaced with a move (mv) if you are repurposing the server that you used to sync:
$ cp -a /path/to/repos /var/www/html/
$ chmod -R +r /var/www/html/repos
$ restorecon -vR /var/www/html
Add the firewall rules:
$ sudo firewall-cmd --permanent --add-service=http
$ sudo firewall-cmd --reload
Enable and start Apache for the changes to take effect:
$ systemctl enable httpd
$ systemctl start httpd
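Once Apache is running, you can verify from another machine on the LAN that the repositories are being served; replace <server_IP> with the repository server’s address:
# Request the repository metadata over HTTP; a 200 response indicates the repository is reachable
$ curl -I http://<server_IP>/repos/rhel-7-server-ose-3.9-rpms/repodata/repomd.xml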
2.7.5. OpenShift Container Platform Systems
2.7.5.1. Building Your Hosts
At this point you can perform the initial creation of the hosts that will be part of the OpenShift Container Platform environment. It is recommended to use the latest version of RHEL 7 and to perform a minimal installation. You will also want to pay attention to the other OpenShift Container Platform-specific prerequisites.
Once the hosts are initially built, the repositories can be set up.
2.7.5.2. Connecting the Repositories
On all of the relevant systems that will need OpenShift Container Platform software components, create the required repository definitions. Place the following text in the /etc/yum.repos.d/ose.repo file, replacing <server_IP> with the IP address or host name of the Apache server that hosts the software repositories:
[rhel-7-server-rpms]
name=rhel-7-server-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-rpms
enabled=1
gpgcheck=0

[rhel-7-server-extras-rpms]
name=rhel-7-server-extras-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-extras-rpms
enabled=1
gpgcheck=0

[rhel-7-fast-datapath-rpms]
name=rhel-7-fast-datapath-rpms
baseurl=http://<server_IP>/repos/rhel-7-fast-datapath-rpms
enabled=1
gpgcheck=0

[rhel-7-server-ansible-2.4-rpms]
name=rhel-7-server-ansible-2.4-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-ansible-2.4-rpms
enabled=1
gpgcheck=0

[rhel-7-server-ose-3.9-rpms]
name=rhel-7-server-ose-3.9-rpms
baseurl=http://<server_IP>/repos/rhel-7-server-ose-3.9-rpms
enabled=1
gpgcheck=0
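With the repository file in place, you can confirm that each host resolves the internal repositories before continuing:
# Clear cached metadata and list the repositories now provided by the internal server
$ sudo yum clean all
$ sudo yum repolist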
2.7.5.3. Host Preparation
At this point, the systems are ready to continue with the host preparation steps.