Chapter 6. Installing OpenShift Container Platform
To install an OpenShift Container Platform cluster, you run a series of Ansible playbooks.
Running Ansible playbooks with the --tags or --check options is not supported by Red Hat.
To install OpenShift Container Platform as a stand-alone registry, see Installing a Stand-alone Registry.
6.1. Prerequisites
Before installing OpenShift Container Platform, prepare your cluster hosts:
- Review the System and environment requirements.
- If you plan to deploy a large cluster, review the Scaling and Performance Guide for suggestions on optimizing installation time.
- Prepare your hosts. This process includes verifying system and environment requirements per component type, installing and configuring the docker service, and installing Ansible version 2.6 or later. You must install Ansible to run the installation playbooks.
- Configure your inventory file to define your environment and OpenShift Container Platform cluster configuration. Both your initial installation and future cluster upgrades are based on this inventory file. A minimal sketch of an inventory file follows this list.
- If you are installing OpenShift Container Platform on Red Hat Enterprise Linux, decide if you want to use the RPM or system container installation method. The system container method is required for RHEL Atomic Host systems.
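As a companion to the inventory item above, the following is a minimal, hypothetical sketch of an inventory file for a small cluster. The host names, node group names, and variable values are placeholders; the full set of options is described in the inventory configuration documentation.
# Minimal sketch of an Ansible inventory for a small cluster.
# Host names and variable values are placeholders; adjust them to
# your environment before running any playbook.
cat > /etc/ansible/hosts <<'EOF'
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com openshift_node_group_name='node-config-master-infra'
node1.example.com openshift_node_group_name='node-config-compute'
node2.example.com openshift_node_group_name='node-config-compute'
EOF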
6.1.1. Running the RPM-based installer
The RPM-based installer uses Ansible installed via RPM packages to run playbooks and configuration files available on the local host.
Do not run OpenShift Ansible playbooks under nohup. Using nohup with the playbooks causes file descriptors to be created but not closed. As a result, the system can run out of available file descriptors and the playbook fails.
To run the RPM-based installer:
Change to the playbook directory and run the prerequisites.yml playbook. This playbook installs required software packages, if any, and modifies the container runtimes. Unless you need to configure the container runtimes, run this playbook only once, before you deploy a cluster the first time:
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook [-i /path/to/inventory] \ 1
    playbooks/prerequisites.yml
- 1: If your inventory file is not at the default /etc/ansible/hosts location, specify the -i option and the path to the inventory file.
Change to the playbook directory and run the deploy_cluster.yml playbook to initiate the cluster installation:
$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook [-i /path/to/inventory] \ 1
    playbooks/deploy_cluster.yml
- 1: If your inventory file is not at the default /etc/ansible/hosts location, specify the -i option and the path to the inventory file.
If your installation succeeded, verify the installation. If your installation failed, retry the installation.
6.1.2. Running the containerized installer
The openshift3/ose-ansible image is a containerized version of the OpenShift Container Platform installer. This installer image provides the same functionality as the RPM-based installer, but it runs in a containerized environment that provides all of its dependencies rather than being installed directly on the host. The only requirement to use it is the ability to run a container.
6.1.2.1. Running the installer as a system container
The installer image can be used as a system container. System containers are stored and run outside of the traditional docker service. This enables you to run the installer image from one of the target hosts without concern that the installation will restart docker on the host.
To use the Atomic CLI to run the installer as a run-once system container, perform the following steps as the root user:
Run the prerequisites.yml playbook:
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \ 1
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml \
    --set OPTS="-v" \
    registry.redhat.io/openshift3/ose-ansible:v3.11
- 1: Specify the location on the local host for your inventory file.
This command runs a set of prerequisite tasks by using the specified inventory file and the root user’s SSH configuration.
Run the deploy_cluster.yml playbook:
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \ 1
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml \
    --set OPTS="-v" \
    registry.redhat.io/openshift3/ose-ansible:v3.11
- 1: Specify the location on the local host for your inventory file.
This command initiates the cluster installation by using the specified inventory file and the root user’s SSH configuration. It logs the output on the terminal and also saves it in the /var/log/ansible.log file. The first time this command is run, the image is imported into OSTree storage (system containers use this rather than docker daemon storage). On subsequent runs, it reuses the stored image.
If the installation fails for any reason, see Known Issues before re-running the installer to check for any specific instructions or workarounds.
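Because the output is also written to /var/log/ansible.log, you can follow a long-running installation from a second shell; a trivial sketch:
# Follow the installer output that the system container also writes to
# /var/log/ansible.log.
tail -f /var/log/ansible.log

# Or, after a run, check the Ansible play recap lines for failed tasks.
grep -i 'failed=' /var/log/ansible.log | tail -n 20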
6.1.2.2. Running other playbooks
You can use the PLAYBOOK_FILE environment variable to specify other playbooks that you want to run with the containerized installer. The default value of PLAYBOOK_FILE is /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml, the main cluster installation playbook, but you can set it to the path of another playbook inside the container.
For example, to run the pre-install checks playbook before installation, use the following command:
# atomic install --system \
    --storage=ostree \
    --set INVENTORY_FILE=/path/to/inventory \
    --set PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/openshift-checks/pre-install.yml \
    --set OPTS="-v" \
    registry.redhat.io/openshift3/ose-ansible:v3.11
6.1.2.3. Running the installer as a container
The installer image can also run as a docker container anywhere that docker can run.
This method must not be used to run the installer on one of the hosts being configured, because the installer might restart docker on the host and disrupt the installation.
Although this method and the system container method above use the same image, they run with different entry points and contexts, so runtime parameters are not the same.
At a minimum, when running the installer as a docker container you must provide:
- SSH key(s), so that Ansible can reach your hosts.
- An Ansible inventory file.
- The location of the Ansible playbook to run against that inventory.
Here is an example of how to run an install via docker, which must be run by a non-root user with access to docker:
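Before running the example below, you can confirm that your non-root user can reach the docker daemon; a quick sketch:
# Confirm that the current, non-root user can talk to the docker daemon.
# If this fails, grant the user access to docker before continuing.
id -un
docker info > /dev/null && echo "docker access OK"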
First, run the prerequisites.yml playbook:
$ docker run -t -u `id -u` \ 1
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \ 2
    -v $HOME/ansible/hosts:/tmp/inventory:Z \ 3
    -e INVENTORY_FILE=/tmp/inventory \ 4
    -e PLAYBOOK_FILE=playbooks/prerequisites.yml \ 5
    -e OPTS="-v" \ 6
    registry.redhat.io/openshift3/ose-ansible:v3.11
- 1: -u `id -u` makes the container run with the same UID as the current user, which allows that user to use the SSH key inside the container. SSH private keys are expected to be readable only by their owner.
- 2: -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z mounts your SSH key, $HOME/.ssh/id_rsa, under the container user’s $HOME/.ssh directory. /opt/app-root/src is the $HOME of the user in the container. If you mount the SSH key into a different location, add an environment variable with -e ANSIBLE_PRIVATE_KEY_FILE=/the/mount/point or set ansible_ssh_private_key_file=/the/mount/point as a variable in the inventory to point Ansible to it. Note that the SSH key is mounted with the :Z flag. This flag is required so that the container can read the SSH key under its restricted SELinux context. It also means that your original SSH key file is relabeled to something like system_u:object_r:container_file_t:s0:c113,c247. For more details about :Z, review the docker-run(1) man page. Keep this in mind when providing these volume mount specifications because relabeling might have unexpected consequences. For example, if you mount (and therefore relabel) your whole $HOME/.ssh directory, it will block the host’s sshd from accessing your public keys to log in. For this reason, you might want to use a separate copy of the SSH key or directory so that the original file labels remain untouched; a sketch of this approach follows the deploy_cluster.yml command below.
- 3 4: -v $HOME/ansible/hosts:/tmp/inventory:Z and -e INVENTORY_FILE=/tmp/inventory mount a static Ansible inventory file into the container as /tmp/inventory and set the corresponding environment variable to point at it. As with the SSH key, the inventory file might need to be relabeled with the :Z flag to allow reading in the container, depending on its existing label. For files in a user $HOME directory, this is likely to be needed. You might prefer to copy the inventory to a dedicated location before you mount it. You can also download the inventory file from a web server if you specify the INVENTORY_URL environment variable, or generate it dynamically by using the DYNAMIC_SCRIPT_URL parameter to specify an executable script that provides a dynamic inventory.
- 5: -e PLAYBOOK_FILE=playbooks/prerequisites.yml specifies the playbook to run as a relative path from the top-level directory of the openshift-ansible content. In this example, you specify the prerequisites playbook. You can also specify the full path from the RPM or the path to any other playbook file in the container.
- 6: -e OPTS="-v" supplies arbitrary command line options to the ansible-playbook command that runs inside the container. In this example, specify -v to increase verbosity.
Next, run the deploy_cluster.yml playbook to initiate the cluster installation:
$ docker run -t -u `id -u` \
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
    -v $HOME/ansible/hosts:/tmp/inventory:Z \
    -e INVENTORY_FILE=/tmp/inventory \
    -e PLAYBOOK_FILE=playbooks/deploy_cluster.yml \
    -e OPTS="-v" \
    registry.redhat.io/openshift3/ose-ansible:v3.11
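If you prefer not to relabel your primary SSH key (see callout 2 above), one approach, sketched here with an arbitrary directory name, is to mount a dedicated copy of the key instead:
# Keep a dedicated copy of the SSH key so the original file labels
# stay untouched; the directory name here is arbitrary.
mkdir -p $HOME/openshift-install
cp $HOME/.ssh/id_rsa $HOME/openshift-install/id_rsa
chmod 600 $HOME/openshift-install/id_rsa

# Mount the copy instead of the original when running the installer image.
docker run -t -u `id -u` \
    -v $HOME/openshift-install/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
    -v $HOME/ansible/hosts:/tmp/inventory:Z \
    -e INVENTORY_FILE=/tmp/inventory \
    -e PLAYBOOK_FILE=playbooks/deploy_cluster.yml \
    -e OPTS="-v" \
    registry.redhat.io/openshift3/ose-ansible:v3.11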
6.1.2.4. Running the Installation Playbook for OpenStack
To install OpenShift Container Platform on an existing OpenStack installation, use the OpenStack playbook. For more information about the playbook, including detailed prerequisites, see the OpenStack Provisioning readme file.
To run the playbook, run the following command:
$ ansible-playbook --user openshift \
    -i openshift-ansible/playbooks/openstack/inventory.py \
    -i inventory \
    openshift-ansible/playbooks/openstack/openshift-cluster/provision_install.yml
6.1.3. About the installation playbooks
The installer uses modularized playbooks so that administrators can install specific components as needed. Breaking up the roles and playbooks makes it easier to target ad hoc administration tasks, which gives you more control during installation and saves time.
The main installation playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml runs a set of individual component playbooks in a specific order, and the installer reports back at the end what phases you have gone through. If the installation fails, you are notified which phase failed along with the errors from the Ansible run.
While RHEL Atomic Host is supported for running OpenShift Container Platform services as system containers, the installation method uses Ansible, which is not available in RHEL Atomic Host. The RPM-based installer must therefore be run from a RHEL 7 system. The host that initiates the installation does not need to be included in the OpenShift Container Platform cluster, but it can be. Alternatively, a containerized version of the installer is available as a system container, which can be run from a RHEL Atomic Host system.
6.2. Retrying the installation
If the Ansible installer fails, you can still install OpenShift Container Platform:
- Review the Known Issues to check for any specific instructions or workarounds.
- Address the errors from your installation.
Determine if you need to uninstall and reinstall or retry the installation:
- If you did not modify the SDN configuration or generate new certificates, retry the installation.
- If you modified the SDN configuration, generated new certificates, or the installer fails again, you must either start over with a clean operating system installation or uninstall and install again.
- If you use virtual machines, start from a new image or uninstall and install again.
- If you use bare metal machines, uninstall and install again.
Retry the installation:
- You can run the deploy_cluster.yml playbook again.
- You can run the remaining individual installation playbooks.
If you want to run only the remaining playbooks, start by running the playbook for the phase that failed and then run each of the remaining playbooks in order. Run each playbook with the following command:
# ansible-playbook [-i /path/to/inventory] <playbook_file_location>
The following table lists the playbooks in the order that they must run:
Table 6.1. Individual Component Playbook Run Order (Playbook Name: File Location)
- Health Check: /usr/share/ansible/openshift-ansible/playbooks/openshift-checks/pre-install.yml
- Node Bootstrap: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/bootstrap.yml
- etcd Install: /usr/share/ansible/openshift-ansible/playbooks/openshift-etcd/config.yml
- NFS Install: /usr/share/ansible/openshift-ansible/playbooks/openshift-nfs/config.yml
- Load Balancer Install: /usr/share/ansible/openshift-ansible/playbooks/openshift-loadbalancer/config.yml
- Master Install: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/config.yml
- Master Additional Install: /usr/share/ansible/openshift-ansible/playbooks/openshift-master/additional_config.yml
- Node Join: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/join.yml
- GlusterFS Install: /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml
- Hosted Install: /usr/share/ansible/openshift-ansible/playbooks/openshift-hosted/config.yml
- Monitoring Install: /usr/share/ansible/openshift-ansible/playbooks/openshift-monitoring/config.yml
- Web Console Install: /usr/share/ansible/openshift-ansible/playbooks/openshift-web-console/config.yml
- Admin Console Install: /usr/share/ansible/openshift-ansible/playbooks/openshift-console/config.yml
- Metrics Install: /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml
- metrics-server: /usr/share/ansible/openshift-ansible/playbooks/metrics-server/config.yml
- Logging Install: /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml
- Availability Monitoring Install: /usr/share/ansible/openshift-ansible/playbooks/openshift-monitor-availability/config.yml
- Service Catalog Install: /usr/share/ansible/openshift-ansible/playbooks/openshift-service-catalog/config.yml
- Management Install: /usr/share/ansible/openshift-ansible/playbooks/openshift-management/config.yml
- Descheduler Install: /usr/share/ansible/openshift-ansible/playbooks/openshift-descheduler/config.yml
- Node Problem Detector Install: /usr/share/ansible/openshift-ansible/playbooks/openshift-node-problem-detector/config.yml
- Operator Lifecycle Manager (OLM) Install (Technology Preview): /usr/share/ansible/openshift-ansible/playbooks/olm/config.yml
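For example, if the Master Install phase failed, a retry under these instructions would resume at that playbook and continue in table order with the playbooks that apply to your environment. A sketch follows; the inventory path is a placeholder, and only a subset of the remaining playbooks is shown.
# Sketch: resume a failed installation at the Master Install phase and
# continue in table order. Only a subset of the remaining playbooks is
# shown; run the ones that apply to your environment, in order.
cd /usr/share/ansible/openshift-ansible
for playbook in \
    playbooks/openshift-master/config.yml \
    playbooks/openshift-master/additional_config.yml \
    playbooks/openshift-node/join.yml \
    playbooks/openshift-hosted/config.yml \
    playbooks/openshift-web-console/config.yml; do
    ansible-playbook -i /path/to/inventory "$playbook" || break
done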
6.3. Verifying the Installation
After the installation completes:
Verify that the master is started and nodes are registered and reporting in Ready status. On the master host, run the following command as root:
# oc get nodes
NAME                 STATUS    ROLES     AGE   VERSION
master.example.com   Ready     master    7h    v1.9.1+a0ce1bc657
node1.example.com    Ready     compute   7h    v1.9.1+a0ce1bc657
node2.example.com    Ready     compute   7h    v1.9.1+a0ce1bc657
To verify that the web console is installed correctly, use the master host name and the web console port number to access the web console with a web browser.
For example, for a master host with a host name of master.openshift.com and using the default port of 8443, the web console URL is https://master.openshift.com:8443/console.
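As a quick command-line reachability check before opening a browser, using the example host name above (the -k flag skips certificate verification, which is often needed when your workstation does not yet trust the cluster CA):
# Expect an HTTP 200 (or a redirect) if the web console is serving.
curl -k -s -o /dev/null -w '%{http_code}\n' https://master.openshift.com:8443/console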
Verifying Multiple etcd Hosts
If you installed multiple etcd hosts:
First, verify that the etcd package, which provides the etcdctl command, is installed:
# yum install etcd
On a master host, verify the etcd cluster health, substituting the FQDNs of your etcd hosts in the following command:
# etcdctl -C \
    https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
    --ca-file=/etc/origin/master/master.etcd-ca.crt \
    --cert-file=/etc/origin/master/master.etcd-client.crt \
    --key-file=/etc/origin/master/master.etcd-client.key cluster-health
Also verify the member list is correct:
# etcdctl -C \
    https://etcd1.example.com:2379,https://etcd2.example.com:2379,https://etcd3.example.com:2379 \
    --ca-file=/etc/origin/master/master.etcd-ca.crt \
    --cert-file=/etc/origin/master/master.etcd-client.crt \
    --key-file=/etc/origin/master/master.etcd-client.key member list
Verifying Multiple Masters Using HAProxy
If you installed multiple masters using HAProxy as a load balancer, open the following URL and check HAProxy’s status:
http://<lb_hostname>:9000 1
- 1: Provide the load balancer host name listed in the [lb] section of your inventory file.
You can verify your installation by consulting the HAProxy Configuration documentation.
6.4. Optionally securing builds
Running docker build is a privileged process, so the container has more access to the node than might be considered acceptable in some multi-tenant environments. If you do not trust your users, you can configure a more secure option after installation. Disable Docker builds on the cluster and require that users build images outside of the cluster. See Securing Builds by Strategy for more information about this optional process.
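As a sketch of that approach (see Securing Builds by Strategy for the authoritative procedure, including how to grant the strategy back to trusted users or groups), the Docker build strategy can be removed from all authenticated users:
# Remove the Docker build strategy from all authenticated users so that
# docker builds are no longer permitted on the cluster.
oc adm policy remove-cluster-role-from-group \
    system:build-strategy-docker system:authenticated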
6.5. Known Issues
- On failover in multiple master clusters, it is possible for the controller manager to overcorrect, which causes the system to run more pods than intended. However, this is a transient event and the system corrects itself over time. See https://github.com/kubernetes/kubernetes/issues/10030 for details.
- Due to a known issue, after running the installation, if NFS volumes are provisioned for any component, the following directories might be created whether or not their components are deployed to NFS volumes:
- /exports/logging-es
- /exports/logging-es-ops/
- /exports/metrics/
- /exports/prometheus
- /exports/prometheus-alertbuffer/
- /exports/prometheus-alertmanager/
You can delete these directories after installation, as needed.
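For example, if none of these components store data on NFS in your environment, the unused export directories can be removed on the NFS host; a sketch (verify that the directories are genuinely unused before deleting them):
# On the NFS host: remove export directories for components that do not
# actually use NFS-backed storage. Confirm they are unused first.
rm -rf /exports/logging-es /exports/logging-es-ops /exports/metrics \
       /exports/prometheus /exports/prometheus-alertbuffer \
       /exports/prometheus-alertmanager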
6.6. What’s Next?
Now that you have a working OpenShift Container Platform instance, you can:
- Deploy an integrated container image registry.
- Deploy a router.