This documentation is for a release that is no longer maintained. See the documentation for the latest supported version 3 or the latest supported version 4.
Chapter 3. Preparing Your Hosts
3.1. Operating System Requirements
The operating system requirements for master and node hosts are different depending on your server architecture.
- For servers that use x86_64 architecture, use a base installation of Red Hat Enterprise Linux (RHEL) 7.4 or later with the latest packages from the Extras channel or RHEL Atomic Host 7.4.2 or later.
- For cloud-based installations, use a base installation of RHEL 7.4 or later with the latest packages from the Extras channel.
- For servers that use IBM POWER8 architecture, use a base installation of RHEL 7.5 with the latest packages from the Extras channel.
- For servers that use IBM POWER9 architecture, use a base installation of RHEL-ALT 7.5 with the latest packages from the Extras channel.
See the following documentation for the respective installation instructions, if required:
3.2. Server Type Requirements
If you use IBM POWER servers for your nodes, you can use only IBM POWER servers. You cannot add nodes that run on IBM POWER servers to an existing cluster that uses x86_64 servers or deploy cluster nodes on a mix of IBM POWER and x86_64 servers.
3.3. Setting PATH
The PATH for the root user on each host must contain the following directories:
- /bin
- /sbin
- /usr/bin
- /usr/sbin
These should all be included by default in a fresh RHEL 7.x installation.
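As a quick sanity check, the presence of each required directory in a PATH-style value can be verified with POSIX shell pattern matching. The sketch below uses an example path string rather than reading root's actual environment:

```shell
# Sketch: check that each required directory appears in a PATH-style value.
# "path" here is an example string, not necessarily root's real PATH.
path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
missing=""
for dir in /bin /sbin /usr/bin /usr/sbin; do
  case ":$path:" in
    *":$dir:"*) ;;                  # directory found, nothing to do
    *) missing="$missing $dir" ;;   # collect any directory that is absent
  esac
done
if [ -z "$missing" ]; then
  echo "PATH ok"
else
  echo "PATH missing:$missing"
fi
```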
3.4. Ensuring Host Access
The OpenShift Container Platform installer requires a user that has access to all hosts. If you want to run the installer as a non-root user, passwordless sudo rights must be configured on each destination host.
For example, you can generate an SSH key on the host where you will invoke the installation process:
# ssh-keygen
Do not use a password.
An easy way to distribute your SSH keys is by using a bash loop:
# for host in master.example.com \
    node1.example.com \
    node2.example.com; \
    do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \
done
Modify the host names in the above command according to your configuration.
After you run the bash loop, confirm that you can access each host that is listed in the loop through SSH.
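The key-generation step can also be done non-interactively. The sketch below passes -N '' for an empty passphrase and -f for the output path, writing to a temporary directory so the example does not touch ~/.ssh; the file names are illustrative:

```shell
# Sketch: generate a passwordless SSH key pair non-interactively.
# -q: quiet, -t rsa: key type, -N '': empty passphrase, -f: output file.
tmp=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$tmp/id_rsa"
ls -1 "$tmp"   # shows id_rsa and id_rsa.pub
```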
3.5. Setting Proxy Overrides
If the /etc/environment file on your nodes contains either an http_proxy or https_proxy value, you must also set a no_proxy value in that file to allow open communication between OpenShift Container Platform components.
The no_proxy parameter in the /etc/environment file is not the same value as the global proxy values that you set in your inventory file. The global proxy values configure specific OpenShift Container Platform services with your proxy settings. See Configuring Global Proxy Options for details.
If the /etc/environment file contains proxy values, define the following values in the no_proxy parameter of that file on each node:
- Master and node host names or their domain suffix.
- Other internal host names or their domain suffix.
- Etcd IP addresses. You must provide IP addresses and not host names because etcd access is controlled by IP address.
- Kubernetes IP address, by default 172.30.0.1. This must be the value set in the openshift_portal_net parameter in your inventory file.
- Kubernetes internal domain suffix, cluster.local.
- Kubernetes internal domain suffix, .svc.
Because no_proxy does not support CIDR notation, you can use domain suffixes.
If you use either an http_proxy or https_proxy value, your no_proxy parameter value resembles the following example:
no_proxy=.internal.example.com,10.0.0.1,10.0.0.2,10.0.0.3,.cluster.local,.svc,localhost,127.0.0.1,172.30.0.1
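One way to keep this list manageable is to assemble it from its parts. The sketch below rebuilds the example value above; all host names and IP addresses are the placeholders from that example:

```shell
# Sketch: assemble the no_proxy value from its components.
domain_suffix=".internal.example.com"   # internal domain suffix
etcd_ips="10.0.0.1,10.0.0.2,10.0.0.3"   # etcd members, by IP address
kube_svc_ip="172.30.0.1"                # must match openshift_portal_net
no_proxy="${domain_suffix},${etcd_ips},.cluster.local,.svc,localhost,127.0.0.1,${kube_svc_ip}"
echo "no_proxy=${no_proxy}"
```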
3.6. Host Registration
Each host must be registered using Red Hat Subscription Manager (RHSM) and have an active OpenShift Container Platform subscription attached to access the required packages.
On each host, register with RHSM:
# subscription-manager register --username=<user_name> --password=<password>
Pull the latest subscription data from RHSM:
# subscription-manager refresh
List the available subscriptions:
# subscription-manager list --available --matches '*OpenShift*'
In the output for the previous command, find the pool ID for an OpenShift Container Platform subscription and attach it:
# subscription-manager attach --pool=<pool_id>
Disable all yum repositories:
Disable all the enabled RHSM repositories:
# subscription-manager repos --disable="*"
List the remaining yum repositories and note their names under repo id, if any:
# yum repolist
Use yum-config-manager to disable the remaining yum repositories:
# yum-config-manager --disable <repo_id>
Alternatively, disable all repositories:
yum-config-manager --disable \*
Note that this could take a few minutes if you have a large number of available repositories.
Enable only the repositories required by OpenShift Container Platform 3.10.
For cloud installations and on-premise installations on x86_64 servers, run the following command:
# subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.10-rpms" \
    --enable="rhel-7-server-ansible-2.4-rpms"
For on-premise installations on IBM POWER8 servers, run the following command:
For on-premise installations on IBM POWER9 servers, run the following command:
3.7. Installing Base Packages
If your hosts are running RHEL 7.5 and you want to accept OpenShift Container Platform’s default docker configuration (using OverlayFS storage and all default logging options), you can skip to Configuring Your Inventory File to create an inventory representing your cluster. A prerequisites.yml playbook used when running the installation will ensure that the default packages and configuration are correctly applied.
If your hosts are running RHEL 7.4, or if they are running RHEL 7.5 and you want to customize the docker configuration further, follow the guidance in the remaining sections of this topic.
For RHEL 7 systems:
Install the following base packages:
# yum install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct
Update the system to the latest packages:
# yum update
# reboot
If you plan to use the RPM-based installer to run the installation, you can skip this step. However, if you plan to use the containerized installer:
Install the atomic package:
# yum install atomic
- Skip to Installing Docker.
Install the following package, which provides RPM-based OpenShift Container Platform installer utilities and pulls in other packages required by the cluster installation process, such as Ansible, playbooks, and related configuration files:
# yum install openshift-ansible
Note: In previous OpenShift Container Platform releases, the atomic-openshift-utils package was installed for this step. However, starting with OpenShift Container Platform 3.10, that package is removed and the openshift-ansible package provides all requirements.
For RHEL Atomic Host 7 systems:
Ensure the host is up to date by upgrading to the latest Atomic tree if one is available:
# atomic host upgrade
Copy to Clipboard Copied! Toggle word wrap Toggle overflow After the upgrade is completed and prepared for the next boot, reboot the host:
# systemctl reboot
3.8. Installing Docker
At this point, you should install Docker on all master and node hosts. This allows you to configure your Docker storage options before installing OpenShift Container Platform.
On RHEL Atomic Host 7 systems, Docker should already be installed, configured, and running by default.
For RHEL 7 systems, install Docker 1.13:
# yum install docker-1.13.1
After the package installation is complete, verify that version 1.13 was installed:
# rpm -V docker-1.13.1
# docker version
The cluster installation process automatically modifies the /etc/sysconfig/docker file.
3.9. Configuring Docker Storage
Containers and the images they are created from are stored in Docker’s storage back end. This storage is ephemeral and separate from any persistent storage allocated to meet the needs of your applications. With ephemeral storage, container-saved data is lost when the container is removed. With persistent storage, container-saved data remains if the container is removed.
You must configure storage for each system that runs a container daemon. For containerized installations, you need storage on masters. Also, by default, the web console is run in containers on masters, and storage is needed on masters to run the web console. Containers are run on nodes, so storage is always required on the nodes. The size of storage depends on workload, number of containers, the size of the containers being run, and the containers' storage requirements. Containerized etcd also needs container storage configured.
If your hosts are running RHEL 7.5 and you want to accept OpenShift Container Platform’s default docker configuration (using OverlayFS storage and all default logging options), you can skip to Configuring Your Inventory File to create an inventory representing your cluster. A prerequisites.yml playbook used when running the installation will ensure that the default packages and configuration are correctly applied.
If your hosts are running RHEL 7.4, or if they are running RHEL 7.5 and you want to customize the docker configuration further, follow the guidance in the remaining sections of this topic.
For RHEL Atomic Host
The default storage back end for Docker on RHEL Atomic Host is a thin pool logical volume, which is supported for production environments. You must ensure that enough space is allocated for this volume per the Docker storage requirements mentioned in System Requirements.
If you do not have enough allocated, see Managing Storage with Docker Formatted Containers for details on using docker-storage-setup and basic instructions on storage management in RHEL Atomic Host.
For RHEL
The default storage back end for Docker on RHEL 7 is a thin pool on loopback devices, which is not supported for production use and only appropriate for proof of concept environments. For production environments, you must create a thin pool logical volume and re-configure Docker to use that volume.
Docker stores images and containers in a graph driver, which is a pluggable storage technology, such as DeviceMapper, OverlayFS, and Btrfs. Each has advantages and disadvantages. For example, OverlayFS is faster than DeviceMapper at starting and stopping containers, but is not Portable Operating System Interface for Unix (POSIX) compliant because of the architectural limitations of a union file system and is not supported prior to Red Hat Enterprise Linux 7.2. See the Red Hat Enterprise Linux release notes for information on using OverlayFS with your version of RHEL.
For more information on the benefits and limitations of DeviceMapper and OverlayFS, see Choosing a Graph Driver.
3.9.1. Configuring OverlayFS
OverlayFS is a type of union file system. It allows you to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified.
Comparing the Overlay Versus Overlay2 Graph Drivers has more information about the overlay and overlay2 drivers.
For information on enabling the OverlayFS storage driver for the Docker service, see the Red Hat Enterprise Linux Atomic Host documentation.
3.9.2. Configuring Thin Pool Storage
You can use the docker-storage-setup script included with Docker to create a thin pool device and configure Docker’s storage driver. This can be done after installing Docker and should be done before creating images or containers. The script reads configuration options from the /etc/sysconfig/docker-storage-setup file and supports three options for creating the logical volume:
- Option A) Use an additional block device.
- Option B) Use an existing, specified volume group.
- Option C) Use the remaining free space from the volume group where your root file system is located.
Option A is the most robust option, however it requires adding an additional block device to your host before configuring Docker storage. Options B and C both require leaving free space available when provisioning your host. Option C is known to cause issues with some applications, for example Red Hat Mobile Application Platform (RHMAP).
Create the docker-pool volume using one of the following three options:
Option A) Use an additional block device.
In /etc/sysconfig/docker-storage-setup, set DEVS to the path of the block device you wish to use. Set VG to the volume group name you wish to create; docker-vg is a reasonable choice. For example:
# cat <<EOF > /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdc
VG=docker-vg
EOF
Then run docker-storage-setup and review the output to ensure the docker-pool volume was created:
Option B) Use an existing, specified volume group.
In /etc/sysconfig/docker-storage-setup, set VG to the desired volume group. For example:
# cat <<EOF > /etc/sysconfig/docker-storage-setup
VG=docker-vg
EOF
Then run docker-storage-setup and review the output to ensure the docker-pool volume was created:
Option C) Use the remaining free space from the volume group where your root file system is located.
Verify that the volume group where your root file system resides has the desired free space, then run docker-storage-setup and review the output to ensure the docker-pool volume was created:
Verify your configuration. You should have a dm.thinpooldev value in the /etc/sysconfig/docker-storage file and a docker-pool logical volume:
Important: Before using Docker or OpenShift Container Platform, verify that the docker-pool logical volume is large enough to meet your needs. The docker-pool volume should be 60% of the available volume group and will grow to fill the volume group via LVM monitoring.
If Docker has not yet been started on the host, enable and start the service, then verify it is running:
# systemctl enable docker
# systemctl start docker
# systemctl is-active docker
If Docker is already running, re-initialize Docker:
Warning: This will destroy any containers or images currently on the host.
# systemctl stop docker
# rm -rf /var/lib/docker/*
# systemctl restart docker
If there is any content in /var/lib/docker/, it must be deleted. Files will be present if Docker has been used prior to the installation of OpenShift Container Platform.
3.9.3. Reconfiguring Docker Storage
Should you need to reconfigure Docker storage after having created the docker-pool, you should first remove the docker-pool logical volume. If you are using a dedicated volume group, you should also remove the volume group and any associated physical volumes before reconfiguring docker-storage-setup according to the instructions above.
See Logical Volume Manager Administration for more detailed information on LVM management.
3.9.4. Enabling Image Signature Support
OpenShift Container Platform is capable of cryptographically verifying images are from trusted sources. The Container Security Guide provides a high-level description of how image signing works.
You can configure image signature verification using the atomic command line interface (CLI), version 1.12.5 or greater. The atomic CLI is pre-installed on RHEL Atomic Host systems.
For more on the atomic CLI, see the Atomic CLI documentation.
Install the atomic package if it is not installed on the host system:
$ yum install atomic
The atomic trust sub-command manages trust configuration. The default configuration is to whitelist all registries. This means no signature verification is configured.
$ atomic trust show
* (default) accept
A reasonable configuration might be to whitelist a particular registry or namespace, blacklist (reject) untrusted registries, and require signature verification on a vendor registry. The following set of commands performs this example configuration:
Example Atomic Trust Configuration
When all the signed sources are verified, nodes may be further hardened with a global reject default:
Use the atomic man page (man atomic-trust) for additional examples.
The following files and directories comprise the trust configuration of a host:
- /etc/containers/registries.d/*
- /etc/containers/policy.json
The trust configuration may be managed directly on each node or the generated files managed on a separate host and distributed to the appropriate nodes using Ansible, for example. See the Container Image Signing Integration Guide for an example of automating file distribution with Ansible.
3.9.5. Managing Container Logs
Sometimes a container’s log file (the /var/lib/docker/containers/<hash>/<hash>-json.log file on the node where the container is running) can increase to a problematic size. You can manage this by configuring Docker’s json-file logging driver to restrict the size and number of log files.
Option | Purpose
---|---
--log-opt max-size | Sets the size at which a new log file is created.
--log-opt max-file | Sets the maximum number of log files to be kept per host.
To configure the log file, edit the /etc/sysconfig/docker file. For example, to set the maximum file size to 1MB and always keep the last three log files, append max-size=1M and max-file=3 to the OPTIONS= line, ensuring that the values maintain the single quotation mark formatting:
OPTIONS='--insecure-registry=172.30.0.0/16 --selinux-enabled --log-opt max-size=1M --log-opt max-file=3'
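With these two options, the worst-case disk footprint of JSON logs is bounded per container. A rough calculation under the example settings above:

```shell
# Sketch: worst-case JSON log disk usage per container for the example
# settings (--log-opt max-size=1M, --log-opt max-file=3).
max_size_mb=1
max_file=3
per_container_mb=$((max_size_mb * max_file))
echo "up to ${per_container_mb}MB of JSON logs per container"
```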
Restart the Docker service:
# systemctl restart docker
3.9.6. Viewing Available Container Logs
Container logs are stored in the /var/lib/docker/containers/<hash>/ directory on the node where the container is running.
See Docker’s documentation for additional information on how to configure logging drivers.
3.9.7. Blocking Local Volume Usage
When a volume is provisioned using the VOLUME instruction in a Dockerfile or using the docker run -v <volumename> command, a host’s storage space is used. Using this storage can lead to an unexpected out-of-space issue and could bring down the host.
In OpenShift Container Platform, users trying to run their own images risk filling the entire storage space on a node host. One solution to this issue is to prevent users from running images with volumes. This way, the only storage a user has access to can be limited, and the cluster administrator can assign storage quota.
Using docker-novolume-plugin solves this issue by disallowing starting a container with local volumes defined. In particular, the plug-in blocks docker run commands that contain:
- The --volumes-from option
- Images that have VOLUME(s) defined
- References to existing volumes that were provisioned with the docker volume command
The plug-in does not block references to bind mounts.
To enable docker-novolume-plugin, perform the following steps on each node host:
Install the docker-novolume-plugin package:
$ yum install docker-novolume-plugin
Enable and start the docker-novolume-plugin service:
$ systemctl enable docker-novolume-plugin
$ systemctl start docker-novolume-plugin
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Edit the /etc/sysconfig/docker file and append the following to the
OPTIONS
list:--authorization-plugin=docker-novolume-plugin
--authorization-plugin=docker-novolume-plugin
Restart the docker service:
$ systemctl restart docker
After you enable this plug-in, containers with local volumes defined fail to start and show the following error message:
runContainer: API error (500): authorization denied by plugin docker-novolume-plugin: volumes are not allowed
3.10. Red Hat Gluster Storage Software Requirements
To access GlusterFS volumes, the mount.glusterfs command must be available on all schedulable nodes. For RPM-based systems, the glusterfs-fuse package must be installed:
# yum install glusterfs-fuse
This package comes installed on every RHEL system. However, it is recommended to update to the latest available version from Red Hat Gluster Storage if your servers use x86_64 architecture. To do this, the following RPM repository must be enabled:
# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms
If glusterfs-fuse is already installed on the nodes, ensure that the latest version is installed:
# yum update glusterfs-fuse
3.11. What’s Next?
After you have finished preparing your hosts, you can proceed to configure your inventory file.
If you are installing a stand-alone registry, continue instead with the Installing a Stand-alone Registry topic.