Chapter 3. Preparing your hosts

Before you install OpenShift Container Platform, you must prepare the node hosts. They must meet the following requirements.

3.1. Operating system requirements

The operating system requirements for master and node hosts are different depending on your server architecture.

  • For servers that use x86_64 architecture, use a base installation of Red Hat Enterprise Linux (RHEL) 7.5 or later with the latest packages from the Extras channel or RHEL Atomic Host 7.4.2 or later.
  • For cloud-based installations, use a base installation of RHEL 7.5 or later with the latest packages from the Extras channel.
  • For servers that use IBM POWER8 architecture, use a base installation of RHEL 7.5 or later with the latest packages from the Extras channel.
  • For servers that use IBM POWER9 architecture, use a base installation of RHEL-ALT 7.5 or later with the latest packages from the Extras channel.

If required, see the respective Red Hat Enterprise Linux and RHEL Atomic Host documentation for installation instructions.

3.2. Server Type Requirements

If you use IBM POWER servers for your nodes, you can use only IBM POWER servers. You cannot add nodes that run on IBM POWER servers to an existing cluster that uses x86_64 servers or deploy cluster nodes on a mix of IBM POWER and x86_64 servers.

3.3. Setting PATH

The PATH for the root user on each host must contain the following directories:

  • /bin
  • /sbin
  • /usr/bin
  • /usr/sbin

These directories are set by default in a new RHEL 7.x installation.
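
To confirm that the root user's PATH contains all four directories, you can run a quick check like the following on each host. This is a minimal sketch; adapt it as needed:

# for dir in /bin /sbin /usr/bin /usr/sbin; do
      case ":$PATH:" in
          *":$dir:"*) ;;
          *) echo "PATH is missing $dir" ;;
      esac
  done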

3.4. Ensuring host access

The OpenShift Container Platform installer requires a user that has access to all hosts. If you want to run the installer as a non-root user, first configure passwordless sudo rights on each host:

  1. Generate an SSH key on the host you run the installation playbook on:

    # ssh-keygen

    Do not use a password.

  2. Distribute the key to the other cluster hosts. You can use a bash loop:

    # for host in master.example.com \ (1)
        node1.example.com \ (2)
        node2.example.com; \ (3)
        do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \
        done
    (1) (2) (3) Provide the host name for each cluster host.
  3. Confirm that you can SSH to each host that is listed in the loop.
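
To confirm access non-interactively from the host that runs the installation playbook, you can use a loop similar to the one above. This is a minimal sketch that assumes the same example host names; BatchMode makes ssh fail instead of prompting for a password if key-based authentication is not working:

# for host in master.example.com \
    node1.example.com \
    node2.example.com; \
    do ssh -o BatchMode=yes $host hostname; \
    done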

3.5. Setting proxy overrides

If the /etc/environment file on your nodes contains either an http_proxy or https_proxy value, you must also set a no_proxy value in that file to allow open communication between OpenShift Container Platform components.

Note

The no_proxy parameter in the /etc/environment file is not the same as the global proxy values that you set in your inventory file. The global proxy values configure specific OpenShift Container Platform services with your proxy settings. See Configuring Global Proxy Options for details.

If the /etc/environment file contains proxy values, define the following values in the no_proxy parameter of that file on each node:

  • Master and node host names or their domain suffix.
  • Other internal host names or their domain suffix.
  • Etcd IP addresses. You must provide IP addresses and not host names because etcd access is controlled by IP address.
  • Kubernetes IP address, by default 172.30.0.1. Must be the value set in the openshift_portal_net parameter in your inventory file.
  • Kubernetes internal domain suffix, cluster.local.
  • Kubernetes internal domain suffix, .svc.

Note

Because no_proxy does not support CIDR, you can use domain suffixes.

If you use either an http_proxy or https_proxy value, your no_proxy parameter value resembles the following example:

no_proxy=.internal.example.com,10.0.0.1,10.0.0.2,10.0.0.3,.cluster.local,.svc,localhost,127.0.0.1,172.30.0.1
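
For reference, an /etc/environment file that sets proxy values and a matching no_proxy value might look like the following. The proxy host name and port are illustrative only:

http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
no_proxy=.internal.example.com,10.0.0.1,10.0.0.2,10.0.0.3,.cluster.local,.svc,localhost,127.0.0.1,172.30.0.1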

3.6. Registering hosts

To access the installation packages, you must register each host with Red Hat Subscription Manager (RHSM) and attach an active OpenShift Container Platform subscription.

  1. On each host, register with RHSM:

    # subscription-manager register --username=<user_name> --password=<password>
  2. Pull the latest subscription data from RHSM:

    # subscription-manager refresh
  3. List the available subscriptions:

    # subscription-manager list --available --matches '*OpenShift*'
  4. In the output for the previous command, find the pool ID for an OpenShift Container Platform subscription and attach it:

    # subscription-manager attach --pool=<pool_id>
  5. Disable all yum repositories:

    1. Disable all the enabled RHSM repositories:

      # subscription-manager repos --disable="*"
    2. List the remaining yum repositories and note their names under repo id, if any:

      # yum repolist
    3. Use yum-config-manager to disable the remaining yum repositories:

      # yum-config-manager --disable <repo_id>

      Alternatively, disable all repositories:

      # yum-config-manager --disable \*

      Note that this might take a few minutes if you have a large number of available repositories.

  6. Enable only the repositories required by OpenShift Container Platform 3.11.

    • For cloud installations and on-premise installations on x86_64 servers, run the following command:

      # subscription-manager repos \
          --enable="rhel-7-server-rpms" \
          --enable="rhel-7-server-extras-rpms" \
          --enable="rhel-7-server-ose-3.11-rpms" \
          --enable="rhel-7-server-ansible-2.9-rpms"
    • For on-premise installations on IBM POWER8 servers, run the following command:

      # subscription-manager repos \
          --enable="rhel-7-for-power-le-rpms" \
          --enable="rhel-7-for-power-le-extras-rpms" \
          --enable="rhel-7-for-power-le-optional-rpms" \
          --enable="rhel-7-server-ansible-2.9-for-power-le-rpms" \
          --enable="rhel-7-server-for-power-le-rhscl-rpms" \
          --enable="rhel-7-for-power-le-ose-3.11-rpms"
    • For on-premise installations on IBM POWER9 servers, run the following command:

      # subscription-manager repos \
          --enable="rhel-7-for-power-9-rpms" \
          --enable="rhel-7-for-power-9-extras-rpms" \
          --enable="rhel-7-for-power-9-optional-rpms" \
          --enable="rhel-7-server-ansible-2.9-for-power-9-rpms" \
          --enable="rhel-7-server-for-power-9-rhscl-rpms" \
          --enable="rhel-7-for-power-9-ose-3.11-rpms"
    Note

    Older versions of OpenShift Container Platform 3.11 supported only Ansible 2.6. The most recent versions of the playbooks now support Ansible 2.9, which is the preferred version to use.
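
The pool ID lookup in steps 3 and 4 can also be scripted. The following is a minimal sketch that assumes subscription-manager prints a "Pool ID:" line for each matching subscription and that only one OpenShift Container Platform pool is available; review the output of the list command before relying on it:

# pool_id=$(subscription-manager list --available --matches '*OpenShift*' \
    | awk '/Pool ID:/ {print $3; exit}')
# subscription-manager attach --pool=$pool_id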

3.7. Installing base packages

Important

If your hosts use RHEL 7.5 and you want to accept OpenShift Container Platform’s default docker configuration (using OverlayFS storage and all default logging options), do not manually install these packages. These packages are installed when you run the prerequisites.yml playbook during installation.

If your hosts use RHEL 7.4 or if they use RHEL 7.5 and you want to customize the docker configuration, install these packages.

For RHEL 7 systems:

  1. Install the following base packages:

    # yum install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct
  2. Update the system to the latest packages:

    # yum update
    # reboot
  3. Install packages that are required for your installation method:

    • If you plan to use the containerized installer, install the following package:

      # yum install atomic
    • If you plan to use the RPM-based installer, install the following package:

      # yum install openshift-ansible
      Note

      If you encounter an error while running the yum install openshift-ansible command, see the resolution to fix that error.

      This package provides installer utilities and pulls in other packages that the cluster installation process needs, such as Ansible, playbooks, and related configuration files.

For RHEL Atomic Host 7 systems:

  1. Ensure the host is up to date by upgrading to the latest Atomic tree if one is available:

    # atomic host upgrade
  2. After the upgrade is completed and prepared for the next boot, reboot the host:

    # reboot

3.8. Installing Docker

At this point, install Docker on all master and node hosts. This allows you to configure your Docker storage options before you install OpenShift Container Platform.

Note

The cluster installation process automatically modifies the /etc/sysconfig/docker file.

For RHEL 7 systems:

  1. Install Docker 1.13:

    # yum install docker-1.13.1
  2. Verify that version 1.13 was installed:

    # rpm -V docker-1.13.1
    # docker version

For RHEL Atomic Host 7 systems:

No action is required. Docker is installed, configured, and running by default.

3.9. Configuring Docker Storage

Containers and the images they are created from are stored in Docker’s storage back end. This storage is ephemeral and separate from any persistent storage allocated to meet the needs of your applications. With ephemeral storage, container-saved data is lost when the container is removed. With persistent storage, container-saved data remains if the container is removed.

You must configure storage for all master and node hosts because by default each system runs a container daemon. For containerized installations, you need storage on masters. Also, by default, the web console and etcd, which require storage, run in containers on masters. Containers run on nodes, so storage is always required on them.

The size of storage depends on workload, number of containers, the size of the containers being run, and the containers' storage requirements.

Important

If your hosts use RHEL 7.5 and you want to accept OpenShift Container Platform’s default docker configuration (using OverlayFS storage and all default logging options), do not manually install these packages. These packages are installed when you run the prerequisites.yml playbook during installation.

If your hosts use RHEL 7.4 or if they use RHEL 7.5 and you want to customize the docker configuration, install these packages.

For RHEL 7 systems:

The default storage back end for Docker on RHEL 7 is a thin pool on loopback devices, which is not supported for production use and only appropriate for proof of concept environments. For production environments, you must create a thin pool logical volume and re-configure Docker to use that volume.

Docker stores images and containers in a graph driver, which is a pluggable storage technology, such as DeviceMapper, OverlayFS, and Btrfs. Each has advantages and disadvantages. For example, OverlayFS is faster than DeviceMapper at starting and stopping containers but is not Portable Operating System Interface for Unix (POSIX) compliant because of the architectural limitations of a union file system. See the Red Hat Enterprise Linux release notes for information on using OverlayFS with your version of RHEL.

For more information about the benefits and limitations of DeviceMapper and OverlayFS, see Choosing a Graph Driver.
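
After Docker is installed and running, you can check which graph driver and backing file system a host currently uses. A quick check, assuming the default docker info output format:

# docker info | grep -E 'Storage Driver|Backing Filesystem'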

For RHEL Atomic Host 7 systems:

The default storage back end for Docker on RHEL Atomic Host is a thin pool logical volume, which is supported for production environments. You must ensure that enough space is allocated for this volume per the Docker storage requirements mentioned in System Requirements.

If you do not have enough space allocated, see Managing Storage with Docker Formatted Containers for details about using docker-storage-setup and basic instructions on storage management in RHEL Atomic Host.

3.9.1. Configuring OverlayFS

OverlayFS is a type of union file system. OverlayFS enables you to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified.

Comparing the Overlay Versus Overlay2 Graph Drivers has more information about the overlay and overlay2 drivers.

To use the overlay2 driver, the lower layer must use the XFS file system. The lower-layer file system is the file system that remains unmodified.
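
To confirm that the file system backing /var/lib/docker is XFS, you can check the file system type with df. This is a quick, illustrative check:

# df -T /var/lib/docker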

For information about enabling the OverlayFS storage driver for the Docker service, see the Red Hat Enterprise Linux Atomic Host documentation.

3.9.2. Configuring thin pool storage

You can use the docker-storage-setup script included with Docker to create a thin pool device and configure Docker’s storage driver. You can do this after you install Docker and must do it before you create images or containers. The script reads configuration options from the /etc/sysconfig/docker-storage-setup file and supports three options for creating the logical volume:

  • Use an additional block device.
  • Use an existing, specified volume group.
  • Use the remaining free space from the volume group where your root file system is located.

Using an additional block device is the most robust option, but it requires adding another block device to your host before you configure Docker storage. The other options both require leaving free space available when you provision your host. Using the remaining free space in the root file system volume group is known to cause issues with some applications, for example Red Hat Mobile Application Platform (RHMAP).
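
Before you choose the second or third option, you can confirm how much unallocated space a volume group has; vgs reports it in the VFree column:

# vgs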

  1. Create the docker-pool volume using one of the following three options:

    • To use an additional block device:

      1. In /etc/sysconfig/docker-storage-setup, set DEVS to the path of the block device to use. Set VG to the volume group name to create, such as docker-vg. For example:

        # cat <<EOF > /etc/sysconfig/docker-storage-setup
        DEVS=/dev/vdc
        VG=docker-vg
        EOF
      2. Run docker-storage-setup and review the output to ensure the docker-pool volume was created:

        # docker-storage-setup
        Checking that no-one is using this disk right now ...
        OK
        
        Disk /dev/vdc: 31207 cylinders, 16 heads, 63 sectors/track
        sfdisk:  /dev/vdc: unrecognized partition table type
        
        Old situation:
        sfdisk: No partitions found
        
        New situation:
        Units: sectors of 512 bytes, counting from 0
        
           Device Boot    Start       End   #sectors  Id  System
        /dev/vdc1          2048  31457279   31455232  8e  Linux LVM
        /dev/vdc2             0         -          0   0  Empty
        /dev/vdc3             0         -          0   0  Empty
        /dev/vdc4             0         -          0   0  Empty
        Warning: partition 1 does not start at a cylinder boundary
        Warning: partition 1 does not end at a cylinder boundary
        Warning: no primary partition is marked bootable (active)
        This does not matter for LILO, but the DOS MBR will not boot this disk.
        Successfully wrote the new partition table
        
        Re-reading the partition table ...
        
        If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
        to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
        (See fdisk(8).)
          Physical volume "/dev/vdc1" successfully created
          Volume group "docker-vg" successfully created
          Rounding up size to full physical extent 16.00 MiB
          Logical volume "docker-poolmeta" created.
          Logical volume "docker-pool" created.
          WARNING: Converting logical volume docker-vg/docker-pool and docker-vg/docker-poolmeta to pool's data and metadata volumes.
          THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
          Converted docker-vg/docker-pool to thin pool.
          Logical volume "docker-pool" changed.
    • To use an existing, specified volume group:

      1. In /etc/sysconfig/docker-storage-setup, set VG to the volume group. For example:

        # cat <<EOF > /etc/sysconfig/docker-storage-setup
        VG=docker-vg
        EOF
      2. Then run docker-storage-setup and review the output to ensure the docker-pool volume was created:

        # docker-storage-setup
          Rounding up size to full physical extent 16.00 MiB
          Logical volume "docker-poolmeta" created.
          Logical volume "docker-pool" created.
          WARNING: Converting logical volume docker-vg/docker-pool and docker-vg/docker-poolmeta to pool's data and metadata volumes.
          THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
          Converted docker-vg/docker-pool to thin pool.
          Logical volume "docker-pool" changed.
    • To use the remaining free space from the volume group where your root file system is located:

      1. Verify that the volume group where your root file system resides has the required free space, then run docker-storage-setup and review the output to ensure the docker-pool volume was created:

        # docker-storage-setup
          Rounding up size to full physical extent 32.00 MiB
          Logical volume "docker-poolmeta" created.
          Logical volume "docker-pool" created.
          WARNING: Converting logical volume rhel/docker-pool and rhel/docker-poolmeta to pool's data and metadata volumes.
          THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
          Converted rhel/docker-pool to thin pool.
          Logical volume "docker-pool" changed.
  2. Verify your configuration. Confirm that the /etc/sysconfig/docker-storage file has dm.thinpooldev and docker-pool logical volume values:

    # cat /etc/sysconfig/docker-storage
    DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/rhel-docker--pool --storage-opt dm.use_deferred_removal=true --storage-opt dm.use_deferred_deletion=true "
    
    # lvs
      LV          VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
      docker-pool rhel twi-a-t---  9.29g             0.00   0.12
    Important

    Before using Docker or OpenShift Container Platform, verify that the docker-pool logical volume is large enough to meet your needs. Make the docker-pool volume 60% of the available volume group; it will grow to fill the volume group through LVM monitoring.

  3. Start or restart Docker.

    • If Docker has never run on the host, enable and start the service, then verify that it is running:

      # systemctl enable docker
      # systemctl start docker
      # systemctl is-active docker
    • If Docker is already running:

      1. Re-initialize Docker:

        Warning

        This will destroy any containers or images currently on the host.

        # systemctl stop docker
        # rm -rf /var/lib/docker/*
        # systemctl restart docker
      2. Delete any content in the /var/lib/docker/ folder.

3.9.3. Reconfiguring Docker storage

If you need to reconfigure Docker storage after you create the docker-pool:

  1. Remove the docker-pool logical volume.
  2. If you use a dedicated volume group, remove the volume group and any associated physical volumes.
  3. Run docker-storage-setup again.
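
The following is a minimal sketch of these steps, assuming the dedicated docker-vg volume group on /dev/vdc1 from the earlier example. Skip the vgremove and pvremove commands if you did not use a dedicated volume group, and note that these commands destroy the existing docker-pool and its contents:

# systemctl stop docker
# lvremove docker-vg/docker-pool
# vgremove docker-vg
# pvremove /dev/vdc1
# docker-storage-setup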

See Logical Volume Manager Administration for more detailed information about LVM management.

3.9.4. Enabling image signature support

OpenShift Container Platform is capable of cryptographically verifying that images are from trusted sources. The Container Security Guide provides a high-level description of how image signing works.

You can configure image signature verification using the atomic command line interface (CLI), version 1.12.5 or greater. The atomic CLI is pre-installed on RHEL Atomic Host systems.

Note

For more on the atomic CLI, see the Atomic CLI documentation.

The following files and directories comprise the trust configuration of a host:

  • /etc/containers/registries.d/*
  • /etc/containers/policy.json

You can manage trust configuration directly on each node, or manage the files on a separate host and distribute them to the appropriate nodes by using Ansible, for example. See the Container Image Signing Integration Guide for an example of automating file distribution with Ansible.

  1. Install the atomic package if it is not installed on the host system:

    $ yum install atomic
  2. View the current trust configuration:

    $ atomic trust show
    * (default)                         accept

    The default configuration is to whitelist all registries, which means that no signature verification is configured.

  3. Customize your trust configuration. In the following example, you whitelist one registry or namespace, blacklist (reject) untrusted registries, and require signature verification on a vendor registry:

    $ atomic trust add --type insecureAcceptAnything 172.30.1.1:5000
    
    $ atomic trust add --sigstoretype atomic \
      --pubkeys pub@example.com \
      172.30.1.1:5000/production
    
    $ atomic trust add --sigstoretype atomic \
      --pubkeys /etc/pki/example.com.pub \
      172.30.1.1:5000/production
    
    $ atomic trust add --sigstoretype web \
      --sigstore https://access.redhat.com/webassets/docker/content/sigstore \
      --pubkeys /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release \
      registry.redhat.io
    
    # atomic trust show
    * (default)                         accept
    172.30.1.1:5000                     accept
    172.30.1.1:5000/production          signed security@example.com
    registry.redhat.io                  signed security@redhat.com,security@redhat.com
  4. You can further harden nodes by adding a global reject default trust:

    $ atomic trust default reject
    
    $ atomic trust show
    * (default)                         reject
    172.30.1.1:5000                     accept
    172.30.1.1:5000/production          signed security@example.com
    registry.redhat.io                  signed security@redhat.com,security@redhat.com
  5. Optionally, review the atomic-trust man page (man atomic-trust) for more configuration options.
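
For reference, atomic trust writes its configuration to the files listed earlier, primarily /etc/containers/policy.json. With the hardened reject default in place, an entry for the signed 172.30.1.1:5000/production scope might look roughly like the following; the exact layout depends on your atomic version:

# cat /etc/containers/policy.json
{
    "default": [
        { "type": "reject" }
    ],
    "transports": {
        "atomic": {
            "172.30.1.1:5000/production": [
                {
                    "type": "signedBy",
                    "keyType": "GPGKeys",
                    "keyPath": "/etc/pki/example.com.pub"
                }
            ]
        }
    }
}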

3.9.5. Managing container logs

To prevent a container’s log file, the /var/lib/docker/containers/<hash>/<hash>-json.log file on the node where the container is running, from increasing to a problematic size, you can configure Docker’s json-file logging driver to restrict the size and number of log files.

Option                Purpose

--log-opt max-size    Sets the size at which a new log file is created.

--log-opt max-file    Sets the maximum number of log files to be kept per host.

  1. To configure the log file, edit the /etc/sysconfig/docker file. For example, to set the maximum file size to 1 MB and always keep the last three log files, append max-size=1M and max-file=3 to the OPTIONS= line, ensuring that the values maintain the single quotation mark formatting:

    OPTIONS='--insecure-registry=172.30.0.0/16 --selinux-enabled --log-opt max-size=1M --log-opt max-file=3'

    See Docker’s documentation for additional information on how to configure logging drivers.

  2. Restart the Docker service:

    # systemctl restart docker
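
To confirm that containers created after the restart use the new log options, you can inspect a running container's log configuration. This assumes Docker 1.13 and any running container ID:

# docker inspect --format '{{ json .HostConfig.LogConfig }}' <container_id>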

3.9.6. Viewing available container logs

You can view the container logs in the /var/lib/docker/containers/<hash>/ directory on the node where the container is running. For example:

# ls -lh /var/lib/docker/containers/f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8/
total 2.6M
-rw-r--r--. 1 root root 5.6K Nov 24 00:12 config.json
-rw-r--r--. 1 root root 649K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log
-rw-r--r--. 1 root root 977K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log.1
-rw-r--r--. 1 root root 977K Nov 24 00:15 f088349cceac173305d3e2c2e4790051799efe363842fdab5732f51f5b001fd8-json.log.2
-rw-r--r--. 1 root root 1.3K Nov 24 00:12 hostconfig.json
drwx------. 2 root root    6 Nov 24 00:12 secrets

3.9.7. Blocking local volume usage

When a volume is provisioned using the VOLUME instruction in a Dockerfile or using the docker run -v <volumename> command, a host’s storage space is used. Using this storage can lead to an unexpected out of space issue and can bring down the host.

In OpenShift Container Platform, users trying to run their own images risk filling the entire storage space on a node host. One solution to this issue is to prevent users from running images with volumes. This way, the only storage a user has access to can be limited, and the cluster administrator can assign storage quota.

Using docker-novolume-plugin solves this issue by disallowing starting a container with local volumes defined. In particular, the plug-in blocks docker run commands that contain:

  • The --volumes-from option
  • Images that have VOLUME(s) defined
  • References to existing volumes that were provisioned with the docker volume command

The plug-in does not block references to bind mounts.

To enable docker-novolume-plugin, perform the following steps on each node host:

  1. Install the docker-novolume-plugin package:

    $ yum install docker-novolume-plugin
  2. Enable and start the docker-novolume-plugin service:

    $ systemctl enable docker-novolume-plugin
    $ systemctl start docker-novolume-plugin
  3. Edit the /etc/sysconfig/docker file and append the following to the OPTIONS list:

    --authorization-plugin=docker-novolume-plugin
  4. Restart the docker service:

    $ systemctl restart docker

After you enable this plug-in, containers with local volumes defined fail to start and show the following error message:

runContainer: API error (500): authorization denied by plugin
docker-novolume-plugin: volumes are not allowed
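
To verify that the plug-in is active, try to start a container that references a named volume; with the plug-in enabled, the run command fails with the error above. The image name is a placeholder for any image available on the host:

# docker volume create testvol
# docker run --rm -v testvol:/data <image> true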

3.10. Red Hat Gluster Storage Software Requirements

To access GlusterFS volumes, the mount.glusterfs command must be available on all schedulable nodes. For RPM-based systems, the glusterfs-fuse package must be installed:

# yum install glusterfs-fuse

This package comes installed on every RHEL system. However, it is recommended to update to the latest available version from Red Hat Gluster Storage if your servers use x86_64 architecture. To do this, the following RPM repository must be enabled:

# subscription-manager repos --enable=rh-gluster-3-client-for-rhel-7-server-rpms

If glusterfs-fuse is already installed on the nodes, ensure that the latest version is installed:

# yum update glusterfs-fuse
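
To confirm on each schedulable node that the client package and mount helper are in place, a quick check:

# rpm -q glusterfs-fuse
# command -v mount.glusterfs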

3.11. Next steps

After you finish preparing your hosts, if you are installing OpenShift Container Platform, configure your inventory file. If you are installing a stand-alone registry, continue instead to Installing a stand-alone registry.
