Chapter 13. Installation configuration
13.1. Supported installation methods for different platforms
You can perform different types of installations on different platforms.
Not all installation options are supported for all platforms, as shown in the following tables.
Supported installation methods (installer-provisioned infrastructure):

| | AWS | Azure | GCP | OpenStack | RHV | Bare metal | vSphere | IBM Z |
|---|---|---|---|---|---|---|---|---|
| Default | | | | | | | | |
| Custom | | | | | | | | |
| Network Operator | | | | | | | | |
| Restricted network | | | | | | | | |
| Private clusters | | | | | | | | |
| Existing virtual private networks | | | | | | | | |
Supported installation methods (user-provisioned infrastructure):

| | AWS | Azure | GCP | OpenStack | RHV | Bare metal | vSphere | IBM Z |
|---|---|---|---|---|---|---|---|---|
| Custom | | | | | | | | |
| Network Operator | | | | | | | | |
| Restricted network | | | | | | | | |
| Shared VPC hosted outside of cluster project | | | | | | | | |
13.2. Customizing nodes
Although directly making changes to OpenShift Container Platform nodes is discouraged, there are times when it is necessary to implement a required low-level security, networking, or performance feature. Direct changes to OpenShift Container Platform nodes can be done by:
- Creating machine configs that are included in manifest files to start up a cluster during openshift-install.
- Creating machine configs that are passed to running OpenShift Container Platform nodes via the Machine Config Operator.
The following sections describe features that you might want to configure on your nodes in this way.
13.2.1. Adding day-1 kernel arguments
Although it is often preferable to modify kernel arguments as a day-2 activity, you might want to add kernel arguments to all master or worker nodes during initial cluster installation. Here are some reasons you might want to add kernel arguments during cluster installation so they take effect before the systems first boot up:
- You want to disable a feature, such as SELinux, so it has no impact on the systems when they first come up.
- You need to do some low-level network configuration before the systems start.
To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup.
For a listing of arguments you can pass to a RHEL 8 kernel at boot time, see Kernel.org kernel parameters. It is best to only add kernel arguments with this procedure if they are needed to complete the initial OpenShift Container Platform installation.
Procedure
Generate the Kubernetes manifests for the cluster:
$ ./openshift-install create manifests --dir=<installation_directory>
- Decide if you want to add kernel arguments to worker or master nodes.
- In the openshift directory, create a file (for example, 99-openshift-machineconfig-master-kargs.yaml) to define a MachineConfig object to add the kernel settings. This example adds a loglevel=7 kernel argument to master nodes:
$ cat << EOF > 99-openshift-machineconfig-master-kargs.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-openshift-machineconfig-master-kargs
spec:
  kernelArguments:
    - 'loglevel=7'
EOF
You can change master to worker to add kernel arguments to worker nodes instead. Create a separate YAML file for each node type if you want to add kernel arguments to both master and worker nodes.
You can now continue on to create the cluster.
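After the cluster is installed, you can confirm that the argument was applied by checking the kernel command line on a node. This is an optional verification sketch that follows the same oc debug pattern used later in this chapter; <node_name> is a placeholder:
$ oc debug node/<node_name>
sh-4.4# chroot /host
sh-4.4# cat /proc/cmdline
The output should include loglevel=7, or whatever arguments you added.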
13.2.2. Adding kernel modules to nodes
For most common hardware, the Linux kernel includes the device driver modules needed to use that hardware when the computer starts up. For some hardware, however, modules are not available in Linux. Therefore, you must find a way to provide those modules to each host computer. This procedure describes how to do that for nodes in an OpenShift Container Platform cluster.
When a kernel module is first deployed by following these instructions, the module is made available for the current kernel. If a new kernel is installed, the kmods-via-containers software will rebuild and deploy the module so a compatible version of that module is available with the new kernel.
This feature keeps the module up to date on each node by:
- Adding a systemd service to each node that starts at boot time to detect whether a new kernel has been installed
- If a new kernel is detected, having the service rebuild the module and install it into the kernel
For information on the software needed for this procedure, see the kmods-via-containers GitHub site.
A few important issues to keep in mind:
- This procedure is Technology Preview.
- Software tools and examples are not yet available in official RPM form and can only be obtained for now from unofficial github.com sites noted in the procedure.
- Third-party kernel modules you might add through these procedures are not supported by Red Hat.
- In this procedure, the software needed to build your kernel modules is deployed in a RHEL 8 container. Keep in mind that modules are rebuilt automatically on each node when that node gets a new kernel. For that reason, each node needs access to a yum repository that contains the kernel and related packages needed to rebuild the module. That content is best provided with a valid RHEL subscription.
13.2.2.1. Building and testing the kernel module container
Before deploying kernel modules to your OpenShift Container Platform cluster, you can test the process on a separate RHEL system. Gather the kernel module's source code, the KVC framework, and the kmods-via-containers software. Then build and test the module. To do that on a RHEL 8 system, do the following:
Procedure
Register a RHEL 8 system:
# subscription-manager register
Attach a subscription to the RHEL 8 system:
# subscription-manager attach --auto
Install software that is required to build the software and container:
# yum install podman make git -y
Clone the kmods-via-containers repository:
Create a folder for the repository:
$ mkdir kmods; cd kmods
Clone the repository:
$ git clone https://github.com/kmods-via-containers/kmods-via-containers
Install a KVC framework instance on your RHEL 8 build host to test the module. This adds a kmods-via-containers systemd service and loads it:
Change to the kmods-via-containers directory:
$ cd kmods-via-containers/
Install the KVC framework instance:
$ sudo make install
Reload the systemd manager configuration:
$ sudo systemctl daemon-reload
Get the kernel module source code. The source code might be used to build a third-party module that you do not have control over, but is supplied by others. You will need content similar to the content shown in the kvc-simple-kmod example that can be cloned to your system as follows:
$ cd .. ; git clone https://github.com/kmods-via-containers/kvc-simple-kmod
Edit the configuration file, simple-kmod.conf in this example, and change the name of the Dockerfile to Dockerfile.rhel:
Change to the kvc-simple-kmod directory:
$ cd kvc-simple-kmod
Review the configuration file:
$ cat simple-kmod.conf
Example simple-kmod.conf
KMOD_CONTAINER_BUILD_CONTEXT="https://github.com/kmods-via-containers/kvc-simple-kmod.git"
KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel
KMOD_SOFTWARE_VERSION=dd1a7d4
KMOD_NAMES="simple-kmod simple-procfs-kmod"
Create an instance of kmods-via-containers@.service for your kernel module, simple-kmod in this example:
$ sudo make install
Build the kernel module container for the currently running kernel:
$ sudo kmods-via-containers build simple-kmod $(uname -r)
Enable and start the systemd service:
Enable the service:
$ sudo systemctl enable kmods-via-containers@simple-kmod.service
Start the service:
$ sudo systemctl start kmods-via-containers@simple-kmod.service
Review the service status:
$ sudo systemctl status kmods-via-containers@simple-kmod.service
Example output
● kmods-via-containers@simple-kmod.service - Kmods Via Containers - simple-kmod
   Loaded: loaded (/etc/systemd/system/kmods-via-containers@.service; enabled; vendor preset: disabled)
   Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago...
To confirm that the kernel modules are loaded, use the lsmod command to list the modules:
$ lsmod | grep simple_
Example output
simple_procfs_kmod 16384  0
simple_kmod        16384  0
Optional. Use other methods to check that the simple-kmod example is working:
Look for a "Hello world" message in the kernel ring buffer with dmesg:
$ dmesg | grep 'Hello world'
Example output
[ 6420.761332] Hello world from simple_kmod.
Check the value of simple-procfs-kmod in /proc:
$ sudo cat /proc/simple-procfs-kmod
Example output
simple-procfs-kmod number = 0
Run the spkut command to get more information from the module:
$ sudo spkut 44
Example output
KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64
Running userspace wrapper using the kernel module container...
+ podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44
simple-procfs-kmod number = 0
simple-procfs-kmod number = 44
Going forward, when the system boots this service will check if a new kernel is running. If there is a new kernel, the service builds a new version of the kernel module and then loads it. If the module is already built, it will just load it.
13.2.2.2. Provisioning a kernel module to OpenShift Container Platform
Depending on whether or not you must have the kernel module in place when the OpenShift Container Platform cluster first boots, you can set up the kernel modules to be deployed in one of two ways:
- Provision kernel modules at cluster install time (day-1): You can create the content as a MachineConfig object and provide it to openshift-install by including it with a set of manifest files.
- Provision kernel modules via Machine Config Operator (day-2): If you can wait until the cluster is up and running to add your kernel module, you can deploy the kernel module software via the Machine Config Operator (MCO).
In either case, each node needs to be able to get the kernel packages and related software packages at the time that a new kernel is detected. There are a few ways you can set up each node to be able to obtain that content.
- Provide RHEL entitlements to each node.
- Get RHEL entitlements from an existing RHEL host, from the /etc/pki/entitlement directory, and copy them to the same location as the other files you provide when you build your Ignition config.
- Inside the Dockerfile, add pointers to a yum repository containing the kernel and other packages. This must include new kernel packages as they are needed to match newly installed kernels. A sketch of such a repository definition follows this list.
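The following is a minimal, hypothetical sketch of such a repository definition. The repository ID, name, and baseurl are illustrative assumptions only and must be replaced with a repository you actually control; the resulting file would then be copied into the build context and referenced from the Dockerfile used to build the module container:
$ cat << EOF > rhel8-kernel.repo
# Hypothetical internal mirror that provides kernel and kernel-devel packages
[rhel-8-kernel-internal]
name=RHEL 8 kernel packages (example internal mirror)
baseurl=http://yum.example.com/rhel8/baseos/
enabled=1
gpgcheck=0
EOF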
13.2.2.2.1. Provision kernel modules via a MachineConfig object
By packaging kernel module software with a MachineConfig object, you can deliver that software to worker or master nodes at installation time or via the Machine Config Operator.
First create a base Ignition config that you would like to use. At installation time, the Ignition config will contain the SSH public key to add to the authorized_keys file for the core user on the cluster. To add the MachineConfig object later via the MCO instead, the SSH public key is not required. For both types, the example simple-kmod service creates a systemd unit file, which requires a kmods-via-containers@simple-kmod.service.
The systemd unit is a workaround for an upstream bug and makes sure that the kmods-via-containers@simple-kmod.service gets started on boot:
Register a RHEL 8 system:
# subscription-manager register
Attach a subscription to the RHEL 8 system:
# subscription-manager attach --auto
Install software needed to build the software:
# yum install podman make git -y
Create an Ignition config file that creates a systemd unit file:
Create a directory to host the Ignition config file:
$ mkdir kmods; cd kmods
Create the Ignition config file that creates a systemd unit file:
$ cat <<EOF > ./baseconfig.ign
{
  "ignition": { "version": "2.2.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "groups": ["sudo"],
        "sshAuthorizedKeys": [
          "ssh-rsa AAAA"
        ]
      }
    ]
  },
  "systemd": {
    "units": [{
      "name": "require-kvc-simple-kmod.service",
      "enabled": true,
      "contents": "[Unit]\nRequires=kmods-via-containers@simple-kmod.service\n[Service]\nType=oneshot\nExecStart=/usr/bin/true\n\n[Install]\nWantedBy=multi-user.target"
    }]
  }
}
EOF
Note: You must add your public SSH key to the baseconfig.ign file to use the file during openshift-install. The public SSH key is not needed if you create the MachineConfig object using the MCO.
Create a base MCO YAML snippet that uses the following configuration:
$ cat <<EOF > mc-base.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 10-kvc-simple-kmod
spec:
  config:
EOF
Note: The mc-base.yaml is set to deploy the kernel module on worker nodes. To deploy on master nodes, change the role from worker to master. To do both, you could repeat the whole procedure using different file names for the two types of deployments.
Get the kmods-via-containers software:
Clone the kmods-via-containers repository:
$ git clone https://github.com/kmods-via-containers/kmods-via-containers
Clone the kvc-simple-kmod repository:
$ git clone https://github.com/kmods-via-containers/kvc-simple-kmod
- Get your module software. In this example, kvc-simple-kmod is used.
Create a fakeroot directory and populate it with files that you want to deliver via Ignition, using the repositories cloned earlier:
Create the directory:
$ FAKEROOT=$(mktemp -d)
Change to the kmods-via-containers directory:
$ cd kmods-via-containers
Install the KVC framework instance:
$ make install DESTDIR=${FAKEROOT}/usr/local CONFDIR=${FAKEROOT}/etc/
Change to the
kvc-simple-kmod
directory:$ cd ../kvc-simple-kmod
Create the instance:
$ make install DESTDIR=${FAKEROOT}/usr/local CONFDIR=${FAKEROOT}/etc/
Get a tool called filetranspiler and dependent software:
$ cd .. ; sudo yum install -y python3
$ git clone https://github.com/ashcrow/filetranspiler.git
Generate a final machine config YAML (mc.yaml) and have it include the base Ignition config, base machine config, and the fakeroot directory with files you would like to deliver:
$ ./filetranspiler/filetranspile -i ./baseconfig.ign \
   -f ${FAKEROOT} --format=yaml --dereference-symlinks \
   | sed 's/^/ /' | (cat mc-base.yaml -) > 99-simple-kmod.yaml
If the cluster is not up yet, generate manifest files and add this file to the openshift directory. If the cluster is already running, apply the file as follows:
$ oc create -f 99-simple-kmod.yaml
Your nodes will start the kmods-via-containers@simple-kmod.service service and the kernel modules will be loaded.
To confirm that the kernel modules are loaded, you can log in to a node (using oc debug node/<openshift-node>, then chroot /host). To list the modules, use the lsmod command:
$ lsmod | grep simple_
Example output
simple_procfs_kmod 16384  0
simple_kmod        16384  0
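If you applied the MachineConfig to a running cluster, the Machine Config Operator rolls it out by updating and rebooting the nodes in the targeted pool. A hedged way to watch that rollout before checking the modules:
$ oc get machineconfigpool
# Wait until the pool you targeted (worker in this example) reports UPDATED as True
$ oc get nodes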
13.2.3. Encrypting disks during installation
During OpenShift Container Platform installation, you can enable disk encryption on all master and worker nodes. This feature:
- Is available for installer-provisioned infrastructure and user-provisioned infrastructure deployments
- Is supported on Red Hat Enterprise Linux CoreOS (RHCOS) systems only
- Sets up disk encryption during the manifest installation phase so all data written to disk, from first boot forward, is encrypted
- Encrypts data on the root filesystem only (/dev/mapper/coreos-luks-root on /)
- Requires no user intervention for providing passphrases
- Uses AES-256-CBC encryption
There are two different supported encryption modes:
- TPM v2: This is the preferred mode. TPM v2 stores passphrases in a secure cryptoprocessor. To implement TPM v2 disk encryption, create an Ignition config file as described below.
- Tang: To use Tang to encrypt your cluster, you need to use a Tang server. Clevis implements decryption on the client side. Tang encryption mode is only supported for bare metal installs.
Follow one of the two procedures to enable disk encryption for the nodes in your cluster.
13.2.3.1. Enabling TPM v2 disk encryption
Use this procedure to enable TPM v2 mode disk encryption during OpenShift Container Platform deployment.
Procedure
- Check to see if TPM v2 encryption needs to be enabled in the BIOS on each node. This is required on most Dell systems. Check the manual for your computer.
Generate the Kubernetes manifests for the cluster:
$ ./openshift-install create manifests --dir=<installation_directory>
In the openshift directory, create master or worker files to encrypt disks for those nodes.
To create a worker file, run the following command:
$ cat << EOF > ./99-openshift-worker-tpmv2-encryption.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: worker-tpm
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;base64,e30K
        filesystem: root
        mode: 420
        path: /etc/clevis.json
EOF
To create a master file, run the following command:
$ cat << EOF > ./99-openshift-master-tpmv2-encryption.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: master-tpm
  labels:
    machineconfiguration.openshift.io/role: master
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;base64,e30K
        filesystem: root
        mode: 420
        path: /etc/clevis.json
EOF
- Make a backup copy of the YAML file. You should do this because the file will be deleted when you create the cluster.
- Continue with the remainder of the OpenShift Container Platform deployment.
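The e30K value that these files write to /etc/clevis.json is simply the base64 encoding of an empty JSON object. You can confirm this locally, which is a quick way to satisfy yourself that no real secret is embedded in the manifest:
$ echo "e30K" | base64 -d
{}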
13.2.3.2. Enabling Tang disk encryption
Use this procedure to enable Tang mode disk encryption during OpenShift Container Platform deployment.
Procedure
- Access a Red Hat Enterprise Linux server from which you can configure the encryption settings and run openshift-install to install a cluster and oc to work with it.
- Set up or access an existing Tang server. See Network-bound disk encryption for instructions. See Securing Automated Decryption New Cryptography and Techniques for a presentation on Tang.
- Add kernel arguments to configure networking when you do the Red Hat Enterprise Linux CoreOS (RHCOS) installations for your cluster. For example, to configure DHCP networking, specify ip=dhcp, or set static networking when you add parameters to the kernel command line. For both DHCP and static networking, you also must provide the rd.neednet=1 kernel argument. Skipping this step causes the second boot to fail.
- Install the clevis package, if it is not already installed:
$ sudo yum install clevis -y
Generate a thumbprint from the Tang server.
In the following command, replace the value of url with the Tang server URL:
$ echo nifty random wordwords \
    | clevis-encrypt-tang \
      '{"url":"https://tang.example.org"}'
Example output
The advertisement contains the following signing keys:
PLjNyRdGw03zlRoGjQYMahSZGu9
When the Do you wish to trust these keys? [ynYN] prompt displays, type Y, and the thumbprint is displayed:
Example output
eyJhbmc3SlRyMXpPenc3ajhEQ01tZVJiTi1oM...
Create a Base64 encoded file, replacing the URL of the Tang server (url) and thumbprint (thp) you just generated:
$ (cat <<EOM
{
 "url": "https://tang.example.com",
 "thp": "PLjNyRdGw03zlRoGjQYMahSZGu9"
}
EOM
) | base64 -w0
Example output
ewogInVybCI6ICJodHRwczovL3RhbmcuZXhhbXBsZS5jb20iLAogInRocCI6ICJaUk1leTFjR3cwN3psVExHYlhuUWFoUzBHdTAiCn0K
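Before embedding the encoded string in a MachineConfig, you can confirm that it decodes back to the JSON you intended. This is an optional sanity check; substitute the string you generated:
$ echo "<base64_string>" | base64 -d
The output should show the url and thp values for your Tang server.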
In the openshift directory, create master or worker files to encrypt disks for those nodes.
For worker nodes, use the following command:
$ cat << EOF > ./99-openshift-worker-tang-encryption.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: worker-tang
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;base64,ewogInVybCI6ICJodHRwczovL3RhbmcuZXhhbXBsZS5jb20iLAogInRocCI6ICJaUk1leTFjR3cwN3psVExHYlhuUWFoUzBHdTAiCn0K
        filesystem: root
        mode: 420
        path: /etc/clevis.json
EOF
For master nodes, use the following command:
$ cat << EOF > ./99-openshift-master-tang-encryption.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: master-tang
  labels:
    machineconfiguration.openshift.io/role: master
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;base64,ewogInVybCI6ICJodHRwczovL3RhbmcuZXhhbXBsZS5jb20iLAogInRocCI6ICJaUk1leTFjR3cwN3psVExHYlhuUWFoUzBHdTAiCn0K
        filesystem: root
        mode: 420
        path: /etc/clevis.json
EOF
Add the rd.neednet=1 kernel argument, as shown in the following example:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: <node_type>-tang 1
spec:
  config:
    ignition:
      version: 3.1.0
  kernelArguments:
    - rd.neednet=1 2
1 - Use the name you defined in the previous examples based on the type of node you are configuring, for example: name: worker-tang.
2 - Required.
- Continue with the remainder of the OpenShift Container Platform deployment.
13.2.4. Configuring chrony time service
You can set the time server and related settings used by the chrony time service (chronyd) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config.
Procedure
Create the contents of the chrony.conf file and encode it as base64. For example:
$ cat << EOF | base64
    pool 0.rhel.pool.ntp.org iburst 1
    driftfile /var/lib/chrony/drift
    makestep 1.0 3
    rtcsync
    logdir /var/log/chrony
EOF
1 - Specify any valid, reachable time source, such as the one provided by your DHCP server. Alternately, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org, 2.rhel.pool.ntp.org, or 3.rhel.pool.ntp.org.
Example output
ICAgIHNlcnZlciBjbG9jay5yZWRoYXQuY29tIGlidXJzdAogICAgZHJpZnRmaWxlIC92YXIvbGli
L2Nocm9ueS9kcmlmdAogICAgbWFrZXN0ZXAgMS4wIDMKICAgIHJ0Y3N5bmMKICAgIGxvZ2RpciAv
dmFyL2xvZy9jaHJvbnkK
Create the MachineConfig object file, replacing the base64 string with the one you just created. This example adds the file to master nodes. You can change it to worker or make an additional MachineConfig for the worker role. Create MachineConfig files for each type of machine that your cluster uses:
$ cat << EOF > ./99-masters-chrony-configuration.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-masters-chrony-configuration
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 2.2.0
    networkd: {}
    passwd: {}
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,ICAgIHNlcnZlciBjbG9jay5yZWRoYXQuY29tIGlidXJzdAogICAgZHJpZnRmaWxlIC92YXIvbGliL2Nocm9ueS9kcmlmdAogICAgbWFrZXN0ZXAgMS4wIDMKICAgIHJ0Y3N5bmMKICAgIGxvZ2RpciAvdmFyL2xvZy9jaHJvbnkK
          verification: {}
        filesystem: root
        mode: 420
        path: /etc/chrony.conf
  osImageURL: ""
EOF
- Make a backup copy of the configuration files.
Apply the configurations in one of two ways:
- If the cluster is not up yet, after you generate manifest files, add this file to the <installation_directory>/openshift directory, and then continue to create the cluster.
- If the cluster is already running, apply the file:
$ oc apply -f ./99-masters-chrony-configuration.yaml
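After the configuration rolls out, you can spot-check time synchronization on a node. This is an optional verification that follows the oc debug pattern used elsewhere in this chapter; <node_name> is a placeholder:
$ oc debug node/<node_name>
sh-4.4# chroot /host
sh-4.4# cat /etc/chrony.conf
sh-4.4# chronyc sources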
13.2.5. Additional resources
See Support for FIPS cryptography for information on FIPS support.
13.3. Available cluster customizations
You complete most of the cluster configuration and customization after you deploy your OpenShift Container Platform cluster. A number of configuration resources are available.
You modify the configuration resources to configure the major features of the cluster, such as the image registry, networking configuration, image build behavior, and the identity provider.
For current documentation of the settings that you control by using these resources, use the oc explain command, for example oc explain builds --api-version=config.openshift.io/v1.
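For example, to review the documented fields and the current values of one of these resources, you might run commands like the following (a minimal sketch using standard oc commands):
$ oc explain builds --api-version=config.openshift.io/v1
$ oc explain builds.spec --api-version=config.openshift.io/v1
$ oc get builds.config.openshift.io/cluster -o yaml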
13.3.1. Cluster configuration resources
All cluster configuration resources are globally scoped (not namespaced) and named cluster.
| Resource name | Description |
|---|---|
| apiserver.config.openshift.io | Provides API server configuration such as certificates and certificate authorities. |
| authentication.config.openshift.io | Controls the identity provider and authentication configuration for the cluster. |
| build.config.openshift.io | Controls default and enforced configuration for all builds on the cluster. |
| console.config.openshift.io | Configures the behavior of the web console interface, including the logout behavior. |
| featuregate.config.openshift.io | Enables FeatureGates so that you can use Tech Preview features. |
| image.config.openshift.io | Configures how specific image registries should be treated (allowed, disallowed, insecure, CA details). |
| ingress.config.openshift.io | Configuration details related to routing such as the default domain for routes. |
| oauth.config.openshift.io | Configures identity providers and other behavior related to internal OAuth server flows. |
| project.config.openshift.io | Configures how projects are created including the project template. |
| proxy.config.openshift.io | Defines proxies to be used by components needing external network access. Note: not all components currently consume this value. |
| scheduler.config.openshift.io | Configures scheduler behavior such as policies and default node selectors. |
13.3.2. Operator configuration resources
These configuration resources are cluster-scoped instances, named cluster, which control the behavior of a specific component as owned by a particular Operator.
| Resource name | Description |
|---|---|
| consoles.operator.openshift.io | Controls console appearance such as branding customizations. |
| config.imageregistry.operator.openshift.io | Configures internal image registry settings such as public routing, log levels, proxy settings, resource constraints, replica counts, and storage type. |
| config.samples.operator.openshift.io | Configures the Samples Operator to control which example image streams and templates are installed on the cluster. |
13.3.3. Additional configuration resources
These configuration resources represent a single instance of a particular component. In some cases, you can request multiple instances by creating multiple instances of the resource. In other cases, the Operator can use only a specific resource instance name in a specific namespace. Reference the component-specific documentation for details on how and when you can create additional resource instances.
| Resource name | Instance name | Namespace | Description |
|---|---|---|---|
| alertmanager.monitoring.coreos.com | main | openshift-monitoring | Controls the Alertmanager deployment parameters. |
| ingresscontroller.operator.openshift.io | default | openshift-ingress-operator | Configures Ingress Operator behavior such as domain, number of replicas, certificates, and controller placement. |
13.3.4. Informational Resources
You use these resources to retrieve information about the cluster. Do not edit these resources directly.
| Resource name | Instance name | Description |
|---|---|---|
| clusterversion.config.openshift.io | version | In OpenShift Container Platform 4.5, you must not customize the ClusterVersion resource. Instead, follow the process to update a cluster. |
| dns.config.openshift.io | cluster | You cannot modify the DNS settings for your cluster. You can view the DNS Operator status. |
| infrastructure.config.openshift.io | cluster | Configuration details allowing the cluster to interact with its cloud provider. |
| network.config.openshift.io | cluster | You cannot modify your cluster networking after installation. To customize your network, follow the process to customize networking during installation. |
13.3.5. Updating the global cluster pull secret
You can update the global pull secret for your cluster.
Cluster resources must adjust to the new pull secret, which can temporarily limit the usability of the cluster.
Updating the global pull secret will cause node reboots while the Machine Config Operator (MCO) syncs the changes.
Prerequisites
- You have a new or modified pull secret file to upload.
- You have access to the cluster as a user with the cluster-admin role.
Procedure
Enter the following command to update the global pull secret for your cluster:
$ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull-secret-location> 1
1 - Provide the path to the new pull secret file.
This update is rolled out to all nodes, which can take some time depending on the size of your cluster. During this time, nodes are drained and pods are rescheduled on the remaining nodes.
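If you want to start from the secret that is already on the cluster rather than from a newly downloaded file, you can extract it, edit it, and upload the result with the same command. This is a hedged sketch; the local file name is arbitrary:
$ oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > pull-secret.json
# Edit pull-secret.json to add or update registry credentials, then:
$ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret.json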
13.4. Configuring your firewall
If you use a firewall, you must configure it so that OpenShift Container Platform can access the sites that it requires to function. You must always grant access to some sites, and you grant access to more if you use Red Hat Insights, the Telemetry service, a cloud to host your cluster, and certain build strategies.
13.4.1. Configuring your firewall for OpenShift Container Platform
Before you install OpenShift Container Platform, you must configure your firewall to grant access to the sites that OpenShift Container Platform requires.
There are no special configuration considerations for services running on only controller nodes versus worker nodes.
Procedure
Allowlist the following registry URLs:
| URL | Port | Function |
|---|---|---|
| registry.redhat.io | 443, 80 | Provides core container images |
| quay.io | 443, 80 | Provides core container images |
| *.quay.io | 443, 80 | Provides core container images |
| sso.redhat.com | 443, 80 | The https://cloud.redhat.com/openshift site uses authentication from sso.redhat.com |
| openshift.org | 443, 80 | Provides Red Hat Enterprise Linux CoreOS (RHCOS) images |
When you add a site, such as quay.io, to your allowlist, do not add a wildcard entry, such as *.quay.io, to your denylist. In most cases, image registries use a content delivery network (CDN) to serve images. If a firewall blocks access, then image downloads are denied when the initial download request is redirected to a host name such as cdn01.quay.io.
CDN host names, such as cdn01.quay.io, are covered when you add a wildcard entry, such as *.quay.io, in your allowlist.
- Allowlist any site that provides resources for a language or framework that your builds require.
If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat Insights:
| URL | Port | Function |
|---|---|---|
| cert-api.access.redhat.com | 443, 80 | Required for Telemetry |
| api.access.redhat.com | 443, 80 | Required for Telemetry |
| infogw.api.openshift.com | 443, 80 | Required for Telemetry |
| cloud.redhat.com/api/ingress | 443, 80 | Required for Telemetry and for insights-operator |
If you use Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) to host your cluster, you must grant access to the URLs that provide the cloud provider API and DNS for that cloud:
| Cloud | URL | Port | Function |
|---|---|---|---|
| AWS | *.amazonaws.com | 443, 80 | Required to access AWS services and resources. Review the AWS Service Endpoints in the AWS documentation to determine the exact endpoints to allow for the regions that you use. |
| AWS | oso-rhc4tp-docker-registry.s3-us-west-2.amazonaws.com | 443, 80 | Required to access AWS services and resources when using strict security requirements. Review the AWS Service Endpoints in the AWS documentation to determine the exact endpoints to allow for the regions that you use. |
| GCP | *.googleapis.com | 443, 80 | Required to access GCP services and resources. Review Cloud Endpoints in the GCP documentation to determine the endpoints to allow for your APIs. |
| GCP | accounts.google.com | 443, 80 | Required to access your GCP account. |
| Azure | management.azure.com | 443, 80 | Required to access Azure services and resources. Review the Azure REST API Reference in the Azure documentation to determine the endpoints to allow for your APIs. |
Allowlist the following URLs:
| URL | Port | Function |
|---|---|---|
| mirror.openshift.com | 443, 80 | Required to access mirrored installation content and images. This site is also a source of release image signatures, although the Cluster Version Operator needs only a single functioning source. |
| storage.googleapis.com/openshift-release | 443, 80 | A source of release image signatures, although the Cluster Version Operator needs only a single functioning source. |
| *.apps.<cluster_name>.<base_domain> | 443, 80 | Required to access the default cluster routes unless you set an ingress wildcard during installation. |
| quay-registry.s3.amazonaws.com | 443, 80 | Required to access Quay image content in AWS. |
| api.openshift.com | 443, 80 | Required to check if updates are available for the cluster. |
| art-rhcos-ci.s3.amazonaws.com | 443, 80 | Required to download Red Hat Enterprise Linux CoreOS (RHCOS) images. |
| api.openshift.com | 443, 80 | Required for your cluster token. |
| cloud.redhat.com/openshift | 443, 80 | Required for your cluster token. |
| registry.access.redhat.com | 443, 80 | Required for odo CLI. |

Operators require route access to perform health checks. Specifically, the authentication and web console Operators connect to two routes to verify that the routes work. If you are the cluster administrator and do not want to allow *.apps.<cluster_name>.<base_domain>, then allow these routes:
- oauth-openshift.apps.<cluster_name>.<base_domain>
- console-openshift-console.apps.<cluster_name>.<base_domain>, or the host name that is specified in the spec.route.hostname field of the consoles.operator/cluster object if the field is not empty.
- If you use a default Red Hat Network Time Protocol (NTP) server, allow the following URLs:
  - 1.rhel.pool.ntp.org
  - 2.rhel.pool.ntp.org
  - 3.rhel.pool.ntp.org
- If you do not use a default Red Hat NTP server, verify the NTP server for your platform and allow it in your firewall.
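A quick way to spot-check that a few of the required endpoints are reachable through your firewall from the installation host is shown below. This is a hedged sketch, not an exhaustive test; extend the host list to match the tables above:
$ for host in registry.redhat.io quay.io mirror.openshift.com api.openshift.com; do
    curl -sI --max-time 10 "https://${host}" > /dev/null && echo "${host}: reachable" || echo "${host}: blocked"
  done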
13.5. Configuring a private cluster
After you install an OpenShift Container Platform version 4.5 cluster, you can set some of its core components to be private.
You can make this change only for clusters that use infrastructure that you provision to a cloud provider.
13.5.1. About private clusters
By default, OpenShift Container Platform is provisioned using publicly-accessible DNS and endpoints. You can set the DNS, Ingress Controller, and API server to private after you deploy your cluster.
DNS
If you install OpenShift Container Platform on installer-provisioned infrastructure, the installation program creates records in a pre-existing public zone and, where possible, creates a private zone for the cluster's own DNS resolution. In both the public zone and the private zone, the installation program or cluster creates DNS entries for *.apps, for the Ingress object, and api, for the API server.
The *.apps records in the public and private zone are identical, so when you delete the public zone, the private zone seamlessly provides all DNS resolution for the cluster.
Ingress Controller
Because the default Ingress object is created as public, the load balancer is internet-facing and in the public subnets. You can replace the default Ingress Controller with an internal one.
API server
By default, the installation program creates appropriate network load balancers for the API server to use for both internal and external traffic.
On Amazon Web Services (AWS), separate public and private load balancers are created. The load balancers are identical except that an additional port is available on the internal one for use within the cluster. Although the installation program automatically creates or destroys the load balancers based on API server requirements, the cluster does not manage or maintain them. As long as you preserve the cluster's access to the API server, you can manually modify or move the load balancers. For the public load balancer, port 6443 is open and the health check is configured for HTTPS against the /readyz path.
On Google Cloud Platform, a single load balancer is created to manage both internal and external API traffic, so you do not need to modify the load balancer.
On Microsoft Azure, both public and private load balancers are created. However, because of limitations in the current implementation, you simply retain both load balancers in a private cluster.
13.5.2. Setting DNS to private
After you deploy a cluster, you can modify its DNS to use only a private zone.
Procedure
Review the DNS custom resource for your cluster:
$ oc get dnses.config.openshift.io/cluster -o yaml
Example output
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: "2019-10-25T18:27:09Z"
  generation: 2
  name: cluster
  resourceVersion: "37966"
  selfLink: /apis/config.openshift.io/v1/dnses/cluster
  uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
spec:
  baseDomain: <base_domain>
  privateZone:
    tags:
      Name: <infrastructureID>-int
      kubernetes.io/cluster/<infrastructureID>: owned
  publicZone:
    id: Z2XXXXXXXXXXA4
status: {}
Note that the spec section contains both a private and a public zone.
Patch the DNS custom resource to remove the public zone:
$ oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}'
Example output
dns.config.openshift.io/cluster patched
Because the Ingress Controller consults the DNS definition when it creates Ingress objects, when you create or modify Ingress objects, only private records are created.
Important: DNS records for the existing Ingress objects are not modified when you remove the public zone.
Optional: Review the DNS custom resource for your cluster and confirm that the public zone was removed:
$ oc get dnses.config.openshift.io/cluster -o yaml
Example output
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: "2019-10-25T18:27:09Z"
  generation: 2
  name: cluster
  resourceVersion: "37966"
  selfLink: /apis/config.openshift.io/v1/dnses/cluster
  uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
spec:
  baseDomain: <base_domain>
  privateZone:
    tags:
      Name: <infrastructureID>-int
      kubernetes.io/cluster/<infrastructureID>-wfpg4: owned
status: {}
13.5.3. Setting the Ingress Controller to private
After you deploy a cluster, you can modify its Ingress Controller to use only a private zone.
Procedure
Modify the default Ingress Controller to use only an internal endpoint:
$ oc replace --force --wait --filename - <<EOF
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  namespace: openshift-ingress-operator
  name: default
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal
EOF
Example output
ingresscontroller.operator.openshift.io "default" deleted
ingresscontroller.operator.openshift.io/default replaced
The public DNS entry is removed, and the private zone entry is updated.
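You can confirm that the new scope took effect by inspecting the Ingress Controller and the service that fronts the default router (an optional check using standard commands):
$ oc get ingresscontroller/default -n openshift-ingress-operator -o jsonpath='{.spec.endpointPublishingStrategy.loadBalancer.scope}{"\n"}'
Example output
Internal
$ oc get svc/router-default -n openshift-ingress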
13.5.4. Restricting the API server to private
After you deploy a cluster to Amazon Web Services (AWS) or Microsoft Azure, you can reconfigure the API server to use only the private zone.
Prerequisites
- Install the OpenShift CLI (oc).
- Have access to the web console as a user with admin privileges.
Procedure
In the web portal or console for AWS or Azure, take the following actions:
Locate and delete the appropriate load balancer component:
- For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer.
- For Azure, delete the api-internal rule for the load balancer.
- Delete the api.$clustername.$yourdomain DNS entry in the public zone.
From your terminal, list the cluster machines:
$ oc get machine -n openshift-machine-api
Example output
NAME                            STATE     TYPE        REGION      ZONE         AGE
lk4pj-master-0                  running   m4.xlarge   us-east-1   us-east-1a   17m
lk4pj-master-1                  running   m4.xlarge   us-east-1   us-east-1b   17m
lk4pj-master-2                  running   m4.xlarge   us-east-1   us-east-1a   17m
lk4pj-worker-us-east-1a-5fzfj   running   m4.xlarge   us-east-1   us-east-1a   15m
lk4pj-worker-us-east-1a-vbghs   running   m4.xlarge   us-east-1   us-east-1a   15m
lk4pj-worker-us-east-1b-zgpzg   running   m4.xlarge   us-east-1   us-east-1b   15m
You modify the control plane machines, which contain master in the name, in the following step.
Remove the external load balancer from each control plane machine.
Edit a control plane Machine object to remove the reference to the external load balancer:
$ oc edit machines -n openshift-machine-api <master_name> 1
1 - Specify the name of the control plane, or master, Machine object to modify.
Remove the lines that describe the external load balancer, which are marked in the following example, and save and exit the object specification:
...
spec:
  providerSpec:
    value:
    ...
      loadBalancers:
      - name: lk4pj-ext 1
        type: network 2
      - name: lk4pj-int
        type: network
- Repeat this process for each of the machines that contains master in the name.
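After you remove the external load balancer references and the public DNS entry, confirm that the cluster is still reachable from a host that has access to the private network (a brief, optional check):
$ oc whoami --show-server
$ oc get nodes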