Installing on vSphere
Installing OpenShift Container Platform 4.3 vSphere clusters
Abstract
Chapter 1. Installing on vSphere
1.1. Installing a cluster on vSphere
In OpenShift Container Platform version 4.3, you can install a cluster on VMware vSphere infrastructure that you provision.
1.1.1. Prerequisites
- Provision persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes.
- Review details about the OpenShift Container Platform installation and update processes.
If you use a firewall, you must configure it to allow the sites that your cluster requires access to.
NoteBe sure to also review this site list if you are configuring a proxy.
1.1.2. Internet and Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.3, you require access to the internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).
Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
You must have internet access to:
- Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
1.1.3. VMware vSphere infrastructure requirements
You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 instance that meets the requirements for the components that you use.
Component | Minimum supported versions | Description |
---|---|---|
Hypervisor | vSphere 6.5 with HW version 13 | This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list. |
Networking (NSX-T) | n/a | vSphere 6.5U3 or vSphere 6.7U2+ are required for OpenShift Container Platform. Because previous versions of vSphere with NSX-T are not currently compatible with OpenShift Container Platform, NSX-T is not supported. NSX-T certification is in process and will be supported in a future release. |
Storage with in-tree drivers | vSphere 6.5 or vSphere 6.7 | This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform and can be used when vSphere CSI drivers are not available. |
Storage with vSphere CSI driver | vSphere 6.7U3 and later | This plug-in creates vSphere storage by using the standard Container Storage Interface. The vSphere CSI driver is provided and supported by VMware. |
If you use a vSphere version 6.5 instance, consider upgrading to 6.7U2 before you install OpenShift Container Platform.
You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation.
Note that the Storage Distributed Resource Scheduler (SDRS) is not supported for the storage that backs the cluster. See vSphere Storage for Kubernetes FAQs (https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/faqs.html) in the VMware documentation.
1.1.4. Required vCenter account privileges
To install an OpenShift Container Platform cluster in vCenter, the cluster requires access to an account with privileges to read and create the required resources. Using an account that has administrative privileges is the simplest way to access all of the necessary permissions.
A user requires the following privileges to install an OpenShift Container Platform cluster:
Datastore
- Allocate space
Folder
- Create folder
- Delete folder
vSphere Tagging
- All privileges
Network
- Assign network
Resource
- Assign virtual machine to resource pool
Profile-driven storage
- All privileges
vApp
- All privileges
Virtual machine
- All privileges
For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation.
1.1.5. Machine requirements for a cluster with user-provisioned infrastructure
For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines.
1.1.5.1. Required machines
The smallest OpenShift Container Platform clusters require the following hosts:
- One temporary bootstrap machine
- Three control plane, or master, machines
- At least two compute machines, which are also known as worker machines
The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster.
To maintain high availability of your cluster, use separate physical hosts for these cluster machines.
The bootstrap, control plane, and compute machines must use the Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system.
Note that RHCOS is based on Red Hat Enterprise Linux 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits.
1.1.5.2. Network connectivity requirements
All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network access in initramfs during boot to fetch their Ignition config files from the Machine Config Server. During the initial boot, the machines require either a DHCP server or that static IP addresses be set in order to establish a network connection to download their Ignition config files.
1.1.5.3. Minimum resource requirements
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU | Virtual RAM | Storage |
---|---|---|---|---|
Bootstrap | RHCOS | 4 | 16 GB | 120 GB |
Control plane | RHCOS | 4 | 16 GB | 120 GB |
Compute | RHCOS or RHEL 7.6 | 2 | 8 GB | 120 GB |
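If you provision the VMs with VMware's govc CLI instead of the vSphere Client, you can verify or adjust the vCPU and memory allocation from the command line. This is only a sketch: it assumes govc is installed and configured through its GOVC_URL environment variables, and the VM name control-plane-0 is a placeholder.

```bash
# Show the current vCPU, memory, and disk allocation of a VM (placeholder name).
govc vm.info control-plane-0

# Resize to the control plane minimum from the table above: 4 vCPU, 16 GB RAM (-m is in MB).
govc vm.change -vm control-plane-0 -c 4 -m 16384
```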
1.1.5.4. Certificate signing requests management
Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
1.1.6. Creating the user-provisioned infrastructure
Before you deploy an OpenShift Container Platform cluster that uses user-provisioned infrastructure, you must create the underlying infrastructure.
Prerequisites
- Review the OpenShift Container Platform 4.x Tested Integrations page before you create the supporting infrastructure for your cluster.
Procedure
- Configure DHCP or set static IP addresses on each node.
- Provision the required load balancers.
- Configure the ports for your machines.
- Configure DNS.
- Ensure network connectivity.
1.1.6.1. Networking requirements for user-provisioned infrastructure
All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network access in initramfs during boot to fetch their Ignition config files from the Machine Config Server.
During the initial boot, the machines require either a DHCP server or that static IP addresses be set on each host in the cluster in order to establish a network connection, which allows them to download their Ignition config files.
It is recommended that you use a DHCP server to manage the machines for the cluster long-term. Ensure that the DHCP server is configured to provide persistent IP addresses and host names to the cluster machines.
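For example, if you use an ISC DHCP server, persistent reservations can be expressed per machine by MAC address. The fragment below is a sketch only; the subnet, MAC addresses, IP addresses, and host names are placeholders for your environment.

```
# /etc/dhcp/dhcpd.conf (fragment, sketch)
subnet 192.168.1.0 netmask 255.255.255.0 {
  option routers 192.168.1.1;
  option domain-name-servers 192.168.1.2;

  # One host block per cluster machine pins a persistent IP address and host name.
  host master-0 {
    hardware ethernet 00:50:56:aa:bb:01;   # MAC of the VM (VMware OUI)
    fixed-address 192.168.1.11;
    option host-name "master-0.ocp4.example.com";
  }
  host worker-0 {
    hardware ethernet 00:50:56:aa:bb:02;
    fixed-address 192.168.1.20;
    option host-name "worker-0.ocp4.example.com";
  }
}
```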
The Kubernetes API server, which runs on each master node after a successful cluster installation, must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests.
You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster.
All machines to all machines

Protocol | Port | Description |
---|---|---|
ICMP | N/A | Network reachability tests |
TCP | 9000-9999 | Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099 |
TCP | 10250-10259 | The default ports that Kubernetes reserves |
TCP | 10256 | openshift-sdn |
UDP | 4789 | VXLAN and GENEVE |
UDP | 6081 | VXLAN and GENEVE |
UDP | 9000-9999 | Host level services, including the node exporter on ports 9100-9101 |
TCP/UDP | 30000-32767 | Kubernetes NodePort |

All machines to control plane

Protocol | Port | Description |
---|---|---|
TCP | 2379-2380 | etcd server, peer, and metrics ports |
TCP | 6443 | Kubernetes API |
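A quick way to spot missing connectivity between machines is to probe a sample of these ports from one host to another. The following sketch assumes the nmap-ncat nc utility is available and that the target host name resolves; adjust the host and port list to your environment.

```bash
#!/usr/bin/env bash
# Probe a few of the required TCP ports on another cluster machine (sketch).
HOST="${1:-master-0.ocp4.example.com}"   # placeholder host name

for port in 2379 2380 6443 10250 10256 9100; do
  if nc -z -w 2 "$HOST" "$port"; then
    echo "port ${port}/tcp reachable on ${HOST}"
  else
    echo "port ${port}/tcp NOT reachable on ${HOST}"
  fi
done
```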
Network topology requirements
The infrastructure that you provision for your cluster must meet the following network topology requirements.
OpenShift Container Platform requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat.
Load balancers
Before you install OpenShift Container Platform, you must provision two load balancers that meet the following requirements:
API load balancer: Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions:
- Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode.
- A stateless load balancing algorithm. The options vary based on the load balancer implementation.
NoteSession persistence is not required for the API load balancer to function properly.
Configure the following ports on both the front and back of the load balancers:
Table 1.4. API load balancer

Port | Back-end machines (pool members) | Internal | External | Description |
---|---|---|---|---|
6443 | Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. | X | X | Kubernetes API server |
22623 | Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. | X | | Machine Config server |

NoteThe load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values.

Application Ingress load balancer: Provides an Ingress point for application traffic flowing in from outside the cluster. Configure the following conditions:
- Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode.
- A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform.
Configure the following ports on both the front and back of the load balancers:
Table 1.5. Application Ingress load balancer

Port | Back-end machines (pool members) | Internal | External | Description |
---|---|---|---|---|
443 | The machines that run the Ingress router pods, compute, or worker, by default. | X | X | HTTPS traffic |
80 | The machines that run the Ingress router pods, compute, or worker, by default. | X | X | HTTP traffic |
If the true IP address of the client can be seen by the load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.
A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes.
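The documentation does not require any particular load balancer. As one illustration, a minimal HAProxy configuration that satisfies the layer 4 requirements above might look like the following sketch; all host names and IP addresses are placeholders, and a production configuration should also probe the API server /readyz endpoint as described in the note above.

```
# /etc/haproxy/haproxy.cfg (sketch): layer 4 (TCP) load balancing only.
defaults
    mode    tcp
    timeout connect 10s
    timeout client  1m
    timeout server  1m

listen api                      # Kubernetes API server
    bind *:6443
    balance source
    # Remove the bootstrap entry after the control plane initializes.
    server bootstrap 192.168.1.10:6443 check
    server master-0  192.168.1.11:6443 check
    server master-1  192.168.1.12:6443 check
    server master-2  192.168.1.13:6443 check

listen machine-config           # Machine Config server
    bind *:22623
    balance source
    server bootstrap 192.168.1.10:22623 check
    server master-0  192.168.1.11:22623 check
    server master-1  192.168.1.12:22623 check
    server master-2  192.168.1.13:22623 check

listen ingress-https            # application Ingress, HTTPS
    bind *:443
    balance source
    server worker-0 192.168.1.20:443 check
    server worker-1 192.168.1.21:443 check

listen ingress-http             # application Ingress, HTTP
    bind *:80
    balance source
    server worker-0 192.168.1.20:80 check
    server worker-1 192.168.1.21:80 check
```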
Ethernet adaptor hardware address requirements
When provisioning VMs for the cluster, the ethernet interfaces configured for each VM must use a MAC address from the VMware Organizationally Unique Identifier (OUI) allocation ranges:
- 00:05:69:00:00:00 to 00:05:69:FF:FF:FF
- 00:0c:29:00:00:00 to 00:0c:29:FF:FF:FF
- 00:1c:14:00:00:00 to 00:1c:14:FF:FF:FF
- 00:50:56:00:00:00 to 00:50:56:FF:FF:FF
If a MAC address outside the VMware OUI is used, the cluster installation will not succeed.
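As a quick sanity check, the following sketch verifies that a MAC address falls within one of the VMware OUI prefixes listed above; the address is passed as a command-line argument.

```bash
#!/usr/bin/env bash
# Check that a MAC address uses a VMware OUI prefix (sketch).
mac="${1:?usage: $0 <mac-address>}"
prefix="$(echo "${mac:0:8}" | tr '[:upper:]' '[:lower:]')"

case "$prefix" in
  00:05:69|00:0c:29|00:1c:14|00:50:56)
    echo "OK: ${mac} uses a VMware OUI" ;;
  *)
    echo "ERROR: ${mac} is outside the VMware OUI ranges" >&2
    exit 1 ;;
esac
```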
1.1.6.2. User-provisioned DNS requirements
The following DNS records are required for an OpenShift Container Platform cluster that uses user-provisioned infrastructure. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>.
Component | Record | Description |
---|---|---|
Kubernetes API | api.<cluster_name>.<base_domain>. | This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. |
Kubernetes API | api-int.<cluster_name>.<base_domain>. | This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable from all the nodes within the cluster. Important: The API server must be able to resolve the worker nodes by the host names that are recorded in Kubernetes. If it cannot resolve the node names, proxied API calls can fail, and you cannot retrieve logs from Pods. |
Routes | *.apps.<cluster_name>.<base_domain>. | A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. |
etcd | etcd-<index>.<cluster_name>.<base_domain>. | OpenShift Container Platform requires DNS A/AAAA records for each etcd instance to point to the control plane machines that host the instances. The etcd instances are differentiated by <index> values, which start with 0 and end with n-1, where n is the number of control plane machines in the cluster. These records must be resolvable from all the nodes within the cluster. |
etcd | _etcd-server-ssl._tcp.<cluster_name>.<base_domain>. | For each control plane machine, OpenShift Container Platform also requires an SRV DNS record for the etcd server on that machine with priority 0, weight 10, and port 2380. A cluster that uses three control plane machines requires the following records. |

# _service._proto.name.                              TTL   class SRV priority weight port target.
_etcd-server-ssl._tcp.<cluster_name>.<base_domain>.  86400 IN SRV 0 10 2380 etcd-0.<cluster_name>.<base_domain>
_etcd-server-ssl._tcp.<cluster_name>.<base_domain>.  86400 IN SRV 0 10 2380 etcd-1.<cluster_name>.<base_domain>
_etcd-server-ssl._tcp.<cluster_name>.<base_domain>.  86400 IN SRV 0 10 2380 etcd-2.<cluster_name>.<base_domain>
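For reference, the records above can be expressed in a BIND-style zone file. The following fragment is a sketch: the cluster name ocp4, the base domain example.com, and all IP addresses are placeholders, and the A records for api, api-int, and *.apps point at your load balancers.

```
; Sketch of the required records for cluster "ocp4" in zone example.com.
api.ocp4.example.com.        IN  A  192.168.1.5    ; API load balancer
api-int.ocp4.example.com.    IN  A  192.168.1.5    ; API load balancer
*.apps.ocp4.example.com.     IN  A  192.168.1.6    ; Ingress load balancer

etcd-0.ocp4.example.com.     IN  A  192.168.1.11   ; control plane machine 0
etcd-1.ocp4.example.com.     IN  A  192.168.1.12   ; control plane machine 1
etcd-2.ocp4.example.com.     IN  A  192.168.1.13   ; control plane machine 2

_etcd-server-ssl._tcp.ocp4.example.com. 86400 IN SRV 0 10 2380 etcd-0.ocp4.example.com.
_etcd-server-ssl._tcp.ocp4.example.com. 86400 IN SRV 0 10 2380 etcd-1.ocp4.example.com.
_etcd-server-ssl._tcp.ocp4.example.com. 86400 IN SRV 0 10 2380 etcd-2.ocp4.example.com.
```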
1.1.7. Generating an SSH private key and adding it to the agent
If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and to the installation program.
In a production environment, you require disaster recovery and debugging.
You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t rsa -b 4096 -N '' \
    -f <path>/<file_name> 1

1. Specify the path and file name, such as ~/.ssh/id_rsa, of the SSH key. Do not specify an existing SSH key, as it will be overwritten.

Running this command generates an SSH key that does not require a password in the location that you specified.
Start the ssh-agent process as a background task:
$ eval "$(ssh-agent -s)"
Agent pid 31874
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

1. Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.
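Optionally, confirm that the agent now holds the key by listing its identities; the fingerprint shown here is illustrative:

```
$ ssh-add -l
4096 SHA256:<fingerprint> <path>/<file_name> (RSA)
```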
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster’s machines.
1.1.8. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You must install the cluster from a computer that uses Linux or macOS.
- You need 500 MB of local disk space to download the installation program.
Procedure
- Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
ImportantThe installation program creates several files on the computer that you use to install your cluster. You must keep both the installation program and the files that the installation program creates after you finish installing the cluster.
ImportantDeleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. You must complete the OpenShift Container Platform uninstallation procedures outlined for your specific cloud provider to remove your cluster entirely.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf <installation_program>.tar.gz
- From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a .txt file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
1.1.9. Manually creating the installation configuration file
For installations of OpenShift Container Platform that use user-provisioned infrastructure, you must manually generate your installation configuration file.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the access token for your cluster.
Procedure
Create an installation directory to store your required installation assets in:
$ mkdir <installation_directory>
ImportantYou must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Customize the following install-config.yaml file template and save it in the <installation_directory>.
NoteYou must name this configuration file install-config.yaml.
Back up the install-config.yaml file so that you can use it to install multiple clusters.
ImportantThe install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
1.1.9.1. Sample install-config.yaml file for VMware vSphere
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.
apiVersion: v1
baseDomain: example.com 1
compute:
- hyperthreading: Enabled 2 3
  name: worker
  replicas: 0 4
controlPlane:
  hyperthreading: Enabled 5 6
  name: master
  replicas: 3 7
metadata:
  name: test 8
platform:
  vsphere:
    vcenter: your.vcenter.server 9
    username: username 10
    password: password 11
    datacenter: datacenter 12
    defaultDatastore: datastore 13
fips: false 14
pullSecret: '{"auths": ...}' 15
sshKey: 'ssh-ed25519 AAAA...' 16
1. The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name.
2 5. The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.
3 6. Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
ImportantIf you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading.
4. You must set the value of the replicas parameter to 0. This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform.
7. The number of control plane machines that you add to the cluster. Because the cluster uses this value as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy.
8. The cluster name that you specified in your DNS records.
9. The fully-qualified host name or IP address of the vCenter server.
10. The name of the user for accessing the server. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere.
11. The password associated with the vSphere user.
12. The vSphere datacenter.
13. The default vSphere datastore to use.
14. Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
15. The pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
16. The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS).
NoteFor production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
1.1.9.2. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- An existing install-config.yaml file.
- Review the sites that your cluster requires access to and determine whether any need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. Add sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
NoteThe Proxy object's status.noProxy field is populated by default with the instance metadata endpoint (169.254.169.254) and with the values of the networking.machineCIDR, networking.clusterNetwork.cidr, and networking.serviceNetwork[] fields from your installation configuration.
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: http://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
...
1. A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpProxy value.
2. A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then httpProxy is used for both HTTP and HTTPS connections. The URL scheme must be http; https is currently not supported. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify an httpsProxy value.
3. A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to include all subdomains of that domain. Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines.
4. If provided, the installation program generates a ConfigMap that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle ConfigMap that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this ConfigMap is referenced in the Proxy object's trustedCA field. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.
NoteThe installation program does not support the proxy readinessEndpoints field.
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
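After the cluster is installed, you can confirm what the installation program created by inspecting the cluster Proxy object; the spec stanza mirrors the values from install-config.yaml, and status shows the values in effect:

```
$ oc get proxy/cluster -o yaml
```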
1.1.10. Creating the Kubernetes manifest and Ignition config files
Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.
The Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must complete your cluster installation and keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.
Prerequisites
- Obtain the OpenShift Container Platform installation program.
- Create the install-config.yaml installation configuration file.
Procedure
Generate the Kubernetes manifests for the cluster:
$ ./openshift-install create manifests --dir=<installation_directory> 1
INFO Consuming Install Config from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
1. For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.
Because you create your own compute machines later in the installation process, you can safely ignore this warning.
Modify the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file to prevent Pods from being scheduled on the control plane machines:
- Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.
- Locate the mastersSchedulable parameter and set its value to False.
- Save and exit the file.
A non-interactive way to make this change is sketched after the following note.
NoteCurrently, due to a Kubernetes limitation, router Pods running on control plane machines will not be reachable by the ingress load balancer. This step might not be required in a future minor version of OpenShift Container Platform.
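If you prefer to script the change rather than edit the file by hand, a one-line substitution works. This is a sketch: confirm the exact key and boolean casing in your generated manifest before relying on it.

```bash
# Set mastersSchedulable to false in the generated scheduler manifest (sketch).
sed -i 's/mastersSchedulable: true/mastersSchedulable: false/' \
  <installation_directory>/manifests/cluster-scheduler-02-config.yml
```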
Obtain the Ignition config files:
$ ./openshift-install create ignition-configs --dir=<installation_directory> 1
1. For <installation_directory>, specify the same installation directory.
The following files are generated in the directory:
.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
1.1.11. Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere
Before you install a cluster that contains user-provisioned infrastructure on VMware vSphere, you must create RHCOS machines on vSphere hosts for it to use.
Prerequisites
- Obtain the Ignition config files for your cluster.
- Have access to an HTTP server that you can access from your computer and that the machines that you create can access.
- Create a vSphere cluster.
Procedure
Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign, that the installation program created to your HTTP server. Note the URL of this file.
You must host the bootstrap Ignition config file because it is too large to fit in a vApp property.
Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/append-bootstrap.ign.
{
  "ignition": {
    "config": {
      "append": [
        {
          "source": "<bootstrap_ignition_config_url>", 1
          "verification": {}
        }
      ]
    },
    "timeouts": {},
    "version": "2.1.0"
  },
  "networkd": {},
  "passwd": {},
  "storage": {},
  "systemd": {}
}
1. Specify the URL of the bootstrap Ignition config file that you hosted.
When you create the Virtual Machine (VM) for the bootstrap machine, you use this Ignition config file.
Convert the master, worker, and secondary bootstrap Ignition config files to Base64 encoding.
For example, if you use a Linux operating system, you can use the base64 command to encode the files:
$ base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64
$ base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64
$ base64 -w0 <installation_directory>/append-bootstrap.ign > <installation_directory>/append-bootstrap.64
Obtain the RHCOS OVA image from the Product Downloads page on the Red Hat customer portal or the RHCOS image mirror page.
ImportantThe RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available.
The file name contains the OpenShift Container Platform version number in the format rhcos-<version>-vmware.<architecture>.ova.
In the vSphere Client, create a folder in your datacenter to store your VMs.
- Click the VMs and Templates view.
- Right-click the name of your datacenter.
- Click New Folder → New VM and Template Folder.
- In the window that is displayed, enter the folder name. The folder name must match the cluster name that you specified in the install-config.yaml file.
In the vSphere Client, create a template for the OVA image.
NoteIn the following steps, you use the same template for all of your cluster machines and provide the location for the Ignition config file for that machine type when you provision the VMs.
- From the Hosts and Clusters tab, right-click your cluster’s name and click Deploy OVF Template.
- On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded.
- On the Select a name and folder tab, set a Virtual machine name, such as RHCOS, click the name of your vSphere cluster, and select the folder you created in the previous step.
- On the Select a compute resource tab, click the name of your vSphere cluster.
On the Select storage tab, configure the storage options for your VM.
- Select Thin Provision or Thick Provision, based on your storage preferences.
- Select the datastore that you specified in your install-config.yaml file.
- On the Select network tab, specify the network that you configured for the cluster, if available.
- If you plan to use the same template for all cluster machine types, do not specify values on the Customize template tab.
After the template deploys, deploy a VM for a machine in the cluster.
- Right-click the template’s name and click Clone → Clone to Virtual Machine.
- On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1.
. - On the Select a name and folder tab, select the name of the folder that you created for the cluster.
- On the Select a compute resource tab, select the name of a host in your datacenter.
- Optional: On the Select storage tab, customize the storage options.
- On the Select clone options, select Customize this virtual machine’s hardware.
On the Customize hardware tab, click VM Options → Advanced.
- Optional: In the event of cluster performance issues, from the Latency Sensitivity list, select High.
Click Edit Configuration, and on the Configuration Parameters window, click Add Configuration Params. Define the following parameter names and values:
- guestinfo.ignition.config.data: Paste the contents of the base64-encoded Ignition config file for this machine type.
- guestinfo.ignition.config.data.encoding: Specify base64.
- disk.EnableUUID: Specify TRUE.
Alternatively, prior to powering on the virtual machine, add the configuration via vApp properties:
- Navigate to a virtual machine from the vCenter Server inventory.
- On the Configure tab, expand Settings and select vApp options.
- Scroll down and, under Properties, apply the configurations from above.
A command-line alternative using govc is sketched after this procedure.
- In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type.
- Complete the configuration and power on the VM.
Create the rest of the machines for your cluster by following the preceding steps for each machine.
ImportantYou must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster.
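As a command-line alternative to setting these values in the Customize hardware tab, the govc CLI can attach the same guestinfo parameters before you power on a VM. This is a sketch: it assumes govc is installed and configured, and the VM name and file path are placeholders.

```bash
# Attach the base64-encoded Ignition config and the required flags to a VM (sketch).
govc vm.change -vm control-plane-0 \
  -e "guestinfo.ignition.config.data=$(cat <installation_directory>/master.64)" \
  -e "guestinfo.ignition.config.data.encoding=base64" \
  -e "disk.EnableUUID=TRUE"
```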
1.1.12. Installing the CLI by downloading the binary
You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.3. Download and install the new version of oc.
1.1.12.1. Installing the CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.
- Select your infrastructure provider, and, if applicable, your installation type.
- In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.
Unpack the archive:
$ tar xvzf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
After you install the CLI, it is available using the oc command:
$ oc <command>
1.1.12.2. Installing the CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.
- Select your infrastructure provider, and, if applicable, your installation type.
- In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the CLI, it is available using the oc command:
C:\> oc <command>
1.1.12.3. Installing the CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.
- Select your infrastructure provider, and, if applicable, your installation type.
- In the Command-line interface section, select MacOS from the drop-down menu and click Download command-line tools.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the CLI, it is available using the oc command:
$ oc <command>
1.1.13. Creating the cluster
To create the OpenShift Container Platform cluster, you wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program.
Prerequisites
- Create the required infrastructure for the cluster.
- You obtained the installation program and generated the Ignition config files for your cluster.
- You used the Ignition config files to create RHCOS machines for your cluster.
- Your machines have direct internet access.
Procedure
Monitor the bootstrap process:
$ ./openshift-install --dir=<installation_directory> wait-for bootstrap-complete \ 1
    --log-level=info 2
INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com...
INFO API v1.16.2 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources
The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines.
After the bootstrap process is complete, remove the bootstrap machine from the load balancer.
ImportantYou must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the machine itself.
1.1.14. Logging in to the cluster
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
- Install the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1. For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify that you can run oc commands successfully using the exported configuration:
$ oc whoami
system:admin
1.1.15. Approving the CSRs for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes
NAME       STATUS     ROLES    AGE   VERSION
master-0   Ready      master   63m   v1.16.2
master-1   Ready      master   63m   v1.16.2
master-2   Ready      master   64m   v1.16.2
worker-0   NotReady   worker   76s   v1.16.2
worker-1   NotReady   worker   70s   v1.16.2
The output lists all of the machines that you created.
Review the pending certificate signing requests (CSRs) and ensure that you see a client and server request with Pending or Approved status for each machine that you added to the cluster:
$ oc get csr
NAME        AGE     REQUESTOR                                                                   CONDITION
csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending 1
csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending 2
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:
NoteBecause the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager. You must implement a method of automatically approving the kubelet serving certificate requests; a minimal automation sketch follows this procedure.
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
1. <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
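The note above leaves it to you to automate approval of kubelet serving certificate requests. One minimal approach, shown purely as a sketch, is a loop that periodically approves whatever is pending; for production, use a stricter policy that verifies each serving request before approving it. The -r flag assumes GNU xargs.

```bash
#!/usr/bin/env bash
# Periodically approve pending CSRs (sketch; approves everything that is pending).
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs -r oc adm certificate approve
  sleep 60
done
```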
1.1.16. Initial Operator configuration
After the control plane initializes, you must immediately configure some Operators so that they all become available.
Prerequisites
- Your control plane has initialized.
Procedure
Watch the cluster components come online:
$ watch -n5 oc get clusteroperators
NAME                                 VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                       4.3.0     True        False         False      69s
cloud-credential                     4.3.0     True        False         False      12m
cluster-autoscaler                   4.3.0     True        False         False      11m
console                              4.3.0     True        False         False      46s
dns                                  4.3.0     True        False         False      11m
image-registry                       4.3.0     True        False         False      5m26s
ingress                              4.3.0     True        False         False      5m36s
kube-apiserver                       4.3.0     True        False         False      8m53s
kube-controller-manager              4.3.0     True        False         False      7m24s
kube-scheduler                       4.3.0     True        False         False      12m
machine-api                          4.3.0     True        False         False      12m
machine-config                       4.3.0     True        False         False      7m36s
marketplace                          4.3.0     True        False         False      7m54m
monitoring                           4.3.0     True        False         False      7h54s
network                              4.3.0     True        False         False      5m9s
node-tuning                          4.3.0     True        False         False      11m
openshift-apiserver                  4.3.0     True        False         False      11m
openshift-controller-manager         4.3.0     True        False         False      5m943s
openshift-samples                    4.3.0     True        False         False      3m55s
operator-lifecycle-manager           4.3.0     True        False         False      11m
operator-lifecycle-manager-catalog   4.3.0     True        False         False      11m
service-ca                           4.3.0     True        False         False      11m
service-catalog-apiserver            4.3.0     True        False         False      5m26s
service-catalog-controller-manager   4.3.0     True        False         False      5m25s
storage                              4.3.0     True        False         False      5m30s
- Configure the Operators that are not available.
1.1.16.1. Image registry removed during installation
On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed. This allows openshift-installer to complete installations on these platform types.
After installation, you must edit the Image Registry Operator configuration to switch the ManagementState from Removed to Managed.
The Prometheus console provides an ImageRegistryRemoved alert, for example:
"Image Registry has been removed. ImageStreamTags, BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io."
1.1.16.2. Image registry storage configuration
The image-registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so the Registry Operator is made available.
Instructions for both configuring a PersistentVolume, which is required for production clusters, and for configuring an empty directory as the storage location, which is available for only non-production clusters, are shown.
1.1.16.2.1. Configuring registry storage for VMware vSphere
As a cluster administrator, following installation you must configure your registry to use storage.
Prerequisites
- Cluster administrator permissions.
- A cluster on VMware vSphere.
- Provision persistent storage for your cluster. To deploy a private image registry, your storage must provide the ReadWriteMany access mode.
ImportantvSphere volumes do not support the ReadWriteMany access mode. You must use a different storage backend, such as NFS, to configure the registry storage.
- The storage must have "100Gi" capacity.
Procedure
To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource.
NoteWhen using shared storage such as NFS, it is strongly recommended to use the supplementalGroups strategy, which dictates the allowable supplemental groups for the Security Context, rather than the fsGroup ID. Refer to the NFS Group IDs documentation for details.
Verify that you do not have a registry Pod:
$ oc get pod -n openshift-image-registry
Note
- If the storage type is emptyDir, the replica number cannot be greater than 1.
- If the storage type is NFS, you must enable the no_wdelay and root_squash mount options. For example:
# cat /etc/exports
/mnt/data *(rw,sync,no_wdelay,root_squash,insecure,fsid=0)
sh-4.3# exportfs -rv
exporting *:/mnt/data
Check the registry configuration:
$ oc edit configs.imageregistry.operator.openshift.io
storage:
  pvc:
    claim:
Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC.
PVC.Optional: Add a new storage class to a PV:
Create the PV:
$ oc create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: image-registry-pv
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 100Gi
  nfs:
    path: /registry
    server: 172.16.231.181
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs01
$ oc get pv
Create the PVC:
$ oc create -n openshift-image-registry -f -
apiVersion: "v1" kind: "PersistentVolumeClaim" metadata: name: "image-registry-pvc" spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: nfs01 volumeMode: Filesystem
$ oc get pvc -n openshift-image-registry
Finally, add the name of your PVC:
$ oc edit configs.imageregistry.operator.openshift.io -o yaml
storage:
  pvc:
    claim: image-registry-pvc 1
1. Creating a custom PVC allows you to leave the claim field blank for default automatic creation of an image-registry-storage PVC.
Check the clusteroperator status:
$ oc get clusteroperator image-registry
1.1.16.2.2. Configuring storage for the image registry in non-production clusters
You must configure storage for the image registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.
Procedure
To set the image registry storage to an empty directory:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
WarningConfigure this option for only non-production clusters.
If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error:
Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found
Wait a few minutes and run the command again.
1.1.17. Completing installation on user-provisioned infrastructure
After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide.
Prerequisites
- Your control plane has initialized.
- You have completed the initial Operator configuration.
Procedure
Confirm that all the cluster components are online:
$ watch -n5 oc get clusteroperators
NAME                                 VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                       4.3.0     True        False         False      10m
cloud-credential                     4.3.0     True        False         False      22m
cluster-autoscaler                   4.3.0     True        False         False      21m
console                              4.3.0     True        False         False      10m
dns                                  4.3.0     True        False         False      21m
image-registry                       4.3.0     True        False         False      16m
ingress                              4.3.0     True        False         False      16m
kube-apiserver                       4.3.0     True        False         False      19m
kube-controller-manager              4.3.0     True        False         False      18m
kube-scheduler                       4.3.0     True        False         False      22m
machine-api                          4.3.0     True        False         False      22m
machine-config                       4.3.0     True        False         False      18m
marketplace                          4.3.0     True        False         False      18m
monitoring                           4.3.0     True        False         False      18m
network                              4.3.0     True        False         False      16m
node-tuning                          4.3.0     True        False         False      21m
openshift-apiserver                  4.3.0     True        False         False      21m
openshift-controller-manager         4.3.0     True        False         False      17m
openshift-samples                    4.3.0     True        False         False      14m
operator-lifecycle-manager           4.3.0     True        False         False      21m
operator-lifecycle-manager-catalog   4.3.0     True        False         False      21m
service-ca                           4.3.0     True        False         False      21m
service-catalog-apiserver            4.3.0     True        False         False      16m
service-catalog-controller-manager   4.3.0     True        False         False      16m
storage                              4.3.0     True        False         False      16m
When all of the cluster Operators are AVAILABLE, you can complete the installation.
Monitor for cluster completion:
$ ./openshift-install --dir=<installation_directory> wait-for install-complete 1
INFO Waiting up to 30m0s for the cluster to initialize...
1. For <installation_directory>, specify the path to the directory that you stored the installation files in.
The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server.
ImportantThe Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.
Confirm that the Kubernetes API server is communicating with the Pods.
To view a list of all Pods, use the following command:
$ oc get pods --all-namespaces
NAMESPACE                           NAME                                            READY   STATUS    RESTARTS   AGE
openshift-apiserver-operator        openshift-apiserver-operator-85cb746d55-zqhs8   1/1     Running   1          9m
openshift-apiserver                 apiserver-67b9g                                 1/1     Running   0          3m
openshift-apiserver                 apiserver-ljcmx                                 1/1     Running   0          1m
openshift-apiserver                 apiserver-z25h4                                 1/1     Running   0          2m
openshift-authentication-operator   authentication-operator-69d5d8bf84-vh2n8        1/1     Running   0          5m
...
View the logs for a Pod that is listed in the output of the previous command by using the following command:
$ oc logs <pod_name> -n <namespace> 1
1. Specify the Pod name and namespace, as shown in the output of the previous command.
If the Pod logs display, the Kubernetes API server can communicate with the cluster machines.
1.1.18. Next steps
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
- Set up your registry and configure registry storage.
1.2. Installing a cluster on vSphere with network customizations
In OpenShift Container Platform version 4.3, you can install a cluster on VMware vSphere infrastructure that you provision with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.
You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.
1.2.1. Prerequisites
- Review details about the OpenShift Container Platform installation and update processes.
- If you use a firewall, you must configure it to access Red Hat Insights.
1.2.2. Internet and Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.3, you require access to the internet to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).
Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
You must have internet access to:
- Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
1.2.3. VMware vSphere infrastructure requirements
You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 instance that meets the requirements for the components that you use.
Component | Minimum supported versions | Description |
---|---|---|
Hypervisor | vSphere 6.5 with HW version 13 | This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list. |
Networking (NSX-T) | n/a | vSphere 6.5U3 or vSphere 6.7U2+ are required for OpenShift Container Platform. Because previous versions of vSphere with NSX-T are not currently compatible with OpenShift Container Platform, NSX-T is not supported. NSX-T certification is in process and will be supported in a future release. |
Storage with in-tree drivers | vSphere 6.5 or vSphere 6.7 | This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform and can be used when vSphere CSI drivers are not available. |
Storage with vSphere CSI driver | vSphere 6.7U3 and later | This plug-in creates vSphere storage by using the standard Container Storage Interface. The vSphere CSI driver is provided and supported by VMware. |
If you use a vSphere version 6.5 instance, consider upgrading to 6.7U2 before you install OpenShift Container Platform.
You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation.
Note that the Storage Distributed Resource Scheduler (SDRS) is not supported for the storage that backs the cluster. See vSphere Storage for Kubernetes FAQs (https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/faqs.html) in the VMware documentation.
1.2.4. Required vCenter account privileges
To install an OpenShift Container Platform cluster in vCenter, the cluster requires access to an account with privileges to read and create the required resources. Using an account that has administrative privileges is the simplest way to access all of the necessary permissions.
A user requires the following privileges to install an OpenShift Container Platform cluster:
Datastore
- Allocate space
Folder
- Create folder
- Delete folder
vSphere Tagging
- All privileges
Network
- Assign network
Resource
- Assign virtual machine to resource pool
Profile-driven storage
- All privileges
vApp
- All privileges
Virtual machine
- All privileges
For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation.
1.2.5. Machine requirements for a cluster with user-provisioned infrastructure
For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines.
1.2.5.1. Required machines
The smallest OpenShift Container Platform clusters require the following hosts:
- One temporary bootstrap machine
- Three control plane, or master, machines
- At least two compute machines, which are also known as worker machines
The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster.
To maintain high availability of your cluster, use separate physical hosts for these cluster machines.
The bootstrap, control plane, and compute machines must use the Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system.
Note that RHCOS is based on Red Hat Enterprise Linux 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits.
1.2.5.2. Network connectivity requirements
All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs
during boot to fetch Ignition config files from the Machine Config Server. During the initial boot, the machines require either a DHCP server or that static IP addresses be set in order to establish a network connection to download their Ignition config files.
1.2.5.3. Minimum resource requirements
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU | Virtual RAM | Storage |
---|---|---|---|---|
Bootstrap | RHCOS | 4 | 16 GB | 120 GB |
Control plane | RHCOS | 4 | 16 GB | 120 GB |
Compute | RHCOS or RHEL 7.6 | 2 | 8 GB | 120 GB |
1.2.5.4. Certificate signing requests management
Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager
only approves the kubelet client CSRs. The machine-approver
cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
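For example, one possible approach, shown here only as a sketch that reuses the oc commands from the CSR approval procedure later in this document, is to periodically list the pending CSRs, confirm that the requesting node names match machines that you provisioned, and then approve them:

# List pending CSRs so that you can confirm the requesting node names
# against the machines that you provisioned:
$ oc get csr | grep Pending
# After you confirm the requests, approve all CSRs that are still pending:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve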
1.2.6. Creating the user-provisioned infrastructure
Before you deploy an OpenShift Container Platform cluster that uses user-provisioned infrastructure, you must create the underlying infrastructure.
Prerequisites
- Review the OpenShift Container Platform 4.x Tested Integrations page before you create the supporting infrastructure for your cluster.
Procedure
- Configure DHCP or set static IP addresses on each node.
- Provision the required load balancers.
- Configure the ports for your machines.
- Configure DNS.
- Ensure network connectivity.
1.2.6.1. Networking requirements for user-provisioned infrastructure
All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs
during boot to fetch Ignition config from the Machine Config Server.
During the initial boot, the machines require either a DHCP server or that static IP addresses be set on each host in the cluster in order to establish a network connection, which allows them to download their Ignition config files.
It is recommended to use the DHCP server to manage the machines for the cluster long-term. Ensure that the DHCP server is configured to provide persistent IP addresses and host names to the cluster machines.
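For example, if you use dnsmasq as your DHCP server, reservations similar to the following provide persistent IP addresses and host names. This is only an illustrative sketch; the MAC addresses, IP addresses, and host names are placeholders for your environment:

# /etc/dnsmasq.conf excerpt - illustrative only; all values are placeholders
dhcp-range=192.168.100.10,192.168.100.100,12h
dhcp-host=00:50:56:aa:bb:01,bootstrap-0,192.168.100.20
dhcp-host=00:50:56:aa:bb:02,control-plane-0,192.168.100.21
dhcp-host=00:50:56:aa:bb:03,control-plane-1,192.168.100.22
dhcp-host=00:50:56:aa:bb:04,control-plane-2,192.168.100.23
dhcp-host=00:50:56:aa:bb:05,compute-0,192.168.100.24
dhcp-host=00:50:56:aa:bb:06,compute-1,192.168.100.25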
The Kubernetes API server, which runs on each master node after a successful cluster installation, must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests.
You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster.
Protocol | Port | Description |
---|---|---|
ICMP | N/A | Network reachability tests |
TCP | 9000-9999 | Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099. |
TCP | 10250-10259 | The default ports that Kubernetes reserves |
TCP | 10256 | openshift-sdn |
UDP | 4789 | VXLAN and GENEVE |
UDP | 6081 | VXLAN and GENEVE |
UDP | 9000-9999 | Host level services, including the node exporter on ports 9100-9101. |
TCP/UDP | 30000-32767 | Kubernetes NodePort |
Protocol | Port | Description |
---|---|---|
TCP | 2379-2380 | etcd server, peer, and metrics ports |
TCP | 6443 | Kubernetes API |
Network topology requirements
The infrastructure that you provision for your cluster must meet the following network topology requirements.
OpenShift Container Platform requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat.
Load balancers
Before you install OpenShift Container Platform, you must provision two load balancers that meet the following requirements:
API load balancer: Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions:
- Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode.
- A stateless load balancing algorithm. The options vary based on the load balancer implementation.
NoteSession persistence is not required for the API load balancer to function properly.
Configure the following ports on both the front and back of the load balancers:
Table 1.10. API load balancer

Port | Back-end machines (pool members) | Internal | External | Description |
---|---|---|---|---|
6443 | Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. | X | X | Kubernetes API server |
22623 | Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. | X | | Machine Config server |

NoteThe load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values.

Application Ingress load balancer: Provides an Ingress point for application traffic flowing in from outside the cluster. Configure the following conditions:
- Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode.
- A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform.
Configure the following ports on both the front and back of the load balancers:
Table 1.11. Application Ingress load balancer

Port | Back-end machines (pool members) | Internal | External | Description |
---|---|---|---|---|
443 | The machines that run the Ingress router pods, compute, or worker, by default. | X | X | HTTPS traffic |
80 | The machines that run the Ingress router pods, compute, or worker, by default. | X | X | HTTP traffic |
If the true IP address of the client can be seen by the load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.
A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes.
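As an illustration of these requirements only, a minimal HAProxy configuration might look like the following sketch. The back-end IP addresses are placeholders, your load balancer product and health check tuning can differ, and you remove the bootstrap entries after the bootstrap process completes:

# /etc/haproxy/haproxy.cfg excerpt - illustrative sketch; IP addresses are placeholders
defaults
    mode tcp
    timeout connect 10s
    timeout client  1m
    timeout server  1m
frontend api
    bind *:6443
    default_backend api
backend api
    balance roundrobin
    option httpchk GET /readyz HTTP/1.0
    server bootstrap 192.168.100.20:6443 check check-ssl verify none
    server master-0  192.168.100.21:6443 check check-ssl verify none
    server master-1  192.168.100.22:6443 check check-ssl verify none
    server master-2  192.168.100.23:6443 check check-ssl verify none
frontend machine-config
    bind *:22623
    default_backend machine-config
backend machine-config
    balance roundrobin
    server bootstrap 192.168.100.20:22623 check
    server master-0  192.168.100.21:22623 check
    server master-1  192.168.100.22:22623 check
    server master-2  192.168.100.23:22623 check
frontend ingress-https
    bind *:443
    default_backend ingress-https
backend ingress-https
    balance source
    server compute-0 192.168.100.24:443 check
    server compute-1 192.168.100.25:443 check
frontend ingress-http
    bind *:80
    default_backend ingress-http
backend ingress-http
    balance source
    server compute-0 192.168.100.24:80 check
    server compute-1 192.168.100.25:80 check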
1.2.6.2. User-provisioned DNS requirements
The following DNS records are required for an OpenShift Container Platform cluster that uses user-provisioned infrastructure. In each record, <cluster_name>
is the cluster name and <base_domain>
is the cluster base domain that you specify in the install-config.yaml
file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>.
Component | Record | Description |
---|---|---|
Kubernetes API | api.<cluster_name>.<base_domain>. | This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. |
| api-int.<cluster_name>.<base_domain>. | This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the host names that are recorded in Kubernetes. If it cannot resolve the node names, proxied API calls can fail, and you cannot retrieve logs from Pods. |
Routes | *.apps.<cluster_name>.<base_domain>. | A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. |
etcd | etcd-<index>.<cluster_name>.<base_domain>. | OpenShift Container Platform requires DNS A/AAAA records for each etcd instance to point to the control plane machines that host the instances. The etcd instances are differentiated by <index> values, which begin with 0 and end with n-1, where n is the number of control plane machines in the cluster. |
| _etcd-server-ssl._tcp.<cluster_name>.<base_domain>. | For each control plane machine, OpenShift Container Platform also requires a SRV DNS record for etcd server on that machine with priority 0, weight 10 and port 2380. A cluster that uses three control plane machines requires the following records: # _service._proto.name. TTL class SRV priority weight port target. _etcd-server-ssl._tcp.<cluster_name>.<base_domain>. 86400 IN SRV 0 10 2380 etcd-0.<cluster_name>.<base_domain> _etcd-server-ssl._tcp.<cluster_name>.<base_domain>. 86400 IN SRV 0 10 2380 etcd-1.<cluster_name>.<base_domain> _etcd-server-ssl._tcp.<cluster_name>.<base_domain>. 86400 IN SRV 0 10 2380 etcd-2.<cluster_name>.<base_domain> |
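For example, a BIND-style forward zone fragment that satisfies these records for a cluster named ocp4 with the base domain example.com might look like the following sketch. The IP addresses are placeholders: the first two records point to the API load balancer, the wildcard record points to the application Ingress load balancer, and the etcd records point to the control plane machines:

; Illustrative zone file fragment only; all IP addresses are placeholders
api.ocp4.example.com.                       IN A   192.168.100.5
api-int.ocp4.example.com.                   IN A   192.168.100.5
*.apps.ocp4.example.com.                    IN A   192.168.100.6
etcd-0.ocp4.example.com.                    IN A   192.168.100.21
etcd-1.ocp4.example.com.                    IN A   192.168.100.22
etcd-2.ocp4.example.com.                    IN A   192.168.100.23
_etcd-server-ssl._tcp.ocp4.example.com.  86400 IN SRV 0 10 2380 etcd-0.ocp4.example.com.
_etcd-server-ssl._tcp.ocp4.example.com.  86400 IN SRV 0 10 2380 etcd-1.ocp4.example.com.
_etcd-server-ssl._tcp.ocp4.example.com.  86400 IN SRV 0 10 2380 etcd-2.ocp4.example.com.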
1.2.7. Generating an SSH private key and adding it to the agent
If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent
and to the installation program.
In a production environment, you require disaster recovery and debugging.
You can use this key to SSH into the master nodes as the user core
. When you deploy the cluster, the key is added to the core
user’s ~/.ssh/authorized_keys
list.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t rsa -b 4096 -N '' \ -f <path>/<file_name> 1
- 1
- Specify the path and file name, such as
~/.ssh/id_rsa
, of the SSH key. Do not specify an existing SSH key, as it will be overwritten.
Running this command generates an SSH key that does not require a password in the location that you specified.
Start the
ssh-agent
process as a background task:$ eval "$(ssh-agent -s)" Agent pid 31874
Add your SSH private key to the
ssh-agent
:$ ssh-add <path>/<file_name> 1 Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
- 1
- Specify the path and file name for your SSH private key, such as
~/.ssh/id_rsa
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
1.2.8. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You must install the cluster from a computer that uses Linux or macOS.
- You need 500 MB of local disk space to download the installation program.
Procedure
- Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
ImportantThe installation program creates several files on the computer that you use to install your cluster. You must keep both the installation program and the files that the installation program creates after you finish installing the cluster.
ImportantDeleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. You must complete the OpenShift Container Platform uninstallation procedures outlined for your specific cloud provider to remove your cluster entirely.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf <installation_program>.tar.gz
-
From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your installation pull secret as a
.txt
file. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
1.2.9. Manually creating the installation configuration file
For installations of OpenShift Container Platform that use user-provisioned infrastructure, you must manually generate your installation configuration file.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the access token for your cluster.
Procedure
Create an installation directory to store your required installation assets in:
$ mkdir <installation_directory>
ImportantYou must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Customize the following
install-config.yaml
file template and save it in the<installation_directory>
.NoteYou must name this configuration file
install-config.yaml
.Back up the
install-config.yaml
file so that you can use it to install multiple clusters.
ImportantThe
install-config.yaml
file is consumed during the next step of the installation process. You must back it up now.
1.2.9.1. Sample install-config.yaml
file for VMware vSphere
You can customize the install-config.yaml
file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16
- 1
- The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name.
- 2 5
- The
controlPlane
section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of thecompute
section must begin with a hyphen,-
, and the first line of thecontrolPlane
section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. - 3 6
- Whether to enable or disable simultaneous multithreading, or
hyperthreading
. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value toDisabled
. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
ImportantIf you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading.
- 4
- You must set the value of the
replicas
parameter to0
. This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. - 7
- The number of control plane machines that you add to the cluster. Because the cluster uses this value as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy.
- 8
- The cluster name that you specified in your DNS records.
- 9
- The fully-qualified host name or IP address of the vCenter server.
- 10
- The name of the user for accessing the server. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere.
- 11
- The password associated with the vSphere user.
- 12
- The vSphere datacenter.
- 13
- The default vSphere datastore to use.
- 14
- Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
- 15
- The pull secret that you obtained from the Pull Secret page on the Red Hat OpenShift Cluster Manager site. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
- 16
- The public portion of the default SSH key for the
core
user in Red Hat Enterprise Linux CoreOS (RHCOS).
NoteFor production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your
ssh-agent
process uses.
1.2.9.2. Network configuration parameters
You can modify your cluster network configuration parameters in the install-config.yaml
configuration file. The following table describes the parameters.
You cannot modify these parameters in the install-config.yaml
file after installation.
Parameter | Description | Value |
---|---|---|
networking.networkType | The default Container Network Interface (CNI) network provider plug-in to deploy. | The default value is OpenShiftSDN. |
networking.clusterNetwork[].cidr | A block of IP addresses from which Pod IP addresses are allocated. | An IP address allocation in CIDR format. The default value is 10.128.0.0/14. |
networking.clusterNetwork[].hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2) Pod IP addresses. | A subnet prefix. The default value is 23. |
networking.serviceNetwork[] | A block of IP addresses for services. | An IP address allocation in CIDR format. The default value is 172.30.0.0/16. |
networking.machineCIDR | A block of IP addresses used by the OpenShift Container Platform installation program while installing the cluster. The address block must not overlap with any other network block. | An IP address allocation in CIDR format. The default value is 10.0.0.0/16. |
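For example, a networking stanza in install-config.yaml that sets these parameters explicitly to the default values described above might look like the following sketch; the machineCIDR value in particular is an assumption that you must adjust to the network where you provision your machines:

networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineCIDR: 10.0.0.0/16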
1.2.10. Modifying advanced network configuration parameters
You can modify the advanced network configuration parameters only before you install the cluster. Advanced configuration customization lets you integrate your cluster into your existing network environment by specifying an MTU or VXLAN port, by allowing customization of kube-proxy settings, and by specifying a different mode
for the openshiftSDNConfig
parameter.
Modifying the OpenShift Container Platform manifest files directly is not supported.
Prerequisites
-
Create the
install-config.yaml
file and complete any modifications to it.
- Create the Ignition config files for your cluster.
Procedure
Use the following command to create manifests:
$ ./openshift-install create manifests --dir=<installation_directory> 1
- 1
- For
<installation_directory>
, specify the name of the directory that contains theinstall-config.yaml
file for your cluster.
Modify the
<installation_directory>/manifests/cluster-scheduler-02-config.yml
Kubernetes manifest file to prevent Pods from being scheduled on the control plane machines:-
Open the
manifests/cluster-scheduler-02-config.yml
file. -
Locate the
mastersSchedulable
parameter and set its value toFalse
. - Save and exit the file.
NoteCurrently, due to a Kubernetes limitation, router Pods running on control plane machines will not be reachable by the ingress load balancer.
-
Create a file that is named
cluster-network-03-config.yml
in the<installation_directory>/manifests/
directory:$ touch <installation_directory>/manifests/cluster-network-03-config.yml 1
- 1
- For
<installation_directory>
, specify the directory name that contains themanifests/
directory for your cluster.
After creating the file, several network configuration files are in the
manifests/
directory, as shown:$ ls <installation_directory>/manifests/cluster-network-*
Example output
cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml
Open the
cluster-network-03-config.yml
file in an editor and enter a CR that describes the Operator configuration you want:apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: 1 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789
- 1
- The parameters for the
spec
parameter are only an example. Specify your configuration for the Cluster Network Operator in the CR.
The CNO provides default values for the parameters in the CR, so you must specify only the parameters that you want to change.
-
Save the
cluster-network-03-config.yml
file and quit the text editor. -
Optional: Back up the
manifests/cluster-network-03-config.yml
file. The installation program deletes themanifests/
directory when creating the cluster.
1.2.11. Cluster Network Operator configuration
The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a CR object that is named cluster
. The CR specifies the parameters for the Network
API in the operator.openshift.io
API group.
You can specify the cluster network configuration for your OpenShift Container Platform cluster by setting the parameter values for the defaultNetwork
parameter in the CNO CR. The following CR displays the default configuration for the CNO and explains both the parameters you can configure and the valid parameter values:
Cluster Network Operator CR
apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: 1 - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: 2 - 172.30.0.0/16 defaultNetwork: 3 ... kubeProxyConfig: 4 iptablesSyncPeriod: 30s 5 proxyArguments: iptables-min-sync-period: 6 - 0s
- 1 2
- Specified in the
install-config.yaml
file. - 3
- Configures the default Container Network Interface (CNI) network provider for the cluster network.
- 4
- The parameters for this object specify the
kube-proxy
configuration. If you do not specify the parameter values, the Cluster Network Operator applies the displayed default parameter values. If you are using the OVN-Kubernetes default CNI network provider, the kube-proxy configuration has no effect. - 5
- The refresh period for
iptables
rules. The default value is30s
. Valid suffixes includes
,m
, andh
and are described in the Go time package documentation.NoteBecause of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the
iptablesSyncPeriod
parameter is no longer necessary. - 6
- The minimum duration before refreshing
iptables
rules. This parameter ensures that the refresh does not happen too frequently. Valid suffixes includes
,m
, andh
and are described in the Go time package.
1.2.11.1. Configuration parameters for the OpenShift SDN default CNI network provider
The following YAML object describes the configuration parameters for the OpenShift SDN default Container Network Interface (CNI) network provider.
defaultNetwork: type: OpenShiftSDN 1 openshiftSDNConfig: 2 mode: NetworkPolicy 3 mtu: 1450 4 vxlanPort: 4789 5
- 1
- Specified in the
install-config.yaml
file. - 2
- Specify only if you want to override part of the OpenShift SDN configuration.
- 3
- Configures the network isolation mode for OpenShift SDN. The allowed values are
Multitenant
,Subnet
, orNetworkPolicy
. The default value isNetworkPolicy
. - 4
- The maximum transmission unit (MTU) for the VXLAN overlay network. This value is normally configured automatically, but if the nodes in your cluster do not all use the same MTU, then you must set this explicitly to 50 less than the smallest node MTU value.
- 5
- The port to use for all VXLAN packets. The default value is
4789
. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for VXLAN, since both SDNs use the same default VXLAN port number.
On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port
9000
and port9999
.
1.2.11.2. Cluster Network Operator example configuration
A complete CR object for the CNO is displayed in the following example:
Cluster Network Operator example CR
apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 0s
1.2.12. Creating the Ignition config files
Because you must manually start the cluster machines, you must generate the Ignition config files that the cluster needs to make its machines.
The Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must complete your cluster installation and keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Obtain the Ignition config files:
$ ./openshift-install create ignition-configs --dir=<installation_directory> 1
- 1
- For
<installation_directory>
, specify the directory name to store the files that the installation program creates.
ImportantIf you created an
install-config.yaml
file, specify the directory that contains it. Otherwise, specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
The following files are generated in the directory:
. ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign
1.2.13. Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere
Before you install a cluster that contains user-provisioned infrastructure on VMware vSphere, you must create RHCOS machines on vSphere hosts for it to use.
Prerequisites
- Obtain the Ignition config files for your cluster.
- Have access to an HTTP server that you can access from your computer and that the machines that you create can access.
- Create a vSphere cluster.
Procedure
Upload the bootstrap Ignition config file, which is named
<installation_directory>/bootstrap.ign
, that the installation program created to your HTTP server. Note the URL of this file.
You must host the bootstrap Ignition config file because it is too large to fit in a vApp property.
Save the following secondary Ignition config file for your bootstrap node to your computer as
<installation_directory>/append-bootstrap.ign
.{ "ignition": { "config": { "append": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "2.1.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} }
- 1
- Specify the URL of the bootstrap Ignition config file that you hosted.
When you create the Virtual Machine (VM) for the bootstrap machine, you use this Ignition config file.
Convert the master, worker, and secondary bootstrap Ignition config files to Base64 encoding.
For example, if you use a Linux operating system, you can use the
base64
command to encode the files.$ base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 $ base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 $ base64 -w0 <installation_directory>/append-bootstrap.ign > <installation_directory>/append-bootstrap.64
Obtain the RHCOS OVA image from the Product Downloads page on the Red Hat customer portal or the RHCOS image mirror page.
ImportantThe RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available.
The file name contains the OpenShift Container Platform version number in the format
rhcos-<version>-vmware.<architecture>.ova
.
In the vSphere Client, create a folder in your datacenter to store your VMs.
- Click the VMs and Templates view.
- Right-click the name of your datacenter.
- Click New Folder → New VM and Template Folder.
-
In the window that is displayed, enter the folder name. The folder name must match the cluster name that you specified in the
install-config.yaml
file.
In the vSphere Client, create a template for the OVA image.
NoteIn the following steps, you use the same template for all of your cluster machines and provide the location for the Ignition config file for that machine type when you provision the VMs.
- From the Hosts and Clusters tab, right-click your cluster’s name and click Deploy OVF Template.
- On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded.
- On the Select a name and folder tab, set a Virtual machine name, such as RHCOS, click the name of your vSphere cluster, and select the folder you created in the previous step.
- On the Select a compute resource tab, click the name of your vSphere cluster.
On the Select storage tab, configure the storage options for your VM.
- Select Thin Provision or Thick Provision, based on your storage preferences.
-
Select the datastore that you specified in your
install-config.yaml
file.
- On the Select network tab, specify the network that you configured for the cluster, if available.
- If you plan to use the same template for all cluster machine types, do not specify values on the Customize template tab.
After the template deploys, deploy a VM for a machine in the cluster.
- Right-click the template’s name and click Clone → Clone to Virtual Machine.
-
On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as
control-plane-0
orcompute-1
. - On the Select a name and folder tab, select the name of the folder that you created for the cluster.
- On the Select a compute resource tab, select the name of a host in your datacenter.
- Optional: On the Select storage tab, customize the storage options.
- On the Select clone options tab, select Customize this virtual machine’s hardware.
On the Customize hardware tab, click VM Options → Advanced.
- Optional: In the event of cluster performance issues, from the Latency Sensitivity list, select High.
Click Edit Configuration, and on the Configuration Parameters window, click Add Configuration Params. Define the following parameter names and values:
-
guestinfo.ignition.config.data
: Paste the contents of the base64-encoded Ignition config file for this machine type. -
guestinfo.ignition.config.data.encoding
: Specifybase64
. -
disk.EnableUUID
: SpecifyTRUE
.
-
Alternatively, prior to powering on the virtual machine, add the configuration parameters via vApp properties:
- Navigate to a virtual machine from the vCenter Server inventory.
- On the Configure tab, expand Settings and select vApp options.
- Scroll down and under Properties apply the configurations from above.
- In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type.
- Complete the configuration and power on the VM.
Create the rest of the machines for your cluster by following the preceding steps for each machine.
ImportantYou must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster.
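If you prefer to script the previous step instead of using the vSphere Client, the VMware govc CLI can set the same configuration parameters on a cloned VM. This is only an illustrative sketch under the assumption that govc is installed and that GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD are already exported; the VM path and file names are placeholders:

# Illustrative only: set the Ignition parameters on an existing VM with govc
$ IGN=$(cat <installation_directory>/master.64)
$ govc vm.change -vm /<datacenter>/vm/<folder>/control-plane-0 \ -e "guestinfo.ignition.config.data=${IGN}" \ -e "guestinfo.ignition.config.data.encoding=base64" \ -e "disk.EnableUUID=TRUE"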
1.2.14. Installing the CLI by downloading the binary
You can install the OpenShift CLI (oc
) in order to interact with OpenShift Container Platform from a command-line interface. You can install oc
on Linux, Windows, or macOS.
If you installed an earlier version of oc
, you cannot use it to complete all of the commands in OpenShift Container Platform 4.3. Download and install the new version of oc
.
1.2.14.1. Installing the CLI on Linux
You can install the OpenShift CLI (oc
) binary on Linux by using the following procedure.
Procedure
- Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.
- Select your infrastructure provider, and, if applicable, your installation type.
- In the Command-line interface section, select Linux from the drop-down menu and click Download command-line tools.
Unpack the archive:
$ tar xvzf <file>
Place the
oc
binary in a directory that is on yourPATH
.To check your
PATH
, execute the following command:$ echo $PATH
After you install the CLI, it is available using the oc
command:
$ oc <command>
1.2.14.2. Installing the CLI on Windows
You can install the OpenShift CLI (oc
) binary on Windows by using the following procedure.
Procedure
- Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.
- Select your infrastructure provider, and, if applicable, your installation type.
- In the Command-line interface section, select Windows from the drop-down menu and click Download command-line tools.
- Unzip the archive with a ZIP program.
Move the
oc
binary to a directory that is on yourPATH
.To check your
PATH
, open the command prompt and execute the following command:C:\> path
After you install the CLI, it is available using the oc
command:
C:\> oc <command>
1.2.14.3. Installing the CLI on macOS
You can install the OpenShift CLI (oc
) binary on macOS by using the following procedure.
Procedure
- Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.
- Select your infrastructure provider, and, if applicable, your installation type.
- In the Command-line interface section, select macOS from the drop-down menu and click Download command-line tools.
- Unpack and unzip the archive.
Move the
oc
binary to a directory on your PATH.To check your
PATH
, open a terminal and execute the following command:$ echo $PATH
After you install the CLI, it is available using the oc
command:
$ oc <command>
1.2.15. Creating the cluster
To create the OpenShift Container Platform cluster, you wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program.
Prerequisites
- Create the required infrastructure for the cluster.
- You obtained the installation program and generated the Ignition config files for your cluster.
- You used the Ignition config files to create RHCOS machines for your cluster.
- Your machines have direct internet access.
Procedure
Monitor the bootstrap process:
$ ./openshift-install --dir=<installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com... INFO API v1.16.2 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources
The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines.
After the bootstrap process is complete, remove the bootstrap machine from the load balancer.
ImportantYou must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the machine itself.
1.2.16. Logging in to the cluster
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file. The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
-
Install the
oc
CLI.
Procedure
Export the
kubeadmin
credentials:$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
- 1
- For
<installation_directory>
, specify the path to the directory that you stored the installation files in.
Verify you can run
oc
commands successfully using the exported configuration:$ oc whoami system:admin
1.2.17. Approving the CSRs for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.16.2 master-1 Ready master 63m v1.16.2 master-2 Ready master 64m v1.16.2 worker-0 NotReady worker 76s v1.16.2 worker-1 NotReady worker 70s v1.16.2
The output lists all of the machines that you created.
Review the pending certificate signing requests (CSRs) and ensure that you see a client and server request with
Pending
orApproved
status for each machine that you added to the cluster:$ oc get csr NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 1 csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending 2 csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:
NoteBecause the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager. You must implement a method of automatically approving the kubelet serving certificate requests.
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
- 1
<csr_name>
is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
1.2.18. Initial Operator configuration
After the control plane initializes, you must immediately configure some Operators so that they all become available.
Prerequisites
- Your control plane has initialized.
Procedure
Watch the cluster components come online:
$ watch -n5 oc get clusteroperators NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.3.0 True False False 69s cloud-credential 4.3.0 True False False 12m cluster-autoscaler 4.3.0 True False False 11m console 4.3.0 True False False 46s dns 4.3.0 True False False 11m image-registry 4.3.0 True False False 5m26s ingress 4.3.0 True False False 5m36s kube-apiserver 4.3.0 True False False 8m53s kube-controller-manager 4.3.0 True False False 7m24s kube-scheduler 4.3.0 True False False 12m machine-api 4.3.0 True False False 12m machine-config 4.3.0 True False False 7m36s marketplace 4.3.0 True False False 7m54m monitoring 4.3.0 True False False 7h54s network 4.3.0 True False False 5m9s node-tuning 4.3.0 True False False 11m openshift-apiserver 4.3.0 True False False 11m openshift-controller-manager 4.3.0 True False False 5m943s openshift-samples 4.3.0 True False False 3m55s operator-lifecycle-manager 4.3.0 True False False 11m operator-lifecycle-manager-catalog 4.3.0 True False False 11m service-ca 4.3.0 True False False 11m service-catalog-apiserver 4.3.0 True False False 5m26s service-catalog-controller-manager 4.3.0 True False False 5m25s storage 4.3.0 True False False 5m30s
- Configure the Operators that are not available.
1.2.18.1. Image registry removed during installation
On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed
. This allows openshift-installer
to complete installations on these platform types.
After installation, you must edit the Image Registry Operator configuration to switch the ManagementState
from Removed
to Managed
.
The Prometheus console provides an ImageRegistryRemoved
alert, for example:
"Image Registry has been removed. ImageStreamTags
, BuildConfigs
and DeploymentConfigs
which reference ImageStreamTags
may not work as expected. Please configure storage and update the config to Managed
state by editing configs.imageregistry.operator.openshift.io."
1.2.18.2. Image registry storage configuration
The image-registry
Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so the Registry Operator is made available.
Instructions for both configuring a PersistentVolume, which is required for production clusters, and for configuring an empty directory as the storage location, which is available for only non-production clusters, are shown.
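For example, on a non-production cluster you might configure ephemeral emptyDir storage and return the Operator to the Managed state with a single patch similar to the following sketch. This is not suitable for production, where you must configure a PersistentVolume instead:

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}},"managementState":"Managed"}}'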
1.2.19. Completing installation on user-provisioned infrastructure
After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide.
Prerequisites
- Your control plane has initialized.
- You have completed the initial Operator configuration.
Procedure
Confirm that all the cluster components are online:
$ watch -n5 oc get clusteroperators NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.3.0 True False False 10m cloud-credential 4.3.0 True False False 22m cluster-autoscaler 4.3.0 True False False 21m console 4.3.0 True False False 10m dns 4.3.0 True False False 21m image-registry 4.3.0 True False False 16m ingress 4.3.0 True False False 16m kube-apiserver 4.3.0 True False False 19m kube-controller-manager 4.3.0 True False False 18m kube-scheduler 4.3.0 True False False 22m machine-api 4.3.0 True False False 22m machine-config 4.3.0 True False False 18m marketplace 4.3.0 True False False 18m monitoring 4.3.0 True False False 18m network 4.3.0 True False False 16m node-tuning 4.3.0 True False False 21m openshift-apiserver 4.3.0 True False False 21m openshift-controller-manager 4.3.0 True False False 17m openshift-samples 4.3.0 True False False 14m operator-lifecycle-manager 4.3.0 True False False 21m operator-lifecycle-manager-catalog 4.3.0 True False False 21m service-ca 4.3.0 True False False 21m service-catalog-apiserver 4.3.0 True False False 16m service-catalog-controller-manager 4.3.0 True False False 16m storage 4.3.0 True False False 16m
When all of the cluster Operators are
AVAILABLE
, you can complete the installation.
Monitor for cluster completion:
$ ./openshift-install --dir=<installation_directory> wait-for install-complete 1 INFO Waiting up to 30m0s for the cluster to initialize...
- 1
- For
<installation_directory>
, specify the path to the directory that you stored the installation files in.
The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server.
ImportantThe Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.
Confirm that the Kubernetes API server is communicating with the Pods.
To view a list of all Pods, use the following command:
$ oc get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ...
View the logs for a Pod that is listed in the output of the previous command by using the following command:
$ oc logs <pod_name> -n <namespace> 1
- 1
- Specify the Pod name and namespace, as shown in the output of the previous command.
If the Pod logs display, the Kubernetes API server can communicate with the cluster machines.
1.2.20. Next steps
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
- Set up your registry and configure registry storage.
1.3. Installing a cluster on vSphere in a restricted network
In OpenShift Container Platform version 4.3, you can install a cluster on VMware vSphere infrastructure that you provision in a restricted network.
1.3.1. Prerequisites
Create a registry on your mirror host and obtain the
imageContentSources
data for your version of OpenShift Container Platform.
ImportantBecause the installation media is on the mirror host, you can use that computer to complete all installation steps.
- Provision persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes.
- Review details about the OpenShift Container Platform installation and update processes.
If you use a firewall and plan to use telemetry, you must configure the firewall to allow the sites that your cluster requires access to.
NoteBe sure to also review this site list if you are configuring a proxy.
1.3.2. About installations in restricted networks
In OpenShift Container Platform 4.3, you can perform an installation that does not require an active connection to the internet to obtain software components. You complete an installation in a restricted network on only infrastructure that you provision, not infrastructure that the installation program provisions, so your platform selection is limited.
If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service’s IAM service, require internet access, so you might still require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere.
To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions.
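The full mirroring procedure is documented separately; as a sketch only, the release content can be mirrored to a registry that your restricted network can reach with a command similar to the following, where the registry host, repository, version, and pull secret file are placeholders:

# Sketch only: mirror the release content to a reachable local registry
$ oc adm release mirror -a <pull_secret_file> --from=quay.io/openshift-release-dev/ocp-release:<version>-x86_64 --to=<local_registry>/<local_repository> --to-release-image=<local_registry>/<local_repository>:<version>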
Restricted network installations always use user-provisioned infrastructure. Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network.
1.3.2.1. Additional limits
Clusters in restricted networks have the following additional limitations and restrictions:
-
The ClusterVersion status includes an
Unable to retrieve available updates
error. - By default, you cannot use the contents of the Developer Catalog because you cannot access the required ImageStreamTags.
1.3.3. Internet and Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.3, you require access to the internet to obtain the images that are necessary to install your cluster. The Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, also requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).
Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
You must have internet access to:
- Access the Red Hat OpenShift Cluster Manager page to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
1.3.4. VMware vSphere infrastructure requirements
You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 instance that meets the requirements for the components that you use.
Component | Minimum supported versions | Description |
---|---|---|
Hypervisor | vSphere 6.5 with HW version 13 | This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list. |
Networking (NSX-T) | n/a | vSphere 6.5U3 or vSphere 6.7U2+ are required for OpenShift Container Platform. Because previous versions of vSphere with NSX-T are not currently compatible with OpenShift Container Platform, NSX-T is not supported. NSX-T certification is in process and will be supported in a future release. |
Storage with in-tree drivers | vSphere 6.5 or vSphere 6.7 | This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform and can be used when vSphere CSI drivers are not available. |
Storage with vSphere CSI driver | vSphere 6.7U3 and later | This plug-in creates vSphere storage by using the standard Container Storage Interface. The vSphere CSI driver is provided and supported by VMware. |
If you use a vSphere version 6.5 instance, consider upgrading to 6.7U2 before you install OpenShift Container Platform.
You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation.
A limitation of using VPC is that the Storage Distributed Resource Scheduler (SDRS) is not supported. See vSphere Storage for Kubernetes FAQs (https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/faqs.html) in the VMware documentation.
1.3.5. Required vCenter account privileges
To install an OpenShift Container Platform cluster in vCenter, the cluster requires access to an account with privileges to read and create the required resources. Using an account that has administrative privileges is the simplest way to access all of the necessary permissions.
A user requires the following privileges to install an OpenShift Container Platform cluster:
Datastore
- Allocate space
Folder
- Create folder
- Delete folder
vSphere Tagging
- All privileges
Network
- Assign network
Resource
- Assign virtual machine to resource pool
Profile-driven storage
- All privileges
vApp
- All privileges
Virtual machine
- All privileges
For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation.
1.3.6. Machine requirements for a cluster with user-provisioned infrastructure
For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines.
1.3.6.1. Required machines
The smallest OpenShift Container Platform clusters require the following hosts:
- One temporary bootstrap machine
- Three control plane, or master, machines
- At least two compute machines, which are also known as worker machines
The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster.
To maintain high availability of your cluster, use separate physical hosts for these cluster machines.
The bootstrap, control plane, and compute machines must use the Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system.
Note that RHCOS is based on Red Hat Enterprise Linux 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits.
1.3.6.2. Network connectivity requirements
All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs
during boot to fetch Ignition config files from the Machine Config Server. During the initial boot, the machines require either a DHCP server or that static IP addresses be set in order to establish a network connection to download their Ignition config files.
1.3.6.3. Minimum resource requirements
Each cluster machine must meet the following minimum requirements:
Machine | Operating System | vCPU | Virtual RAM | Storage |
---|---|---|---|---|
Bootstrap | RHCOS | 4 | 16 GB | 120 GB |
Control plane | RHCOS | 4 | 16 GB | 120 GB |
Compute | RHCOS or RHEL 7.6 | 2 | 8 GB | 120 GB |
1.3.6.4. Certificate signing requests management
Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager
only approves the kubelet client CSRs. The machine-approver
cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
1.3.7. Creating the user-provisioned infrastructure
Before you deploy an OpenShift Container Platform cluster that uses user-provisioned infrastructure, you must create the underlying infrastructure.
Prerequisites
- Review the OpenShift Container Platform 4.x Tested Integrations page before you create the supporting infrastructure for your cluster.
Procedure
- Configure DHCP or set static IP addresses on each node.
- Provision the required load balancers.
- Configure the ports for your machines.
- Configure DNS.
- Ensure network connectivity.
1.3.7.1. Networking requirements for user-provisioned infrastructure
All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files from the Machine Config Server.
During the initial boot, the machines require either a DHCP server or that static IP addresses be set on each host in the cluster in order to establish a network connection, which allows them to download their Ignition config files.
It is recommended to use a DHCP server to manage the machines for the cluster long-term. Ensure that the DHCP server is configured to provide persistent IP addresses and host names to the cluster machines.
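For example, a minimal sketch of an ISC DHCP (dhcpd.conf) reservation that provides a persistent address and host name to one control plane machine; the subnet, MAC address, IP addresses, and domain are hypothetical and must be adapted to your environment:

subnet 192.168.100.0 netmask 255.255.255.0 {
  option routers 192.168.100.1;
  option domain-name-servers 192.168.100.1;
  option domain-name "ocp4.example.com";

  # Persistent reservation keyed to the VM's MAC address (hypothetical values)
  host control-plane-0 {
    hardware ethernet 00:50:56:aa:bb:01;
    fixed-address 192.168.100.11;
    option host-name "control-plane-0";
  }
}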
The Kubernetes API server, which runs on each master node after a successful cluster installation, must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests.
You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster.
All machines to all machines:

Protocol | Port | Description |
---|---|---|
ICMP | N/A | Network reachability tests |
TCP | 9000-9999 | Host level services, including the node exporter on ports 9100-9101 |
TCP | 10250-10259 | The default ports that Kubernetes reserves |
TCP | 10256 | openshift-sdn |
UDP | 4789 | VXLAN and GENEVE |
UDP | 6081 | VXLAN and GENEVE |
UDP | 9000-9999 | Host level services, including the node exporter on ports 9100-9101 |
TCP/UDP | 30000-32767 | Kubernetes NodePort |

All machines to control plane:

Protocol | Port | Description |
---|---|---|
TCP | 2379-2380 | etcd server, peer, and metrics ports |
TCP | 6443 | Kubernetes API |
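Before you start the installation, you can spot-check connectivity and port reachability between hosts. The following is a minimal sketch using ping and nc; the host names are hypothetical and must match your DNS records:

$ ping -c1 compute-0.ocp4.example.com            # ICMP reachability test
$ nc -zv control-plane-0.ocp4.example.com 6443   # Kubernetes API port
$ nc -zv control-plane-0.ocp4.example.com 2379   # etcd server port
$ nc -zv compute-0.ocp4.example.com 10250        # one of the ports that Kubernetes reserves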
Network topology requirements
The infrastructure that you provision for your cluster must meet the following network topology requirements.
OpenShift Container Platform requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat.
Load balancers
Before you install OpenShift Container Platform, you must provision two load balancers that meet the following requirements:
API load balancer: Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions:
- Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode.
- A stateless load balancing algorithm. The options vary based on the load balancer implementation.
Note: Session persistence is not required for the API load balancer to function properly.
Configure the following ports on both the front and back of the load balancers:
Table 1.17. API load balancer

Port | Back-end machines (pool members) | Internal | External | Description |
---|---|---|---|---|
6443 | Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. | X | X | Kubernetes API server |
22623 | Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. | X | | Machine Config server |
Note: The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values.

Application Ingress load balancer: Provides an Ingress point for application traffic flowing in from outside the cluster. Configure the following conditions:
- Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode.
- A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform.
Configure the following ports on both the front and back of the load balancers:
Table 1.18. Application Ingress load balancer

Port | Back-end machines (pool members) | Internal | External | Description |
---|---|---|---|---|
443 | The machines that run the Ingress router pods, compute, or worker, by default. | X | X | HTTPS traffic |
80 | The machines that run the Ingress router pods, compute, or worker, by default. | X | X | HTTP traffic |
If the true IP address of the client can be seen by the load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption.
A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes.
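The documentation does not mandate a particular load balancer implementation. As one possibility, the following is a minimal HAProxy sketch that satisfies the Layer 4 requirements above; the host names and IP addresses are hypothetical, the bootstrap servers must be removed after bootstrapping completes, and a production configuration should also implement the /readyz health check described above rather than plain TCP checks:

defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

# API load balancer (ports 6443 and 22623)
frontend api
    bind *:6443
    default_backend api
backend api
    balance roundrobin
    server bootstrap       192.168.100.10:6443 check   # remove after bootstrap completes
    server control-plane-0 192.168.100.11:6443 check
    server control-plane-1 192.168.100.12:6443 check
    server control-plane-2 192.168.100.13:6443 check

frontend machine-config
    bind *:22623
    default_backend machine-config
backend machine-config
    balance roundrobin
    server bootstrap       192.168.100.10:22623 check  # remove after bootstrap completes
    server control-plane-0 192.168.100.11:22623 check
    server control-plane-1 192.168.100.12:22623 check
    server control-plane-2 192.168.100.13:22623 check

# Application Ingress load balancer (ports 443 and 80)
frontend ingress-https
    bind *:443
    default_backend ingress-https
backend ingress-https
    balance source
    server compute-0 192.168.100.21:443 check
    server compute-1 192.168.100.22:443 check

frontend ingress-http
    bind *:80
    default_backend ingress-http
backend ingress-http
    balance source
    server compute-0 192.168.100.21:80 check
    server compute-1 192.168.100.22:80 check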
1.3.7.2. User-provisioned DNS requirements
The following DNS records are required for an OpenShift Container Platform cluster that uses user-provisioned infrastructure. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>.
Component | Record | Description |
---|---|---|
Kubernetes API | api.<cluster_name>.<base_domain>. | This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. |
Kubernetes API | api-int.<cluster_name>.<base_domain>. | This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable from all the nodes within the cluster. Important: The API server must be able to resolve the worker nodes by the host names that are recorded in Kubernetes. If it cannot resolve the node names, proxied API calls can fail, and you cannot retrieve logs from Pods. |
Routes | *.apps.<cluster_name>.<base_domain>. | A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. |
etcd | etcd-<index>.<cluster_name>.<base_domain>. | OpenShift Container Platform requires DNS A/AAAA records for each etcd instance to point to the control plane machines that host the instances. The etcd instances are differentiated by <index> values, which start with 0 and end with n-1, where n is the number of control plane machines in the cluster. |
etcd | _etcd-server-ssl._tcp.<cluster_name>.<base_domain>. | For each control plane machine, OpenShift Container Platform also requires an SRV DNS record for the etcd server on that machine with priority 0, weight 10, and port 2380. A cluster that uses three control plane machines requires the following records. |

# _service._proto.name.                              TTL   class SRV priority weight port target.
_etcd-server-ssl._tcp.<cluster_name>.<base_domain>.  86400 IN    SRV 0        10     2380 etcd-0.<cluster_name>.<base_domain>
_etcd-server-ssl._tcp.<cluster_name>.<base_domain>.  86400 IN    SRV 0        10     2380 etcd-1.<cluster_name>.<base_domain>
_etcd-server-ssl._tcp.<cluster_name>.<base_domain>.  86400 IN    SRV 0        10     2380 etcd-2.<cluster_name>.<base_domain>
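As an illustration, a minimal sketch of the corresponding entries in a BIND zone file, assuming a cluster name of ocp4, a base domain of example.com, and hypothetical IP addresses for the load balancers and control plane machines:

; API records point to the API load balancer
api.ocp4.example.com.      IN A 192.168.100.5
api-int.ocp4.example.com.  IN A 192.168.100.5
; wildcard for application routes points to the Ingress load balancer
*.apps.ocp4.example.com.   IN A 192.168.100.6
; one A record per etcd instance, pointing at the control plane machines
etcd-0.ocp4.example.com.   IN A 192.168.100.11
etcd-1.ocp4.example.com.   IN A 192.168.100.12
etcd-2.ocp4.example.com.   IN A 192.168.100.13
; SRV records for the etcd servers, as listed above
_etcd-server-ssl._tcp.ocp4.example.com. 86400 IN SRV 0 10 2380 etcd-0.ocp4.example.com.
_etcd-server-ssl._tcp.ocp4.example.com. 86400 IN SRV 0 10 2380 etcd-1.ocp4.example.com.
_etcd-server-ssl._tcp.ocp4.example.com. 86400 IN SRV 0 10 2380 etcd-2.ocp4.example.com.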
1.3.8. Generating an SSH private key and adding it to the agent
If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent
and to the installation program.
In a production environment, you require disaster recovery and debugging.
You can use this key to SSH into the master nodes as the user core
. When you deploy the cluster, the key is added to the core
user’s ~/.ssh/authorized_keys
list.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t rsa -b 4096 -N '' \ -f <path>/<file_name> 1
- 1
- Specify the path and file name, such as
~/.ssh/id_rsa
, of the SSH key. Do not specify an existing SSH key, as it will be overwritten.
Running this command generates an SSH key that does not require a password in the location that you specified.
Start the
ssh-agent
process as a background task:$ eval "$(ssh-agent -s)" Agent pid 31874
Add your SSH private key to the
ssh-agent
:$ ssh-add <path>/<file_name> 1 Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
- 1
- Specify the path and file name for your SSH private key, such as
~/.ssh/id_rsa
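Optionally, you can confirm that the agent now holds the key by listing the loaded identities:

$ ssh-add -L

The output should include the public portion of the key that you added.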
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster’s machines.
1.3.9. Manually creating the installation configuration file
For installations of OpenShift Container Platform that use user-provisioned infrastructure, you must manually generate your installation configuration file.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the access token for your cluster.
-
Obtain the
imageContentSources
section from the output of the command to mirror the repository. - Obtain the contents of the certificate for your mirror registry.
Procedure
Create an installation directory to store your required installation assets in:
$ mkdir <installation_directory>
Important: You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Customize the following install-config.yaml file template and save it in the <installation_directory>.

Note: You must name this configuration file install-config.yaml.

- Unless you use a registry that RHCOS trusts by default, such as docker.io, you must provide the contents of the certificate for your mirror repository in the additionalTrustBundle section. In most cases, you must provide the certificate for your mirror.
- You must include the imageContentSources section from the output of the command to mirror the repository.
Back up the install-config.yaml file so that you can use it to install multiple clusters.

Important: The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
1.3.9.1. Sample install-config.yaml file for VMware vSphere
You can customize the install-config.yaml
file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
apiVersion: v1
baseDomain: example.com 1
compute:
- hyperthreading: Enabled 2 3
  name: worker
  replicas: 0 4
controlPlane:
  hyperthreading: Enabled 5 6
  name: master
  replicas: 3 7
metadata:
  name: test 8
platform:
  vsphere:
    vcenter: your.vcenter.server 9
    username: username 10
    password: password 11
    datacenter: datacenter 12
    defaultDatastore: datastore 13
fips: false 14
pullSecret: '{"auths":{"<mirror_registry>": {"auth": "<credentials>","email": "you@example.com"}}}' 15
sshKey: 'ssh-ed25519 AAAA...' 16
additionalTrustBundle: | 17
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
imageContentSources: 18
- mirrors:
  - <mirror_registry>/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_registry>/<repo_name>/release
  source: registry.svc.ci.openshift.org/ocp/release
- 1
- The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name.
- 2 5
- The
controlPlane
section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of thecompute
section must begin with a hyphen,-
, and the first line of thecontrolPlane
section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. - 3 6
- Whether to enable or disable simultaneous multithreading, or
hyperthreading
. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value toDisabled
. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading.
- 4
- You must set the value of the
replicas
parameter to0
. This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. - 7
- The number of control plane machines that you add to the cluster. Because the cluster uses this value as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy.
- 8
- The cluster name that you specified in your DNS records.
- 9
- The fully-qualified host name or IP address of the vCenter server.
- 10
- The name of the user for accessing the server. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere.
- 11
- The password associated with the vSphere user.
- 12
- The vSphere datacenter.
- 13
- The default vSphere datastore to use.
- 14
- Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
- 15
- For
<mirror_registry>
, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For exampleregistry.example.com
orregistry.example.com:5000
. For<credentials>
, specify the base64-encoded user name and password for your mirror registry. - 16
- The public portion of the default SSH key for the
core
user in Red Hat Enterprise Linux CoreOS (RHCOS).

Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your
ssh-agent
process uses. - 17
- Provide the contents of the certificate file that you used for your mirror registry.
- 18
- Provide the
imageContentSources
section from the output of the command to mirror the repository.
1.3.9.2. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml
file.
Prerequisites
-
An existing
install-config.yaml
file. Review the sites that your cluster requires access to and determine whether any need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. Add sites to the Proxy object’s
spec.noProxy
field to bypass the proxy if necessary.

Note: The Proxy object's
status.noProxy
field is populated by default with the instance metadata endpoint (169.254.169.254
) and with the values of thenetworking.machineCIDR
,networking.clusterNetwork.cidr
, andnetworking.serviceNetwork[]
fields from your installation configuration.
Procedure
Edit your
install-config.yaml
file and add the proxy settings. For example:apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: http://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ...
- 1
- A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be
http
. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify anhttpProxy
value. - 2
- A proxy URL to use for creating HTTPS connections outside the cluster. If this field is not specified, then
httpProxy
is used for both HTTP and HTTPS connections. The URL scheme must behttp
;https
is currently not supported. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must not specify anhttpsProxy
value. - 3
- A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with
.
to include all subdomains of that domain. Use*
to bypass proxy for all destinations. - 4
- If provided, the installation program generates a ConfigMap that is named
user-ca-bundle
in theopenshift-config
namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates atrusted-ca-bundle
ConfigMap that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this ConfigMap is referenced in the Proxy object’strustedCA
field. TheadditionalTrustBundle
field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle. If you use an MITM transparent proxy network that does not require additional proxy configuration but requires additional CAs, you must provide the MITM CA certificate.
Note: The installation program does not support the proxy readinessEndpoints field.

- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster
that uses the proxy settings in the provided install-config.yaml
file. If no proxy settings are provided, a cluster
Proxy object is still created, but it will have a nil spec
.
Only the Proxy object named cluster
is supported, and no additional proxies can be created.
1.3.10. Creating the Kubernetes manifest and Ignition config files
Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.
The Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must complete your cluster installation and keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.
Prerequisites
- Obtain the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host.
-
Create the
install-config.yaml
installation configuration file.
Procedure
Generate the Kubernetes manifests for the cluster:
$ ./openshift-install create manifests --dir=<installation_directory> 1
INFO Consuming Install Config from target directory
WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
- 1
- For
<installation_directory>
, specify the installation directory that contains theinstall-config.yaml
file you created.
Because you create your own compute machines later in the installation process, you can safely ignore this warning.
Modify the
<installation_directory>/manifests/cluster-scheduler-02-config.yml
Kubernetes manifest file to prevent Pods from being scheduled on the control plane machines:-
Open the
<installation_directory>/manifests/cluster-scheduler-02-config.yml
file. -
Locate the
mastersSchedulable
parameter and set its value toFalse
. - Save and exit the file.
Note: Currently, due to a Kubernetes limitation, router Pods running on control plane machines will not be reachable by the ingress load balancer. This step might not be required in a future minor version of OpenShift Container Platform.
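After you set the parameter, the relevant part of the manifest looks similar to the following sketch (the value appears in lowercase in the YAML file); your generated file might contain additional fields:

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: false
  policy:
    name: ""
status: {}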
Obtain the Ignition config files:
$ ./openshift-install create ignition-configs --dir=<installation_directory> 1
- 1
- For
<installation_directory>
, specify the same installation directory.
The following files are generated in the directory:
.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
1.3.11. Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere
Before you install a cluster that contains user-provisioned infrastructure on VMware vSphere, you must create RHCOS machines on vSphere hosts for it to use.
Prerequisites
- Obtain the Ignition config files for your cluster.
- Have access to an HTTP server that you can access from your computer and that the machines that you create can access.
- Create a vSphere cluster.
Procedure
Upload the bootstrap Ignition config file, which is named
<installation_directory>/bootstrap.ign
, that the installation program created to your HTTP server. Note the URL of this file.You must host the bootstrap Ignition config file because it is too large to fit in a vApp property.
Save the following secondary Ignition config file for your bootstrap node to your computer as
<installation_directory>/append-bootstrap.ign
.{ "ignition": { "config": { "append": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "2.1.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} }
- 1
- Specify the URL of the bootstrap Ignition config file that you hosted.
When you create the Virtual Machine (VM) for the bootstrap machine, you use this Ignition config file.
Convert the master, worker, and secondary bootstrap Ignition config files to Base64 encoding.
For example, if you use a Linux operating system, you can use the
base64
command to encode the files.$ base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 $ base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 $ base64 -w0 <installation_directory>/append-bootstrap.ign > <installation_directory>/append-bootstrap.64
Obtain the RHCOS OVA image from the Product Downloads page on the Red Hat customer portal or the RHCOS image mirror page.
Important: The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available.
The file name contains the OpenShift Container Platform version number in the format rhcos-<version>-vmware.<architecture>.ova.

In the vSphere Client, create a folder in your datacenter to store your VMs.
- Click the VMs and Templates view.
- Right-click the name of your datacenter.
- Click New Folder → New VM and Template Folder.
-
In the window that is displayed, enter the folder name. The folder name must match the cluster name that you specified in the
install-config.yaml
file.
In the vSphere Client, create a template for the OVA image.
Note: In the following steps, you use the same template for all of your cluster machines and provide the location for the Ignition config file for that machine type when you provision the VMs.
- From the Hosts and Clusters tab, right-click your cluster’s name and click Deploy OVF Template.
- On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded.
- On the Select a name and folder tab, set a Virtual machine name, such as RHCOS, click the name of your vSphere cluster, and select the folder you created in the previous step.
- On the Select a compute resource tab, click the name of your vSphere cluster.
On the Select storage tab, configure the storage options for your VM.
- Select Thin Provision or Thick Provision, based on your storage preferences.
-
Select the datastore that you specified in your
install-config.yaml
file.
- On the Select network tab, specify the network that you configured for the cluster, if available.
- If you plan to use the same template for all cluster machine types, do not specify values on the Customize template tab.
After the template deploys, deploy a VM for a machine in the cluster.
- Right-click the template’s name and click Clone → Clone to Virtual Machine.
-
On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as
control-plane-0
orcompute-1
. - On the Select a name and folder tab, select the name of the folder that you created for the cluster.
- On the Select a compute resource tab, select the name of a host in your datacenter.
- Optional: On the Select storage tab, customize the storage options.
- On the Select clone options tab, select Customize this virtual machine’s hardware.
On the Customize hardware tab, click VM Options → Advanced.
- Optional: In the event of cluster performance issues, from the Latency Sensitivity list, select High.
Click Edit Configuration, and on the Configuration Parameters window, click Add Configuration Params. Define the following parameter names and values:
-
guestinfo.ignition.config.data
: Paste the contents of the base64-encoded Ignition config file for this machine type. -
guestinfo.ignition.config.data.encoding
: Specifybase64
. -
disk.EnableUUID
: SpecifyTRUE
.
-
Alternatively, prior to powering on the virtual machine, you can add these settings via vApp properties:
- Navigate to a virtual machine from the vCenter Server inventory.
- On the Configure tab, expand Settings and select vApp options.
- Scroll down and under Properties apply the configurations from above.
- In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type.
- Complete the configuration and power on the VM.
Create the rest of the machines for your cluster by following the preceding steps for each machine.
Important: You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster.
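If you prefer to script these settings instead of using the vSphere Client, the following sketch uses the govc CLI to set the same configuration parameters on a cloned VM. govc is not required by this procedure, and the vCenter credentials, inventory path, and file names are hypothetical:

$ export GOVC_URL='your.vcenter.server' GOVC_USERNAME='username' GOVC_PASSWORD='password' GOVC_INSECURE=1
$ govc vm.change -vm '/datacenter/vm/<cluster_name>/control-plane-0' \
    -e "guestinfo.ignition.config.data=$(cat <installation_directory>/master.64)" \
    -e "guestinfo.ignition.config.data.encoding=base64" \
    -e "disk.EnableUUID=TRUE"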
1.3.12. Creating the cluster
To create the OpenShift Container Platform cluster, you wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program.
Prerequisites
- Create the required infrastructure for the cluster.
- You obtained the installation program and generated the Ignition config files for your cluster.
- You used the Ignition config files to create RHCOS machines for your cluster.
Procedure
Monitor the bootstrap process:
$ ./openshift-install --dir=<installation_directory> wait-for bootstrap-complete \ 1
    --log-level=info 2
INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com...
INFO API v1.16.2 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources
The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines.
After the bootstrap process is complete, remove the bootstrap machine from the load balancer.

Important: You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the machine itself.
1.3.13. Logging in to the cluster
You can log in to your cluster as a default system user by exporting the cluster kubeconfig
file. The kubeconfig
file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- Deploy an OpenShift Container Platform cluster.
-
Install the
oc
CLI.
Procedure
Export the
kubeadmin
credentials:$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
- 1
- For
<installation_directory>
, specify the path to the directory that you stored the installation files in.
Verify you can run
oc
commands successfully using the exported configuration:$ oc whoami system:admin
1.3.14. Approving the CSRs for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes
NAME       STATUS     ROLES    AGE   VERSION
master-0   Ready      master   63m   v1.16.2
master-1   Ready      master   63m   v1.16.2
master-2   Ready      master   64m   v1.16.2
worker-0   NotReady   worker   76s   v1.16.2
worker-1   NotReady   worker   70s   v1.16.2
The output lists all of the machines that you created.
Review the pending certificate signing requests (CSRs) and ensure that you see a client and server request with
Pending
orApproved
status for each machine that you added to the cluster:$ oc get csr NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 1 csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending 2 csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in
Pending
status, approve the CSRs for your cluster machines:

Note: Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager. You must implement a method of automatically approving the kubelet serving certificate requests.

To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1
- 1
<csr_name>
is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
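Because additional CSRs can continue to appear while machines join the cluster, it can help to watch for pending requests until the list stays empty; a simple sketch:

$ watch -n5 'oc get csr | grep -i pending'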
1.3.15. Initial Operator configuration
After the control plane initializes, you must immediately configure some Operators so that they all become available.
Prerequisites
- Your control plane has initialized.
Procedure
Watch the cluster components come online:
$ watch -n5 oc get clusteroperators
NAME                                 VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                       4.3.0     True        False         False      69s
cloud-credential                     4.3.0     True        False         False      12m
cluster-autoscaler                   4.3.0     True        False         False      11m
console                              4.3.0     True        False         False      46s
dns                                  4.3.0     True        False         False      11m
image-registry                       4.3.0     True        False         False      5m26s
ingress                              4.3.0     True        False         False      5m36s
kube-apiserver                       4.3.0     True        False         False      8m53s
kube-controller-manager              4.3.0     True        False         False      7m24s
kube-scheduler                       4.3.0     True        False         False      12m
machine-api                          4.3.0     True        False         False      12m
machine-config                       4.3.0     True        False         False      7m36s
marketplace                          4.3.0     True        False         False      7m54m
monitoring                           4.3.0     True        False         False      7h54s
network                              4.3.0     True        False         False      5m9s
node-tuning                          4.3.0     True        False         False      11m
openshift-apiserver                  4.3.0     True        False         False      11m
openshift-controller-manager         4.3.0     True        False         False      5m943s
openshift-samples                    4.3.0     True        False         False      3m55s
operator-lifecycle-manager           4.3.0     True        False         False      11m
operator-lifecycle-manager-catalog   4.3.0     True        False         False      11m
service-ca                           4.3.0     True        False         False      11m
service-catalog-apiserver            4.3.0     True        False         False      5m26s
service-catalog-controller-manager   4.3.0     True        False         False      5m25s
storage                              4.3.0     True        False         False      5m30s
- Configure the Operators that are not available.
1.3.15.1. Image registry storage configuration
The image-registry
Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so the Registry Operator is made available.
Instructions for both configuring a PersistentVolume, which is required for production clusters, and for configuring an empty directory as the storage location, which is available for only non-production clusters, are shown.
1.3.15.1.1. Configuring registry storage for VMware vSphere
As a cluster administrator, following installation you must configure your registry to use storage.
Prerequisites
- Cluster administrator permissions.
- A cluster on VMware vSphere.
Provision persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access mode.
Important: vSphere volumes do not support the ReadWriteMany access mode. You must use a different storage backend, such as NFS, to configure the registry storage.
, to configure the registry storage.- Must have "100Gi" capacity.
Procedure
To configure your registry to use storage, change the
spec.storage.pvc
in theconfigs.imageregistry/cluster
resource.

Note: When using shared storage such as NFS, it is strongly recommended to use the supplementalGroups strategy, which dictates the allowable supplemental groups for the Security Context, rather than the fsGroup ID. Refer to the NFS Group IDs documentation for details.

Verify that you do not have a registry Pod:
$ oc get pod -n openshift-image-registry
Note:

- If the storage type is emptyDir, the replica number cannot be greater than 1.
- If the storage type is NFS, you must enable the no_wdelay and root_squash mount options. For example:

# cat /etc/exports
/mnt/data *(rw,sync,no_wdelay,root_squash,insecure,fsid=0)
sh-4.3# exportfs -rv
exporting *:/mnt/data
Check the registry configuration:
$ oc edit configs.imageregistry.operator.openshift.io

storage:
  pvc:
    claim:
Leave the
claim
field blank to allow the automatic creation of animage-registry-storage
PVC.Optional: Add a new storage class to a PV:
Create the PV:
$ oc create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: image-registry-pv
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 100Gi
  nfs:
    path: /registry
    server: 172.16.231.181
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs01
$ oc get pv
Create the PVC:
$ oc create -n openshift-image-registry -f -
apiVersion: "v1" kind: "PersistentVolumeClaim" metadata: name: "image-registry-pvc" spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: nfs01 volumeMode: Filesystem
$ oc get pvc -n openshift-image-registry
Finally, add the name of your PVC:
$ oc edit configs.imageregistry.operator.openshift.io -o yaml
storage:
  pvc:
    claim: image-registry-pvc 1
- 1
- Creating a custom PVC allows you to leave the
claim
field blank for default automatic creation of animage-registry-storage
PVC.
Check the
clusteroperator
status:$ oc get clusteroperator image-registry
1.3.15.1.2. Configuring storage for the image registry in non-production clusters
You must configure storage for the image registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.
Procedure
To set the image registry storage to an empty directory:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
Warning: Configure this option for only non-production clusters.
If you run this command before the Image Registry Operator initializes its components, the
oc patch
command fails with the following error:

Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found
Wait a few minutes and run the command again.
1.3.16. Completing installation on user-provisioned infrastructure
After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide.
Prerequisites
- Your control plane has initialized.
- You have completed the initial Operator configuration.
Procedure
Confirm that all the cluster components are online:
$ watch -n5 oc get clusteroperators
NAME                                 VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                       4.3.0     True        False         False      10m
cloud-credential                     4.3.0     True        False         False      22m
cluster-autoscaler                   4.3.0     True        False         False      21m
console                              4.3.0     True        False         False      10m
dns                                  4.3.0     True        False         False      21m
image-registry                       4.3.0     True        False         False      16m
ingress                              4.3.0     True        False         False      16m
kube-apiserver                       4.3.0     True        False         False      19m
kube-controller-manager              4.3.0     True        False         False      18m
kube-scheduler                       4.3.0     True        False         False      22m
machine-api                          4.3.0     True        False         False      22m
machine-config                       4.3.0     True        False         False      18m
marketplace                          4.3.0     True        False         False      18m
monitoring                           4.3.0     True        False         False      18m
network                              4.3.0     True        False         False      16m
node-tuning                          4.3.0     True        False         False      21m
openshift-apiserver                  4.3.0     True        False         False      21m
openshift-controller-manager         4.3.0     True        False         False      17m
openshift-samples                    4.3.0     True        False         False      14m
operator-lifecycle-manager           4.3.0     True        False         False      21m
operator-lifecycle-manager-catalog   4.3.0     True        False         False      21m
service-ca                           4.3.0     True        False         False      21m
service-catalog-apiserver            4.3.0     True        False         False      16m
service-catalog-controller-manager   4.3.0     True        False         False      16m
storage                              4.3.0     True        False         False      16m
When all of the cluster Operators are
AVAILABLE
, you can complete the installation.Monitor for cluster completion:
$ ./openshift-install --dir=<installation_directory> wait-for install-complete 1
INFO Waiting up to 30m0s for the cluster to initialize...
- 1
- For
<installation_directory>
, specify the path to the directory that you stored the installation files in.
The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server.
Important: The Ignition config files that the installation program generates contain certificates that expire after 24 hours. You must keep the cluster running for 24 hours in a non-degraded state to ensure that the first certificate rotation has finished.
Confirm that the Kubernetes API server is communicating with the Pods.
To view a list of all Pods, use the following command:
$ oc get pods --all-namespaces
NAMESPACE                           NAME                                            READY   STATUS    RESTARTS   AGE
openshift-apiserver-operator        openshift-apiserver-operator-85cb746d55-zqhs8   1/1     Running   1          9m
openshift-apiserver                 apiserver-67b9g                                 1/1     Running   0          3m
openshift-apiserver                 apiserver-ljcmx                                 1/1     Running   0          1m
openshift-apiserver                 apiserver-z25h4                                 1/1     Running   0          2m
openshift-authentication-operator   authentication-operator-69d5d8bf84-vh2n8        1/1     Running   0          5m
...
View the logs for a Pod that is listed in the output of the previous command by using the following command:
$ oc logs <pod_name> -n <namespace> 1
- 1
- Specify the Pod name and namespace, as shown in the output of the previous command.
If the Pod logs display, the Kubernetes API server can communicate with the cluster machines.
- Register your cluster on the Cluster registration page.
1.3.17. Next steps
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.