Installing
Installing and configuring OpenShift Container Platform clusters
Abstract
Chapter 1. OpenShift Container Platform installation overview
1.1. OpenShift Container Platform installation overview
The OpenShift Container Platform installation program offers you flexibility. You can use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains, or you can deploy a cluster on infrastructure that you prepare and maintain.
These two basic types of OpenShift Container Platform clusters are frequently called installer-provisioned infrastructure clusters and user-provisioned infrastructure clusters.
Both types of clusters have the following characteristics:
- Highly available infrastructure with no single points of failure is available by default
- Administrators maintain control over what updates are applied and when
You use the same installation program to deploy both types of clusters. The main assets generated by the installation program are the Ignition config files for the bootstrap, master, and worker machines. With these three configurations and correctly configured infrastructure, you can start an OpenShift Container Platform cluster.
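As an illustration, generating these assets with the installation program and listing the result might look like the following sketch; the directory name is a placeholder:

$ openshift-install create ignition-configs --dir=<installation_directory>
$ ls <installation_directory>
auth  bootstrap.ign  master.ign  metadata.json  worker.ign

The bootstrap.ign, master.ign, and worker.ign files are the per-role Ignition configs described above.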
The OpenShift Container Platform installation program uses a set of targets and dependencies to manage cluster installation. The installation program has a set of targets that it must achieve, and each target has a set of dependencies. Because each target is only concerned with its own dependencies, the installation program can act to achieve multiple targets in parallel. The ultimate target is a running cluster. By meeting dependencies instead of running commands, the installation program is able to recognize and use existing components instead of running the commands to create them again.
The following diagram shows a subset of the installation targets and dependencies:
Figure 1.1. OpenShift Container Platform installation targets and dependencies
After installation, each cluster machine uses Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. RHCOS is the immutable container host version of Red Hat Enterprise Linux (RHEL) and features a RHEL kernel with SELinux enabled by default. It includes the kubelet, which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes.
Every control plane machine in an OpenShift Container Platform 4.8 cluster must use RHCOS, which includes a critical first-boot provisioning tool called Ignition. This tool enables the cluster to configure the machines. Operating system updates are delivered as an Atomic OSTree repository that is embedded in a container image that is rolled out across the cluster by an Operator. Actual operating system changes are made in-place on each machine as an atomic operation by using rpm-ostree. Together, these technologies enable OpenShift Container Platform to manage the operating system like it manages any other application on the cluster, via in-place upgrades that keep the entire platform up-to-date. These in-place updates can reduce the burden on operations teams.
If you use RHCOS as the operating system for all cluster machines, the cluster manages all aspects of its components and machines, including the operating system. Because of this, only the installation program and the Machine Config Operator can change machines. The installation program uses Ignition config files to set the exact state of each machine, and the Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation.
1.1.1. Installation process
When you install an OpenShift Container Platform cluster, you download the installation program from the appropriate Infrastructure Provider page on the OpenShift Cluster Manager site. This site manages:
- REST API for accounts
- Registry tokens, which are the pull secrets that you use to obtain the required components
- Cluster registration, which associates the cluster identity to your Red Hat account to facilitate the gathering of usage metrics
In OpenShift Container Platform 4.8, the installation program is a Go binary file that performs a series of file transformations on a set of assets. The way you interact with the installation program differs depending on your installation type.
- For clusters with installer-provisioned infrastructure, you delegate the infrastructure bootstrapping and provisioning to the installation program instead of doing it yourself. The installation program creates all of the networking, machines, and operating systems that are required to support the cluster.
- If you provision and manage the infrastructure for your cluster, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines.
You use three sets of files during installation: an installation configuration file that is named install-config.yaml, Kubernetes manifests, and Ignition config files for your machine types.
It is possible to modify the Kubernetes manifests and the Ignition config files that control the underlying RHCOS operating system during installation. However, no validation is available to confirm the suitability of any modifications that you make to these objects. If you modify these objects, you might render your cluster non-functional. Because of this risk, modifying the Kubernetes manifests and Ignition config files is not supported unless you are following documented procedures or are instructed to do so by Red Hat support.
The installation configuration file is transformed into Kubernetes manifests, and then the manifests are wrapped into Ignition config files. The installation program uses these Ignition config files to create the cluster.
The installation configuration files are all pruned when you run the installation program, so be sure to back up all configuration files that you want to use again.
You cannot modify the parameters that you set during installation, but you can modify many cluster attributes after installation.
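For reference, the installation program exposes each stage of this transformation as a separate target, so you can generate and inspect the intermediate assets before they are consumed; a sketch of the sequence, with a placeholder directory:

$ openshift-install create install-config --dir=<installation_directory>
$ openshift-install create manifests --dir=<installation_directory>
$ openshift-install create ignition-configs --dir=<installation_directory>

Each command consumes and prunes the assets that the previous one left in the directory, which is why you must back up install-config.yaml before continuing if you want to reuse it.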
The installation process with installer-provisioned infrastructure
The default installation type uses installer-provisioned infrastructure. By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters. You can also customize the installation process to support advanced infrastructure scenarios. The installation program provisions the underlying infrastructure for the cluster.
You can install either a standard cluster or a customized cluster. With a standard cluster, you provide minimum details that are required to install the cluster. With a customized cluster, you can specify more details about the platform, such as the number of machines that the control plane uses, the type of virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network.
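As a sketch of the customizations just described, the relevant install-config.yaml fields might look like the following; all values are placeholders for illustration:

apiVersion: v1
baseDomain: example.com
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 3
networking:
  serviceNetwork:
  - 172.30.0.0/16

The per-platform installation procedures document which fields each platform supports.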
If possible, use the installer-provisioned infrastructure installation to avoid having to provision and maintain the cluster infrastructure. In all other environments, you use the installation program to generate the assets that you require to provision your cluster infrastructure.
With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied.
The installation process with user-provisioned infrastructure
You can also install OpenShift Container Platform on infrastructure that you provide. You use the installation program to generate the assets that you require to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided.
If you do not use infrastructure that the installation program provisioned, you must manage and maintain the cluster resources yourself, including:
- The underlying infrastructure for the control plane and compute machines that make up the cluster
- Load balancers
- Cluster networking, including the DNS records and required subnets
- Storage for the cluster infrastructure and applications
If your cluster uses user-provisioned infrastructure, you have the option of adding RHEL compute machines to your cluster.
Installation process details
Because each machine in the cluster requires information about the cluster when it is provisioned, OpenShift Container Platform uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. It boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines (also known as the master machines) that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process:
Figure 1.2. Creating the bootstrap, control plane, and compute machines
After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap process to initialize the cluster, but if you provision the infrastructure for your cluster, you must complete many of the steps manually.
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Example commands for approving these CSRs follow this note.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
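If you hit the expired-certificate recovery case described in the note above, the pending CSRs are approved with standard oc commands; a minimal sketch:

$ oc get csr
$ oc adm certificate approve <csr_name>

Repeat the approval for each node-bootstrapper CSR that is listed in the Pending condition.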
Bootstrapping a cluster involves the following steps:
- The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. (Requires manual intervention if you provision the infrastructure)
- The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane.
- The control plane machines fetch the remote resources from the bootstrap machine and finish booting. (Requires manual intervention if you provision the infrastructure)
- The temporary control plane schedules the production control plane to the production control plane machines.
- The Cluster Version Operator (CVO) comes online and installs the etcd Operator. The etcd Operator scales up etcd on all control plane nodes.
- The temporary control plane shuts down and passes control to the production control plane.
- The bootstrap machine injects OpenShift Container Platform components into the production control plane.
- The installation program shuts down the bootstrap machine. (Requires manual intervention if you provision the infrastructure)
- The control plane sets up the compute nodes.
- The control plane installs additional services in the form of a set of Operators.
The result of this bootstrapping process is a running OpenShift Container Platform cluster. The cluster then downloads and configures remaining components needed for the day-to-day operation, including the creation of compute machines in supported environments.
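You can follow the bootstrap portion of this process from the installation host; for example:

$ openshift-install wait-for bootstrap-complete --dir=<installation_directory> --log-level=info

When the command reports that bootstrapping is complete, it is safe to remove the bootstrap resources; on user-provisioned infrastructure, this is the point at which you tear down the bootstrap machine yourself.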
1.1.2. Verifying node state after installation
The OpenShift Container Platform installation completes when the following installation health checks are successful:
- The provisioning host can access the OpenShift Container Platform web console.
- All control plane nodes are ready.
- All cluster Operators are available.
After the installation completes, the specific cluster Operators responsible for the worker nodes continuously attempt to provision all worker nodes. It can take some time before all worker nodes report as READY. For installations on bare metal, wait a minimum of 60 minutes before troubleshooting a worker node. For installations on all other platforms, wait a minimum of 40 minutes before troubleshooting a worker node. A DEGRADED state for the cluster Operators responsible for the worker nodes depends on the Operators' own resources and not on the state of the nodes.
After your installation completes, you can continue to monitor the condition of the nodes in your cluster using the following steps.
Prerequisites
- The installation program resolves successfully in the terminal.
Procedure
Show the status of all worker nodes:
$ oc get nodes

Example output

NAME                           STATUS   ROLES    AGE   VERSION
example-compute1.example.com   Ready    worker   13m   v1.21.6+bb8d50a
example-compute2.example.com   Ready    worker   13m   v1.21.6+bb8d50a
example-compute4.example.com   Ready    worker   14m   v1.21.6+bb8d50a
example-control1.example.com   Ready    master   52m   v1.21.6+bb8d50a
example-control2.example.com   Ready    master   55m   v1.21.6+bb8d50a
example-control3.example.com   Ready    master   55m   v1.21.6+bb8d50a

Show the phase of all worker machine nodes:

$ oc get machines -A

Example output

NAMESPACE               NAME                           PHASE     TYPE   REGION   ZONE   AGE
openshift-machine-api   example-zbbt6-master-0         Running                          95m
openshift-machine-api   example-zbbt6-master-1         Running                          95m
openshift-machine-api   example-zbbt6-master-2         Running                          95m
openshift-machine-api   example-zbbt6-worker-0-25bhp   Running                          49m
openshift-machine-api   example-zbbt6-worker-0-8b4c2   Running                          49m
openshift-machine-api   example-zbbt6-worker-0-jkbqt   Running                          49m
openshift-machine-api   example-zbbt6-worker-0-qrl5b   Running                          49m
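To confirm the remaining health check, that all cluster Operators are available, you can also list the Operators; for example:

$ oc get clusteroperators

Each Operator should eventually report AVAILABLE as True and DEGRADED as False.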
Installation scope
The scope of the OpenShift Container Platform installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes.
1.2. Supported platforms for OpenShift Container Platform clusters
In OpenShift Container Platform 4.8, you can install a cluster that uses installer-provisioned infrastructure on the following platforms:
- Amazon Web Services (AWS)
- Google Cloud Platform (GCP)
- Microsoft Azure
- Red Hat OpenStack Platform (RHOSP) versions 13 and 16
- The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix.
- Red Hat Virtualization (RHV)
- VMware vSphere
- VMware Cloud (VMC) on AWS
- Bare metal
For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat.
After installation, the following changes are not supported:
- Mixing cloud provider platforms
- Mixing cloud provider components, such as using a persistent storage framework from a differing platform than what the cluster is installed on
In OpenShift Container Platform 4.8, you can install a cluster that uses user-provisioned infrastructure on the following platforms:
- AWS
- Azure
- GCP
- RHOSP
- RHV
- VMware vSphere
- VMware Cloud on AWS
- Bare metal
- IBM Z or LinuxONE
- IBM Power Systems
Depending on the supported cases for the platform, installations on user-provisioned infrastructure allow you to run machines with full internet access, place your cluster behind a proxy, or perform a restricted network installation. In a restricted network installation, you can download the images that are required to install a cluster, place them in a mirror registry, and use that data to install your cluster. While you require internet access to pull images for platform containers, with a restricted network installation on vSphere or bare metal infrastructure, your cluster machines do not require direct internet access.
The OpenShift Container Platform 4.x Tested Integrations page contains details about integration testing for different platforms.
Chapter 2. Selecting a cluster installation method and preparing it for users
Before you install OpenShift Container Platform, decide what kind of installation process to follow and make sure that you have all of the required resources to prepare the cluster for users.
2.1. Selecting a cluster installation type
Before you install an OpenShift Container Platform cluster, you need to select the best installation instructions to follow. Think about your answers to the following questions to select the best option.
2.1.1. Do you want to install and manage an OpenShift Container Platform cluster yourself?
If you want to install and manage OpenShift Container Platform yourself, you can install it on the following platforms:
- Amazon Web Services (AWS)
- Microsoft Azure
- Google Cloud Platform (GCP)
- Red Hat OpenStack Platform (RHOSP)
- Red Hat Virtualization (RHV)
- IBM Z and LinuxONE
- IBM Z and LinuxONE with Red Hat Enterprise Linux (RHEL) KVM
- IBM Power
- VMware vSphere
- VMware Cloud (VMC) on AWS
- Bare metal or other platform agnostic infrastructure
You can deploy an OpenShift Container Platform 4 cluster to both on-premises hardware and cloud hosting services, but all of the machines in a cluster must be in the same data center or cloud hosting service.
If you want to use OpenShift Container Platform but do not want to manage the cluster yourself, you have several managed service options. If you want a cluster that is fully managed by Red Hat, you can use OpenShift Dedicated or OpenShift Online. You can also use OpenShift as a managed service on Azure, AWS, IBM Cloud, or Google Cloud. For more information about managed services, see the OpenShift Products page. If you install an OpenShift Container Platform cluster by using a cloud virtual machine as a virtual bare metal platform, the corresponding cloud-based storage is not supported.
2.1.2. Have you used OpenShift Container Platform 3 and want to use OpenShift Container Platform 4?
If you used OpenShift Container Platform 3 and want to try OpenShift Container Platform 4, you need to understand how different OpenShift Container Platform 4 is. OpenShift Container Platform 4 weaves the Operators that package, deploy, and manage Kubernetes applications and the operating system that the platform runs on, Red Hat Enterprise Linux CoreOS (RHCOS), together seamlessly. Instead of deploying machines and configuring their operating systems so that you can install OpenShift Container Platform on them, the RHCOS operating system is an integral part of the OpenShift Container Platform cluster. Deploying the operating system for the cluster machines is part of the installation process for OpenShift Container Platform. See Comparing OpenShift Container Platform 3 and OpenShift Container Platform 4.
Because you need to provision machines as part of the OpenShift Container Platform cluster installation process, you cannot upgrade an OpenShift Container Platform 3 cluster to OpenShift Container Platform 4. Instead, you must create a new OpenShift Container Platform 4 cluster and migrate your OpenShift Container Platform 3 workloads to it. For more information about migrating, see OpenShift Migration Best Practices. Because you must migrate to OpenShift Container Platform 4, you can use any type of production cluster installation process to create your new cluster.
2.1.3. Do you want to use existing components in your cluster?
Because the operating system is integral to OpenShift Container Platform, it is easier to let the installation program for OpenShift Container Platform stand up all of the infrastructure. These are called installer-provisioned infrastructure installations. In this type of installation, you can provide some existing infrastructure to the cluster, but the installation program deploys all of the machines that your cluster initially needs.
You can deploy an installer-provisioned infrastructure cluster without specifying any customizations to the cluster or its underlying machines to AWS, Azure, GCP, or VMC on AWS. These installation methods are the fastest way to deploy a production-capable OpenShift Container Platform cluster.
If you need to perform basic configuration for your installer-provisioned infrastructure cluster, such as the instance type for the cluster machines, you can customize an installation for AWS, Azure, GCP, or VMC on AWS.
For installer-provisioned infrastructure installations, you can use an existing VPC in AWS, vNet in Azure, or VPC in GCP. You can also reuse part of your networking infrastructure so that your cluster in AWS, Azure, GCP, or VMC on AWS can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. If you have existing accounts and credentials on these clouds, you can re-use them, but you might need to modify the accounts to have the required permissions to install OpenShift Container Platform clusters on them.
You can use the installer-provisioned infrastructure method to create appropriate machine instances on your hardware for RHOSP, RHOSP with Kuryr, RHV, vSphere, and bare metal.
If you want to reuse extensive cloud infrastructure, you can complete a user-provisioned infrastructure installation. With these installations, you manually deploy the machines that your cluster requires during the installation process. If you perform a user-provisioned infrastructure installation on AWS, Azure, GCP, or VMC on AWS, you can use the provided templates to help you stand up all of the required components. Otherwise, you can use the provider-agnostic installation method to deploy a cluster into other clouds.
You can also complete a user-provisioned infrastructure installation on your existing hardware. If you use RHOSP, RHOSP on SR-IOV, RHV, IBM Z or LinuxONE, IBM Power, or vSphere, use the specific installation instructions to deploy your cluster. If you use other supported hardware, follow the bare metal installation procedure.
2.1.4. Do you need extra security for your cluster?
If you use a user-provisioned installation method, you can configure a proxy for your cluster. The instructions are included in each installation procedure.
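As a sketch of what this looks like, proxy settings are declared in the proxy section of the install-config.yaml file before you generate the cluster assets; all of the addresses shown here are placeholders:

apiVersion: v1
baseDomain: example.com
proxy:
  httpProxy: http://<username>:<password>@<proxy_host>:<proxy_port>
  httpsProxy: https://<username>:<password>@<proxy_host>:<proxy_port>
  noProxy: example.com

The fields that each platform supports are covered in the individual installation procedures.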
If you want to prevent your cluster on a public cloud from exposing endpoints externally, you can deploy a private cluster with installer-provisioned infrastructure on AWS, Azure, or GCP.
If you need to install a cluster that has limited access to the internet, such as a disconnected or restricted network cluster, you can mirror the installation packages and install the cluster from them. Follow detailed instructions for user-provisioned infrastructure installations into restricted networks for AWS, GCP, IBM Z or LinuxONE, IBM Z or LinuxONE with RHEL KVM, IBM Power, vSphere, VMC on AWS, or bare metal. You can also install a cluster into a restricted network using installer-provisioned infrastructure by following detailed instructions for AWS, GCP, VMC on AWS, RHOSP, RHV, and vSphere.
If you need to deploy your cluster to an AWS GovCloud region or Azure government region, you can configure those custom regions during an installer-provisioned infrastructure installation.
You can also configure the cluster machines to use FIPS Validated / Modules in Process cryptographic libraries during installation.
The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
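Enabling FIPS mode is a single field in the install-config.yaml file, set before you generate the cluster assets; a minimal sketch:

apiVersion: v1
baseDomain: example.com
fips: true

You must set this field at installation time; FIPS mode cannot be enabled on a cluster after it is installed.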
2.2. Preparing your cluster for users after installation
Some configuration is not required to install the cluster but is recommended before your users access the cluster. You can customize the cluster itself by customizing the Operators that make up your cluster, and integrate your cluster with other required systems, such as an identity provider.
For a production cluster, you must configure integrations such as persistent storage, an identity provider, and monitoring for core cluster components.
2.3. Preparing your cluster for workloads
Depending on your workload needs, you might need to take extra steps before you begin deploying applications. For example, after you prepare infrastructure to support your application build strategy, you might need to make provisions for low-latency workloads or to protect sensitive workloads. You can also configure monitoring for application workloads. If you plan to run Windows workloads, you must enable hybrid networking with OVN-Kubernetes during the installation process; hybrid networking cannot be enabled after your cluster is installed.
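As an illustrative sketch under the assumption that you follow the network customization installation path, enabling hybrid networking involves selecting OVN-Kubernetes in install-config.yaml and adding a hybridOverlayConfig stanza to the cluster network manifest before you create the cluster; the CIDR values here are placeholders:

networking:
  networkType: OVNKubernetes

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork:
        - cidr: 10.132.0.0/14
          hostPrefix: 23

See the hybrid networking configuration procedure for the exact manifest file names and steps.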
2.4. Supported installation methods for different platforms
You can perform different types of installations on different platforms.
Not all installation options are supported for all platforms, as shown in the following tables. A checkmark indicates that the option is supported and links to the relevant section.
Table 2.1. Installer-provisioned infrastructure options

| | AWS | Azure | GCP | RHOSP | RHV | Bare metal | vSphere | VMC | IBM Z | IBM Power |
|---|---|---|---|---|---|---|---|---|---|---|
| Default | | | | | | | | | | |
| Custom | | | | | | | | | | |
| Network customization | | | | | | | | | | |
| Restricted network | | | | | | | | | | |
| Private clusters | | | | | | | | | | |
| Existing virtual private networks | | | | | | | | | | |
| Government regions | | | | | | | | | | |
Table 2.2. User-provisioned infrastructure options

| | AWS | Azure | GCP | RHOSP | RHOSP on SR-IOV | RHV | Bare metal | vSphere | VMC | IBM Z | IBM Z with RHEL KVM | IBM Power | Platform agnostic |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Custom | | | | | | | | | | | | | |
| Network customization | | | | | | | | | | | | | |
| Restricted network | | | | | | | | | | | | | |
| Shared VPC hosted outside of cluster project | | | | | | | | | | | | | |
Chapter 3. Mirroring images for a disconnected installation
You can use the procedures in this section to ensure your clusters only use container images that satisfy your organizational controls on external content. Before you install a cluster on infrastructure that you provision in a restricted network, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring.
You must have access to the internet to obtain the necessary container images. In this procedure, you place your mirror registry on a mirror host that has access to both your network and the internet. If you do not have access to a mirror host, use the Mirroring an Operator catalog procedure to copy images to a device that you can move across network boundaries.
3.1. Prerequisites
You must have a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as one of the following registries:

- Red Hat Quay
- JFrog Artifactory
- Sonatype Nexus Repository
- Harbor

If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Quay Operator. If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat support.
- If you do not already have an existing solution for a container image registry, subscribers of OpenShift Container Platform are provided a mirror registry for Red Hat OpenShift. The mirror registry for Red Hat OpenShift is included with your subscription and is a small-scale container registry that can be used to mirror the required container images of OpenShift Container Platform in disconnected installations.
3.2. About the mirror registry
You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift, a small-scale container registry included with OpenShift Container Platform subscriptions.
You can use any container registry that supports Docker v2-2, such as Red Hat Quay, the mirror registry for Red Hat OpenShift, Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry.
The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process.
If choosing a container registry that is not the mirror registry for Red Hat OpenShift, it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters.
When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring. If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring.
For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location.
Red Hat does not test third party registries with OpenShift Container Platform.
Additional information
For information on viewing the CRI-O logs to view the image source, see Viewing the image pull source.
3.3. Preparing your mirror host
Before you perform the mirror procedure, you must prepare the host to retrieve content and push it to the remote location.
3.3.1. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
Unpack the archive:
$ tar xvzf <file>

Place the oc binary in a directory that is on your PATH.

To check your PATH, execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
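To verify that the client is on your PATH and at the expected version, you can then run, for example:

$ oc version --client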
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.

To check your PATH, open the command prompt and execute the following command:

C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.

To check your PATH, open a terminal and execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
3.4. Configuring credentials that allow images to be mirrored
Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror.
Do not use this image registry credentials file as the pull secret when you install a cluster. If you provide this file when you install a cluster, all of the machines in the cluster will have write access to your mirror registry.
This process requires that you have write access to a container image registry on the mirror registry and adds the credentials to a registry pull secret.
Prerequisites
- You configured a mirror registry to use in your disconnected environment.
- You identified an image repository location on your mirror registry to mirror images into.
- You provisioned a mirror registry account that allows images to be uploaded to that image repository.
Procedure
Complete the following steps on the installation host:
- Download your registry.redhat.io pull secret from the Red Hat OpenShift Cluster Manager and save it to a .json file.
- Generate the base64-encoded user name and password or token for your mirror registry:

  $ echo -n '<user_name>:<password>' | base64 -w0
  BGVtbYk3ZHAtqXs=

  For <user_name> and <password>, specify the user name and password that you configured for your registry.

- Make a copy of your pull secret in JSON format:

  $ cat ./pull-secret.text | jq . > <path>/<pull_secret_file_in_json>

  Specify the path to the folder to store the pull secret in and a name for the JSON file that you create.

- Save the file either as ~/.docker/config.json or $XDG_RUNTIME_DIR/containers/auth.json.

  The contents of the file resemble the following example:

  {
    "auths": {
      "cloud.openshift.com": {
        "auth": "b3BlbnNo...",
        "email": "you@example.com"
      },
      "quay.io": {
        "auth": "b3BlbnNo...",
        "email": "you@example.com"
      },
      "registry.connect.redhat.com": {
        "auth": "NTE3Njg5Nj...",
        "email": "you@example.com"
      },
      "registry.redhat.io": {
        "auth": "NTE3Njg5Nj...",
        "email": "you@example.com"
      }
    }
  }

- Edit the new file and add a section that describes your registry to it:

    "auths": {
      "<mirror_registry>": {
        "auth": "<credentials>",
        "email": "you@example.com"
      }
    },

  For <mirror_registry>, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For <credentials>, specify the base64-encoded user name and password for the mirror registry.

  The file resembles the following example:

  {
    "auths": {
      "registry.example.com": {
        "auth": "BGVtbYk3ZHAtqXs=",
        "email": "you@example.com"
      },
      "cloud.openshift.com": {
        "auth": "b3BlbnNo...",
        "email": "you@example.com"
      },
      "quay.io": {
        "auth": "b3BlbnNo...",
        "email": "you@example.com"
      },
      "registry.connect.redhat.com": {
        "auth": "NTE3Njg5Nj...",
        "email": "you@example.com"
      },
      "registry.redhat.io": {
        "auth": "NTE3Njg5Nj...",
        "email": "you@example.com"
      }
    }
  }
3.5. Mirror registry for Red Hat OpenShift
The mirror registry for Red Hat OpenShift is a small and streamlined container registry that you can use as a target for mirroring the required container images of OpenShift Container Platform for disconnected installations.
If you already have a container image registry, such as Red Hat Quay, you can skip these steps and go straight to Mirroring the OpenShift Container Platform image repository.
Prerequisites
- An OpenShift Container Platform subscription.
- Red Hat Enterprise Linux (RHEL) 8 with Podman 3.3 and OpenSSL installed.
- Fully qualified domain name for the Red Hat Quay service, which must resolve through a DNS server.
- Passwordless sudo access on the target host.
- Key-based SSH connectivity on the target host. SSH keys are automatically generated for local installs. For remote hosts, you must generate your own SSH keys.
- 2 or more vCPUs.
- 8 GB of RAM.
- About 8.7 GB for OpenShift Container Platform 4.8 Release images, or about 668 GB for OpenShift Container Platform 4.8 Release images and OpenShift Container Platform 4.8 Red Hat Operator images. Up to 1 TB per stream or more is suggested.

Important

These requirements are based on local testing results with only Release images and Operator images tested. Storage requirements can vary based on your organization's needs. Some users might require more space, for example, when they mirror multiple z-streams. You can use standard Red Hat Quay functionality to remove unnecessary images and free up space.
3.5.1. Mirror registry for Red Hat OpenShift introduction
For disconnected deployments of OpenShift Container Platform, a container registry is required to carry out the installation of the clusters. To run a production-grade registry service on such a cluster, you must create a separate registry deployment to install the first cluster. The mirror registry for Red Hat OpenShift addresses this need and is included in every OpenShift subscription. It is available for download on the OpenShift console Downloads page.
The mirror registry for Red Hat OpenShift allows users to install a small-scale version of Red Hat Quay and its required components using the mirror-registry command line interface (CLI) tool. The mirror registry for Red Hat OpenShift is deployed automatically with pre-configured local storage and a local database. It also includes auto-generated user credentials and access permissions with a single set of inputs and no additional configuration choices to get started.
The mirror registry for Red Hat OpenShift provides a pre-determined network configuration and reports deployed component credentials and access URLs upon success. A limited set of optional configuration inputs like fully qualified domain name (FQDN) services, superuser name and password, and custom TLS certificates are also provided. This provides users with a container registry so that they can easily create an offline mirror of all OpenShift Container Platform release content when running OpenShift Container Platform in restricted network environments.
The mirror registry for Red Hat OpenShift is limited to hosting images that are required to install a disconnected OpenShift Container Platform cluster, such as Release images or Red Hat Operator images. It uses local storage on your Red Hat Enterprise Linux (RHEL) machine, and storage supported by RHEL is supported by the mirror registry for Red Hat OpenShift. Content built by customers should not be hosted by the mirror registry for Red Hat OpenShift.
Unlike Red Hat Quay, the mirror registry for Red Hat OpenShift is not a highly-available registry and only local file system storage is supported. Using the mirror registry for Red Hat OpenShift with more than one cluster is discouraged, because multiple clusters can create a single point of failure when updating your cluster fleet. It is advised to leverage the mirror registry for Red Hat OpenShift to install a cluster that can host a production-grade, highly-available registry such as Red Hat Quay, which can serve OpenShift Container Platform content to other clusters.
Use of the mirror registry for Red Hat OpenShift is optional if another container registry is already available in the install environment.
3.5.2. Mirroring on a local host with mirror registry for Red Hat OpenShift
This procedure explains how to install the mirror registry for Red Hat OpenShift on a local host using the mirror-registry installer tool. By doing so, users can create a local host registry running on port 443 for the purpose of storing a mirror of OpenShift Container Platform images.
Installing the mirror registry for Red Hat OpenShift using the mirror-registry CLI tool makes several changes to your machine. After installation, a /etc/quay-install directory is created, which has installation files, local storage, and the configuration bundle. Trusted SSH keys are generated in case the deployment target is the local host, and systemd files on the host machine are set up to ensure that container runtimes are persistent. Additionally, an initial user named init is created with an automatically generated password. All access credentials are printed at the end of the install routine.
Procedure
- Download the mirror-registry.tar.gz package for the latest version of the mirror registry for Red Hat OpenShift found on the OpenShift console Downloads page.
- Install the mirror registry for Red Hat OpenShift on your local host with your current user account by using the mirror-registry tool. For a full list of available flags, see "mirror registry for Red Hat OpenShift flags".

  $ sudo ./mirror-registry install \
    --quayHostname <host_example_com> \
    --quayRoot <example_directory_name>

- Use the user name and password generated during installation to log into the registry by running the following command:

  $ podman login --authfile pull-secret.txt \
    -u init \
    -p <password> \
    <host_example_com>:8443 \
    --tls-verify=false

  You can avoid running --tls-verify=false by configuring your system to trust the generated rootCA certificates. See "Using SSL to protect connections to Red Hat Quay" and "Configuring the system to trust the certificate authority" for more information.
Note

You can also log in by accessing the UI at https://<host.example.com>:8443 after installation.

You can mirror OpenShift Container Platform images after logging in. Depending on your needs, see either the "Mirroring the OpenShift Container Platform image repository" or the "Mirroring an Operator catalog" sections of this document.

Note

If there are issues with images stored by the mirror registry for Red Hat OpenShift due to storage layer problems, you can remirror the OpenShift Container Platform images, or reinstall the mirror registry on more stable storage.
3.5.3. Mirroring on a remote host with mirror registry for Red Hat OpenShift
This procedure explains how to install the mirror registry for Red Hat OpenShift on a remote host using the mirror-registry tool. By doing so, users can create a registry to hold a mirror of OpenShift Container Platform images.
Installing the mirror registry for Red Hat OpenShift using the mirror-registry CLI tool makes several changes to your machine. After installation, a /etc/quay-install directory is created, which has installation files, local storage, and the configuration bundle. Trusted SSH keys are generated in case the deployment target is the local host, and systemd files on the host machine are set up to ensure that container runtimes are persistent. Additionally, an initial user named init is created with an automatically generated password. All access credentials are printed at the end of the install routine.
Procedure
- Download the mirror-registry.tar.gz package for the latest version of the mirror registry for Red Hat OpenShift found on the OpenShift console Downloads page.
- Install the mirror registry for Red Hat OpenShift on the remote host with your current user account by using the mirror-registry tool. For a full list of available flags, see "mirror registry for Red Hat OpenShift flags".

  $ sudo ./mirror-registry install -v \
    --targetHostname <host_example_com> \
    --targetUsername <example_user> \
    -k ~/.ssh/my_ssh_key \
    --quayHostname <host_example_com> \
    --quayRoot <example_directory_name>

- Use the user name and password generated during installation to log into the mirror registry by running the following command:

  $ podman login --authfile pull-secret.txt \
    -u init \
    -p <password> \
    <host_example_com>:8443 \
    --tls-verify=false

  You can avoid running --tls-verify=false by configuring your system to trust the generated rootCA certificates. See "Using SSL to protect connections to Red Hat Quay" and "Configuring the system to trust the certificate authority" for more information.
Note

You can also log in by accessing the UI at https://<host.example.com>:8443 after installation.

You can mirror OpenShift Container Platform images after logging in. Depending on your needs, see either the "Mirroring the OpenShift Container Platform image repository" or the "Mirroring an Operator catalog" sections of this document.

Note

If there are issues with images stored by the mirror registry for Red Hat OpenShift due to storage layer problems, you can remirror the OpenShift Container Platform images, or reinstall the mirror registry on more stable storage.
3.6. Upgrading the mirror registry for Red Hat OpenShift
You can upgrade the mirror registry for Red Hat OpenShift from your local host by running the following command:
$ sudo ./mirror-registry upgrade

Note

- Users who upgrade the mirror registry for Red Hat OpenShift with the ./mirror-registry upgrade flag must include the same credentials used when creating their mirror registry. For example, if you installed the mirror registry for Red Hat OpenShift with --quayHostname <host_example_com> and --quayRoot <example_directory_name>, you must include that string to properly upgrade the mirror registry.
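For example, an upgrade of an installation that was created with a custom hostname and root directory repeats those flags; the values here are placeholders:

$ sudo ./mirror-registry upgrade \
  --quayHostname <host_example_com> \
  --quayRoot <example_directory_name>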
3.6.1. Uninstalling the mirror registry for Red Hat OpenShift
You can uninstall the mirror registry for Red Hat OpenShift from your local host by running the following command:
$ sudo ./mirror-registry uninstall -v \
  --quayRoot <example_directory_name>

Note

- Deleting the mirror registry for Red Hat OpenShift will prompt the user before deletion. You can use --autoApprove to skip this prompt.
- Users who install the mirror registry for Red Hat OpenShift with the --quayRoot flag must include the --quayRoot flag when uninstalling. For example, if you installed the mirror registry for Red Hat OpenShift with --quayRoot example_directory_name, you must include that string to properly uninstall the mirror registry.
3.6.2. Mirror registry for Red Hat OpenShift flags
The following flags are available for the mirror registry for Red Hat OpenShift:
| Flags | Description |
|---|---|
| --autoApprove | A boolean value that disables interactive prompts. If set to true, the quayRoot directory is automatically deleted when uninstalling the mirror registry. Defaults to false if left unspecified. |
| --initPassword | The password of the init user created during Quay installation. Must be at least eight characters and contain no whitespace. |
| --initUser string | Shows the username of the initial user. Defaults to init if left unspecified. |
| --quayHostname | The fully-qualified domain name of the mirror registry that clients will use to contact the registry. Equivalent to SERVER_HOSTNAME in the Quay config.yaml. Must resolve by DNS. Defaults to <targetHostname>:8443 if left unspecified. |
| --quayRoot, -r | The directory where container image layer and configuration data is saved, including rootCA.key, rootCA.pem, and rootCA.srl certificates. Defaults to /etc/quay-install if left unspecified. |
| --ssh-key, -k | The path of your SSH identity key. Defaults to ~/.ssh/quay_installer if left unspecified. |
| --sslCert | The path to the SSL/TLS public key / certificate. Defaults to {quayRoot}/quay-config and is auto-generated if left unspecified. |
| --sslCheckSkip | Skips the check for the certificate hostname against the SERVER_HOSTNAME in the Quay config.yaml file. |
| --sslKey | The path to the SSL/TLS private key used for HTTPS communication. Defaults to {quayRoot}/quay-config and is auto-generated if left unspecified. |
| --targetHostname, -H | The hostname of the target you want to install Quay to. Defaults to $HOST, for example, a local host, if left unspecified. |
| --targetUsername, -u | The user on the target host which will be used for SSH. Defaults to $USER, for example, the current user, if left unspecified. |
| --verbose, -v | Shows debug logs and Ansible playbook outputs. |
| --version | Shows the version for the mirror registry for Red Hat OpenShift. |
- --quayHostname must be modified if the public DNS name of your system is different from the local hostname.
- --sslCheckSkip is used in cases when the mirror registry is set behind a proxy and the exposed hostname is different from the internal Quay hostname. It can also be used when users do not want the certificates to be validated against the provided Quay hostname during installation.
3.7. Mirroring the OpenShift Container Platform image repository
Mirror the OpenShift Container Platform image repository to your registry to use during cluster installation or upgrade.
Prerequisites
- Your mirror host has access to the internet.
- You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured.
- You downloaded the pull secret from the Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository.
If you use self-signed certificates that do not set a Subject Alternative Name, you must precede the oc commands in this procedure with GODEBUG=x509ignoreCN=0. If you do not set this variable, the oc commands will fail with the following error:

x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0
Procedure
Complete the following steps on the mirror host:
- Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page.
Set the required environment variables:
- Export the release version:

  $ OCP_RELEASE=<release_version>

  For <release_version>, specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4.

- Export the local registry name and host port:

  $ LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'

  For <local_registry_host_name>, specify the registry domain name for your mirror repository, and for <local_registry_host_port>, specify the port that it serves content on.

- Export the local repository name:

  $ LOCAL_REPOSITORY='<local_repository_name>'

  For <local_repository_name>, specify the name of the repository to create in your registry, such as ocp4/openshift4.

- Export the name of the repository to mirror:

  $ PRODUCT_REPO='openshift-release-dev'

  For a production release, you must specify openshift-release-dev.

- Export the path to your registry pull secret:

  $ LOCAL_SECRET_JSON='<path_to_pull_secret>'

  For <path_to_pull_secret>, specify the absolute path to and file name of the pull secret for your mirror registry that you created.

- Export the release mirror:

  $ RELEASE_NAME="ocp-release"

  For a production release, you must specify ocp-release.

- Export the type of architecture for your server, such as x86_64:

  $ ARCHITECTURE=<server_architecture>

- Export the path to the directory to host the mirrored images:

  $ REMOVABLE_MEDIA_PATH=<path>

  Specify the full path, including the initial forward slash (/) character. A consolidated example of these exports follows this list.
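Putting these together, a complete set of exports might look like the following sketch; every value is a placeholder chosen for illustration:

$ OCP_RELEASE=4.5.4
$ LOCAL_REGISTRY='registry.example.com:5000'
$ LOCAL_REPOSITORY='ocp4/openshift4'
$ PRODUCT_REPO='openshift-release-dev'
$ LOCAL_SECRET_JSON='/home/user/pull-secret.json'
$ RELEASE_NAME="ocp-release"
$ ARCHITECTURE='x86_64'
$ REMOVABLE_MEDIA_PATH='/mnt/media'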
Mirror the version images to the mirror registry:
If your mirror host does not have internet access, take the following actions:
- Connect the removable media to a system that is connected to the internet.
- Review the images and configuration manifests to mirror:

  $ oc adm release mirror -a ${LOCAL_SECRET_JSON} \
    --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \
    --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
    --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE} --dry-run

- Record the entire imageContentSources section from the output of the previous command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation.
- Mirror the images to a directory on the removable media:

  $ oc adm release mirror -a ${LOCAL_SECRET_JSON} --to-dir=${REMOVABLE_MEDIA_PATH}/mirror quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE}

- Take the media to the restricted network environment and upload the images to the local container registry.

  $ oc image mirror -a ${LOCAL_SECRET_JSON} --from-dir=${REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:${OCP_RELEASE}*" ${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}

  For REMOVABLE_MEDIA_PATH, you must use the same path that you specified when you mirrored the images.
If the local container registry is connected to the mirror host, take the following actions:
- Directly push the release images to the local registry by using the following command:

  $ oc adm release mirror -a ${LOCAL_SECRET_JSON} \
    --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \
    --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
    --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}

  This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster.

- Record the entire imageContentSources section from the output of the previous command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation (an example of its shape follows this procedure).

  Note

  The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine.
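For reference, the imageContentSources section that you record and later add to install-config.yaml has the following shape; the mirror registry name here is a placeholder:

imageContentSources:
- mirrors:
  - registry.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - registry.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev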
To create the installation program that is based on the content that you mirrored, extract it and pin it to the release:
- If your mirror host does not have internet access, run the following command:

  $ oc adm release extract -a ${LOCAL_SECRET_JSON} --command=openshift-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}"

- If the local container registry is connected to the mirror host, run the following command:

  $ oc adm release extract -a ${LOCAL_SECRET_JSON} --command=openshift-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}"

  Important

  To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content.

  You must perform this step on a machine with an active internet connection.

  If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image.
For clusters using installer-provisioned infrastructure, run the following command:
$ openshift-install
3.8. The Cluster Samples Operator in a disconnected environment
In a disconnected environment, you must take additional steps after you install a cluster to configure the Cluster Samples Operator. Review the following information in preparation.
3.8.1. Cluster Samples Operator assistance for mirroring
During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag.
The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name>.
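You can inspect this config map directly to enumerate the images to mirror; for example:

$ oc get configmap imagestreamtag-to-image -n openshift-cluster-samples-operator -o yaml

Each value in the data field is the image that populates the corresponding image stream tag.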
During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed. If you choose to change it to Managed, it installs samples.
The use of samples in a network-restricted or disconnected environment may require access to services external to your network. Some example services include: GitHub, Maven Central, npm, RubyGems, PyPI, and others. There might be additional steps to take that allow the Cluster Samples Operator's objects to reach the services they require.
You can use this config map as a reference for which images need to be mirrored for your image streams to import.
- While the Cluster Samples Operator is set to Removed, you can create your mirrored registry, or determine which existing mirrored registry you want to use.
- Mirror the samples you want to the mirrored registry using the new config map as your guide.
- Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object.
- Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry.
- Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored (see the sketch after this list).
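Once mirroring is complete, the last two of these steps can be applied with a single patch to the Cluster Samples Operator configuration object; a sketch, assuming registry.example.com:5000 is your mirrored registry:

$ oc patch configs.samples.operator.openshift.io cluster --type merge \
  --patch '{"spec":{"samplesRegistry":"registry.example.com:5000","managementState":"Managed"}}'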
3.9. Next steps
- Mirror the OperatorHub images for the Operators that you want to install in your cluster.
- Install a cluster on infrastructure that you provision in your restricted network, such as on VMware vSphere, bare metal, or Amazon Web Services.
Chapter 4. Installing on AWS
4.1. Preparing to install on AWS
4.1.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
4.1.2. Requirements for installing OpenShift Container Platform on AWS
Before installing OpenShift Container Platform on Amazon Web Services (AWS), you must create an AWS account. See Configuring an AWS account for details about configuring an account, account limits, account permissions, IAM user setup, and supported AWS regions.
If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, see Manually creating IAM for AWS for other options, including configuring the Cloud Credential Operator (CCO) to use the Amazon Web Services Security Token Service (AWS STS).
4.1.3. Choosing a method to install OpenShift Container Platform on AWS
You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself.
See Installation process for more information about installer-provisioned and user-provisioned installation processes.
4.1.3.1. Installing a cluster on installer-provisioned infrastructure
You can install a cluster on AWS infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods:
- Installing a cluster quickly on AWS: You can install OpenShift Container Platform on AWS infrastructure that is provisioned by the OpenShift Container Platform installation program. You can install a cluster quickly by using the default configuration options.
- Installing a customized cluster on AWS: You can install a customized cluster on AWS infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation.
- Installing a cluster on AWS with network customizations: You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements.
- Installing a cluster on AWS in a restricted network: You can install OpenShift Container Platform on AWS on installer-provisioned infrastructure by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components.
- Installing a cluster on an existing Virtual Private Cloud: You can install OpenShift Container Platform on an existing AWS Virtual Private Cloud (VPC). You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure.
- Installing a private cluster on an existing VPC: You can install a private cluster on an existing AWS VPC. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet.
- Installing a cluster on AWS into a government or secret region: OpenShift Container Platform can be deployed into AWS regions that are specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads in the cloud.
4.1.3.2. Installing a cluster on user-provisioned infrastructure
You can install a cluster on AWS infrastructure that you provision, by using one of the following methods:
- Installing a cluster on AWS infrastructure that you provide: You can install OpenShift Container Platform on AWS infrastructure that you provide. You can use the provided CloudFormation templates to create stacks of AWS resources that represent each of the components required for an OpenShift Container Platform installation.
- Installing a cluster on AWS in a restricted network with user-provisioned infrastructure: You can install OpenShift Container Platform on AWS infrastructure that you provide by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. While you can install OpenShift Container Platform by using the mirrored content, your cluster still requires internet access to use the AWS APIs.
4.1.4. Next steps
4.2. Configuring an AWS account
Before you can install OpenShift Container Platform, you must configure an Amazon Web Services (AWS) account.
4.2.1. Configuring Route 53
To install OpenShift Container Platform, the Amazon Web Services (AWS) account you use must have a dedicated public hosted zone in your Route 53 service. This zone must be authoritative for the domain. The Route 53 service provides cluster DNS resolution and name lookup for external connections to the cluster.
Procedure
- Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through AWS or another source.

  Note: If you purchase a new domain through AWS, it takes time for the relevant DNS changes to propagate. For more information about purchasing domains through AWS, see Registering Domain Names Using Amazon Route 53 in the AWS documentation.

- If you are using an existing domain and registrar, migrate its DNS to AWS. See Making Amazon Route 53 the DNS Service for an Existing Domain in the AWS documentation.
- Create a public hosted zone for your domain or subdomain. See Creating a Public Hosted Zone in the AWS documentation. Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com.
- Extract the new authoritative name servers from the hosted zone records. See Getting the Name Servers for a Public Hosted Zone in the AWS documentation.
- Update the registrar records for the AWS Route 53 name servers that your domain uses. For example, if you registered your domain to a Route 53 service in a different account, see the following topic in the AWS documentation: Adding or Changing Name Servers or Glue Records.
- If you are using a subdomain, add its delegation records to the parent domain. This gives Amazon Route 53 responsibility for the subdomain. Follow the delegation procedure outlined by the DNS provider of the parent domain. See Creating a subdomain that uses Amazon Route 53 as the DNS service without migrating the parent domain in the AWS documentation for an example high-level procedure.
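If you prefer the AWS CLI over the console for the hosted zone step, a public hosted zone can be created with a command along these lines; the domain is a placeholder:

$ aws route53 create-hosted-zone --name clusters.openshiftcorp.com \
    --caller-reference "$(date +%s)"

The output of this command includes the authoritative name servers for the new zone under DelegationSet.NameServers.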
4.2.1.1. Ingress Operator endpoint configuration for AWS Route 53
If you install in either the Amazon Web Services (AWS) GovCloud (US) US-West or US-East region, the Ingress Operator uses the us-gov-west-1 region for Route 53 and tagging API clients.
If a custom tagging endpoint that includes the string 'us-gov-east-1' is configured, the Ingress Operator uses https://tagging.us-gov-west-1.amazonaws.com as the tagging API endpoint.
For more information on AWS GovCloud (US) endpoints, see Service Endpoints in the AWS documentation about GovCloud (US).
Private, disconnected installations are not supported for AWS GovCloud when you install in the us-gov-east-1 region.
Example Route 53 configuration
platform:
  aws:
    region: us-gov-west-1
    serviceEndpoints:
    - name: ec2
      url: https://ec2.us-gov-west-1.amazonaws.com
    - name: elasticloadbalancing
      url: https://elasticloadbalancing.us-gov-west-1.amazonaws.com
    - name: route53
      url: https://route53.us-gov.amazonaws.com 1
    - name: tagging
      url: https://tagging.us-gov-west-1.amazonaws.com 2

1 Route 53 defaults to https://route53.us-gov.amazonaws.com for both AWS GovCloud (US) regions.
2 Only the US-West region has endpoints for tagging. Omit this parameter if your cluster is in another region.
4.2.2. AWS account limits
The OpenShift Container Platform cluster uses a number of Amazon Web Services (AWS) components, and the default Service Limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain AWS regions, or run multiple clusters from your account, you might need to request additional resources for your AWS account.
The following table summarizes the AWS components whose limits can impact your ability to install and run OpenShift Container Platform clusters.
| Component | Number of clusters available by default | Default AWS limit | Description |
|---|---|---|---|
| Instance Limits | Varies | Varies | By default, each cluster creates the following instances:
These instance type counts are within a new account’s default limit. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, review your account limits to ensure that your cluster can deploy the machines that you need.
In most regions, the bootstrap and worker machines use an |
| Elastic IPs (EIPs) | 0 to 1 | 5 EIPs per account | To provision the cluster in a highly available configuration, the installation program creates a public and private subnet for each availability zone within a region. Each private subnet requires a NAT Gateway, and each NAT gateway requires a separate elastic IP. Review the AWS region map to determine how many availability zones are in each region. To take advantage of the default high availability, install the cluster in a region with at least three availability zones. To install a cluster in a region with more than five availability zones, you must increase the EIP limit. Important
To use the |
| Virtual Private Clouds (VPCs) | 5 | 5 VPCs per region | Each cluster creates its own VPC. |
| Elastic Load Balancing (ELB/NLB) | 3 | 20 per region |
By default, each cluster creates internal and external network load balancers for the master API server and a single classic elastic load balancer for the router. Deploying more Kubernetes |
| NAT Gateways | 5 | 5 per availability zone | The cluster deploys one NAT gateway in each availability zone. |
| Elastic Network Interfaces (ENIs) | At least 12 | 350 per region |
The default installation creates 21 ENIs and an ENI for each availability zone in your region. Additional ENIs are created for additional machines and elastic load balancers that are created by cluster usage and deployed workloads.
| VPC Gateway | 20 | 20 per account | Each cluster creates a single VPC Gateway for S3 access. |
| S3 buckets | 99 | 100 buckets per account | Because the installation process creates a temporary bucket and the registry component in each cluster creates a bucket, you can create only 99 OpenShift Container Platform clusters per AWS account. |
| Security Groups | 250 | 2,500 per account | Each cluster creates 10 distinct security groups. |
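To check your account's current values for some of these limits before you install, you can query the Service Quotas API, assuming your IAM user has the servicequotas permissions listed under the optional instance and quota check permissions in the next section:

$ aws service-quotas list-service-quotas --service-code ec2
$ aws service-quotas list-service-quotas --service-code vpc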
4.2.3. Required AWS permissions for the IAM user
Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region.
When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions:
Example 4.1. Required EC2 permissions for installation
- ec2:AuthorizeSecurityGroupEgress
- ec2:AuthorizeSecurityGroupIngress
- ec2:CopyImage
- ec2:CreateNetworkInterface
- ec2:AttachNetworkInterface
- ec2:CreateSecurityGroup
- ec2:CreateTags
- ec2:CreateVolume
- ec2:DeleteSecurityGroup
- ec2:DeleteSnapshot
- ec2:DeleteTags
- ec2:DeregisterImage
- ec2:DescribeAccountAttributes
- ec2:DescribeAddresses
- ec2:DescribeAvailabilityZones
- ec2:DescribeDhcpOptions
- ec2:DescribeImages
- ec2:DescribeInstanceAttribute
- ec2:DescribeInstanceCreditSpecifications
- ec2:DescribeInstances
- ec2:DescribeInstanceTypes
- ec2:DescribeInternetGateways
- ec2:DescribeKeyPairs
- ec2:DescribeNatGateways
- ec2:DescribeNetworkAcls
- ec2:DescribeNetworkInterfaces
- ec2:DescribePrefixLists
- ec2:DescribeRegions
- ec2:DescribeRouteTables
- ec2:DescribeSecurityGroups
- ec2:DescribeSubnets
- ec2:DescribeTags
- ec2:DescribeVolumes
- ec2:DescribeVpcAttribute
- ec2:DescribeVpcClassicLink
- ec2:DescribeVpcClassicLinkDnsSupport
- ec2:DescribeVpcEndpoints
- ec2:DescribeVpcs
- ec2:GetEbsDefaultKmsKeyId
- ec2:ModifyInstanceAttribute
- ec2:ModifyNetworkInterfaceAttribute
- ec2:RevokeSecurityGroupEgress
- ec2:RevokeSecurityGroupIngress
- ec2:RunInstances
- ec2:TerminateInstances
Example 4.2. Required permissions for creating network resources during installation
- ec2:AllocateAddress
- ec2:AssociateAddress
- ec2:AssociateDhcpOptions
- ec2:AssociateRouteTable
- ec2:AttachInternetGateway
- ec2:CreateDhcpOptions
- ec2:CreateInternetGateway
- ec2:CreateNatGateway
- ec2:CreateRoute
- ec2:CreateRouteTable
- ec2:CreateSubnet
- ec2:CreateVpc
- ec2:CreateVpcEndpoint
- ec2:ModifySubnetAttribute
- ec2:ModifyVpcAttribute
If you use an existing VPC, your account does not require these permissions for creating network resources.
Example 4.3. Required Elastic Load Balancing permissions (ELB) for installation
- elasticloadbalancing:AddTags
- elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
- elasticloadbalancing:AttachLoadBalancerToSubnets
- elasticloadbalancing:ConfigureHealthCheck
- elasticloadbalancing:CreateLoadBalancer
- elasticloadbalancing:CreateLoadBalancerListeners
- elasticloadbalancing:DeleteLoadBalancer
- elasticloadbalancing:DeregisterInstancesFromLoadBalancer
- elasticloadbalancing:DescribeInstanceHealth
- elasticloadbalancing:DescribeLoadBalancerAttributes
- elasticloadbalancing:DescribeLoadBalancers
- elasticloadbalancing:DescribeTags
- elasticloadbalancing:ModifyLoadBalancerAttributes
- elasticloadbalancing:RegisterInstancesWithLoadBalancer
- elasticloadbalancing:SetLoadBalancerPoliciesOfListener
Example 4.4. Required Elastic Load Balancing permissions (ELBv2) for installation
- elasticloadbalancing:AddTags
- elasticloadbalancing:CreateListener
- elasticloadbalancing:CreateLoadBalancer
- elasticloadbalancing:CreateTargetGroup
- elasticloadbalancing:DeleteLoadBalancer
- elasticloadbalancing:DeregisterTargets
- elasticloadbalancing:DescribeListeners
- elasticloadbalancing:DescribeLoadBalancerAttributes
- elasticloadbalancing:DescribeLoadBalancers
- elasticloadbalancing:DescribeTargetGroupAttributes
- elasticloadbalancing:DescribeTargetHealth
- elasticloadbalancing:ModifyLoadBalancerAttributes
- elasticloadbalancing:ModifyTargetGroup
- elasticloadbalancing:ModifyTargetGroupAttributes
- elasticloadbalancing:RegisterTargets
Example 4.5. Required IAM permissions for installation
- iam:AddRoleToInstanceProfile
- iam:CreateInstanceProfile
- iam:CreateRole
- iam:DeleteInstanceProfile
- iam:DeleteRole
- iam:DeleteRolePolicy
- iam:GetInstanceProfile
- iam:GetRole
- iam:GetRolePolicy
- iam:GetUser
- iam:ListInstanceProfilesForRole
- iam:ListRoles
- iam:ListUsers
- iam:PassRole
- iam:PutRolePolicy
- iam:RemoveRoleFromInstanceProfile
- iam:SimulatePrincipalPolicy
- iam:TagRole
If you have not created an elastic load balancer (ELB) in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission.
Example 4.6. Required Route 53 permissions for installation
- route53:ChangeResourceRecordSets
- route53:ChangeTagsForResource
- route53:CreateHostedZone
- route53:DeleteHostedZone
- route53:GetChange
- route53:GetHostedZone
- route53:ListHostedZones
- route53:ListHostedZonesByName
- route53:ListResourceRecordSets
- route53:ListTagsForResource
- route53:UpdateHostedZoneComment
Example 4.7. Required S3 permissions for installation
- s3:CreateBucket
- s3:DeleteBucket
- s3:GetAccelerateConfiguration
- s3:GetBucketAcl
- s3:GetBucketCors
- s3:GetBucketLocation
- s3:GetBucketLogging
- s3:GetBucketObjectLockConfiguration
- s3:GetBucketReplication
- s3:GetBucketRequestPayment
- s3:GetBucketTagging
- s3:GetBucketVersioning
- s3:GetBucketWebsite
- s3:GetEncryptionConfiguration
- s3:GetLifecycleConfiguration
- s3:GetReplicationConfiguration
- s3:ListBucket
- s3:PutBucketAcl
- s3:PutBucketTagging
- s3:PutEncryptionConfiguration
Example 4.8. S3 permissions that cluster Operators require
- s3:DeleteObject
- s3:GetObject
- s3:GetObjectAcl
- s3:GetObjectTagging
- s3:GetObjectVersion
- s3:PutObject
- s3:PutObjectAcl
- s3:PutObjectTagging
Example 4.9. Required permissions to delete base cluster resources
- autoscaling:DescribeAutoScalingGroups
- ec2:DeleteNetworkInterface
- ec2:DeleteVolume
- elasticloadbalancing:DeleteTargetGroup
- elasticloadbalancing:DescribeTargetGroups
- iam:DeleteAccessKey
- iam:DeleteUser
- iam:ListAttachedRolePolicies
- iam:ListInstanceProfiles
- iam:ListRolePolicies
- iam:ListUserPolicies
- s3:DeleteObject
- s3:ListBucketVersions
- tag:GetResources
Example 4.10. Required permissions to delete network resources
- ec2:DeleteDhcpOptions
- ec2:DeleteInternetGateway
- ec2:DeleteNatGateway
- ec2:DeleteRoute
- ec2:DeleteRouteTable
- ec2:DeleteSubnet
- ec2:DeleteVpc
- ec2:DeleteVpcEndpoints
- ec2:DetachInternetGateway
- ec2:DisassociateRouteTable
- ec2:ReleaseAddress
- ec2:ReplaceRouteTableAssociation
If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources.
Example 4.11. Required permissions to delete a cluster with shared instance roles
- iam:UntagRole
Example 4.12. Additional IAM and S3 permissions that are required to create manifests
- iam:DeleteAccessKey
- iam:DeleteUser
- iam:DeleteUserPolicy
- iam:GetUserPolicy
- iam:ListAccessKeys
- iam:PutUserPolicy
- iam:TagUser
- s3:PutBucketPublicAccessBlock
- s3:GetBucketPublicAccessBlock
- s3:PutLifecycleConfiguration
- s3:HeadBucket
- s3:ListBucketMultipartUploads
- s3:AbortMultipartUpload
If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions.
Example 4.13. Optional permissions for instance and quota checks for installation
- ec2:DescribeInstanceTypeOfferings
- servicequotas:ListAWSDefaultServiceQuotas
4.2.4. Creating an IAM user
Each Amazon Web Services (AWS) account contains a root user account that is based on the email address you used to create the account. This is a highly-privileged account, and it is recommended to use it for only initial account and billing configuration, creating an initial set of users, and securing the account.
Before you install OpenShift Container Platform, create a secondary IAM administrative user. As you complete the Creating an IAM User in Your AWS Account procedure in the AWS documentation, set the following options:
Procedure
- Specify the IAM user name and select Programmatic access.
- Attach the AdministratorAccess policy to ensure that the account has sufficient permission to create the cluster. This policy provides the cluster with the ability to grant credentials to each OpenShift Container Platform component. The cluster grants the components only the credentials that they require.

  Note: While it is possible to create a policy that grants all of the required AWS permissions and attach it to the user, this is not the preferred option. The cluster will not have the ability to grant additional credentials to individual components, so the same credentials are used by all components.
- Optional: Add metadata to the user by attaching tags.
- Confirm that the user name that you specified is granted the AdministratorAccess policy.
- Record the access key ID and secret access key values. You must use these values when you configure your local machine to run the installation program.
Important: You cannot use a temporary session token that you generated while using a multi-factor authentication device to authenticate to AWS when you deploy a cluster. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials.
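If you create the user with the AWS CLI instead of the console, the equivalent steps look roughly like the following; the user name ocp-installer is a placeholder:

$ aws iam create-user --user-name ocp-installer
$ aws iam attach-user-policy --user-name ocp-installer \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
$ aws iam create-access-key --user-name ocp-installer

The last command returns the access key ID and secret access key values that you must record.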
4.2.5. IAM Policies and AWS authentication
By default, the installation program creates instance profiles for the bootstrap, control plane, and compute instances with the necessary permissions for the cluster to operate.
However, you can create your own IAM roles and specify them as part of the installation process. You might need to specify your own roles to deploy the cluster or to manage the cluster after installation. For example:
- Your organization’s security policies require that you use a more restrictive set of permissions to install the cluster.
- After the installation, the cluster is configured with an Operator that requires access to additional services.
If you choose to specify your own IAM roles, you can take the following steps:
- Begin with the default policies and adapt as required. For more information, see "Default permissions for IAM instance profiles".
- Use the AWS Identity and Access Management Access Analyzer (IAM Access Analyzer) to create a policy template that is based on the cluster’s activity. For more information, see "Using AWS IAM Analyzer to create policy templates".
4.2.5.1. Default permissions for IAM instance profiles
By default, the installation program creates IAM instance profiles for the bootstrap, control plane and worker instances with the necessary permissions for the cluster to operate.
The following lists specify the default permissions for control plane and compute machines:
Example 4.14. Default IAM role permissions for control plane instance profiles
- ec2:AttachVolume
- ec2:AuthorizeSecurityGroupIngress
- ec2:CreateSecurityGroup
- ec2:CreateTags
- ec2:CreateVolume
- ec2:DeleteSecurityGroup
- ec2:DeleteVolume
- ec2:Describe*
- ec2:DetachVolume
- ec2:ModifyInstanceAttribute
- ec2:ModifyVolume
- ec2:RevokeSecurityGroupIngress
- elasticloadbalancing:AddTags
- elasticloadbalancing:AttachLoadBalancerToSubnets
- elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
- elasticloadbalancing:CreateListener
- elasticloadbalancing:CreateLoadBalancer
- elasticloadbalancing:CreateLoadBalancerPolicy
- elasticloadbalancing:CreateLoadBalancerListeners
- elasticloadbalancing:CreateTargetGroup
- elasticloadbalancing:ConfigureHealthCheck
- elasticloadbalancing:DeleteListener
- elasticloadbalancing:DeleteLoadBalancer
- elasticloadbalancing:DeleteLoadBalancerListeners
- elasticloadbalancing:DeleteTargetGroup
- elasticloadbalancing:DeregisterInstancesFromLoadBalancer
- elasticloadbalancing:DeregisterTargets
- elasticloadbalancing:Describe*
- elasticloadbalancing:DetachLoadBalancerFromSubnets
- elasticloadbalancing:ModifyListener
- elasticloadbalancing:ModifyLoadBalancerAttributes
- elasticloadbalancing:ModifyTargetGroup
- elasticloadbalancing:ModifyTargetGroupAttributes
- elasticloadbalancing:RegisterInstancesWithLoadBalancer
- elasticloadbalancing:RegisterTargets
- elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer
- elasticloadbalancing:SetLoadBalancerPoliciesOfListener
- kms:DescribeKey
Example 4.15. Default IAM role permissions for compute instance profiles
- ec2:DescribeInstances
- ec2:DescribeRegions
4.2.5.2. Specifying an existing IAM role
Instead of allowing the installation program to create IAM instance profiles with the default permissions, you can use the install-config.yaml file to specify an existing IAM role for control plane and compute instances.
Prerequisites
- You have an existing install-config.yaml file.
Procedure
- Update compute.platform.aws.iamRole with an existing role for the compute machines.

  Sample install-config.yaml file with an IAM role for compute instances

  compute:
  - hyperthreading: Enabled
    name: worker
    platform:
      aws:
        iamRole: ExampleRole

- Update controlPlane.platform.aws.iamRole with an existing role for the control plane machines.

  Sample install-config.yaml file with an IAM role for control plane instances

  controlPlane:
    hyperthreading: Enabled
    name: master
    platform:
      aws:
        iamRole: ExampleRole

- Save the file and reference it when installing the OpenShift Container Platform cluster.
4.2.5.3. Using AWS IAM Analyzer to create policy templates
The minimal set of permissions that the control plane and compute instance profiles require depends on how the cluster is configured for its daily operation.
One way to determine which permissions the cluster instances require is to use the AWS Identity and Access Management Access Analyzer (IAM Access Analyzer) to create a policy template:
- A policy template contains the permissions the cluster has used over a specified period of time.
- You can then use the template to create policies with fine-grained permissions.
Procedure
The overall process could be:
- Ensure that CloudTrail is enabled. CloudTrail records all of the actions and events in your AWS account, including the API calls that are required to create a policy template. For more information, see the AWS documentation for working with CloudTrail.
- Create an instance profile for control plane instances and an instance profile for compute instances. Be sure to assign each role a permissive policy, such as PowerUserAccess. For more information, see the AWS documentation for creating instance profile roles.
- Install the cluster in a development environment and configure it as required. Be sure to deploy all of the applications that the cluster will host in a production environment.
- Test the cluster thoroughly. Testing the cluster ensures that all of the required API calls are logged.
- Use the IAM Access Analyzer to create a policy template for each instance profile. For more information, see the AWS documentation for generating policies based on the CloudTrail logs.
- Create and add a fine-grained policy to each instance profile.
- Remove the permissive policy from each instance profile.
- Deploy a production cluster using the existing instance profiles with the new policies.
You can add IAM Conditions to your policy to make it more restrictive and compliant with your organization's security requirements.
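As a sketch of such a Condition, the following policy statement fragment limits instance creation to a single region; the action and region values are illustrative and should be adapted to your own policy:

{
  "Effect": "Allow",
  "Action": ["ec2:RunInstances"],
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "aws:RequestedRegion": "us-west-2"
    }
  }
}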
4.2.6. Supported AWS Marketplace regions
Installing an OpenShift Container Platform cluster using an AWS Marketplace image is available to customers who purchase the offer in North America.
While the offer must be purchased in North America, you can deploy the cluster to any of the following supported partitions:
- Public
- GovCloud
Deploying an OpenShift Container Platform cluster using an AWS Marketplace image is not supported for the AWS secret regions.
4.2.7. Supported AWS regions
You can deploy an OpenShift Container Platform cluster to the following public regions:
Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region.
- af-south-1 (Cape Town)
- ap-east-1 (Hong Kong)
- ap-northeast-1 (Tokyo)
- ap-northeast-2 (Seoul)
- ap-northeast-3 (Osaka)
- ap-south-1 (Mumbai)
- ap-southeast-1 (Singapore)
- ap-southeast-2 (Sydney)
- ca-central-1 (Central)
- eu-central-1 (Frankfurt)
- eu-north-1 (Stockholm)
- eu-south-1 (Milan)
- eu-west-1 (Ireland)
- eu-west-2 (London)
- eu-west-3 (Paris)
- me-south-1 (Bahrain)
- sa-east-1 (São Paulo)
- us-east-1 (N. Virginia)
- us-east-2 (Ohio)
- us-west-1 (N. California)
- us-west-2 (Oregon)
The following AWS GovCloud regions are supported:
- us-gov-west-1
- us-gov-east-1
The AWS C2S Secret Region is supported:
- us-iso-east-1
4.2.8. Next steps
Install an OpenShift Container Platform cluster:
- Quickly install a cluster with default options on installer-provisioned infrastructure
- Install a cluster with cloud customizations on installer-provisioned infrastructure
- Install a cluster with network customizations on installer-provisioned infrastructure
- Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates
4.3. Manually creating IAM for AWS
In environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace, you can put the Cloud Credential Operator (CCO) into manual mode before you install the cluster.
4.3.1. Alternatives to storing administrator-level secrets in the kube-system project
The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file.
If you prefer not to store an administrator-level credential secret in the cluster kube-system project, you can choose one of the following options when installing OpenShift Container Platform:
Use the Amazon Web Services Security Token Service:
You can use the CCO utility (ccoctl) to configure the cluster to use the Amazon Web Services Security Token Service (AWS STS). When the CCO utility is used to configure the cluster for STS, it assigns IAM roles that provide short-term, limited-privilege security credentials to components.

Note: This credentials strategy is supported for only new OpenShift Container Platform clusters and must be configured during installation. You cannot reconfigure an existing cluster that uses a different credentials strategy to use this feature.

Manage cloud credentials manually:

You can set the credentialsMode parameter for the CCO to Manual to manage cloud credentials manually. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them.

Remove the administrator-level credential secret after installing OpenShift Container Platform with mint mode:

If you are using the CCO with the credentialsMode parameter set to Mint, you can remove or rotate the administrator-level credential after installing OpenShift Container Platform. Mint mode is the default configuration for the CCO. This option requires the presence of the administrator-level credential during an installation. The administrator-level credential is used during the installation to mint other credentials with some permissions granted. The original credential secret is not stored in the cluster permanently.
Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked.
- To learn how to rotate or remove the administrator-level credential secret after installing OpenShift Container Platform, see Rotating or removing cloud provider credentials.
- For a detailed description of all available CCO credential modes and their supported platforms, see About the Cloud Credential Operator.
4.3.2. Manually create IAM
The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace.
Procedure
- Change to the directory that contains the installation program and create the install-config.yaml file:

  $ openshift-install create install-config --dir <installation_directory>

  where <installation_directory> is the directory in which the installation program creates files.

- Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual.

  Example install-config.yaml configuration file

  apiVersion: v1
  baseDomain: cluster1.example.com
  credentialsMode: Manual 1
  compute:
  - architecture: amd64
    hyperthreading: Enabled
  ...

  1 This line is added to set the credentialsMode parameter to Manual.

- To generate the manifests, run the following command from the directory that contains the installation program:

  $ openshift-install create manifests --dir <installation_directory>

- From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use:

  $ openshift-install version

  Example output

  release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64

- Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on:

  $ oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=aws

  This command creates a YAML file for each CredentialsRequest object.

  Sample CredentialsRequest object

  apiVersion: cloudcredential.openshift.io/v1
  kind: CredentialsRequest
  metadata:
    name: cloud-credential-operator-iam-ro
    namespace: openshift-cloud-credential-operator
  spec:
    secretRef:
      name: cloud-credential-operator-iam-ro-creds
      namespace: openshift-cloud-credential-operator
    providerSpec:
      apiVersion: cloudcredential.openshift.io/v1
      kind: AWSProviderSpec
      statementEntries:
      - effect: Allow
        action:
        - iam:GetUser
        - iam:GetUserPolicy
        - iam:ListAccessKeys
        resource: "*"

- Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. The format for the secret data varies for each cloud provider; see the sketch after this procedure for the general AWS shape.

- From the directory that contains the installation program, proceed with your cluster creation:

  $ openshift-install create cluster --dir <installation_directory>

  Important: Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. For details, see the "Upgrading clusters with manually maintained credentials" section of the installation content for your cloud provider.
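For reference, the following is a minimal sketch of a secret YAML for the sample CredentialsRequest object shown above, assuming AWS key-based credentials; the exact data keys can vary by cloud provider and release:

apiVersion: v1
kind: Secret
metadata:
  namespace: openshift-cloud-credential-operator
  name: cloud-credential-operator-iam-ro-creds
data:
  aws_access_key_id: <base64_encoded_access_key_id>
  aws_secret_access_key: <base64_encoded_secret_access_key>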
4.3.3. Upgrading clusters with manually maintained credentials
The Cloud Credential Operator (CCO) Upgradable status for a cluster with manually maintained credentials is False by default.
- For minor releases, for example, from 4.7 to 4.8, this status prevents you from upgrading until you have addressed any updated permissions and annotated the CloudCredential resource to indicate that the permissions are updated as needed for the next version. This annotation changes the Upgradable status to True.
- For z-stream releases, for example, from 4.8.9 to 4.8.10, no permissions are added or changed, so the upgrade is not blocked.
Before upgrading a cluster with manually maintained credentials, you must create any new credentials for the release image that you are upgrading to. Additionally, you must review the required permissions for existing credentials and accommodate any new permissions requirements in the new release for those components.
Procedure
- Extract and examine the CredentialsRequest custom resource for the new release. The "Manually creating IAM" section of the installation content for your cloud provider explains how to obtain and use the credentials required for your cloud.
- Update the manually maintained credentials on your cluster:
  - Create new secrets for any CredentialsRequest custom resources that are added by the new release image.
  - If the CredentialsRequest custom resources for any existing credentials that are stored in secrets have changed their permissions requirements, update the permissions as required.
- When all of the secrets are correct for the new release, indicate that the cluster is ready to upgrade:
  - Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role.
  - Edit the CloudCredential resource to add an upgradeable-to annotation within the metadata field:

    $ oc edit cloudcredential cluster

    Text to add

    metadata:
      annotations:
        cloudcredential.openshift.io/upgradeable-to: <version_number>

    Where <version_number> is the version you are upgrading to, in the format x.y.z. For example, 4.8.2 for OpenShift Container Platform 4.8.2.

    It may take several minutes after adding the annotation for the upgradeable status to change.
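If you prefer a non-interactive command over oc edit, an equivalent patch might look like the following, with 4.8.2 as a placeholder version:

$ oc patch cloudcredential cluster --type merge \
    -p '{"metadata":{"annotations":{"cloudcredential.openshift.io/upgradeable-to":"4.8.2"}}}'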
- Verify that the CCO is upgradeable:
- In the Administrator perspective of the web console, navigate to Administration → Cluster Settings.
- To view the CCO status details, click cloud-credential in the Cluster Operators list.
- If the Upgradeable status in the Conditions section is False, verify that the upgradeable-to annotation is free of typographical errors.
When the Upgradeable status in the Conditions section is True, you can begin the OpenShift Container Platform upgrade.
4.3.4. Mint mode
Mint mode is the default Cloud Credential Operator (CCO) credentials mode for OpenShift Container Platform on platforms that support it. In this mode, the CCO uses the provided administrator-level cloud credential to run the cluster. Mint mode is supported for AWS and GCP.
In mint mode, the admin credential is stored in the kube-system namespace and then used by the CCO to process the CredentialsRequest objects in the cluster and create users for each with specific permissions.
The benefits of mint mode include:
- Each cluster component has only the permissions it requires
- Automatic, on-going reconciliation for cloud credentials, including additional credentials or permissions that might be required for upgrades
One drawback is that mint mode requires admin credential storage in a cluster kube-system secret.
4.3.5. Mint mode with removal or rotation of the administrator-level credential
Currently, this mode is only supported on AWS and GCP.
In this mode, a user installs OpenShift Container Platform with an administrator-level credential just like the normal mint mode. However, this process removes the administrator-level credential secret from the cluster post-installation.
The administrator can have the Cloud Credential Operator make its own request for a read-only credential that allows it to verify whether all CredentialsRequest objects have their required permissions. Thus, the administrator-level credential is not required unless something needs to be changed. After the associated credential is removed, it can be deleted or deactivated on the underlying cloud, if desired.
Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked.
The administrator-level credential is not stored in the cluster permanently.
Following these steps still requires the administrator-level credential in the cluster for brief periods of time. It also requires manually reinstating the secret with administrator-level credentials for each upgrade.
4.3.6. Next steps
Install an OpenShift Container Platform cluster:
- Installing a cluster quickly on AWS with default options on installer-provisioned infrastructure
- Install a cluster with cloud customizations on installer-provisioned infrastructure
- Install a cluster with network customizations on installer-provisioned infrastructure
- Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates
4.4. Installing a cluster quickly on AWS
In OpenShift Container Platform version 4.8, you can install a cluster on Amazon Web Services (AWS) that uses the default configuration options.
4.4.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured an AWS account to host the cluster.

  Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.
4.4.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
4.4.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
- If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

  $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

  1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
  Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

- View the public SSH key:
  $ cat <path>/<file_name>.pub

  For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

  $ cat ~/.ssh/id_ed25519.pub

- Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

  Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

- If the ssh-agent process is not already running for your local user, start it as a background task:

  $ eval "$(ssh-agent -s)"

  Example output

  Agent pid 31874

  Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
- Add your SSH private key to the ssh-agent:

  $ ssh-add <path>/<file_name> 1

  1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519

  Example output

  Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
4.4.4. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
- Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.

  Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

  Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
- Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

  $ tar xvf openshift-install-linux.tar.gz

- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
4.4.5. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \
    --log-level=info

Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Provide values at the prompts:
- Optional: Select an SSH key to use to access your cluster machines.

  Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

- Select aws as the platform to target.
- If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

  Note: The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.

- Select the AWS region to deploy the cluster to.
- Select the base domain for the Route 53 service that you configured for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
- When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

  Example output

  ...
  INFO Install complete!
  INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
  INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
  INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
  INFO Time elapsed: 36m22s

  Note: The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

  Important:
  - The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
  - It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
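If you do need to approve pending node-bootstrapper CSRs manually, the typical commands are along these lines; confirm the exact steps in the recovery documentation:

$ oc get csr
$ oc adm certificate approve <csr_name>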
Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
- Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

  Note: The elevated permissions provided by the AdministratorAccess policy are required only during installation.
4.4.6. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
- Unpack the archive:

  $ tar xvzf <file>

- Place the oc binary in a directory that is on your PATH.

  To check your PATH, execute the following command:

  $ echo $PATH
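For example, on many Linux systems /usr/local/bin is already on the PATH, so the following is a common choice, assuming you have sufficient privileges:

$ sudo mv oc /usr/local/bin/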
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
- Move the oc binary to a directory that is on your PATH.

  To check your PATH, open the command prompt and execute the following command:

  C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
- Move the oc binary to a directory on your PATH.

  To check your PATH, open a terminal and execute the following command:

  $ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
4.4.7. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
- Export the kubeadmin credentials:

  $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

  1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
- Verify you can run oc commands successfully using the exported configuration:

  $ oc whoami

  Example output
system:admin
4.4.8. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
- Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

  $ cat <installation_directory>/auth/kubeadmin-password

  Note: Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

- List the OpenShift Container Platform web console route:

  $ oc get routes -n openshift-console | grep 'console-openshift'

  Note: Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

  Example output
console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None-
Navigate to the route detailed in the output of the preceding command in a web browser and log in as the
kubeadminuser.
4.4.9. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
4.4.10. Next steps
- Validating an installation.
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
- If necessary, you can remove cloud provider credentials.
4.5. Installing a cluster on AWS with customizations
In OpenShift Container Platform version 4.8, you can install a customized cluster on infrastructure that the installation program provisions on Amazon Web Services (AWS). To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.
The scope of the OpenShift Container Platform installation configurations is intentionally narrow. It is designed for simplicity and to help ensure success. You can complete many more OpenShift Container Platform configuration tasks after an installation completes.
4.5.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
You configured an AWS account to host the cluster.
Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
4.5.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
4.5.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>
Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
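A sketch of an equivalent key-generation command for such clusters, following the note above; the key size and file name are illustrative:
$ ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa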
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following command to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name>
Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
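The public key ultimately appears in the sshKey field of the install-config.yaml file, as in this minimal sketch; the key value is truncated and illustrative:
sshKey: ssh-ed25519 AAAA...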
4.5.4. Obtaining an AWS Marketplace image
If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy worker nodes.
Deploying an OpenShift Container Platform cluster using an AWS Marketplace image is not supported in secret regions.
Prerequisites
- You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster.
Procedure
- Complete the OpenShift Container Platform subscription from the AWS Marketplace.
- Record the AMI ID for your specific region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster.
Sample install-config.yaml file with AWS Marketplace worker nodes
apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
name: worker
platform:
aws:
amiID: ami-06c4d345f7c207239
type: m5.4xlarge
replicas: 3
metadata:
name: test-cluster
platform:
aws:
region: us-east-2
sshKey: ssh-ed25519 AAAA...
pullSecret: '{"auths": ...}'
4.5.5. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.
Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf openshift-install-linux.tar.gz
- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
4.5.6. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory>
For <installation_directory>, specify the directory name to store the files that the installation program creates.
Important: Specify an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- Select AWS as the platform to target.
- If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
- Select the AWS region to deploy the cluster to.
- Select the base domain for the Route 53 service that you configured for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
- Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the install-config.yaml file so that you can use it to install multiple clusters; a sketch of this step follows the note below.
Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
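A minimal sketch of the backup step, assuming the installation directory used above; the backup file name is illustrative:
$ cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.bak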
4.5.6.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.
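For example, in this hypothetical snippet the misspelled field name controlplane (lowercase p) would be silently ignored, so the cluster would deploy with the default control plane settings and no error:
controlplane:   # ignored: the correct field name is controlPlane
  replicas: 3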
4.5.6.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | The contents of the pull secret, for example: '{"auths": ...}' |
4.5.6.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the networking object after installation. |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IP address block. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. | An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
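Taken together, a networking stanza that uses the default values described in this table might look like the following sketch in install-config.yaml:
networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16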
4.5.6.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (disabled). Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys, for example: sshKey: <key1> |
4.5.6.1.4. Optional AWS configuration parameters
Optional AWS configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| compute.platform.aws.amiID | The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. |
| compute.platform.aws.iamRole | A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| compute.platform.aws.rootVolume.iops | The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. | Integer, for example 4000. |
| compute.platform.aws.rootVolume.size | The size in GiB of the root volume. | Integer, for example 500. |
| compute.platform.aws.rootVolume.type | The type of the root volume. | Valid AWS EBS volume type, such as io1. |
| compute.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of worker nodes with a specific KMS key. | Valid key ID or the key ARN. |
| compute.platform.aws.type | The EC2 instance type for the compute machines. | Valid AWS instance type, such as m5.xlarge. |
| compute.platform.aws.zones | The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence. |
| compute.aws.region | The AWS region that the installation program creates compute resources in. | Any valid AWS region, such as us-east-1. |
| controlPlane.platform.aws.amiID | The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. |
| controlPlane.platform.aws.iamRole | A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| controlPlane.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of control plane nodes with a specific KMS key. | Valid key ID and the key ARN. |
| controlPlane.platform.aws.type | The EC2 instance type for the control plane machines. | Valid AWS instance type, such as m5.xlarge. |
| controlPlane.platform.aws.zones | The availability zones where the installation program creates machines for the control plane machine pool. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence. |
| controlPlane.aws.region | The AWS region that the installation program creates control plane resources in. | Valid AWS region, such as us-east-1. |
| platform.aws.amiID | The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. |
| platform.aws.hostedZone | An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. | String, for example Z3URY6TWR91KVV. |
| platform.aws.serviceEndpoints.name | The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. | Valid AWS service endpoint name. |
| platform.aws.serviceEndpoints.url | The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate. | Valid AWS service endpoint URL. |
| platform.aws.userTags | A map of keys and values that the installation program adds as tags to all resources that it creates. | Any valid YAML map, such as key value pairs in the <key>: <value> format. |
| platform.aws.subnets | If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. | Valid subnet IDs. |
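As a sketch, several of these compute machine pool parameters combined in install-config.yaml; the values mirror the sample file later in this section:
compute:
- hyperthreading: Enabled
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3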
4.5.6.2. Supported AWS machine types
The following Amazon Web Services (AWS) instance types are supported with OpenShift Container Platform.
Example 4.16. Instance types for machines
| Instance type | Bootstrap | Control plane | Compute |
|---|---|---|---|
|
| x | ||
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | ||
|
| x | ||
|
| x | ||
|
| x | ||
|
| x |
4.5.6.3. Sample customized install-config.yaml file for AWS
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com
credentialsMode: Mint
controlPlane:
hyperthreading: Enabled
name: master
platform:
aws:
zones:
- us-west-2a
- us-west-2b
rootVolume:
iops: 4000
size: 500
type: io1
type: m5.xlarge
replicas: 3
compute:
- hyperthreading: Enabled
name: worker
platform:
aws:
rootVolume:
iops: 2000
size: 500
type: io1
type: c5.4xlarge
zones:
- us-west-2c
replicas: 3
metadata:
name: test-cluster
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OpenShiftSDN
serviceNetwork:
- 172.30.0.0/16
platform:
aws:
region: us-west-2
userTags:
adminContact: jdoe
costCenter: 7536
amiID: ami-96c6f8f7
serviceEndpoints:
- name: ec2
url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
fips: false
sshKey: ssh-ed25519 AAAA...
pullSecret: '{"auths": ...}'
- 1 10 11 16: Required. The installation program prompts you for this value.
- 2: Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.
- 3 7: If you do not provide these parameters and values, the installation program provides the default value.
- 4: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 5 8: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.
- 6 9: To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.
- 12: The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.
- 13: The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.
- 14: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- 15: You can optionally provide the sshKey value that you use to access the machines in your cluster.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
4.5.6.4. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
- If your cluster is on AWS, you added the ec2.<region>.amazonaws.com, elasticloadbalancing.<region>.amazonaws.com, and s3.<region>.amazonaws.com endpoints to your VPC endpoint. These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works on the container level, not the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not sufficient.
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port>
  httpsProxy: https://<username>:<pswd>@<ip>:<port>
  noProxy: example.com
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...
- httpProxy: A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- httpsProxy: A proxy URL to use for creating HTTPS connections outside the cluster.
- noProxy: A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- additionalTrustBundle: If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Note: The installation program does not support the proxy readinessEndpoints field.
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
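After installation, you can inspect the resulting cluster-wide proxy configuration; this assumes the oc client is installed and you are logged in to the cluster:
$ oc get proxy/cluster -o yaml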
4.5.7. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \
    --log-level=info
For <installation_directory>, specify the location of your customized ./install-config.yaml file. To view different installation details, specify warn, debug, or error instead of info.
Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
Note: The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.
Note: The elevated permissions provided by the AdministratorAccess policy are required only during installation.
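If you lose the terminal output, you can print the completion details again by using the installer's wait-for subcommand with the same installation directory:
$ ./openshift-install wait-for install-complete --dir <installation_directory>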
4.5.8. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
Unpack the archive:
$ tar xvzf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
4.5.9. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify that you can run oc commands successfully by using the exported configuration:
$ oc whoami
Example output
system:admin
4.5.10. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:
$ cat <installation_directory>/auth/kubeadmin-password
Note: Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.
List the OpenShift Container Platform web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'
Note: Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.
Example output
console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
- Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.
4.5.11. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
4.5.12. Next steps
- Validating an installation.
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
- If necessary, you can remove cloud provider credentials.
4.6. Installing a cluster on AWS with network customizations
In OpenShift Container Platform version 4.8, you can install a cluster on Amazon Web Services (AWS) with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.
You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.
4.6.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
You configured an AWS account to host the cluster.
Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
4.6.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
4.6.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>
Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following command to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name>
Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
4.6.4. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.
Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf openshift-install-linux.tar.gz
- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
4.6.5. Network configuration phases
There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration.
- Phase 1
You can customize the following network-related fields in the install-config.yaml file before you create the manifest files:
- networking.networkType
- networking.clusterNetwork
- networking.serviceNetwork
- networking.machineNetwork
For more information on these fields, refer to Installation configuration parameters.
Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.
- Phase 2
After creating the manifest files by running openshift-install create manifests, you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration, as in the sketch that follows.
You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2.
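As a minimal sketch of a phase 2 customization, assuming the default OpenShift SDN provider: after you run openshift-install create manifests, you could add a file such as <installation_directory>/manifests/cluster-network-03-config.yml; the mtu and vxlanPort values here are illustrative:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mtu: 1450
      vxlanPort: 4789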
4.6.6. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory>
For <installation_directory>, specify the directory name to store the files that the installation program creates.
Important: Specify an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- Select AWS as the platform to target.
- If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
- Select the AWS region to deploy the cluster to.
- Select the base domain for the Route 53 service that you configured for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
- Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
4.6.6.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.
4.6.6.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | The contents of the pull secret, for example: '{"auths": ...}' |
4.6.6.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the networking object after installation. |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IP address block. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. | An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
4.6.6.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| `additionalTrustBundle` | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| `compute` | The configuration for the machines that comprise the compute nodes. | Array of `MachinePool` objects. |
| `compute.architecture` | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are `amd64` (the default). | String |
| `compute.hyperthreading` | Whether to enable or disable simultaneous multithreading, or `hyperthreading`, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | `Enabled` or `Disabled` |
| `compute.name` | Required if you use `compute`. The name of the machine pool. | `worker` |
| `compute.platform` | Required if you use `compute`. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the `controlPlane.platform` parameter value. | `aws`, `azure`, `gcp`, `openstack`, `ovirt`, `vsphere`, or `{}` |
| `compute.replicas` | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to `2`. The default value is `3`. |
| `controlPlane` | The configuration for the machines that comprise the control plane. | Array of `MachinePool` objects. |
| `controlPlane.architecture` | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are `amd64` (the default). | String |
| `controlPlane.hyperthreading` | Whether to enable or disable simultaneous multithreading, or `hyperthreading`, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | `Enabled` or `Disabled` |
| `controlPlane.name` | Required if you use `controlPlane`. The name of the machine pool. | `master` |
| `controlPlane.platform` | Required if you use `controlPlane`. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the `compute.platform` parameter value. | `aws`, `azure`, `gcp`, `openstack`, `ovirt`, `vsphere`, or `{}` |
| `controlPlane.replicas` | The number of control plane machines to provision. | The only supported value is `3`, which is the default value. |
| `credentialsMode` | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. | `Mint`, `Passthrough`, `Manual`, or an empty string (`""`). |
| `fips` | Enable or disable FIPS mode. The default is `false` (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the `x86_64` architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | `false` or `true` |
| `imageContentSources` | Sources and repositories for the release-image content. | Array of objects. Includes a `source` and, optionally, `mirrors`, as described in the following rows of this table. |
| `imageContentSources.source` | Required if you use `imageContentSources`. Specify the repository that users refer to, for example, in image pull specifications. | String |
| `imageContentSources.mirrors` | Specify one or more repositories that may also contain the same images. | Array of strings |
| `publish` | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | `Internal` or `External`. The default value is `External`. |
| `sshKey` | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses. | One or more keys. For example: `sshKey: <key1>` |
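As an illustration of how several of these optional parameters combine, the following install-config.yaml excerpt mirrors the sample file later in this section. It is a sketch for orientation only; the pool names and replica counts shown are the documented defaults:

compute:
- name: worker
  architecture: amd64
  hyperthreading: Enabled
  replicas: 3
controlPlane:
  name: master
  architecture: amd64
  hyperthreading: Enabled
  replicas: 3
fips: false
publish: External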
4.6.6.1.4. Optional AWS configuration parameters
Optional AWS configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| `compute.platform.aws.amiID` | The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. |
| `compute.platform.aws.iamRole` | A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| `compute.platform.aws.rootVolume.iops` | The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. | Integer, for example `4000`. |
| `compute.platform.aws.rootVolume.size` | The size in GiB of the root volume. | Integer, for example `500`. |
| `compute.platform.aws.rootVolume.type` | The type of the root volume. | Valid AWS EBS volume type, such as `io1`. |
| `compute.platform.aws.rootVolume.kmsKeyARN` | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of worker nodes with a specific KMS key. | Valid key ID or the key ARN. |
| `compute.platform.aws.type` | The EC2 instance type for the compute machines. | Valid AWS instance type, such as `c5.4xlarge`. |
| `compute.platform.aws.zones` | The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. | A list of valid AWS availability zones, such as `us-east-1c`, in a YAML sequence. |
| `compute.aws.region` | The AWS region that the installation program creates compute resources in. | Any valid AWS region, such as `us-east-1`. |
| `controlPlane.platform.aws.amiID` | The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. |
| `controlPlane.platform.aws.iamRole` | A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| `controlPlane.platform.aws.rootVolume.kmsKeyARN` | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of control plane nodes with a specific KMS key. | Valid key ID and the key ARN. |
| `controlPlane.platform.aws.type` | The EC2 instance type for the control plane machines. | Valid AWS instance type, such as `m5.xlarge`. |
| `controlPlane.platform.aws.zones` | The availability zones where the installation program creates machines for the control plane machine pool. | A list of valid AWS availability zones, such as `us-east-1c`, in a YAML sequence. |
| `controlPlane.aws.region` | The AWS region that the installation program creates control plane resources in. | Valid AWS region, such as `us-east-1`. |
| `platform.aws.amiID` | The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. |
| `platform.aws.hostedZone` | An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. | String, for example `Z3URY6TWQ91KVV`. |
| `platform.aws.serviceEndpoints.name` | The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. | Valid AWS service endpoint name. |
| `platform.aws.serviceEndpoints.url` | The AWS service endpoint URL. The URL must use the `https` protocol and the host must trust the certificate. | Valid AWS service endpoint URL. |
| `platform.aws.userTags` | A map of keys and values that the installation program adds as tags to all resources that it creates. | Any valid YAML map, such as key value pairs in the `<key>: <value>` format. |
| `platform.aws.subnets` | If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same `machineNetwork[].cidr` ranges that you specify. | Valid subnet IDs. |
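For orientation, the following excerpt sketches how the cluster-wide AWS parameters might look together. The values are placeholders drawn from the sample file in the next section, not recommendations:

platform:
  aws:
    region: us-west-2
    userTags:
      adminContact: jdoe
    subnets:
    - subnet-1
    - subnet-2
    hostedZone: Z3URY6TWQ91KVV
    serviceEndpoints:
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com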
4.6.6.2. Supported AWS machine types
The following Amazon Web Services (AWS) instance types are supported with OpenShift Container Platform.
Example 4.17. Instance types for machines
| Instance type | Bootstrap | Control plane | Compute |
|---|---|---|---|
|  | x |  |  |
|  | x |  |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x |  |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x |  |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x |  |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x |  |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x |  |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x |  |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x |  |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x | x |  |
|  | x |  |  |
|  | x |  |  |
|  | x |  |  |
|  | x |  |  |
|  | x |  |  |
|  | x |  |  |
4.6.6.3. Sample customized install-config.yaml file for AWS
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com
credentialsMode: Mint
controlPlane:
hyperthreading: Enabled
name: master
platform:
aws:
zones:
- us-west-2a
- us-west-2b
rootVolume:
iops: 4000
size: 500
type: io1
type: m5.xlarge
replicas: 3
compute:
- hyperthreading: Enabled
name: worker
platform:
aws:
rootVolume:
iops: 2000
size: 500
type: io1
type: c5.4xlarge
zones:
- us-west-2c
replicas: 3
metadata:
name: test-cluster
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OpenShiftSDN
serviceNetwork:
- 172.30.0.0/16
platform:
aws:
region: us-west-2
userTags:
adminContact: jdoe
costCenter: 7536
amiID: ami-96c6f8f7
serviceEndpoints:
- name: ec2
url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
fips: false
sshKey: ssh-ed25519 AAAA...
pullSecret: '{"auths": ...}'
- 1 10 12 17: Required. The installation program prompts you for this value.
- 2: Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.
- 3 7 11: If you do not provide these parameters and values, the installation program provides the default value.
- 4: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 5 8: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.
- 6 9: To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.
- 13: The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.
- 14: The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.
- 15: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- 16: You can optionally provide the sshKey value that you use to access the machines in your cluster. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
4.6.6.4. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

  Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

  For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
- If your cluster is on AWS, you added the ec2.<region>.amazonaws.com, elasticloadbalancing.<region>.amazonaws.com, and s3.<region>.amazonaws.com endpoints to your VPC endpoint. These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works on the container level, not the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not sufficient.
Procedure
1. Edit your install-config.yaml file and add the proxy settings. For example:

   apiVersion: v1
   baseDomain: my.domain.com
   proxy:
     httpProxy: http://<username>:<pswd>@<ip>:<port> 1
     httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
     noProxy: example.com 3
   additionalTrustBundle: | 4
     -----BEGIN CERTIFICATE-----
     <MY_TRUSTED_CA_CERT>
     -----END CERTIFICATE-----
   ...

   1. A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
   2. A proxy URL to use for creating HTTPS connections outside the cluster.
   3. A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
   4. If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.

   Note: The installation program does not support the proxy readinessEndpoints field.
2. Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster, which uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it has a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
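After the cluster is running, one way to inspect the resulting proxy configuration is to query the cluster Proxy object directly. This is a post-installation check, not part of the procedure above:

$ oc get proxy/cluster -o yaml

The spec stanza of the output should reflect the httpProxy, httpsProxy, and noProxy values from your install-config.yaml file.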
4.6.7. Cluster Network Operator configuration
The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster. The CR specifies the fields for the Network API in the operator.openshift.io API group.
The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed:
- clusterNetwork: IP address pools from which pod IP addresses are allocated.
- serviceNetwork: IP address pool for services.
- defaultNetwork.type: Cluster network provider, such as OpenShift SDN or OVN-Kubernetes.
You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.
4.6.7.1. Cluster Network Operator configuration object
The fields for the Cluster Network Operator (CNO) are described in the following table:
| Field | Type | Description |
|---|---|---|
| `metadata.name` | `string` | The name of the CNO object. This name is always `cluster`. |
| `spec.clusterNetwork` | `array` | A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: `clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23`. You can customize this field only in the `install-config.yaml` file before you create the manifests. The value is read-only in the manifest file. |
| `spec.serviceNetwork` | `array` | A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: `serviceNetwork: - 172.30.0.0/14`. You can customize this field only in the `install-config.yaml` file before you create the manifests. The value is read-only in the manifest file. |
| `spec.defaultNetwork` | `object` | Configures the Container Network Interface (CNI) cluster network provider for the cluster network. |
| `spec.kubeProxyConfig` | `object` | The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. |
defaultNetwork object configuration
The values for the defaultNetwork object are defined in the following table:
| Field | Type | Description |
|---|---|---|
| `type` | `string` | Either `OpenShiftSDN` or `OVNKubernetes`. The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note: OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. |
| `openshiftSDNConfig` | `object` | This object is only valid for the OpenShift SDN cluster network provider. |
| `ovnKubernetesConfig` | `object` | This object is only valid for the OVN-Kubernetes cluster network provider. |
Configuration for the OpenShift SDN CNI cluster network provider
The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider.
| Field | Type | Description |
|---|---|---|
| `mode` | `string` | Configures the network isolation mode for OpenShift SDN. The default value is `NetworkPolicy`. The values `Multitenant` and `Subnet` are available for backwards compatibility, but are not recommended. This value cannot be changed after cluster installation. |
| `mtu` | `integer` | The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. This value cannot be changed after cluster installation. |
| `vxlanPort` | `integer` | The port to use for all VXLAN packets. The default value is `4789`. This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port `9000` and port `9999`. |
Example OpenShift SDN configuration
defaultNetwork:
type: OpenShiftSDN
openshiftSDNConfig:
mode: NetworkPolicy
mtu: 1450
vxlanPort: 4789
Configuration for the OVN-Kubernetes CNI cluster network provider
The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider.
| Field | Type | Description |
|---|---|---|
| `mtu` | `integer` | The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. This value cannot be changed after cluster installation. |
| `genevePort` | `integer` | The port to use for all Geneve packets. The default value is `6081`. This value cannot be changed after cluster installation. |
| `ipsecConfig` | `object` | Specify an empty object to enable IPsec encryption. This value cannot be changed after cluster installation. |
| `policyAuditConfig` | `object` | Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. |
| Field | Type | Description |
|---|---|---|
| `rateLimit` | integer | The maximum number of messages to generate every second per node. The default value is `20` messages per second. |
| `maxFileSize` | integer | The maximum size for the audit log in bytes. The default value is `50000000`, or 50 MB. |
| `destination` | string | One of the following additional audit log targets: `libc` (the libc syslog() function of the journald process on the host), `udp:<host>:<port>` (a syslog server), `unix:<file>` (a Unix Domain Socket file), or `null` (do not send the audit logs to an additional target). |
| `syslogFacility` | string | The syslog facility, such as `kern`, as defined by RFC5424. The default value is `local0`. |
Example OVN-Kubernetes configuration
defaultNetwork:
type: OVNKubernetes
ovnKubernetesConfig:
mtu: 1400
genevePort: 6081
ipsecConfig: {}
kubeProxyConfig object configuration
The values for the kubeProxyConfig object are defined in the following table:
| Field | Type | Description |
|---|---|---|
| `iptablesSyncPeriod` | `string` | The refresh period for `iptables` rules. The default value is `30s`. Valid suffixes include `s`, `m`, and `h` and are described in the Go time package documentation. Note: Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the `iptablesSyncPeriod` parameter is no longer necessary. |
| `proxyArguments.iptables-min-sync-period` | `array` | The minimum duration before refreshing `iptables` rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include `s`, `m`, and `h` and are described in the Go time package. The default value is `0s`. |
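Putting the two fields together, a kubeProxyConfig stanza that uses the documented default values might look like the following sketch; both values shown are the defaults and rarely need to be changed:

kubeProxyConfig:
  iptablesSyncPeriod: 30s
  proxyArguments:
    iptables-min-sync-period:
    - 0s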
4.6.8. Specifying advanced network configuration
You can use advanced network configuration for your cluster network provider to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster.
Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported.
Prerequisites
- You have created the install-config.yaml file and completed any modifications to it.
Procedure
1. Change to the directory that contains the installation program and create the manifests:

   $ ./openshift-install create manifests --dir <installation_directory>

   where <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster.
2. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

   apiVersion: operator.openshift.io/v1
   kind: Network
   metadata:
     name: cluster
   spec:

3. Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples:

   Specify a different VXLAN port for the OpenShift SDN network provider

   apiVersion: operator.openshift.io/v1
   kind: Network
   metadata:
     name: cluster
   spec:
     defaultNetwork:
       openshiftSDNConfig:
         vxlanPort: 4800

   Enable IPsec for the OVN-Kubernetes network provider

   apiVersion: operator.openshift.io/v1
   kind: Network
   metadata:
     name: cluster
   spec:
     defaultNetwork:
       ovnKubernetesConfig:
         ipsecConfig: {}

4. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files.
For more information on using a Network Load Balancer (NLB) on AWS, see Configuring Ingress cluster traffic on AWS using a Network Load Balancer.
4.6.9. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster
You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on a new cluster.
Prerequisites
- Create the install-config.yaml file and complete any modifications to it.
Procedure
Create an Ingress Controller backed by an AWS NLB on a new cluster.

1. Change to the directory that contains the installation program and create the manifests:

   $ ./openshift-install create manifests --dir <installation_directory>

   For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.
2. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory:

   $ touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml

   For <installation_directory>, specify the directory name that contains the manifests/ directory for your cluster.
3. After creating the file, several network configuration files are in the manifests/ directory, as shown:

   $ ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml

   Example output

   cluster-ingress-default-ingresscontroller.yaml
4. Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want:

   apiVersion: operator.openshift.io/v1
   kind: IngressController
   metadata:
     creationTimestamp: null
     name: default
     namespace: openshift-ingress-operator
   spec:
     endpointPublishingStrategy:
       loadBalancer:
         scope: External
         providerParameters:
           type: AWS
           aws:
             type: NLB
       type: LoadBalancerService

5. Save the cluster-ingress-default-ingresscontroller.yaml file and quit the text editor.
6. Optional: Back up the manifests/cluster-ingress-default-ingresscontroller.yaml file. The installation program deletes the manifests/ directory when creating the cluster.
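After the cluster is installed, you can optionally confirm that the default Ingress Controller published an AWS load balancer. The following check assumes the default service name router-default in the openshift-ingress namespace:

$ oc -n openshift-ingress get service router-default

The EXTERNAL-IP column should show the DNS name of the provisioned load balancer.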
4.6.10. Configuring hybrid networking with OVN-Kubernetes
You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster.
You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process.
Prerequisites
- You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information.
Procedure
1. Change to the directory that contains the installation program and create the manifests:

   $ ./openshift-install create manifests --dir <installation_directory>

   where <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster.
2. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

   $ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
   apiVersion: operator.openshift.io/v1
   kind: Network
   metadata:
     name: cluster
   spec:
   EOF

   where <installation_directory> specifies the directory name that contains the manifests/ directory for your cluster.
3. Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, such as in the following example:

   Specify a hybrid networking configuration

   apiVersion: operator.openshift.io/v1
   kind: Network
   metadata:
     name: cluster
   spec:
     defaultNetwork:
       ovnKubernetesConfig:
         hybridOverlayConfig:
           hybridClusterNetwork: 1
           - cidr: 10.132.0.0/14
             hostPrefix: 23
           hybridOverlayVXLANPort: 9898 2

   1. Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR cannot overlap with the clusterNetwork CIDR.
   2. Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken.

   Note: Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port.
4. Save the cluster-network-03-config.yml file and quit the text editor.
5. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.
For more information on using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads.
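After installation, you can optionally verify that the hybrid networking configuration was applied by reading back the Network operator configuration. This is a sketch of one possible check:

$ oc get network.operator.openshift.io cluster -o yaml

The output should include the hybridOverlayConfig stanza under defaultNetwork.ovnKubernetesConfig.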
4.6.11. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
1. Change to the directory that contains the installation program and initialize the cluster deployment:

   $ ./openshift-install create cluster --dir <installation_directory> \ 1
       --log-level=info 2

   1. For <installation_directory>, specify the location of your customized ./install-config.yaml file.
   2. To view different installation details, specify warn, debug, or error instead of info.

   Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

   When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

   Example output

   ...
   INFO Install complete!
   INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
   INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
   INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
   INFO Time elapsed: 36m22s

   Note: The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

   Important:
   - The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
   - It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

   Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
2. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

   Note: The elevated permissions provided by the AdministratorAccess policy are required only during installation.
4.6.12. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
Unpack the archive:

$ tar xvzf <file>

Place the oc binary in a directory that is on your PATH.

To check your PATH, execute the following command:

$ echo $PATH
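For example, on many Linux systems /usr/local/bin is already on the PATH, so one possible placement, assuming you have the required privileges, is:

$ sudo mv oc /usr/local/bin/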
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.

To check your PATH, open the command prompt and execute the following command:

C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.

To check your PATH, open a terminal and execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
4.6.13. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
1. Export the kubeadmin credentials:

   $ export KUBECONFIG=<installation_directory>/auth/kubeconfig

   For <installation_directory>, specify the path to the directory that you stored the installation files in.
2. Verify you can run oc commands successfully using the exported configuration:

   $ oc whoami

   Example output
system:admin
4.6.14. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

   $ cat <installation_directory>/auth/kubeadmin-password

   Note: Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.
2. List the OpenShift Container Platform web console route:

   $ oc get routes -n openshift-console | grep 'console-openshift'

   Note: Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

   Example output

   console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.
4.6.15. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
4.6.16. Next steps
- Validating an installation.
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
- If necessary, you can remove cloud provider credentials.
4.7. Installing a cluster on AWS in a restricted network
In OpenShift Container Platform version 4.8, you can install a cluster on Amazon Web Services (AWS) in a restricted network by creating an internal mirror of the installation release content on an existing Amazon Virtual Private Cloud (VPC).
4.7.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform.

  Important: Because the installation media is on the mirror host, you can use that computer to complete all installation steps.
You have an existing VPC in AWS. When installing to a restricted network using installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements:
- Contains the mirror registry
- Has firewall rules or a peering connection to access the mirror registry hosted elsewhere
You configured an AWS account to host the cluster.
ImportantIf you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation.
If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to.
NoteIf you are configuring a proxy, be sure to also review this site list.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
4.7.2. About installations in restricted networks
In OpenShift Container Platform 4.8, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.
If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere.
To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions.
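The mirroring itself is covered in the disconnected installation documentation, but for orientation, the mirroring command has the following general shape. The registry host, repository name, and release version are placeholders that you must replace with your own values:

$ oc adm release mirror -a <pull_secret>.json \
    --from=quay.io/openshift-release-dev/ocp-release:<version>-x86_64 \
    --to=<mirror_host_name>:5000/<repo_name>/release \
    --to-release-image=<mirror_host_name>:5000/<repo_name>/release:<version>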
4.7.2.1. Additional limits
Clusters in restricted networks have the following additional limitations and restrictions:
- The ClusterVersion status includes an Unable to retrieve available updates error.
- By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.
4.7.3. About using a custom VPC
In OpenShift Container Platform 4.8, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.
Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster into yourself.
4.7.3.1. Requirements for using your VPC
The installation program no longer creates the following components:
- Internet gateways
- NAT gateways
- Subnets
- Route tables
- VPCs
- VPC DHCP options
- VPC endpoints
The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.
If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC.
The installation program cannot:
- Subdivide network ranges for the cluster to use.
- Set route tables for the subnets.
- Set VPC options like DHCP.
You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC.
Your VPC must meet the following characteristics:
- The VPC must not use the kubernetes.io/cluster/.*: owned tag.

  The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify.
- You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.

  If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone field in the install-config.yaml file.
- If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses. Each availability zone can contain no more than one public and one private subnet.
If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:
Government regions
-
ec2.<region>.amazonaws.com -
elasticloadbalancing.<region>.amazonaws.com -
s3.<region>.amazonaws.com
Top secret region
-
ec2.<region>.c2s.ic.gov -
elasticloadbalancing.<region>.c2s.ic.gov -
s3.<region>.c2s.ic.gov
Required VPC components
You must provide a suitable VPC and subnets that allow communication to your machines.
| Component | AWS type | Description |
|---|---|---|
| VPC | `AWS::EC2::VPC`, `AWS::EC2::VPCEndpoint` | You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. |
| Public subnets | `AWS::EC2::Subnet`, `AWS::EC2::SubnetNetworkAclAssociation` | Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. |
| Internet gateway | `AWS::EC2::InternetGateway`, `AWS::EC2::VPCGatewayAttachment`, `AWS::EC2::RouteTable`, `AWS::EC2::Route`, `AWS::EC2::SubnetRouteTableAssociation`, `AWS::EC2::NatGateway`, `AWS::EC2::EIP` | You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. |
| Network access control | `AWS::EC2::NetworkAcl`, `AWS::EC2::NetworkAclEntry` | You must allow the VPC to access the following ports: |

| Port | Reason |
|---|---|
| `80` | Inbound HTTP traffic |
| `443` | Inbound HTTPS traffic |
| `22` | Inbound SSH traffic |
| `1024` - `65535` | Inbound ephemeral traffic |
| `0` - `65535` | Outbound ephemeral traffic |

| Component | AWS type | Description |
|---|---|---|
| Private subnets | `AWS::EC2::Subnet`, `AWS::EC2::RouteTable`, `AWS::EC2::SubnetRouteTableAssociation` | Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. |
4.7.3.2. VPC validation
To ensure that the subnets that you provide are suitable, the installation program confirms the following data:
- All the subnets that you specify exist.
- You provide private subnets.
- The subnet CIDRs belong to the machine CIDR that you specified.
- You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.
- You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.
If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.
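If you want to confirm the tag state of your subnets yourself, before or after installation, a check along the following lines can be used; the subnet ID is a placeholder:

$ aws ec2 describe-subnets --subnet-ids subnet-1 --query 'Subnets[].Tags' --output table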
4.7.3.3. Division of permissions
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules.
The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.
4.7.3.4. Isolation between clusters
If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:
- You can install multiple OpenShift Container Platform clusters in the same VPC.
- ICMP ingress is allowed from the entire network.
- TCP 22 ingress (SSH) is allowed to the entire network.
- Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.
- Control plane TCP 22623 ingress (MCS) is allowed to the entire network.
4.7.4. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to obtain the images that are necessary to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
4.7.5. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
1. If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

   $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

   1. Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

   Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
2. View the public SSH key:

   $ cat <path>/<file_name>.pub

   For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

   $ cat ~/.ssh/id_ed25519.pub
3. Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

   Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

   If the ssh-agent process is not already running for your local user, start it as a background task:

   $ eval "$(ssh-agent -s)"

   Example output

   Agent pid 31874

   Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
4. Add your SSH private key to the ssh-agent:

   $ ssh-add <path>/<file_name> 1

   1. Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519

   Example output

   Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
4.7.6. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host.
- Have the imageContentSources values that were generated during mirror registry creation.
Procedure
1. Create the install-config.yaml file.
   1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 1

      1. For <installation_directory>, specify the directory name to store the files that the installation program creates.

      Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
   2. At the prompts, provide the configuration details for your cloud:
      1. Optional: Select an SSH key to use to access your cluster machines.

         Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
      2. Select AWS as the platform to target.
      3. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
      4. Select the AWS region to deploy the cluster to.
      5. Select the base domain for the Route 53 service that you configured for your cluster.
      6. Enter a descriptive name for your cluster.
      7. Paste the pull secret from the Red Hat OpenShift Cluster Manager.
2. Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network.
   1. Update the pullSecret value to contain the authentication information for your registry:

      pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "you@example.com"}}}'

      For <mirror_host_name>, specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials>, specify the base64-encoded user name and password for your mirror registry.
   2. Add the additionalTrustBundle parameter and value.

      additionalTrustBundle: |
        -----BEGIN CERTIFICATE-----
        ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
        -----END CERTIFICATE-----

      The value must be the contents of the certificate file that you used for your mirror registry, which can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry.
   3. Define the subnets for the VPC to install the cluster in:

      subnets:
      - subnet-1
      - subnet-2
      - subnet-3
   4. Add the image content resources, which look like this excerpt:

      imageContentSources:
      - mirrors:
        - <mirror_host_name>:5000/<repo_name>/release
        source: quay.example.com/openshift-release-dev/ocp-release
      - mirrors:
        - <mirror_host_name>:5000/<repo_name>/release
        source: registry.example.com/ocp/release

      To complete these values, use the imageContentSources that you recorded during mirror registry creation.
3. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section.
4. Back up the install-config.yaml file so that you can use it to install multiple clusters.

   Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
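Taken together, an illustrative restricted-network install-config.yaml excerpt might combine the edits from this procedure as follows. All host names, credentials, and subnet IDs are placeholders; obtain your actual file from the installation program and modify it:

apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster
platform:
  aws:
    region: us-west-2
    subnets:
    - subnet-1
    - subnet-2
    - subnet-3
pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "you@example.com"}}}'
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: quay.example.com/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: registry.example.com/ocp/release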
4.7.6.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.
4.7.6.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | A pull secret in JSON format, for example: {"auths":{"cloud.openshift.com":{"auth":"b3Blb=","email":"you@example.com"}}} |
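Taken together, these required parameters form the skeleton of every install-config.yaml file. The following minimal sketch shows how they fit together; the values are illustrative and match the sample file later in this section:
apiVersion: v1
baseDomain: example.com        # routes and cluster DNS names are created under this domain
metadata:
  name: test-cluster           # cluster DNS records become subdomains of test-cluster.example.com
platform:
  aws:
    region: us-west-2          # the platform object selects and configures the target cloud
pullSecret: '{"auths": ...}'   # obtained from the Red Hat OpenShift Cluster Manager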
4.7.6.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the networking object after installation. |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. | An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
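For reference, the following sketch shows a networking stanza built entirely from the default values documented above; none of these lines are required unless you want to change a default:
networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14     # pod IP pool; each node gets a /23 slice, that is 510 usable pod addresses
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16           # a single block; OpenShift SDN and OVN-Kubernetes support only one
  machineNetwork:
  - cidr: 10.0.0.0/16       # must contain the IP addresses of the cluster machines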
4.7.6.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (disabled). Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. |
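The following sketch shows how several of these optional parameters combine in one file; the values shown are illustrative choices taken from the sample file later in this section, not defaults that you must set:
credentialsMode: Mint       # force a CCO mode instead of automatic detection
compute:
- name: worker
  hyperthreading: Enabled   # if you set Disabled here, set it for the control plane pool too
  replicas: 3
controlPlane:
  name: master
  hyperthreading: Enabled
  replicas: 3               # the only supported value
fips: false
publish: External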
4.7.6.1.4. Optional AWS configuration parameters
Optional AWS configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| compute.platform.aws.amiID | The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. |
| compute.platform.aws.iamRole | A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| compute.platform.aws.rootVolume.iops | The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. | Integer, for example 4000. |
| compute.platform.aws.rootVolume.size | The size in GiB of the root volume. | Integer, for example 500. |
| compute.platform.aws.rootVolume.type | The type of the root volume. | Valid AWS EBS volume type, such as io1. |
| compute.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of worker nodes with a specific KMS key. | Valid key ID or the key ARN. |
| compute.platform.aws.type | The EC2 instance type for the compute machines. | Valid AWS instance type, such as c5.4xlarge. |
| compute.platform.aws.zones | The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. | A list of valid AWS availability zones, such as us-west-2c, in a YAML sequence. |
| compute.aws.region | The AWS region that the installation program creates compute resources in. | Any valid AWS region, such as us-west-2. |
| controlPlane.platform.aws.amiID | The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. |
| controlPlane.platform.aws.iamRole | A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| controlPlane.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of control plane nodes with a specific KMS key. | Valid key ID and the key ARN. |
| controlPlane.platform.aws.type | The EC2 instance type for the control plane machines. | Valid AWS instance type, such as m5.xlarge. |
| controlPlane.platform.aws.zones | The availability zones where the installation program creates machines for the control plane machine pool. | A list of valid AWS availability zones, such as us-west-2a, in a YAML sequence. |
| controlPlane.aws.region | The AWS region that the installation program creates control plane resources in. | Valid AWS region, such as us-west-2. |
| platform.aws.amiID | The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. |
| platform.aws.hostedZone | An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. | String, for example Z3URY6TWQ91KVV. |
| platform.aws.serviceEndpoints.name | The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. | Valid AWS service endpoint name. |
| platform.aws.serviceEndpoints.url | The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate. | Valid AWS service endpoint URL. |
| platform.aws.userTags | A map of keys and values that the installation program adds as tags to all resources that it creates. | Any valid YAML map, such as key value pairs in the <key>: <value> format. |
| platform.aws.subnets | If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. | Valid subnet IDs. |
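As an illustration of the compute machine pool fields above, the following hedged sketch customizes the pool's instance type, zone, and root volume; the KMS key ARN is a hypothetical placeholder that you would replace with your own key:
compute:
- name: worker
  replicas: 3
  platform:
    aws:
      type: c5.4xlarge
      zones:
      - us-west-2c
      rootVolume:
        iops: 2000
        size: 500
        type: io1
        kmsKeyARN: arn:aws:kms:us-west-2:111122223333:key/<key_id>   # placeholder ARN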
4.7.6.2. Sample customized install-config.yaml file for AWS
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com
credentialsMode: Mint
controlPlane:
hyperthreading: Enabled
name: master
platform:
aws:
zones:
- us-west-2a
- us-west-2b
rootVolume:
iops: 4000
size: 500
type: io1
type: m5.xlarge
replicas: 3
compute:
- hyperthreading: Enabled
name: worker
platform:
aws:
rootVolume:
iops: 2000
size: 500
type: io1
type: c5.4xlarge
zones:
- us-west-2c
replicas: 3
metadata:
name: test-cluster
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OpenShiftSDN
serviceNetwork:
- 172.30.0.0/16
platform:
aws:
region: us-west-2
userTags:
adminContact: jdoe
costCenter: 7536
subnets:
- subnet-1
- subnet-2
- subnet-3
amiID: ami-96c6f8f7
serviceEndpoints:
- name: ec2
url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
hostedZone: Z3URY6TWQ91KVV
fips: false
sshKey: ssh-ed25519 AAAA...
pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "you@example.com"}}}'
additionalTrustBundle: |
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
imageContentSources:
- mirrors:
- <local_registry>/<local_repository_name>/release
source: quay.io/openshift-release-dev/ocp-release
- mirrors:
- <local_registry>/<local_repository_name>/release
source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
- 1 10 11 Required. The installation program prompts you for this value.
- 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Platform Operators reference content.
- 3 7 If you do not provide these parameters and values, the installation program provides the default value.
- 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 5 8 Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
  Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.
- 6 9 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.
- 12 If you provide your own VPC, specify subnets for each availability zone that your cluster uses.
- 13 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.
- 14 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.
- 15 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone.
- 16 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
  Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- 17 You can optionally provide the sshKey value that you use to access the machines in your cluster.
  Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- 18 For <local_registry>, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000. For <credentials>, specify the base64-encoded user name and password for your mirror registry.
- 19 Provide the contents of the certificate file that you used for your mirror registry.
- 20 Provide the imageContentSources section from the output of the command to mirror the repository.
4.7.6.3. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
  Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
  For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
- If your cluster is on AWS, you added the ec2.<region>.amazonaws.com, elasticloadbalancing.<region>.amazonaws.com, and s3.<region>.amazonaws.com endpoints to your VPC endpoint. These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works on the container level, not the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not sufficient.
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...
- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2 A proxy URL to use for creating HTTPS connections outside the cluster.
- 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Note: The installation program does not support the proxy readinessEndpoints field.
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster, which uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it has a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
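For reference, the resulting cluster-wide Proxy object takes roughly the following shape; this is a sketch assembled from the fields described above, with the placeholder values carried over from the example:
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster                 # the only supported Proxy object name
spec:
  httpProxy: http://<username>:<pswd>@<ip>:<port>
  httpsProxy: https://<username>:<pswd>@<ip>:<port>
  noProxy: example.com
  trustedCA:
    name: user-ca-bundle        # config map generated from additionalTrustBundle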
4.7.7. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \
    --log-level=info
Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
Note: The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
- Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.
Note: The elevated permissions provided by the AdministratorAccess policy are required only during installation.
4.7.8. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
Unpack the archive:
$ tar xvzf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
4.7.9. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
4.7.10. Disabling the default OperatorHub sources
Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator.
Procedure
Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object:
$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
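After the patch is applied, the OperatorHub object should resemble the following sketch; you can verify the result on your cluster with oc get operatorhub cluster -o yaml:
apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  disableAllDefaultSources: true   # disables every default catalog source at once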
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Global Configuration → OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources.
4.7.11. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
4.7.12. Next steps
- Validate an installation.
- Customize your cluster.
- Configure image streams for the Cluster Samples Operator and the must-gather tool.
- Learn how to use Operator Lifecycle Manager (OLM) on restricted networks.
- If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores.
- If necessary, you can opt out of remote health reporting.
4.8. Installing a cluster on AWS into an existing VPC
In OpenShift Container Platform version 4.8, you can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.
4.8.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured an AWS account to host the cluster.
  Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
4.8.2. About using a custom VPC
In OpenShift Container Platform 4.8, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.
Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure the networking for the subnets into which you install your cluster yourself.
4.8.2.1. Requirements for using your VPC
The installation program no longer creates the following components:
- Internet gateways
- NAT gateways
- Subnets
- Route tables
- VPCs
- VPC DHCP options
- VPC endpoints
The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.
If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC.
The installation program cannot:
- Subdivide network ranges for the cluster to use.
- Set route tables for the subnets.
- Set VPC options like DHCP.
You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC.
Your VPC must meet the following requirements:
- Create a public and private subnet for each availability zone that your cluster uses. Each availability zone can contain no more than one public and one private subnet. For an example of this type of configuration, see VPC with public and private subnets (NAT) in the AWS documentation.
  Record each subnet ID. Completing the installation requires that you enter these values in the platform section of the install-config.yaml file. See Finding a subnet ID in the AWS documentation.
- The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The subnet CIDR blocks must belong to the machine CIDR that you specify.
- The VPC must have a public internet gateway attached to it. For each availability zone:
  - The public subnet requires a route to the internet gateway.
  - The public subnet requires a NAT gateway with an EIP address.
  - The private subnet requires a route to the NAT gateway in the public subnet.
- The VPC must not use the kubernetes.io/cluster/.*: owned tag.
  The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify.
- You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.
  If you prefer to use your own Route 53 private hosted zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone field in the install-config.yaml file, as shown in the sketch after this list.
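The following hedged sketch shows where the recorded subnet IDs and an existing hosted zone go in the install-config.yaml file; the subnet IDs are hypothetical placeholders and the zone ID matches the sample file later in this section:
platform:
  aws:
    region: us-west-2
    subnets:                     # the public and private subnet IDs that you recorded
    - subnet-0123456789abcdef0   # placeholder; use your own IDs
    - subnet-0123456789abcdef1
    hostedZone: Z3URY6TWQ91KVV   # existing private hosted zone already associated with the VPC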
If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:
Government regions
- ec2.<region>.amazonaws.com
- elasticloadbalancing.<region>.amazonaws.com
- s3.<region>.amazonaws.com
Top secret region
- ec2.<region>.c2s.ic.gov
- elasticloadbalancing.<region>.c2s.ic.gov
- s3.<region>.c2s.ic.gov
Required VPC components
You must provide a suitable VPC and subnets that allow communication to your machines.
| Component | AWS type | Description |
|---|---|---|
| VPC | AWS::EC2::VPC, AWS::EC2::VPCEndpoint | You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. |
| Public subnets | AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation | Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. |
| Internet gateway | AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP | You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. |
| Network access control | AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry | You must allow the VPC to access the following ports: |
| Port | Reason |
|---|---|
| 80 | Inbound HTTP traffic |
| 443 | Inbound HTTPS traffic |
| 22 | Inbound SSH traffic |
| 1024 - 65535 | Inbound ephemeral traffic |
| 0 - 65535 | Outbound ephemeral traffic |
| Private subnets | AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation | Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. |
4.8.2.2. VPC validation
To ensure that the subnets that you provide are suitable, the installation program confirms the following data:
- All the subnets that you specify exist.
- You provide private subnets.
- The subnet CIDRs belong to the machine CIDR that you specified.
- You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.
- You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.
If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.
4.8.2.3. Division of permissions
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules.
The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.
4.8.2.4. Isolation between clusters
If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:
- You can install multiple OpenShift Container Platform clusters in the same VPC.
- ICMP ingress is allowed from the entire network.
- TCP 22 ingress (SSH) is allowed to the entire network.
- Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.
- Control plane TCP 22623 ingress (MCS) is allowed to the entire network.
4.8.3. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
4.8.4. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>
Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name>
Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
4.8.5. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf openshift-install-linux.tar.gz
- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
4.8.6. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory>
For <installation_directory>, specify the directory name to store the files that the installation program creates.
Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- Select AWS as the platform to target.
- If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
- Select the AWS region to deploy the cluster to.
- Select the base domain for the Route 53 service that you configured for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
- Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
4.8.6.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.
4.8.6.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | A pull secret in JSON format, for example: {"auths":{"cloud.openshift.com":{"auth":"b3Blb=","email":"you@example.com"}}} |
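As a brief illustration of how baseDomain and metadata.name combine, the following sketch with hypothetical values produces the cluster domain test-cluster.example.com and, for example, the API endpoint api.test-cluster.example.com:
apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster   # all cluster DNS records become subdomains of test-cluster.example.com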
4.8.6.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the networking object after installation. |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. | An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
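When you install into an existing VPC, the machine network must match the VPC addressing. The following sketch assumes a VPC with the CIDR block 10.0.0.0/16, which is the value used in the sample file later in this section:
networking:
  machineNetwork:
  - cidr: 10.0.0.0/16   # must contain the CIDR blocks of the subnets that you list in platform.aws.subnets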
4.8.6.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (disabled). Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. |
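For example, the sshKey parameter accepts one or more public keys, one per line; the key names below are placeholders for your actual public key material:
sshKey: |
  <key1>
  <key2>
  <key3>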
4.8.6.1.4. Optional AWS configuration parameters
Optional AWS configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| compute.platform.aws.amiID | The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. |
| compute.platform.aws.iamRole | A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| compute.platform.aws.rootVolume.iops | The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. | Integer, for example 4000. |
| compute.platform.aws.rootVolume.size | The size in GiB of the root volume. | Integer, for example 500. |
| compute.platform.aws.rootVolume.type | The type of the root volume. | Valid AWS EBS volume type, such as io1. |
| compute.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of worker nodes with a specific KMS key. | Valid key ID or the key ARN. |
| compute.platform.aws.type | The EC2 instance type for the compute machines. | Valid AWS instance type, such as c5.4xlarge. |
| compute.platform.aws.zones | The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. | A list of valid AWS availability zones, such as us-west-2c, in a YAML sequence. |
| compute.aws.region | The AWS region that the installation program creates compute resources in. | Any valid AWS region, such as us-west-2. |
| controlPlane.platform.aws.amiID | The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. |
| controlPlane.platform.aws.iamRole | A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| controlPlane.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of control plane nodes with a specific KMS key. | Valid key ID and the key ARN. |
| controlPlane.platform.aws.type | The EC2 instance type for the control plane machines. | Valid AWS instance type, such as m5.xlarge. |
| controlPlane.platform.aws.zones | The availability zones where the installation program creates machines for the control plane machine pool. | A list of valid AWS availability zones, such as us-west-2a, in a YAML sequence. |
| controlPlane.aws.region | The AWS region that the installation program creates control plane resources in. | Valid AWS region, such as us-west-2. |
| platform.aws.amiID | The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. |
| platform.aws.hostedZone | An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. | String, for example Z3URY6TWQ91KVV. |
| platform.aws.serviceEndpoints.name | The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. | Valid AWS service endpoint name. |
| platform.aws.serviceEndpoints.url | The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate. | Valid AWS service endpoint URL. |
| platform.aws.userTags | A map of keys and values that the installation program adds as tags to all resources that it creates. | Any valid YAML map, such as key value pairs in the <key>: <value> format. |
| platform.aws.subnets | If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. | Valid subnet IDs. |
4.8.6.2. Supported AWS machine types
The following Amazon Web Services (AWS) instance types are supported with OpenShift Container Platform.
Example 4.18. Instance types for machines
| Instance type | Bootstrap | Control plane | Compute |
|---|---|---|---|
|
| x | ||
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | ||
|
| x | ||
|
| x | ||
|
| x | ||
|
| x |
4.8.6.3. Sample customized install-config.yaml file for AWS
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
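For supported platform types, you can generate the initial file with the installation program and then edit it. For example, run the following command, where <installation_directory> is the directory that stores your installation assets:

$ ./openshift-install create install-config --dir <installation_directory>

The resulting file resembles the following sample: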
apiVersion: v1
baseDomain: example.com
credentialsMode: Mint
controlPlane:
hyperthreading: Enabled
name: master
platform:
aws:
zones:
- us-west-2a
- us-west-2b
rootVolume:
iops: 4000
size: 500
type: io1
type: m5.xlarge
replicas: 3
compute:
- hyperthreading: Enabled
name: worker
platform:
aws:
rootVolume:
iops: 2000
size: 500
type: io1
type: c5.4xlarge
zones:
- us-west-2c
replicas: 3
metadata:
name: test-cluster
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OpenShiftSDN
serviceNetwork:
- 172.30.0.0/16
platform:
aws:
region: us-west-2
userTags:
adminContact: jdoe
costCenter: 7536
subnets:
- subnet-1
- subnet-2
- subnet-3
amiID: ami-96c6f8f7
serviceEndpoints:
- name: ec2
url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
hostedZone: Z3URY6TWQ91KVV
fips: false
sshKey: ssh-ed25519 AAAA...
pullSecret: '{"auths": ...}'
- 1 10 11 18
- Required. The installation program prompts you for this value.
- 2
- Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Platform Operators reference content.
- 3 7
- If you do not provide these parameters and values, the installation program provides the default value.
- 4
- The
controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used. - 5 8
- Whether to enable or disable simultaneous multithreading, or
hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading. - 6 9
- To configure faster storage for etcd, especially for larger clusters, set the storage type as
io1 and set iops to 2000. - 12
- If you provide your own VPC, specify subnets for each availability zone that your cluster uses.
- 13
- The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.
- 14
- The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the
https protocol and the host must trust the certificate. - 15
- The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone.
- 16
- Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important:
The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the
x86_64 architecture. - 17
- You can optionally provide the
sshKey value that you use to access the machines in your cluster. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
4.8.6.4. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
-
You have an existing
install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the
Proxy object's spec.noProxy field to bypass the proxy if necessary. Note: The
Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the
Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254). -
If your cluster is on AWS, you added the
ec2.<region>.amazonaws.com, elasticloadbalancing.<region>.amazonaws.com, and s3.<region>.amazonaws.com endpoints to your VPC endpoint. These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works on the container level, not the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not sufficient.
Procedure
Edit your
install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...
- 1
- A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be
http. - 2
- A proxy URL to use for creating HTTPS connections outside the cluster.
- 3
- A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with
. to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. - 4
- If provided, the installation program generates a config map that is named
user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Note: The installation program does not support the proxy
readinessEndpoints field.

- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
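After the cluster is installed, you can inspect the resulting configuration. A minimal check, assuming that the oc CLI is installed and your kubeconfig is exported:

$ oc get proxy/cluster -o yaml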
4.8.7. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2

Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the
kubeadmin user, display in your terminal.

Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

Note: The cluster access and credential information also outputs to
<installation_directory>/.openshift_install.log when an installation succeeds. Important: -
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending
node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. - It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Optional: Remove or disable the
AdministratorAccess policy from the IAM account that you used to install the cluster. Note: The elevated permissions provided by the
AdministratorAccess policy are required only during installation.
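As a sketch of detaching the policy with the AWS CLI, where <user_name> is a placeholder for the IAM user that you used for the installation:

$ aws iam detach-user-policy \
    --user-name <user_name> \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess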
4.8.8. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
Unpack the archive:
$ tar xvzf <file>

Place the oc binary in a directory that is on your PATH. To check your PATH, execute the following command:

$ echo $PATH
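For example, assuming that /usr/local/bin is on your PATH:

$ sudo mv oc /usr/local/bin/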
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the
oc binary to a directory that is on your PATH. To check your PATH, open the command prompt and execute the following command:

C:\> path
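For example, assuming that C:\bin is a directory on your PATH:

C:\> move oc.exe C:\bin\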
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the
oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
4.8.9. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
-
You installed the
ocCLI.
Procedure
Export the
kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
- 1
- For
<installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run
oc commands successfully using the exported configuration:

$ oc whoami

Example output
system:admin
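As a further check that the CLI can reach the cluster, you can list the nodes:

$ oc get nodes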
4.8.10. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
Obtain the password for the
kubeadmin user from the kubeadmin-password file on the installation host:

$ cat <installation_directory>/auth/kubeadmin-password

Note: Alternatively, you can obtain the
kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

List the OpenShift Container Platform web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'

Note: Alternatively, you can obtain the OpenShift Container Platform route from the
<installation_directory>/.openshift_install.log log file on the installation host.

Example output
console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
-
Navigate to the route detailed in the output of the preceding command in a web browser and log in as the
kubeadmin user.
4.8.11. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
4.8.12. Next steps
- Validating an installation.
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
- If necessary, you can remove cloud provider credentials.
4.9. Installing a private cluster on AWS
In OpenShift Container Platform version 4.8, you can install a private cluster into an existing VPC on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.
4.9.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
You configured an AWS account to host the cluster.
Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
-
If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the
kube-system namespace, you can manually create and maintain IAM credentials.
4.9.2. Private clusters
You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet.
By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.
To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.
If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private.
Additionally, you must deploy a private cluster from a machine that has access to the API services for the cloud that you provision to, the hosts on the network that you provision, and the internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.
4.9.2.1. Private clusters in AWS
To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network.
The cluster still requires access to the internet to access the AWS APIs.
The following items are not required or created when you install a private cluster:
- Public subnets
- Public load balancers, which support public ingress
-
A public Route 53 zone that matches the
baseDomain for the cluster
The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.
4.9.2.1.1. Limitations
The ability to add public functionality to a private cluster is limited.
- You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port).
-
If you use a public Service type load balancer, you must tag a public subnet in each availability zone with
kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers.
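As a sketch of adding this tag with the AWS CLI, where the subnet ID and the infrastructure ID are placeholders for your values:

$ aws ec2 create-tags \
    --resources <public_subnet_id> \
    --tags Key=kubernetes.io/cluster/<cluster-infra-id>,Value=shared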
4.9.3. About using a custom VPC
In OpenShift Container Platform 4.8, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.
Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and similar settings on your behalf. You must configure the networking for the subnets that you install your cluster into.
4.9.3.1. Requirements for using your VPC
When you use an existing VPC, the installation program does not create the following components:
- Internet gateways
- NAT gateways
- Subnets
- Route tables
- VPCs
- VPC DHCP options
- VPC endpoints
The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.
If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC.
The installation program cannot:
- Subdivide network ranges for the cluster to use.
- Set route tables for the subnets.
- Set VPC options like DHCP.
You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC.
Your VPC must meet the following characteristics:
The VPC must not use the
kubernetes.io/cluster/.*: owned tag.

The installation program modifies your subnets to add the
kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify.

You must enable the
enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation.

If you prefer to use your own Route 53 private hosted zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the
platform.aws.hostedZone field in the install-config.yaml file.

- If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses. Each availability zone can contain no more than one public and one private subnet.
If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnets that the cluster uses, as shown in the sketch after the following lists. The endpoints should be named as follows:
Government regions
-
ec2.<region>.amazonaws.com -
elasticloadbalancing.<region>.amazonaws.com -
s3.<region>.amazonaws.com
Top secret region
-
ec2.<region>.c2s.ic.gov -
elasticloadbalancing.<region>.c2s.ic.gov -
s3.<region>.c2s.ic.gov
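As a sketch of creating one of these endpoints with the AWS CLI, where the VPC ID, region, and subnet ID are placeholders; note that the S3 endpoint is typically created as a Gateway endpoint with route table IDs instead of subnet IDs:

$ aws ec2 create-vpc-endpoint \
    --vpc-id <vpc_id> \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.<region>.ec2 \
    --subnet-ids <subnet_id>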
Required VPC components
You must provide a suitable VPC and subnets that allow communication to your machines.
| Component | AWS type | Description | |
|---|---|---|---|
| VPC |
| You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. | |
| Public subnets |
| Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. | |
| Internet gateway |
| You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. | |
| Network access control |
| You must allow the VPC to access the following ports: | |
| Port | Reason | ||
|
| Inbound HTTP traffic | ||
|
| Inbound HTTPS traffic | ||
|
| Inbound SSH traffic | ||
|
| Inbound ephemeral traffic | ||
|
| Outbound ephemeral traffic | ||
| Private subnets |
| Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. | |
4.9.3.2. VPC validation
To ensure that the subnets that you provide are suitable, the installation program confirms the following data:
- All the subnets that you specify exist.
- You provide private subnets.
- The subnet CIDRs belong to the machine CIDR that you specified.
- You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.
- You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.
If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.
4.9.3.3. Division of permissions
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules.
The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.
4.9.3.4. Isolation between clusters
If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:
- You can install multiple OpenShift Container Platform clusters in the same VPC.
- ICMP ingress is allowed from the entire network.
- TCP 22 ingress (SSH) is allowed to the entire network.
- Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.
- Control plane TCP 22623 ingress (MCS) is allowed to the entire network.
4.9.4. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
4.9.5. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
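For reference, a typical invocation of the gather command after a failed installation, which collects debugging data from the bootstrap host; the directory path is the one that holds your installation assets:

$ ./openshift-install gather bootstrap --dir <installation_directory>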
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
- 1
- Specify the path and file name, such as
~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the
x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

View the public SSH key:
$ cat <path>/<file_name>.pub

For example, run the following to view the
~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the
./openshift-install gather command. Note: On some distributions, default SSH private key identities such as
~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

If the
ssh-agent process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output
Agent pid 31874

Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the
ssh-agent:

$ ssh-add <path>/<file_name> 1
- 1
- Specify the path and file name for your SSH private key, such as
~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
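To confirm that the key is loaded, you can list the identities that the agent holds:

$ ssh-add -l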
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
4.9.6. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf openshift-install-linux.tar.gz

- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
4.9.7. Manually creating the installation configuration file
For installations of a private OpenShift Container Platform cluster that are only accessible from an internal network and are not visible to the internet, you must manually generate your installation configuration file.
Prerequisites
- You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.
- You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Create an installation directory to store your required installation assets in:
$ mkdir <installation_directory>

Important: You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Customize the sample
install-config.yaml file template that is provided and save it in the <installation_directory>. Note: You must name this configuration file
install-config.yaml. Note: For some platform types, you can alternatively run
./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts.

Back up the
install-config.yaml file so that you can use it to install multiple clusters. Important: The
install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
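For example, one way to keep a reusable copy outside the installation directory; the backup location is arbitrary:

$ cp <installation_directory>/install-config.yaml ~/install-config.yaml.bak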
4.9.7.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.
4.9.7.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
|
|
The API version for the | String |
|
|
The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the |
A fully-qualified domain or subdomain name, such as |
|
|
Kubernetes resource | Object |
|
|
The name of the cluster. DNS records for the cluster are all subdomains of |
String of lowercase letters, hyphens ( |
|
|
The configuration for the specific platform upon which to perform the installation: | Object |
|
| Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. |
|
4.9.7.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
|
| The configuration for the cluster network. | Object Note
You cannot modify parameters specified by the |
|
| The cluster network provider Container Network Interface (CNI) plugin to install. |
Either |
|
| The IP address blocks for pods.
The default value is If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example:
|
|
|
Required if you use An IPv4 network. |
An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between |
|
|
The subnet prefix length to assign to each individual node. For example, if | A subnet prefix.
The default value is |
|
|
The IP address block for services. The default value is The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example:
|
|
| The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example:
|
|
|
Required if you use | An IP network block in CIDR notation.
For example, Note
Set the |
4.9.7.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
|
| A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
|
| The configuration for the machines that comprise the compute nodes. |
Array of |
|
|
Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are | String |
|
|
Whether to enable or disable simultaneous multithreading, or Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. |
|
|
|
Required if you use |
|
|
|
Required if you use |
|
|
| The number of compute machines, which are also known as worker machines, to provision. |
A positive integer greater than or equal to |
|
| The configuration for the machines that comprise the control plane. |
Array of |
|
|
Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are | String |
|
|
Whether to enable or disable simultaneous multithreading, or Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. |
|
|
|
Required if you use |
|
|
|
Required if you use |
|
|
| The number of control plane machines to provision. |
The only supported value is |
|
| The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. |
|
|
|
Enable or disable FIPS mode. The default is Important
The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the Note If you are using Azure File storage, you cannot enable FIPS mode. |
|
|
| Sources and repositories for the release-image content. |
Array of objects. Includes a |
|
|
Required if you use | String |
|
| Specify one or more repositories that may also contain the same images. | Array of strings |
|
| How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes. |
|
|
| The SSH key or keys to authenticate access to your cluster machines. Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your | One or more keys. For example:
|
4.9.7.1.4. Optional AWS configuration parameters
Optional AWS configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
|
| The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. |
|
| A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
|
| The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. |
Integer, for example |
|
| The size in GiB of the root volume. |
Integer, for example |
|
| The type of the root volume. |
Valid AWS EBS volume type, such as |
|
| The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of worker nodes with a specific KMS key. | Valid key ID or the key ARN. |
|
| The EC2 instance type for the compute machines. |
Valid AWS instance type, such as |
|
| The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. |
A list of valid AWS availability zones, such as |
|
| The AWS region that the installation program creates compute resources in. |
Any valid AWS region, such as |
|
| The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. |
|
| A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
|
| The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of control plane nodes with a specific KMS key. | Valid key ID and the key ARN. |
|
| The EC2 instance type for the control plane machines. |
Valid AWS instance type, such as |
|
| The availability zones where the installation program creates machines for the control plane machine pool. |
A list of valid AWS availability zones, such as |
|
| The AWS region that the installation program creates control plane resources in. |
Valid AWS region, such as |
|
| The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. |
|
| An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. |
String, for example |
|
| The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. | Valid AWS service endpoint name. |
|
|
The AWS service endpoint URL. The URL must use the | Valid AWS service endpoint URL. |
|
| A map of keys and values that the installation program adds as tags to all resources that it creates. |
Any valid YAML map, such as key value pairs in the |
|
|
If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same | Valid subnet IDs. |
4.9.7.2. Supported AWS machine types
The following Amazon Web Services (AWS) instance types are supported with OpenShift Container Platform.
Example 4.19. Instance types for machines
| Instance type | Bootstrap | Control plane | Compute |
|---|---|---|---|
| ... | ... | ... | ... |
4.9.7.3. Sample customized install-config.yaml file for AWS
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com
credentialsMode: Mint
controlPlane:
hyperthreading: Enabled
name: master
platform:
aws:
zones:
- us-west-2a
- us-west-2b
rootVolume:
iops: 4000
size: 500
type: io1
type: m5.xlarge
replicas: 3
compute:
- hyperthreading: Enabled
name: worker
platform:
aws:
rootVolume:
iops: 2000
size: 500
type: io1
type: c5.4xlarge
zones:
- us-west-2c
replicas: 3
metadata:
name: test-cluster
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OpenShiftSDN
serviceNetwork:
- 172.30.0.0/16
platform:
aws:
region: us-west-2
userTags:
adminContact: jdoe
costCenter: 7536
subnets:
- subnet-1
- subnet-2
- subnet-3
amiID: ami-96c6f8f7
serviceEndpoints:
- name: ec2
url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
hostedZone: Z3URY6TWQ91KVV
fips: false
sshKey: ssh-ed25519 AAAA...
publish: Internal
pullSecret: '{"auths": ...}'
- 1 10 11 19
- Required. The installation program prompts you for this value.
- 2
- Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Platform Operators reference content.
- 3 7
- If you do not provide these parameters and values, the installation program provides the default value.
- 4
- The
controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used. - 5 8
- Whether to enable or disable simultaneous multithreading, or
hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading. - 6 9
- To configure faster storage for etcd, especially for larger clusters, set the storage type as
io1 and set iops to 2000. - 12
- If you provide your own VPC, specify subnets for each availability zone that your cluster uses.
- 13
- The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.
- 14
- The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the
https protocol and the host must trust the certificate. - 15
- The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone.
- 16
- Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important:
The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the
x86_64 architecture. - 17
- You can optionally provide the
sshKey value that you use to access the machines in your cluster. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. - 18
- How to publish the user-facing endpoints of your cluster. Set
publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External.
4.9.7.4. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
-
You have an existing
install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the
Proxy object's spec.noProxy field to bypass the proxy if necessary. Note: The
Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the
Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254). -
If your cluster is on AWS, you added the
ec2.<region>.amazonaws.com, elasticloadbalancing.<region>.amazonaws.com, and s3.<region>.amazonaws.com endpoints to your VPC endpoint. These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works on the container level, not the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not sufficient.
Procedure
Edit your
install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...
- 1
- A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be
http. - 2
- A proxy URL to use for creating HTTPS connections outside the cluster.
- 3
- A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with
. to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. - 4
- If provided, the installation program generates a config map that is named
user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Note: The installation program does not support the proxy
readinessEndpoints field.

- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
4.9.8. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2
Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
Note: The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
4.9.9. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
Unpack the archive:
$ tar xvzf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
4.9.10. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
- 1
- For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify that you can run oc commands successfully by using the exported configuration:
$ oc whoami
Example output
system:admin
4.9.11. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:
$ cat <installation_directory>/auth/kubeadmin-password
Note: Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.
List the OpenShift Container Platform web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'
Note: Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.
Example output
console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
- Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.
4.9.12. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
4.9.13. Next steps
- Validating an installation.
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
- If necessary, you can remove cloud provider credentials.
4.10. Installing a cluster on AWS into a government or secret region
In OpenShift Container Platform version 4.8, you can install a cluster on Amazon Web Services (AWS) into a government or secret region. To configure the region, modify parameters in the install-config.yaml file before you install the cluster.
4.10.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
You configured an AWS account to host the cluster.
Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
4.10.2. AWS government and secret regions
OpenShift Container Platform supports deploying a cluster to AWS GovCloud (US) regions and the AWS Commercial Cloud Services (C2S) Top Secret Region. These regions are specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads in the cloud.
These regions do not have published Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Images (AMI) to select, so you must upload a custom AMI that belongs to that region.
The following AWS GovCloud partitions are supported:
- us-gov-west-1
- us-gov-east-1
The following AWS Top Secret Region partition is supported:
- us-iso-east-1
The maximum supported MTU in an AWS Top Secret Region is not the same as in AWS commercial regions. For more information about configuring the MTU during installation, see the Cluster Network Operator configuration object section in Installing a cluster on AWS with network customizations.
You must manually configure the AWS government or secret region, and the accompanying custom AMI, in the install-config.yaml file because Red Hat does not provide RHCOS AMIs for those regions.
If you are deploying to the C2S Secret Region, you must also define a custom CA certificate in the additionalTrustBundle field of the install-config.yaml file because the AWS API requires a custom CA trust bundle. To allow the installation program to access the AWS API, the CA certificates must also be defined on the machine that runs the installation program. You must add the CA bundle to the trust store on the machine, use the AWS_CA_BUNDLE environment variable, or define the CA bundle in the ca_bundle field of the AWS config file.
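For example, either of the following approaches points the AWS CLI and SDK at a custom CA bundle. This is a sketch; the bundle path and profile name are placeholders:
$ export AWS_CA_BUNDLE=/path/to/ca-bundle.pem
Or, in the ~/.aws/config file:
[profile c2s]
ca_bundle = /path/to/ca-bundle.pem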
4.10.3. Private clusters
You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet.
Public zones are not supported in Route 53 in AWS GovCloud or Top Secret Regions. Therefore, clusters must be private if they are deployed to an AWS government or secret region.
By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.
To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.
If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private.
Additionally, you must deploy a private cluster from a machine that has access to the API services for the cloud that you provision to, the hosts on the network that you provision, and the internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.
4.10.3.1. Private clusters in AWS
To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network.
The cluster still requires access to the internet to access the AWS APIs.
The following items are not required or created when you install a private cluster:
- Public subnets
- Public load balancers, which support public ingress
- A public Route 53 zone that matches the baseDomain for the cluster
The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.
4.10.3.1.1. Limitations
The ability to add public functionality to a private cluster is limited.
- You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on port 6443, which is the Kubernetes API port.
- If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use the subnets to create public load balancers, as shown in the example after this list.
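A minimal tagging sketch that uses the AWS CLI; the subnet ID is a placeholder, and the infrastructure ID is read from the running cluster:
$ infra_id=$(oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}')
$ aws ec2 create-tags --resources subnet-0123456789abcdef0 \
    --tags Key=kubernetes.io/cluster/${infra_id},Value=shared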
4.10.4. About using a custom VPC
In OpenShift Container Platform 4.8, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.
Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster into yourself.
4.10.4.1. Requirements for using your VPC
The installation program no longer creates the following components:
- Internet gateways
- NAT gateways
- Subnets
- Route tables
- VPCs
- VPC DHCP options
- VPC endpoints
The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.
If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC.
The installation program cannot:
- Subdivide network ranges for the cluster to use.
- Set route tables for the subnets.
- Set VPC options like DHCP.
You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC.
Your VPC must meet the following characteristics:
- The VPC must not use the kubernetes.io/cluster/.*: owned tag.
- The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify.
- You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. A sketch of enabling these attributes follows this list.
- If you prefer to use your own Route 53 private hosted zone, you must associate the existing hosted zone with your VPC before you install a cluster. You can define your hosted zone by using the platform.aws.hostedZone field in the install-config.yaml file.
- If you use a cluster with public access, you must create a public and a private subnet for each availability zone that your cluster uses. Each availability zone can contain no more than one public and one private subnet.
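For example, you can enable both DNS attributes on an existing VPC with the AWS CLI, as in the following sketch; the VPC ID is a placeholder, and each attribute must be set in a separate call:
$ aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-support '{"Value": true}'
$ aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames '{"Value": true}'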
If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2 and ELB endpoints. To resolve this, you must create a VPC endpoint and attach it to the subnet that the clusters are using. The endpoints should be named as follows:
Government regions
- ec2.<region>.amazonaws.com
- elasticloadbalancing.<region>.amazonaws.com
- s3.<region>.amazonaws.com
Top secret region
- ec2.<region>.c2s.ic.gov
- elasticloadbalancing.<region>.c2s.ic.gov
- s3.<region>.c2s.ic.gov
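For example, the following sketch creates an interface endpoint for EC2 in a GovCloud region; the VPC, subnet, and security group IDs are placeholders:
$ aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-gov-west-1.ec2 \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0
Repeat the command for the elasticloadbalancing and s3 service names. S3 can alternatively be exposed as a Gateway endpoint by using --vpc-endpoint-type Gateway with --route-table-ids.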
Required VPC components
You must provide a suitable VPC and subnets that allow communication to your machines.
| Component | Description |
|---|---|
| VPC | You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. |
| Public subnets | Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. |
| Internet gateway | You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. |
| Network access control | You must allow the VPC to access the ports listed in the following table. |
| Private subnets | Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. |

| Port | Reason |
|---|---|
| 80 | Inbound HTTP traffic |
| 443 | Inbound HTTPS traffic |
| 22 | Inbound SSH traffic |
| 1024 - 65535 | Inbound ephemeral traffic |
| 0 - 65535 | Outbound ephemeral traffic |
4.10.4.2. VPC validation
To ensure that the subnets that you provide are suitable, the installation program confirms the following data:
- All the subnets that you specify exist.
- You provide private subnets.
- The subnet CIDRs belong to the machine CIDR that you specified.
- You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.
- You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.
If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.
4.10.4.3. Division of permissions
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules.
The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.
4.10.4.4. Isolation between clusters
If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:
- You can install multiple OpenShift Container Platform clusters in the same VPC.
- ICMP ingress is allowed from the entire network.
- TCP 22 ingress (SSH) is allowed to the entire network.
- Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.
- Control plane TCP 22623 ingress (MCS) is allowed to the entire network.
4.10.5. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
4.10.6. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
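For example, after the private key identity is loaded into your SSH agent, connecting to a node looks like the following sketch; the node address is a placeholder:
$ ssh core@<node_ip_or_dns_name>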
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
- 1
- Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note: On some distributions, default SSH private key identities, such as ~/.ssh/id_rsa and ~/.ssh/id_dsa, are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
- 1
- Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
4.10.7. Obtaining an AWS Marketplace image
If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy worker nodes.
Deploying an OpenShift Container Platform cluster using an AWS Marketplace image is not supported in secret regions.
Prerequisites
- You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster.
Procedure
- Complete the OpenShift Container Platform subscription from the AWS Marketplace.
- Record the AMI ID for your specific region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster.
Sample install-config.yaml file with AWS Marketplace worker nodes
apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
name: worker
platform:
aws:
amiID: ami-06c4d345f7c207239
type: m5.4xlarge
replicas: 3
metadata:
name: test-cluster
platform:
aws:
region: us-gov-west-1
sshKey: ssh-ed25519 AAAA...
pullSecret: '{"auths": ...}'
4.10.8. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf openshift-install-linux.tar.gz
- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
4.10.9. Manually creating the installation configuration file
When installing OpenShift Container Platform on Amazon Web Services (AWS) into a region requiring a custom Red Hat Enterprise Linux CoreOS (RHCOS) AMI, you must manually generate your installation configuration file.
Prerequisites
- You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.
- You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Create an installation directory to store your required installation assets in:
$ mkdir <installation_directory>
Important: You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory>.
Note: You must name this configuration file install-config.yaml.
Note: For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts.
Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important: The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
4.10.9.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.
4.10.9.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | String; for example, '{"auths": ...}'. |
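Taken together, a minimal install-config.yaml that sets only the required parameters might look like the following sketch; the domain, cluster name, region, and pull secret are placeholders:
apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
platform:
  aws:
    region: us-gov-west-1
pullSecret: '{"auths": ...}'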
4.10.9.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the networking object after installation. |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IP address block. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. | An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
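The networking parameters in the preceding table map onto a YAML stanza like the following sketch, which uses the documented defaults. Note that a hostPrefix of 23 yields 2^(32-23) - 2 = 510 usable pod IP addresses per node:
networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16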
4.10.9.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (disabled). Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. For example: sshKey: <key1> <key2> <key3> |
4.10.9.1.4. Optional AWS configuration parameters
Optional AWS configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| compute.platform.aws.amiID | The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. |
| compute.platform.aws.iamRole | A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| compute.platform.aws.rootVolume.iops | The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. | Integer, for example 4000. |
| compute.platform.aws.rootVolume.size | The size in GiB of the root volume. | Integer, for example 500. |
| compute.platform.aws.rootVolume.type | The type of the root volume. | Valid AWS EBS volume type, such as io1. |
| compute.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of worker nodes with a specific KMS key. | Valid key ID or the key ARN. |
| compute.platform.aws.type | The EC2 instance type for the compute machines. | Valid AWS instance type, such as m4.2xlarge. See the Supported AWS machine types table that follows. |
| compute.platform.aws.zones | The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence. |
| compute.aws.region | The AWS region that the installation program creates compute resources in. | Any valid AWS region, such as us-east-1. |
| controlPlane.platform.aws.amiID | The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. |
| controlPlane.platform.aws.iamRole | A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. | The name of a valid AWS IAM role. |
| controlPlane.platform.aws.rootVolume.kmsKeyARN | The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt OS volumes of control plane nodes with a specific KMS key. | Valid key ID and the key ARN. |
| controlPlane.platform.aws.type | The EC2 instance type for the control plane machines. | Valid AWS instance type, such as m5.xlarge. |
| controlPlane.platform.aws.zones | The availability zones where the installation program creates machines for the control plane machine pool. | A list of valid AWS availability zones, such as us-east-1c, in a YAML sequence. |
| controlPlane.aws.region | The AWS region that the installation program creates control plane resources in. | Valid AWS region, such as us-east-1. |
| platform.aws.amiID | The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. | Any published or custom RHCOS AMI that belongs to the set AWS region. |
| platform.aws.hostedZone | An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. | String, for example Z3URY6TWQ91KVV. |
| platform.aws.serviceEndpoints.name | The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. | Valid AWS service endpoint name. |
| platform.aws.serviceEndpoints.url | The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate. | Valid AWS service endpoint URL. |
| platform.aws.userTags | A map of keys and values that the installation program adds as tags to all resources that it creates. | Any valid YAML map, such as key value pairs in the <key>: <value> format. |
| platform.aws.subnets | If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machine network CIDR ranges that you specify. | Valid subnet IDs. |
4.10.9.2. Supported AWS machine types
The following Amazon Web Services (AWS) instance types are supported with OpenShift Container Platform.
Example 4.20. Instance types for machines
| Instance type | Bootstrap | Control plane | Compute |
|---|---|---|---|
|
| x | ||
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | x | |
|
| x | ||
|
| x | ||
|
| x | ||
|
| x | ||
|
| x | ||
|
| x |
4.10.9.3. Sample customized install-config.yaml file for AWS
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com
credentialsMode: Mint
controlPlane:
hyperthreading: Enabled
name: master
platform:
aws:
zones:
- us-gov-west-1a
- us-gov-west-1b
rootVolume:
iops: 4000
size: 500
type: io1
type: m5.xlarge
replicas: 3
compute:
- hyperthreading: Enabled
name: worker
platform:
aws:
rootVolume:
iops: 2000
size: 500
type: io1
type: c5.4xlarge
zones:
- us-gov-west-1c
replicas: 3
metadata:
name: test-cluster
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OpenShiftSDN
serviceNetwork:
- 172.30.0.0/16
platform:
aws:
region: us-gov-west-1
userTags:
adminContact: jdoe
costCenter: 7536
subnets:
- subnet-1
- subnet-2
- subnet-3
amiID: ami-96c6f8f7
serviceEndpoints:
- name: ec2
url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
hostedZone: Z3URY6TWQ91KVV
fips: false
sshKey: ssh-ed25519 AAAA...
publish: Internal
pullSecret: '{"auths": ...}'
additionalTrustBundle: |
-----BEGIN CERTIFICATE-----
<MY_TRUSTED_CA_CERT>
-----END CERTIFICATE-----
- 1 10 18
- Required.
- 2
- Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content.
- 3 7
- If you do not provide these parameters and values, the installation program provides the default value.
- 4
- The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 5 8
- Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge, for your machines if you disable simultaneous multithreading.
- 6 9
- To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000.
- 11
- If you provide your own VPC, specify subnets for each availability zone that your cluster uses.
- 12
- The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster.
- 13
- The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate.
- 14
- The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone.
- 15
- Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- 16
- You can optionally provide the sshKey value that you use to access the machines in your cluster.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- 17
- How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External.
- 19
- The custom CA certificate. This is required when deploying to the AWS C2S Top Secret Region because the AWS API requires a custom CA trust bundle.
4.10.9.4. AWS regions without a published RHCOS AMI
You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. This is required if you are deploying your cluster to an AWS government or secret region. AWS government and secret regions are supported by the AWS SDK.
If you are deploying to a region not supported by the AWS SDK and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs.
A region without native support for an RHCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file.
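In that case, the region and AMI pairing in the install-config.yaml file might look like the following sketch; the AMI ID is a placeholder for the custom AMI that you upload in the next section:
platform:
  aws:
    region: us-iso-east-1
    amiID: ami-0123456789abcdef0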
4.10.9.5. Uploading a custom RHCOS AMI in AWS
If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region.
Prerequisites
- You configured an AWS account.
- You created an Amazon S3 bucket with the required IAM service role.
- You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing.
- You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer.
Procedure
Export your AWS profile as an environment variable:
$ export AWS_PROFILE=<aws_profile> 1
- 1
- The AWS profile name that holds your AWS credentials, like govcloud.
Export the region to associate with your custom AMI as an environment variable:
$ export AWS_DEFAULT_REGION=<aws_region> 1
- 1
- The AWS region, like us-gov-east-1.
Export the version of RHCOS you uploaded to Amazon S3 as an environment variable:
$ export RHCOS_VERSION=<version> 1
- 1
- The RHCOS VMDK version, like 4.8.0.
Export the Amazon S3 bucket name as an environment variable:
$ export VMIMPORT_BUCKET_NAME=<s3_bucket_name>
Create the containers.json file and define your RHCOS VMDK file:
$ cat <<EOF > containers.json
{
  "Description": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64",
  "Format": "vmdk",
  "UserBucket": {
    "S3Bucket": "${VMIMPORT_BUCKET_NAME}",
    "S3Key": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.vmdk"
  }
}
EOF
Import the RHCOS disk as an Amazon EBS snapshot:
$ aws ec2 import-snapshot --region ${AWS_DEFAULT_REGION} \
    --description "<description>" \ 1
    --disk-container "file://<file_path>/containers.json" 2
Check the status of the image import:
$ watch -n 5 aws ec2 describe-import-snapshot-tasks --region ${AWS_DEFAULT_REGION}
Example output
{
    "ImportSnapshotTasks": [
        {
            "Description": "rhcos-4.7.0-x86_64-aws.x86_64",
            "ImportTaskId": "import-snap-fh6i8uil",
            "SnapshotTaskDetail": {
                "Description": "rhcos-4.7.0-x86_64-aws.x86_64",
                "DiskImageSize": 819056640.0,
                "Format": "VMDK",
                "SnapshotId": "snap-06331325870076318",
                "Status": "completed",
                "UserBucket": {
                    "S3Bucket": "external-images",
                    "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk"
                }
            }
        }
    ]
}
Copy the SnapshotId to register the image.
Create a custom RHCOS AMI from the RHCOS snapshot:
$ aws ec2 register-image \
    --region ${AWS_DEFAULT_REGION} \
    --architecture x86_64 \ 1
    --description "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 2
    --ena-support \
    --name "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 3
    --virtualization-type hvm \
    --root-device-name '/dev/xvda' \
    --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4
To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs.
4.10.9.6. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
- If your cluster is on AWS, you added the ec2.<region>.amazonaws.com, elasticloadbalancing.<region>.amazonaws.com, and s3.<region>.amazonaws.com endpoints to your VPC endpoint. These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works on the container level, not the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not sufficient.
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...
- 1
- A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2
- A proxy URL to use for creating HTTPS connections outside the cluster.
- 3
- A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4
- If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Note: The installation program does not support the proxy readinessEndpoints field.
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it has a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
4.10.10. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2
Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
Note: The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
Important:
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending
node-bootstrappercertificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. - It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
ImportantYou must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

Note: The elevated permissions provided by the AdministratorAccess policy are required only during installation.
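A minimal sketch of removing the policy with the AWS CLI, assuming an IAM user named <user_name>:

$ aws iam detach-user-policy --user-name <user_name> \
      --policy-arn arn:aws:iam::aws:policy/AdministratorAccess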
4.10.11. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
Unpack the archive:

$ tar xvzf <file>

Place the oc binary in a directory that is on your PATH.

To check your PATH, execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
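To confirm the installed client version, you can run:

$ oc version --client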
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.

To check your PATH, open the command prompt and execute the following command:

C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.

To check your PATH, open a terminal and execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
4.10.12. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin
4.10.13. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

$ cat <installation_directory>/auth/kubeadmin-password

Note: Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

List the OpenShift Container Platform web console route:

$ oc get routes -n openshift-console | grep 'console-openshift'

Note: Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
- Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.
4.10.14. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
4.10.15. Next steps
- Validating an installation.
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
- If necessary, you can remove cloud provider credentials.
4.11. Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates
In OpenShift Container Platform version 4.8, you can install a cluster on Amazon Web Services (AWS) that uses infrastructure that you provide.
One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company’s policies.
The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example.
4.11.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
You configured an AWS account to host the cluster.
Important: If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation.
If you use a firewall, you configured it to allow the sites that your cluster requires access to.
Note: Be sure to also review this site list if you are configuring a proxy.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
4.11.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
4.11.3. Required AWS infrastructure components
To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure.
For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page.
By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components:
- An AWS Virtual Private Cloud (VPC)
- Networking and load balancing components
- Security groups and roles
- An OpenShift Container Platform bootstrap node
- OpenShift Container Platform control plane nodes
- An OpenShift Container Platform compute node
Alternatively, you can manually create the components or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate.
4.11.3.1. Other infrastructure components
- A VPC
- DNS entries
- Load balancers (classic or network) and listeners
- A public and a private Route 53 zone
- Security groups
- IAM roles
- S3 buckets
If you are working in a disconnected environment or use a proxy, you cannot reach the public IP addresses for EC2 and ELB endpoints. To reach these endpoints, you must create a VPC endpoint and attach it to the subnet that the clusters are using. Create the following endpoints:
- ec2.<region>.amazonaws.com
- elasticloadbalancing.<region>.amazonaws.com
- s3.<region>.amazonaws.com
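One way to create these endpoints is with the AWS CLI; the following is a sketch with placeholder IDs for your VPC, subnet, and route table. EC2 and Elastic Load Balancing use interface endpoints, while S3 uses a gateway endpoint:

$ aws ec2 create-vpc-endpoint --vpc-id <vpc_id> --vpc-endpoint-type Interface \
      --service-name com.amazonaws.<region>.ec2 --subnet-ids <subnet_id>
$ aws ec2 create-vpc-endpoint --vpc-id <vpc_id> --vpc-endpoint-type Interface \
      --service-name com.amazonaws.<region>.elasticloadbalancing --subnet-ids <subnet_id>
$ aws ec2 create-vpc-endpoint --vpc-id <vpc_id> --vpc-endpoint-type Gateway \
      --service-name com.amazonaws.<region>.s3 --route-table-ids <route_table_id>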
Required VPC components
You must provide a suitable VPC and subnets that allow communication to your machines.
| Component | AWS type | Description |
|---|---|---|
| VPC | AWS::EC2::VPC | You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. |
| Public subnets | AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation | Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. |
| Internet gateway | AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::SubnetRouteTableAssociation, AWS::EC2::NatGateway, AWS::EC2::EIP | You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. |
| Network access control | AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry | You must allow the VPC to access the following ports. |
| Private subnets | AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation | Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. |

The VPC must allow access to the following ports:

| Port | Reason |
|---|---|
| 80 | Inbound HTTP traffic |
| 443 | Inbound HTTPS traffic |
| 22 | Inbound SSH traffic |
| 1024 - 65535 | Inbound ephemeral traffic |
| 0 - 65535 | Outbound ephemeral traffic |
Required DNS and load balancing components
Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster’s infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer.
The cluster also requires load balancers and listeners for port 6443, which are required for the Kubernetes API and its extensions, and port 22623, which are required for the Ignition config files for new machines. The targets will be the control plane nodes (also known as the master nodes). Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster.
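Before you start the bootstrap process, one way to confirm that both records resolve is with dig; the api-int.<cluster_name>.<domain> record is typically resolvable only from the cluster network:

$ dig +short api.<cluster_name>.<domain>
$ dig +short api-int.<cluster_name>.<domain>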
| Component | AWS type | Description |
|---|---|---|
| DNS | AWS::Route53::HostedZone | The hosted zone for your internal DNS. |
| etcd record sets | AWS::Route53::RecordSet | The registration records for etcd for your control plane machines. |
| Public load balancer | AWS::ElasticLoadBalancingV2::LoadBalancer | The load balancer for your public subnets. |
| External API server record | AWS::Route53::RecordSetGroup | Alias records for the external API server. |
| External listener | AWS::ElasticLoadBalancingV2::Listener | A listener on port 6443 for the external load balancer. |
| External target group | AWS::ElasticLoadBalancingV2::TargetGroup | The target group for the external load balancer. |
| Private load balancer | AWS::ElasticLoadBalancingV2::LoadBalancer | The load balancer for your private subnets. |
| Internal API server record | AWS::Route53::RecordSetGroup | Alias records for the internal API server. |
| Internal listener | AWS::ElasticLoadBalancingV2::Listener | A listener on port 22623 for the internal load balancer. |
| Internal target group | AWS::ElasticLoadBalancingV2::TargetGroup | The target group for the internal load balancer. |
| Internal listener | AWS::ElasticLoadBalancingV2::Listener | A listener on port 6443 for the internal load balancer. |
| Internal target group | AWS::ElasticLoadBalancingV2::TargetGroup | The target group for the internal load balancer. |
Security groups
The control plane and worker machines require access to the following ports:
| Group | Type | IP Protocol | Port range |
|---|---|---|---|
| MasterSecurityGroup | AWS::EC2::SecurityGroup | icmp | 0 |
| | | tcp | 22 |
| | | tcp | 6443 |
| | | tcp | 22623 |
| WorkerSecurityGroup | AWS::EC2::SecurityGroup | icmp | 0 |
| | | tcp | 22 |
| BootstrapSecurityGroup | AWS::EC2::SecurityGroup | tcp | 22 |
| | | tcp | 19531 |
Control plane Ingress
The control plane machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.
| Ingress group | Description | IP protocol | Port range |
|---|---|---|---|
| MasterIngressEtcd | etcd | tcp | 2379 - 2380 |
| MasterIngressVxlan | Vxlan packets | udp | 4789 |
| MasterIngressWorkerVxlan | Vxlan packets | udp | 4789 |
| MasterIngressInternal | Internal cluster communication and Kubernetes proxy metrics | tcp | 9000 - 9999 |
| MasterIngressWorkerInternal | Internal cluster communication | tcp | 9000 - 9999 |
| MasterIngressKube | Kubernetes kubelet, scheduler and controller manager | tcp | 10250 - 10259 |
| MasterIngressWorkerKube | Kubernetes kubelet, scheduler and controller manager | tcp | 10250 - 10259 |
| MasterIngressIngressServices | Kubernetes Ingress services | tcp | 30000 - 32767 |
| MasterIngressWorkerIngressServices | Kubernetes Ingress services | tcp | 30000 - 32767 |
| MasterIngressGeneve | Geneve packets | udp | 6081 |
| MasterIngressWorkerGeneve | Geneve packets | udp | 6081 |
| MasterIngressIpsecIke | IPsec IKE packets | udp | 500 |
| MasterIngressWorkerIpsecIke | IPsec IKE packets | udp | 500 |
| MasterIngressIpsecNat | IPsec NAT-T packets | udp | 4500 |
| MasterIngressWorkerIpsecNat | IPsec NAT-T packets | udp | 4500 |
| MasterIngressIpsecEsp | IPsec ESP packets | 50 | All |
| MasterIngressWorkerIpsecEsp | IPsec ESP packets | 50 | All |
| MasterIngressInternalUDP | Internal cluster communication | udp | 9000 - 9999 |
| MasterIngressWorkerInternalUDP | Internal cluster communication | udp | 9000 - 9999 |
| MasterIngressIngressServicesUDP | Kubernetes Ingress services | udp | 30000 - 32767 |
| MasterIngressWorkerIngressServicesUDP | Kubernetes Ingress services | udp | 30000 - 32767 |
Worker Ingress
The worker machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.
| Ingress group | Description | IP protocol | Port range |
|---|---|---|---|
| WorkerIngressVxlan | Vxlan packets | udp | 4789 |
| WorkerIngressWorkerVxlan | Vxlan packets | udp | 4789 |
| WorkerIngressInternal | Internal cluster communication | tcp | 9000 - 9999 |
| WorkerIngressWorkerInternal | Internal cluster communication | tcp | 9000 - 9999 |
| WorkerIngressKube | Kubernetes kubelet, scheduler, and controller manager | tcp | 10250 |
| WorkerIngressWorkerKube | Kubernetes kubelet, scheduler, and controller manager | tcp | 10250 |
| WorkerIngressIngressServices | Kubernetes Ingress services | tcp | 30000 - 32767 |
| WorkerIngressWorkerIngressServices | Kubernetes Ingress services | tcp | 30000 - 32767 |
| WorkerIngressGeneve | Geneve packets | udp | 6081 |
| WorkerIngressWorkerGeneve | Geneve packets | udp | 6081 |
| WorkerIngressIpsecIke | IPsec IKE packets | udp | 500 |
| WorkerIngressWorkerIpsecIke | IPsec IKE packets | udp | 500 |
| WorkerIngressIpsecNat | IPsec NAT-T packets | udp | 4500 |
| WorkerIngressWorkerIpsecNat | IPsec NAT-T packets | udp | 4500 |
| WorkerIngressIpsecEsp | IPsec ESP packets | 50 | All |
| WorkerIngressWorkerIpsecEsp | IPsec ESP packets | 50 | All |
| WorkerIngressInternalUDP | Internal cluster communication | udp | 9000 - 9999 |
| WorkerIngressWorkerInternalUDP | Internal cluster communication | udp | 9000 - 9999 |
| WorkerIngressIngressServicesUDP | Kubernetes Ingress services | udp | 30000 - 32767 |
| WorkerIngressWorkerIngressServicesUDP | Kubernetes Ingress services | udp | 30000 - 32767 |
Roles and instance profiles
You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide a AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions.
| Role | Effect | Action | Resource |
|---|---|---|---|
| Master | Allow | ec2:* | * |
| | Allow | elasticloadbalancing:* | * |
| | Allow | iam:PassRole | * |
| | Allow | s3:GetObject | * |
| Worker | Allow | ec2:Describe* | * |
| Bootstrap | Allow | ec2:Describe* | * |
| | Allow | ec2:AttachVolume | * |
| | Allow | ec2:DetachVolume | * |
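If you create these objects without the templates, the AWS CLI flow looks roughly like the following sketch for the master role; the role, profile, and policy names, and the two JSON files (a standard EC2 trust policy and a permissions document matching the table above), are placeholders:

$ aws iam create-role --role-name <infrastructure_name>-master-role \
      --assume-role-policy-document file://ec2-trust-policy.json
$ aws iam put-role-policy --role-name <infrastructure_name>-master-role \
      --policy-name <infrastructure_name>-master-policy \
      --policy-document file://master-policy.json
$ aws iam create-instance-profile --instance-profile-name <infrastructure_name>-master-profile
$ aws iam add-role-to-instance-profile \
      --instance-profile-name <infrastructure_name>-master-profile \
      --role-name <infrastructure_name>-master-role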
4.11.3.2. Cluster machines
You need AWS::EC2::Instance objects for the following machines:
- A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys.
- Three control plane machines. The control plane machines are not governed by a machine set.
- Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a machine set.
4.11.3.3. Certificate signing requests management
Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
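For example, after new machines join the cluster, you can list pending CSRs and approve each one that you have verified:

$ oc get csr
$ oc adm certificate approve <csr_name>

To approve all pending CSRs in one pass, after verifying them, you can pipe the list:

$ oc get csr -o name | xargs oc adm certificate approve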
4.11.3.4. Supported AWS machine types
The following Amazon Web Services (AWS) instance types are supported with OpenShift Container Platform.
Example 4.21. Instance types for machines
[Table: supported AWS instance types, marking for each type whether it can serve as a bootstrap, control plane, or compute machine.]
4.11.3.5. Required AWS permissions for the IAM user
Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region.
When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions:
Example 4.22. Required EC2 permissions for installation
- ec2:AuthorizeSecurityGroupEgress
- ec2:AuthorizeSecurityGroupIngress
- ec2:CopyImage
- ec2:CreateNetworkInterface
- ec2:AttachNetworkInterface
- ec2:CreateSecurityGroup
- ec2:CreateTags
- ec2:CreateVolume
- ec2:DeleteSecurityGroup
- ec2:DeleteSnapshot
- ec2:DeleteTags
- ec2:DeregisterImage
- ec2:DescribeAccountAttributes
- ec2:DescribeAddresses
- ec2:DescribeAvailabilityZones
- ec2:DescribeDhcpOptions
- ec2:DescribeImages
- ec2:DescribeInstanceAttribute
- ec2:DescribeInstanceCreditSpecifications
- ec2:DescribeInstances
- ec2:DescribeInstanceTypes
- ec2:DescribeInternetGateways
- ec2:DescribeKeyPairs
- ec2:DescribeNatGateways
- ec2:DescribeNetworkAcls
- ec2:DescribeNetworkInterfaces
- ec2:DescribePrefixLists
- ec2:DescribeRegions
- ec2:DescribeRouteTables
- ec2:DescribeSecurityGroups
- ec2:DescribeSubnets
- ec2:DescribeTags
- ec2:DescribeVolumes
- ec2:DescribeVpcAttribute
- ec2:DescribeVpcClassicLink
- ec2:DescribeVpcClassicLinkDnsSupport
- ec2:DescribeVpcEndpoints
- ec2:DescribeVpcs
- ec2:GetEbsDefaultKmsKeyId
- ec2:ModifyInstanceAttribute
- ec2:ModifyNetworkInterfaceAttribute
- ec2:RevokeSecurityGroupEgress
- ec2:RevokeSecurityGroupIngress
- ec2:RunInstances
- ec2:TerminateInstances
Example 4.23. Required permissions for creating network resources during installation
- ec2:AllocateAddress
- ec2:AssociateAddress
- ec2:AssociateDhcpOptions
- ec2:AssociateRouteTable
- ec2:AttachInternetGateway
- ec2:CreateDhcpOptions
- ec2:CreateInternetGateway
- ec2:CreateNatGateway
- ec2:CreateRoute
- ec2:CreateRouteTable
- ec2:CreateSubnet
- ec2:CreateVpc
- ec2:CreateVpcEndpoint
- ec2:ModifySubnetAttribute
- ec2:ModifyVpcAttribute
If you use an existing VPC, your account does not require these permissions for creating network resources.
Example 4.24. Required Elastic Load Balancing permissions (ELB) for installation
- elasticloadbalancing:AddTags
- elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
- elasticloadbalancing:AttachLoadBalancerToSubnets
- elasticloadbalancing:ConfigureHealthCheck
- elasticloadbalancing:CreateLoadBalancer
- elasticloadbalancing:CreateLoadBalancerListeners
- elasticloadbalancing:DeleteLoadBalancer
- elasticloadbalancing:DeregisterInstancesFromLoadBalancer
- elasticloadbalancing:DescribeInstanceHealth
- elasticloadbalancing:DescribeLoadBalancerAttributes
- elasticloadbalancing:DescribeLoadBalancers
- elasticloadbalancing:DescribeTags
- elasticloadbalancing:ModifyLoadBalancerAttributes
- elasticloadbalancing:RegisterInstancesWithLoadBalancer
- elasticloadbalancing:SetLoadBalancerPoliciesOfListener
Example 4.25. Required Elastic Load Balancing permissions (ELBv2) for installation
- elasticloadbalancing:AddTags
- elasticloadbalancing:CreateListener
- elasticloadbalancing:CreateLoadBalancer
- elasticloadbalancing:CreateTargetGroup
- elasticloadbalancing:DeleteLoadBalancer
- elasticloadbalancing:DeregisterTargets
- elasticloadbalancing:DescribeListeners
- elasticloadbalancing:DescribeLoadBalancerAttributes
- elasticloadbalancing:DescribeLoadBalancers
- elasticloadbalancing:DescribeTargetGroupAttributes
- elasticloadbalancing:DescribeTargetHealth
- elasticloadbalancing:ModifyLoadBalancerAttributes
- elasticloadbalancing:ModifyTargetGroup
- elasticloadbalancing:ModifyTargetGroupAttributes
- elasticloadbalancing:RegisterTargets
Example 4.26. Required IAM permissions for installation
- iam:AddRoleToInstanceProfile
- iam:CreateInstanceProfile
- iam:CreateRole
- iam:DeleteInstanceProfile
- iam:DeleteRole
- iam:DeleteRolePolicy
- iam:GetInstanceProfile
- iam:GetRole
- iam:GetRolePolicy
- iam:GetUser
- iam:ListInstanceProfilesForRole
- iam:ListRoles
- iam:ListUsers
- iam:PassRole
- iam:PutRolePolicy
- iam:RemoveRoleFromInstanceProfile
- iam:SimulatePrincipalPolicy
- iam:TagRole
If you have not created an elastic load balancer (ELB) in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission.
Example 4.27. Required Route 53 permissions for installation
- route53:ChangeResourceRecordSets
- route53:ChangeTagsForResource
- route53:CreateHostedZone
- route53:DeleteHostedZone
- route53:GetChange
- route53:GetHostedZone
- route53:ListHostedZones
- route53:ListHostedZonesByName
- route53:ListResourceRecordSets
- route53:ListTagsForResource
- route53:UpdateHostedZoneComment
Example 4.28. Required S3 permissions for installation
- s3:CreateBucket
- s3:DeleteBucket
- s3:GetAccelerateConfiguration
- s3:GetBucketAcl
- s3:GetBucketCors
- s3:GetBucketLocation
- s3:GetBucketLogging
- s3:GetBucketObjectLockConfiguration
- s3:GetBucketReplication
- s3:GetBucketRequestPayment
- s3:GetBucketTagging
- s3:GetBucketVersioning
- s3:GetBucketWebsite
- s3:GetEncryptionConfiguration
- s3:GetLifecycleConfiguration
- s3:GetReplicationConfiguration
- s3:ListBucket
- s3:PutBucketAcl
- s3:PutBucketTagging
- s3:PutEncryptionConfiguration
Example 4.29. S3 permissions that cluster Operators require
- s3:DeleteObject
- s3:GetObject
- s3:GetObjectAcl
- s3:GetObjectTagging
- s3:GetObjectVersion
- s3:PutObject
- s3:PutObjectAcl
- s3:PutObjectTagging
Example 4.30. Required permissions to delete base cluster resources
- autoscaling:DescribeAutoScalingGroups
- ec2:DeleteNetworkInterface
- ec2:DeleteVolume
- elasticloadbalancing:DeleteTargetGroup
- elasticloadbalancing:DescribeTargetGroups
- iam:DeleteAccessKey
- iam:DeleteUser
- iam:ListAttachedRolePolicies
- iam:ListInstanceProfiles
- iam:ListRolePolicies
- iam:ListUserPolicies
- s3:DeleteObject
- s3:ListBucketVersions
- tag:GetResources
Example 4.31. Required permissions to delete network resources
- ec2:DeleteDhcpOptions
- ec2:DeleteInternetGateway
- ec2:DeleteNatGateway
- ec2:DeleteRoute
- ec2:DeleteRouteTable
- ec2:DeleteSubnet
- ec2:DeleteVpc
- ec2:DeleteVpcEndpoints
- ec2:DetachInternetGateway
- ec2:DisassociateRouteTable
- ec2:ReleaseAddress
- ec2:ReplaceRouteTableAssociation
If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources.
Example 4.32. Required permissions to delete a cluster with shared instance roles
- iam:UntagRole
Example 4.33. Additional IAM and S3 permissions that are required to create manifests
- iam:DeleteAccessKey
- iam:DeleteUser
- iam:DeleteUserPolicy
- iam:GetUserPolicy
- iam:ListAccessKeys
- iam:PutUserPolicy
- iam:TagUser
- s3:PutBucketPublicAccessBlock
- s3:GetBucketPublicAccessBlock
- s3:PutLifecycleConfiguration
- s3:HeadBucket
- s3:ListBucketMultipartUploads
- s3:AbortMultipartUpload
If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions.
Example 4.34. Optional permissions for instance and quota checks for installation
- ec2:DescribeInstanceTypeOfferings
- servicequotas:ListAWSDefaultServiceQuotas
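Rather than discovering a missing permission mid-installation, you can probe a subset of these actions with the IAM policy simulator, which relies on the iam:SimulatePrincipalPolicy permission listed above; the account ID and user name are placeholders:

$ aws iam simulate-principal-policy \
      --policy-source-arn arn:aws:iam::<account_id>:user/<user_name> \
      --action-names ec2:RunInstances elasticloadbalancing:CreateLoadBalancer route53:ChangeResourceRecordSets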
4.11.4. Obtaining an AWS Marketplace image
If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy worker nodes.
Deploying an OpenShift Container Platform cluster using an AWS Marketplace image is not supported in secret regions.
Prerequisites
- You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster.
Procedure
- Complete the OpenShift Container Platform subscription from the AWS Marketplace.
- Record the AMI ID for your specific region. If you use the CloudFormation template to deploy your worker nodes, you must update the worker0.type.properties.ImageID parameter with this value.
4.11.5. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf openshift-install-linux.tar.gz

- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
4.11.6. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
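If the bootstrap process later fails, the same key lets the installation program collect logs from the machines. A sketch of the gather invocation for user-provisioned infrastructure, with placeholder addresses:

$ ./openshift-install gather bootstrap --dir <installation_directory> \
      --bootstrap <bootstrap_address> --master <master_address>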
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

View the public SSH key:
$ cat <path>/<file_name>.pub

For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
./openshift-install gathercommand.NoteOn some distributions, default SSH private key identities such as
~/.ssh/id_rsaand~/.ssh/id_dsaare managed automatically.If the
ssh-agentprocess is not already running for your local user, start it as a background task:$ eval "$(ssh-agent -s)"Example output
Agent pid 31874NoteIf your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program.
4.11.7. Creating the installation files for AWS
To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation.
4.11.7.1. Optional: Creating a separate /var partition
It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.
OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example:
- /var/lib/containers: Holds container-related content that can grow as more images and containers are added to a system.
- /var/lib/etcd: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.
- /var: Holds data that you might want to keep separate for purposes such as auditing.
Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.
Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.
If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section.
Procedure
Create a directory to hold the OpenShift Container Platform installation files:
$ mkdir $HOME/clusterconfigRun
openshift-installto create a set of files in themanifestandopenshiftsubdirectories. Answer the system questions as you are prompted:$ openshift-install create manifests --dir $HOME/clusterconfigExample output
? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: $HOME/clusterconfig/manifests and $HOME/clusterconfig/openshiftOptional: Confirm that the installation program created manifests in the
clusterconfig/openshiftdirectory:$ ls $HOME/clusterconfig/openshift/Example output
99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ...Create a Butane config that configures the additional partition. For example, name the file
$HOME/clusterconfig/98-var-partition.bu, change the disk device name to the name of the storage device on theworkersystems, and set the storage size as appropriate. This example places the/vardirectory on a separate partition:variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name>1 partitions: - label: var start_mib: <partition_start_offset>2 size_mib: <partition_size>3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota]4 with_mount_unit: true- 1
- The storage device name of the disk that you want to partition.
- 2
- When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.
- 3
- The size of the data partition in mebibytes.
- 4
- The
prjquotamount option must be enabled for filesystems used for container storage.
NoteWhen creating a separate
/varpartition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name.Create a manifest from the Butane config and save it to the
clusterconfig/openshiftdirectory. For example, run the following command:$ butane $HOME/clusterconfig/98-var-partition.bu -o $HOME/clusterconfig/openshift/98-var-partition.yamlRun
openshift-installagain to create Ignition configs from a set of files in themanifestandopenshiftsubdirectories:$ openshift-install create ignition-configs --dir $HOME/clusterconfig $ ls $HOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign
Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.
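As a quick sanity check before you use them, you can confirm that each file parses as Ignition JSON; jq is used elsewhere in this document:

$ jq -r '.ignition.version' $HOME/clusterconfig/bootstrap.ign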
4.11.7.2. Creating the installation configuration file
Generate and customize the installation configuration file that the installation program needs to deploy your cluster.
Prerequisites
- You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster.
- You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually.
Procedure
Create the install-config.yaml file.

Change to the directory that contains the installation program and run the following command:

$ ./openshift-install create install-config --dir <installation_directory> 1

1 For <installation_directory>, specify the directory name to store the files that the installation program creates.

Important: Specify an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.

Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

- Select aws as the platform to target.
If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

Note: The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.

- Select the AWS region to deploy the cluster to.
- Select the base domain for the Route 53 service that you configured for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
Optional: Back up the install-config.yaml file.

Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
4.11.7.3. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

- If your cluster is on AWS, you added the ec2.<region>.amazonaws.com, elasticloadbalancing.<region>.amazonaws.com, and s3.<region>.amazonaws.com endpoints to your VPC endpoint. These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works on the container level, not the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not sufficient.
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
2 A proxy URL to use for creating HTTPS connections outside the cluster.
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.

Note: The installation program does not support the proxy readinessEndpoints field.

- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
4.11.7.4. Creating the Kubernetes manifest and Ignition config files
Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines.
The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines.
- The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Prerequisites
- You obtained the OpenShift Container Platform installation program.
- You created the install-config.yaml installation configuration file.
Procedure
Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster:

$ ./openshift-install create manifests --dir <installation_directory> 1

1 For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.
Remove the Kubernetes manifest files that define the control plane machines:
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml

By removing these files, you prevent the cluster from automatically generating control plane machines.
Remove the Kubernetes manifest files that define the worker machines:
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml

Because you create and manage the worker machines yourself, you do not need to initialize these machines.
Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:

- Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.
- Locate the mastersSchedulable parameter and ensure that it is set to false.
- Save and exit the file.
Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: null
  name: cluster
spec:
  baseDomain: example.openshift.com
  privateZone: 1
    id: mycluster-100419-private-zone
  publicZone: 2
    id: example.openshift.com
status: {}

1 2 Remove this section completely.

If you do so, you must add ingress DNS records manually in a later step.
To create the Ignition configuration files, run the following command from the directory that contains the installation program:

$ ./openshift-install create ignition-configs --dir <installation_directory> 1

1 For <installation_directory>, specify the same installation directory.

Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory:

.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
4.11.8. Extracting the infrastructure name
The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it.
Prerequisites
- You obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
- You generated the Ignition config files for your cluster.
- You installed the jq package.
Procedure
To extract and view the infrastructure name from the Ignition config file metadata, run the following command:

$ jq -r .infraID <installation_directory>/metadata.json 1

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

Example output

openshift-vw9j6 1

1 The output of this command is your cluster name and a random string.
4.11.9. Creating a VPC in AWS
You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC.
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- You configured an AWS account.
- You added your AWS keys and region to your local AWS profile by running aws configure.
- You generated the Ignition config files for your cluster.
Procedure
Create a JSON file that contains the parameter values that the template requires:

[
  {
    "ParameterKey": "VpcCidr", 1
    "ParameterValue": "10.0.0.0/16" 2
  },
  {
    "ParameterKey": "AvailabilityZoneCount", 3
    "ParameterValue": "1" 4
  },
  {
    "ParameterKey": "SubnetBits", 5
    "ParameterValue": "12" 6
  }
]

1 The CIDR block for the VPC.
2 Specify a CIDR block in the format x.x.x.x/16-24.
3 The number of availability zones to deploy the VPC in.
4 Specify an integer between 1 and 3.
5 The size of each subnet in each availability zone.
6 Specify an integer between 5 and 13, where 5 is /27 and 13 is /19.

- Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires.
Launch the CloudFormation template to create a stack of AWS resources that represent the VPC:
ImportantYou must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3

1 <name> is the name for the CloudFormation stack, such as cluster-vpc. You need the name of this stack if you remove the cluster.
2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
Example output
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f

Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

VpcId: The ID of your VPC.
PublicSubnetIds: The IDs of the new public subnets.
PrivateSubnetIds: The IDs of the new private subnets.
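To print only those output values, for example to feed them into the later templates, you can query the stack directly:

$ aws cloudformation describe-stacks --stack-name <name> \
      --query 'Stacks[0].Outputs' --output table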
4.11.9.1. CloudFormation template for the VPC
You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster.
Example 4.35. CloudFormation template for the VPC
AWSTemplateFormatVersion: 2010-09-09
Description: Template for Best Practice VPC with 1-3 AZs
Parameters:
VpcCidr:
AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
Default: 10.0.0.0/16
Description: CIDR block for VPC.
Type: String
AvailabilityZoneCount:
ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)"
MinValue: 1
MaxValue: 3
Default: 1
Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)"
Type: Number
SubnetBits:
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27.
MinValue: 5
MaxValue: 13
Default: 12
Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)"
Type: Number
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Network Configuration"
Parameters:
- VpcCidr
- SubnetBits
- Label:
default: "Availability Zones"
Parameters:
- AvailabilityZoneCount
ParameterLabels:
AvailabilityZoneCount:
default: "Availability Zone Count"
VpcCidr:
default: "VPC CIDR"
SubnetBits:
default: "Bits Per Subnet"
Conditions:
DoAz3: !Equals [3, !Ref AvailabilityZoneCount]
DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3]
Resources:
VPC:
Type: "AWS::EC2::VPC"
Properties:
EnableDnsSupport: "true"
EnableDnsHostnames: "true"
CidrBlock: !Ref VpcCidr
PublicSubnet:
Type: "AWS::EC2::Subnet"
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 0
- Fn::GetAZs: !Ref "AWS::Region"
PublicSubnet2:
Type: "AWS::EC2::Subnet"
Condition: DoAz2
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 1
- Fn::GetAZs: !Ref "AWS::Region"
PublicSubnet3:
Type: "AWS::EC2::Subnet"
Condition: DoAz3
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 2
- Fn::GetAZs: !Ref "AWS::Region"
InternetGateway:
Type: "AWS::EC2::InternetGateway"
GatewayToInternet:
Type: "AWS::EC2::VPCGatewayAttachment"
Properties:
VpcId: !Ref VPC
InternetGatewayId: !Ref InternetGateway
PublicRouteTable:
Type: "AWS::EC2::RouteTable"
Properties:
VpcId: !Ref VPC
PublicRoute:
Type: "AWS::EC2::Route"
DependsOn: GatewayToInternet
Properties:
RouteTableId: !Ref PublicRouteTable
DestinationCidrBlock: 0.0.0.0/0
GatewayId: !Ref InternetGateway
PublicSubnetRouteTableAssociation:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PublicSubnet
RouteTableId: !Ref PublicRouteTable
PublicSubnetRouteTableAssociation2:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Condition: DoAz2
Properties:
SubnetId: !Ref PublicSubnet2
RouteTableId: !Ref PublicRouteTable
PublicSubnetRouteTableAssociation3:
Condition: DoAz3
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PublicSubnet3
RouteTableId: !Ref PublicRouteTable
PrivateSubnet:
Type: "AWS::EC2::Subnet"
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 0
- Fn::GetAZs: !Ref "AWS::Region"
PrivateRouteTable:
Type: "AWS::EC2::RouteTable"
Properties:
VpcId: !Ref VPC
PrivateSubnetRouteTableAssociation:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PrivateSubnet
RouteTableId: !Ref PrivateRouteTable
NAT:
DependsOn:
- GatewayToInternet
Type: "AWS::EC2::NatGateway"
Properties:
AllocationId:
"Fn::GetAtt":
- EIP
- AllocationId
SubnetId: !Ref PublicSubnet
EIP:
Type: "AWS::EC2::EIP"
Properties:
Domain: vpc
Route:
Type: "AWS::EC2::Route"
Properties:
RouteTableId:
Ref: PrivateRouteTable
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId:
Ref: NAT
PrivateSubnet2:
Type: "AWS::EC2::Subnet"
Condition: DoAz2
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 1
- Fn::GetAZs: !Ref "AWS::Region"
PrivateRouteTable2:
Type: "AWS::EC2::RouteTable"
Condition: DoAz2
Properties:
VpcId: !Ref VPC
PrivateSubnetRouteTableAssociation2:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Condition: DoAz2
Properties:
SubnetId: !Ref PrivateSubnet2
RouteTableId: !Ref PrivateRouteTable2
NAT2:
DependsOn:
- GatewayToInternet
Type: "AWS::EC2::NatGateway"
Condition: DoAz2
Properties:
AllocationId:
"Fn::GetAtt":
- EIP2
- AllocationId
SubnetId: !Ref PublicSubnet2
EIP2:
Type: "AWS::EC2::EIP"
Condition: DoAz2
Properties:
Domain: vpc
Route2:
Type: "AWS::EC2::Route"
Condition: DoAz2
Properties:
RouteTableId:
Ref: PrivateRouteTable2
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId:
Ref: NAT2
PrivateSubnet3:
Type: "AWS::EC2::Subnet"
Condition: DoAz3
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 2
- Fn::GetAZs: !Ref "AWS::Region"
PrivateRouteTable3:
Type: "AWS::EC2::RouteTable"
Condition: DoAz3
Properties:
VpcId: !Ref VPC
PrivateSubnetRouteTableAssociation3:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Condition: DoAz3
Properties:
SubnetId: !Ref PrivateSubnet3
RouteTableId: !Ref PrivateRouteTable3
NAT3:
DependsOn:
- GatewayToInternet
Type: "AWS::EC2::NatGateway"
Condition: DoAz3
Properties:
AllocationId:
"Fn::GetAtt":
- EIP3
- AllocationId
SubnetId: !Ref PublicSubnet3
EIP3:
Type: "AWS::EC2::EIP"
Condition: DoAz3
Properties:
Domain: vpc
Route3:
Type: "AWS::EC2::Route"
Condition: DoAz3
Properties:
RouteTableId:
Ref: PrivateRouteTable3
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId:
Ref: NAT3
S3Endpoint:
Type: AWS::EC2::VPCEndpoint
Properties:
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal: '*'
Action:
- '*'
Resource:
- '*'
RouteTableIds:
- !Ref PublicRouteTable
- !Ref PrivateRouteTable
- !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"]
- !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"]
ServiceName: !Join
- ''
- - com.amazonaws.
- !Ref 'AWS::Region'
- .s3
VpcId: !Ref VPC
Outputs:
VpcId:
Description: ID of the new VPC.
Value: !Ref VPC
PublicSubnetIds:
Description: Subnet IDs of the public subnets.
Value:
!Join [
",",
[!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]]
]
PrivateSubnetIds:
Description: Subnet IDs of the private subnets.
Value:
!Join [
",",
[!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]]
]
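Before you launch a stack from this or any of the following templates, you can optionally ask CloudFormation to check the template syntax first. This is a minimal sketch, assuming you saved the template as vpc-template.yaml:
$ aws cloudformation validate-template --template-body file://vpc-template.yaml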
4.11.10. Creating networking and load balancing components in AWS
You must configure networking and classic or network load balancing in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags.
You can run the template multiple times within a single Virtual Private Cloud (VPC).
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- You configured an AWS account.
- You added your AWS keys and region to your local AWS profile by running aws configure.
- You generated the Ignition config files for your cluster.
- You created and configured a VPC and associated subnets in AWS.
Procedure
Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command:
$ aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1
- 1
- For the <route53_domain>, specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster.
Example output
mycluster.example.com. False 100
HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10
In the example output, the hosted zone ID is Z21IXYZABCZ2A4.
Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "ClusterName",1 "ParameterValue": "mycluster"2 }, { "ParameterKey": "InfrastructureName",3 "ParameterValue": "mycluster-<random_string>"4 }, { "ParameterKey": "HostedZoneId",5 "ParameterValue": "<random_string>"6 }, { "ParameterKey": "HostedZoneName",7 "ParameterValue": "example.com"8 }, { "ParameterKey": "PublicSubnets",9 "ParameterValue": "subnet-<random_string>"10 }, { "ParameterKey": "PrivateSubnets",11 "ParameterValue": "subnet-<random_string>"12 }, { "ParameterKey": "VpcId",13 "ParameterValue": "vpc-<random_string>"14 } ]- 1
- A short, representative cluster name to use for hostnames, etc.
- 2
- Specify the cluster name that you used when you generated the
install-config.yamlfile for the cluster. - 3
- The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
- 4
- Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format
<cluster-name>-<random-string>. - 5
- The Route 53 public zone ID to register the targets with.
- 6
- Specify the Route 53 public zone ID, which as a format similar to
Z21IXYZABCZ2A4. You can obtain this value from the AWS console. - 7
- The Route 53 zone to register the targets with.
- 8
- Specify the Route 53 base domain that you used when you generated the
install-config.yamlfile for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. - 9
- The public subnets that you created for your VPC.
- 10
- Specify the
PublicSubnetIdsvalue from the output of the CloudFormation template for the VPC. - 11
- The private subnets that you created for your VPC.
- 12
- Specify the
PrivateSubnetIdsvalue from the output of the CloudFormation template for the VPC. - 13
- The VPC that you created for the cluster.
- 14
- Specify the
VpcIdvalue from the output of the CloudFormation template for the VPC.
Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires.
Important: If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions.
Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components:
Important: You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4
- 1
- <name> is the name for the CloudFormation stack, such as cluster-dns. You need the name of this stack if you remove the cluster.
- 2
- <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
- 3
- <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
- 4
- You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources.
Example output
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:
PrivateHostedZoneId: Hosted zone ID for the private DNS.
ExternalApiLoadBalancerName: Full name of the external API load balancer.
InternalApiLoadBalancerName: Full name of the internal API load balancer.
ApiServerDnsName: Full hostname of the API server.
RegisterNlbIpTargetsLambda: Lambda ARN useful to help register/deregister IP targets for these load balancers.
ExternalApiTargetGroupArn: ARN of the external API target group.
InternalApiTargetGroupArn: ARN of the internal API target group.
InternalServiceTargetGroupArn: ARN of the internal service target group.
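If you want to record all of these output values at once, you can print them as key=value pairs. This is a minimal sketch, assuming the stack name cluster-dns and the jq utility that is also used later in this chapter:
$ aws cloudformation describe-stacks --stack-name cluster-dns \
     --query 'Stacks[0].Outputs' --output json | jq -r '.[] | "\(.OutputKey)=\(.OutputValue)"'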
4.11.10.1. CloudFormation template for the network and load balancers
You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster.
Example 4.36. CloudFormation template for the network and load balancers
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Network Elements (Route53 & LBs)
Parameters:
ClusterName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
Description: A short, representative cluster name to use for host names and other identifying names.
Type: String
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
Type: String
HostedZoneId:
Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4.
Type: String
HostedZoneName:
Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period.
Type: String
Default: "example.com"
PublicSubnets:
Description: The internet-facing subnets.
Type: List<AWS::EC2::Subnet::Id>
PrivateSubnets:
Description: The internal subnets.
Type: List<AWS::EC2::Subnet::Id>
VpcId:
Description: The VPC-scoped resources will belong to this VPC.
Type: AWS::EC2::VPC::Id
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- ClusterName
- InfrastructureName
- Label:
default: "Network Configuration"
Parameters:
- VpcId
- PublicSubnets
- PrivateSubnets
- Label:
default: "DNS"
Parameters:
- HostedZoneName
- HostedZoneId
ParameterLabels:
ClusterName:
default: "Cluster Name"
InfrastructureName:
default: "Infrastructure Name"
VpcId:
default: "VPC ID"
PublicSubnets:
default: "Public Subnets"
PrivateSubnets:
default: "Private Subnets"
HostedZoneName:
default: "Public Hosted Zone Name"
HostedZoneId:
default: "Public Hosted Zone ID"
Resources:
ExtApiElb:
Type: AWS::ElasticLoadBalancingV2::LoadBalancer
Properties:
Name: !Join ["-", [!Ref InfrastructureName, "ext"]]
IpAddressType: ipv4
Subnets: !Ref PublicSubnets
Type: network
IntApiElb:
Type: AWS::ElasticLoadBalancingV2::LoadBalancer
Properties:
Name: !Join ["-", [!Ref InfrastructureName, "int"]]
Scheme: internal
IpAddressType: ipv4
Subnets: !Ref PrivateSubnets
Type: network
IntDns:
Type: "AWS::Route53::HostedZone"
Properties:
HostedZoneConfig:
Comment: "Managed by CloudFormation"
Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]]
HostedZoneTags:
- Key: Name
Value: !Join ["-", [!Ref InfrastructureName, "int"]]
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "owned"
VPCs:
- VPCId: !Ref VpcId
VPCRegion: !Ref "AWS::Region"
ExternalApiServerRecord:
Type: AWS::Route53::RecordSetGroup
Properties:
Comment: Alias record for the API server
HostedZoneId: !Ref HostedZoneId
RecordSets:
- Name:
!Join [
".",
["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
]
Type: A
AliasTarget:
HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID
DNSName: !GetAtt ExtApiElb.DNSName
InternalApiServerRecord:
Type: AWS::Route53::RecordSetGroup
Properties:
Comment: Alias record for the API server
HostedZoneId: !Ref IntDns
RecordSets:
- Name:
!Join [
".",
["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
]
Type: A
AliasTarget:
HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
DNSName: !GetAtt IntApiElb.DNSName
- Name:
!Join [
".",
["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
]
Type: A
AliasTarget:
HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
DNSName: !GetAtt IntApiElb.DNSName
ExternalApiListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- Type: forward
TargetGroupArn:
Ref: ExternalApiTargetGroup
LoadBalancerArn:
Ref: ExtApiElb
Port: 6443
Protocol: TCP
ExternalApiTargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
HealthCheckIntervalSeconds: 10
HealthCheckPath: "/readyz"
HealthCheckPort: 6443
HealthCheckProtocol: HTTPS
HealthyThresholdCount: 2
UnhealthyThresholdCount: 2
Port: 6443
Protocol: TCP
TargetType: ip
VpcId:
Ref: VpcId
TargetGroupAttributes:
- Key: deregistration_delay.timeout_seconds
Value: 60
InternalApiListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- Type: forward
TargetGroupArn:
Ref: InternalApiTargetGroup
LoadBalancerArn:
Ref: IntApiElb
Port: 6443
Protocol: TCP
InternalApiTargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
HealthCheckIntervalSeconds: 10
HealthCheckPath: "/readyz"
HealthCheckPort: 6443
HealthCheckProtocol: HTTPS
HealthyThresholdCount: 2
UnhealthyThresholdCount: 2
Port: 6443
Protocol: TCP
TargetType: ip
VpcId:
Ref: VpcId
TargetGroupAttributes:
- Key: deregistration_delay.timeout_seconds
Value: 60
InternalServiceInternalListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- Type: forward
TargetGroupArn:
Ref: InternalServiceTargetGroup
LoadBalancerArn:
Ref: IntApiElb
Port: 22623
Protocol: TCP
InternalServiceTargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
HealthCheckIntervalSeconds: 10
HealthCheckPath: "/healthz"
HealthCheckPort: 22623
HealthCheckProtocol: HTTPS
HealthyThresholdCount: 2
UnhealthyThresholdCount: 2
Port: 22623
Protocol: TCP
TargetType: ip
VpcId:
Ref: VpcId
TargetGroupAttributes:
- Key: deregistration_delay.timeout_seconds
Value: 60
RegisterTargetLambdaIamRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]]
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "lambda.amazonaws.com"
Action:
- "sts:AssumeRole"
Path: "/"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action:
[
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:DeregisterTargets",
]
Resource: !Ref InternalApiTargetGroup
- Effect: "Allow"
Action:
[
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:DeregisterTargets",
]
Resource: !Ref InternalServiceTargetGroup
- Effect: "Allow"
Action:
[
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:DeregisterTargets",
]
Resource: !Ref ExternalApiTargetGroup
RegisterNlbIpTargets:
Type: "AWS::Lambda::Function"
Properties:
Handler: "index.handler"
Role:
Fn::GetAtt:
- "RegisterTargetLambdaIamRole"
- "Arn"
Code:
ZipFile: |
import json
import boto3
import cfnresponse
def handler(event, context):
elb = boto3.client('elbv2')
if event['RequestType'] == 'Delete':
elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
elif event['RequestType'] == 'Create':
elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
responseData = {}
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp'])
Runtime: "python3.7"
Timeout: 120
RegisterSubnetTagsLambdaIamRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]]
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "lambda.amazonaws.com"
Action:
- "sts:AssumeRole"
Path: "/"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action:
[
"ec2:DeleteTags",
"ec2:CreateTags"
]
Resource: "arn:aws:ec2:*:*:subnet/*"
- Effect: "Allow"
Action:
[
"ec2:DescribeSubnets",
"ec2:DescribeTags"
]
Resource: "*"
RegisterSubnetTags:
Type: "AWS::Lambda::Function"
Properties:
Handler: "index.handler"
Role:
Fn::GetAtt:
- "RegisterSubnetTagsLambdaIamRole"
- "Arn"
Code:
ZipFile: |
import json
import boto3
import cfnresponse
def handler(event, context):
ec2_client = boto3.client('ec2')
if event['RequestType'] == 'Delete':
for subnet_id in event['ResourceProperties']['Subnets']:
ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]);
elif event['RequestType'] == 'Create':
for subnet_id in event['ResourceProperties']['Subnets']:
ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]);
responseData = {}
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0])
Runtime: "python3.7"
Timeout: 120
RegisterPublicSubnetTags:
Type: Custom::SubnetRegister
Properties:
ServiceToken: !GetAtt RegisterSubnetTags.Arn
InfrastructureName: !Ref InfrastructureName
Subnets: !Ref PublicSubnets
RegisterPrivateSubnetTags:
Type: Custom::SubnetRegister
Properties:
ServiceToken: !GetAtt RegisterSubnetTags.Arn
InfrastructureName: !Ref InfrastructureName
Subnets: !Ref PrivateSubnets
Outputs:
PrivateHostedZoneId:
Description: Hosted zone ID for the private DNS, which is required for private records.
Value: !Ref IntDns
ExternalApiLoadBalancerName:
Description: Full name of the external API load balancer.
Value: !GetAtt ExtApiElb.LoadBalancerFullName
InternalApiLoadBalancerName:
Description: Full name of the internal API load balancer.
Value: !GetAtt IntApiElb.LoadBalancerFullName
ApiServerDnsName:
Description: Full hostname of the API server, which is required for the Ignition config files.
Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]]
RegisterNlbIpTargetsLambda:
Description: Lambda ARN useful to help register or deregister IP targets for these load balancers.
Value: !GetAtt RegisterNlbIpTargets.Arn
ExternalApiTargetGroupArn:
Description: ARN of the external API target group.
Value: !Ref ExternalApiTargetGroup
InternalApiTargetGroupArn:
Description: ARN of the internal API target group.
Value: !Ref InternalApiTargetGroup
InternalServiceTargetGroupArn:
Description: ARN of the internal service target group.
Value: !Ref InternalServiceTargetGroup
If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example:
Type: CNAME
TTL: 10
ResourceRecords:
- !GetAtt IntApiElb.DNSName
4.11.11. Creating security groups and roles in AWS
You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires.
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- You configured an AWS account.
- You added your AWS keys and region to your local AWS profile by running aws configure.
- You generated the Ignition config files for your cluster.
- You created and configured a VPC and associated subnets in AWS.
Procedure
Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "InfrastructureName",1 "ParameterValue": "mycluster-<random_string>"2 }, { "ParameterKey": "VpcCidr",3 "ParameterValue": "10.0.0.0/16"4 }, { "ParameterKey": "PrivateSubnets",5 "ParameterValue": "subnet-<random_string>"6 }, { "ParameterKey": "VpcId",7 "ParameterValue": "vpc-<random_string>"8 } ]- 1
- The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
- 2
- Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format
<cluster-name>-<random-string>. - 3
- The CIDR block for the VPC.
- 4
- Specify the CIDR block parameter that you used for the VPC that you defined in the form
x.x.x.x/16-24. - 5
- The private subnets that you created for your VPC.
- 6
- Specify the
PrivateSubnetIdsvalue from the output of the CloudFormation template for the VPC. - 7
- The VPC that you created for the cluster.
- 8
- Specify the
VpcIdvalue from the output of the CloudFormation template for the VPC.
- Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires.
Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles:
Important: You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4
- 1
- <name> is the name for the CloudFormation stack, such as cluster-sec. You need the name of this stack if you remove the cluster.
- 2
- <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
- 3
- <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
- 4
- You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.
Example output
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:
MasterSecurityGroupId: Master Security Group ID
WorkerSecurityGroupId: Worker Security Group ID
MasterInstanceProfile: Master IAM Instance Profile
WorkerInstanceProfile: Worker IAM Instance Profile
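Before you continue, you can optionally spot-check the ingress rules that were created on the master security group. This is a minimal sketch, where <master_sg_id> is the MasterSecurityGroupId output value:
$ aws ec2 describe-security-groups --group-ids <master_sg_id> \
     --query 'SecurityGroups[0].IpPermissions[].{Proto:IpProtocol,From:FromPort,To:ToPort}'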
4.11.11.1. CloudFormation template for security objects
You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster.
Example 4.37. CloudFormation template for security objects
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM)
Parameters:
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
Type: String
VpcCidr:
AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
Default: 10.0.0.0/16
Description: CIDR block for VPC.
Type: String
VpcId:
Description: The VPC-scoped resources will belong to this VPC.
Type: AWS::EC2::VPC::Id
PrivateSubnets:
Description: The internal subnets.
Type: List<AWS::EC2::Subnet::Id>
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- InfrastructureName
- Label:
default: "Network Configuration"
Parameters:
- VpcId
- VpcCidr
- PrivateSubnets
ParameterLabels:
InfrastructureName:
default: "Infrastructure Name"
VpcId:
default: "VPC ID"
VpcCidr:
default: "VPC CIDR"
PrivateSubnets:
default: "Private Subnets"
Resources:
MasterSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Cluster Master Security Group
SecurityGroupIngress:
- IpProtocol: icmp
FromPort: 0
ToPort: 0
CidrIp: !Ref VpcCidr
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: !Ref VpcCidr
- IpProtocol: tcp
ToPort: 6443
FromPort: 6443
CidrIp: !Ref VpcCidr
- IpProtocol: tcp
FromPort: 22623
ToPort: 22623
CidrIp: !Ref VpcCidr
VpcId: !Ref VpcId
WorkerSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Cluster Worker Security Group
SecurityGroupIngress:
- IpProtocol: icmp
FromPort: 0
ToPort: 0
CidrIp: !Ref VpcCidr
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: !Ref VpcCidr
VpcId: !Ref VpcId
MasterIngressEtcd:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: etcd
FromPort: 2379
ToPort: 2380
IpProtocol: tcp
MasterIngressVxlan:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Vxlan packets
FromPort: 4789
ToPort: 4789
IpProtocol: udp
MasterIngressWorkerVxlan:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Vxlan packets
FromPort: 4789
ToPort: 4789
IpProtocol: udp
MasterIngressGeneve:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Geneve packets
FromPort: 6081
ToPort: 6081
IpProtocol: udp
MasterIngressWorkerGeneve:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Geneve packets
FromPort: 6081
ToPort: 6081
IpProtocol: udp
MasterIngressIpsecIke:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: IPsec IKE packets
FromPort: 500
ToPort: 500
IpProtocol: udp
MasterIngressIpsecNat:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: IPsec NAT-T packets
FromPort: 4500
ToPort: 4500
IpProtocol: udp
MasterIngressIpsecEsp:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: IPsec ESP packets
IpProtocol: 50
MasterIngressWorkerIpsecIke:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: IPsec IKE packets
FromPort: 500
ToPort: 500
IpProtocol: udp
MasterIngressWorkerIpsecNat:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: IPsec NAT-T packets
FromPort: 4500
ToPort: 4500
IpProtocol: udp
MasterIngressWorkerIpsecEsp:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: IPsec ESP packets
IpProtocol: 50
MasterIngressInternal:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: tcp
MasterIngressWorkerInternal:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: tcp
MasterIngressInternalUDP:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: udp
MasterIngressWorkerInternalUDP:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: udp
MasterIngressKube:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Kubernetes kubelet, scheduler and controller manager
FromPort: 10250
ToPort: 10259
IpProtocol: tcp
MasterIngressWorkerKube:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes kubelet, scheduler and controller manager
FromPort: 10250
ToPort: 10259
IpProtocol: tcp
MasterIngressIngressServices:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: tcp
MasterIngressWorkerIngressServices:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: tcp
MasterIngressIngressServicesUDP:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: udp
MasterIngressWorkerIngressServicesUDP:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: udp
WorkerIngressVxlan:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Vxlan packets
FromPort: 4789
ToPort: 4789
IpProtocol: udp
WorkerIngressMasterVxlan:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Vxlan packets
FromPort: 4789
ToPort: 4789
IpProtocol: udp
WorkerIngressGeneve:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Geneve packets
FromPort: 6081
ToPort: 6081
IpProtocol: udp
WorkerIngressMasterGeneve:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Geneve packets
FromPort: 6081
ToPort: 6081
IpProtocol: udp
WorkerIngressIpsecIke:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: IPsec IKE packets
FromPort: 500
ToPort: 500
IpProtocol: udp
WorkerIngressIpsecNat:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: IPsec NAT-T packets
FromPort: 4500
ToPort: 4500
IpProtocol: udp
WorkerIngressIpsecEsp:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: IPsec ESP packets
IpProtocol: 50
WorkerIngressMasterIpsecIke:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: IPsec IKE packets
FromPort: 500
ToPort: 500
IpProtocol: udp
WorkerIngressMasterIpsecNat:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: IPsec NAT-T packets
FromPort: 4500
ToPort: 4500
IpProtocol: udp
WorkerIngressMasterIpsecEsp:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: IPsec ESP packets
IpProtocol: 50
WorkerIngressInternal:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: tcp
WorkerIngressMasterInternal:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: tcp
WorkerIngressInternalUDP:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: udp
WorkerIngressMasterInternalUDP:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: udp
WorkerIngressKube:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes secure kubelet port
FromPort: 10250
ToPort: 10250
IpProtocol: tcp
WorkerIngressWorkerKube:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Internal Kubernetes communication
FromPort: 10250
ToPort: 10250
IpProtocol: tcp
WorkerIngressIngressServices:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: tcp
WorkerIngressMasterIngressServices:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: tcp
WorkerIngressIngressServicesUDP:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: udp
WorkerIngressMasterIngressServicesUDP:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: udp
MasterIamRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "ec2.amazonaws.com"
Action:
- "sts:AssumeRole"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action:
- "ec2:AttachVolume"
- "ec2:AuthorizeSecurityGroupIngress"
- "ec2:CreateSecurityGroup"
- "ec2:CreateTags"
- "ec2:CreateVolume"
- "ec2:DeleteSecurityGroup"
- "ec2:DeleteVolume"
- "ec2:Describe*"
- "ec2:DetachVolume"
- "ec2:ModifyInstanceAttribute"
- "ec2:ModifyVolume"
- "ec2:RevokeSecurityGroupIngress"
- "elasticloadbalancing:AddTags"
- "elasticloadbalancing:AttachLoadBalancerToSubnets"
- "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer"
- "elasticloadbalancing:CreateListener"
- "elasticloadbalancing:CreateLoadBalancer"
- "elasticloadbalancing:CreateLoadBalancerPolicy"
- "elasticloadbalancing:CreateLoadBalancerListeners"
- "elasticloadbalancing:CreateTargetGroup"
- "elasticloadbalancing:ConfigureHealthCheck"
- "elasticloadbalancing:DeleteListener"
- "elasticloadbalancing:DeleteLoadBalancer"
- "elasticloadbalancing:DeleteLoadBalancerListeners"
- "elasticloadbalancing:DeleteTargetGroup"
- "elasticloadbalancing:DeregisterInstancesFromLoadBalancer"
- "elasticloadbalancing:DeregisterTargets"
- "elasticloadbalancing:Describe*"
- "elasticloadbalancing:DetachLoadBalancerFromSubnets"
- "elasticloadbalancing:ModifyListener"
- "elasticloadbalancing:ModifyLoadBalancerAttributes"
- "elasticloadbalancing:ModifyTargetGroup"
- "elasticloadbalancing:ModifyTargetGroupAttributes"
- "elasticloadbalancing:RegisterInstancesWithLoadBalancer"
- "elasticloadbalancing:RegisterTargets"
- "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer"
- "elasticloadbalancing:SetLoadBalancerPoliciesOfListener"
- "kms:DescribeKey"
Resource: "*"
MasterInstanceProfile:
Type: "AWS::IAM::InstanceProfile"
Properties:
Roles:
- Ref: "MasterIamRole"
WorkerIamRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "ec2.amazonaws.com"
Action:
- "sts:AssumeRole"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action:
- "ec2:DescribeInstances"
- "ec2:DescribeRegions"
Resource: "*"
WorkerInstanceProfile:
Type: "AWS::IAM::InstanceProfile"
Properties:
Roles:
- Ref: "WorkerIamRole"
Outputs:
MasterSecurityGroupId:
Description: Master Security Group ID
Value: !GetAtt MasterSecurityGroup.GroupId
WorkerSecurityGroupId:
Description: Worker Security Group ID
Value: !GetAtt WorkerSecurityGroup.GroupId
MasterInstanceProfile:
Description: Master IAM Instance Profile
Value: !Ref MasterInstanceProfile
WorkerInstanceProfile:
Description: Worker IAM Instance Profile
Value: !Ref WorkerInstanceProfile
4.11.12. Accessing RHCOS AMIs with stream metadata
In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation.
You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format.
For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI.
Procedure
To parse the stream metadata, use one of the following methods:
- From a Go program, use the official stream-metadata-go library at https://github.com/coreos/stream-metadata-go. You can also view example code in the library.
- From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language.
- From a command-line utility that handles JSON data, such as jq:
Print the current x86_64 AMI for an AWS region, such as us-west-1:
$ openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image'
Example output
ami-0d3e625f84626bbda
The output of this command is the AWS AMI ID for the us-west-1 region. The AMI must belong to the same region as the cluster.
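If you need to check which regions have a published boot image, you can list them with a similar query:
$ openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions | keys[]'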
4.11.13. RHCOS AMIs for the AWS infrastructure
Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions that you can manually specify for your OpenShift Container Platform nodes.
By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI.
| AWS zone | AWS AMI |
|---|---|
(Table of per-zone RHCOS AMI IDs; use the stream metadata command in the previous section to look up the current AMI for your region.)
4.11.13.1. AWS regions without a published RHCOS AMI
You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. This is required if you are deploying your cluster to an AWS government or secret region. AWS government and secret regions are supported by the AWS SDK.
If you are deploying to a region not supported by the AWS SDK and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs.
A region without native support for an RHCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file.
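For example, you can configure the custom AMI by setting the AMI ID in the aws platform section of install-config.yaml. This is a minimal sketch with placeholder values; the region value and AMI ID shown here are illustrative:
platform:
  aws:
    region: us-gov-east-1
    amiID: ami-<custom_ami_id>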
4.11.13.2. Uploading a custom RHCOS AMI in AWS
If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region.
Prerequisites
- You configured an AWS account.
- You created an Amazon S3 bucket with the required IAM service role.
- You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing.
- You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer.
Procedure
Export your AWS profile as an environment variable:
$ export AWS_PROFILE=<aws_profile> 1
- 1
- The AWS profile name that holds your AWS credentials, like govcloud.
Export the region to associate with your custom AMI as an environment variable:
$ export AWS_DEFAULT_REGION=<aws_region> 1
- 1
- The AWS region, like us-gov-east-1.
Export the version of RHCOS you uploaded to Amazon S3 as an environment variable:
$ export RHCOS_VERSION=<version> 1
- 1
- The RHCOS VMDK version, like 4.8.0.
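If you are not sure which RHCOS VMDK version matches your installer, you can read it from the stream metadata described in the previous section. This sketch assumes the artifacts section of the stream metadata schema:
$ openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.aws.release'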
Export the Amazon S3 bucket name as an environment variable:
$ export VMIMPORT_BUCKET_NAME=<s3_bucket_name>
Create the containers.json file and define your RHCOS VMDK file:
$ cat <<EOF > containers.json
{
   "Description": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64",
   "Format": "vmdk",
   "UserBucket": {
      "S3Bucket": "${VMIMPORT_BUCKET_NAME}",
      "S3Key": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.vmdk"
   }
}
EOF
Import the RHCOS disk as an Amazon EBS snapshot:
$ aws ec2 import-snapshot --region ${AWS_DEFAULT_REGION} \
     --description "<description>" \ 1
     --disk-container "file://<file_path>/containers.json" 2
Check the status of the image import:
$ watch -n 5 aws ec2 describe-import-snapshot-tasks --region ${AWS_DEFAULT_REGION}
Example output
{
    "ImportSnapshotTasks": [
        {
            "Description": "rhcos-4.7.0-x86_64-aws.x86_64",
            "ImportTaskId": "import-snap-fh6i8uil",
            "SnapshotTaskDetail": {
                "Description": "rhcos-4.7.0-x86_64-aws.x86_64",
                "DiskImageSize": 819056640.0,
                "Format": "VMDK",
                "SnapshotId": "snap-06331325870076318",
                "Status": "completed",
                "UserBucket": {
                    "S3Bucket": "external-images",
                    "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk"
                }
            }
        }
    ]
}
Copy the SnapshotId to register the image.
Create a custom RHCOS AMI from the RHCOS snapshot:
$ aws ec2 register-image \
     --region ${AWS_DEFAULT_REGION} \
     --architecture x86_64 \ 1
     --description "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 2
     --ena-support \
     --name "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \ 3
     --virtualization-type hvm \
     --root-device-name '/dev/xvda' \
     --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4
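To confirm that the AMI registered successfully, you can list the images that your account owns. This is a minimal sketch; the name filter matches the --name value used in the previous command:
$ aws ec2 describe-images --region ${AWS_DEFAULT_REGION} \
     --owners self \
     --filters "Name=name,Values=rhcos-*" \
     --query 'Images[].{Name:Name,ImageId:ImageId}'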
To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs.
4.11.14. Creating the bootstrap node in AWS
You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires.
If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- You configured an AWS account.
- You added your AWS keys and region to your local AWS profile by running aws configure.
- You generated the Ignition config files for your cluster.
- You created and configured a VPC and associated subnets in AWS.
- You created and configured DNS, load balancers, and listeners in AWS.
- You created the security groups and roles required for your cluster in AWS.
Procedure
Provide a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. One way to do this is to create an S3 bucket in your cluster’s region and upload the Ignition config file to it.
Important: The provided CloudFormation template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates.
Important: If you are deploying to a region that has endpoints that differ from the AWS SDK, or you are providing your own custom endpoints, you must use a presigned URL for your S3 bucket instead of the s3:// schema.
Note: The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach.
Create the bucket:
$ aws s3 mb s3://<cluster-name>-infra 1
- 1
- <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster.
Upload the bootstrap.ign Ignition config file to the bucket:
$ aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1
- 1
- For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify that the file uploaded:
$ aws s3 ls s3://<cluster-name>-infra/
Example output
2019-04-03 16:15:16 314878 bootstrap.ign
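If your region requires a presigned URL instead of the s3:// schema, as noted in the Important box above, you can generate one with the AWS CLI. This is a minimal sketch, assuming the bucket and object created in the previous steps and a one-hour expiry:
$ aws s3 presign s3://<cluster-name>-infra/bootstrap.ign --expires-in 3600
You would then use the returned HTTPS URL as the BootstrapIgnitionLocation parameter value instead of the s3:// form.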
Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "InfrastructureName",1 "ParameterValue": "mycluster-<random_string>"2 }, { "ParameterKey": "RhcosAmi",3 "ParameterValue": "ami-<random_string>"4 }, { "ParameterKey": "AllowedBootstrapSshCidr",5 "ParameterValue": "0.0.0.0/0"6 }, { "ParameterKey": "PublicSubnet",7 "ParameterValue": "subnet-<random_string>"8 }, { "ParameterKey": "MasterSecurityGroupId",9 "ParameterValue": "sg-<random_string>"10 }, { "ParameterKey": "VpcId",11 "ParameterValue": "vpc-<random_string>"12 }, { "ParameterKey": "BootstrapIgnitionLocation",13 "ParameterValue": "s3://<bucket_name>/bootstrap.ign"14 }, { "ParameterKey": "AutoRegisterELB",15 "ParameterValue": "yes"16 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn",17 "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>"18 }, { "ParameterKey": "ExternalApiTargetGroupArn",19 "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>"20 }, { "ParameterKey": "InternalApiTargetGroupArn",21 "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>"22 }, { "ParameterKey": "InternalServiceTargetGroupArn",23 "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>"24 } ]- 1
- The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
- 2
- Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format
<cluster-name>-<random-string>. - 3
- Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node.
- 4
- Specify a valid
AWS::EC2::Image::Idvalue. - 5
- CIDR block to allow SSH access to the bootstrap node.
- 6
- Specify a CIDR block in the format
x.x.x.x/16-24. - 7
- The public subnet that is associated with your VPC to launch the bootstrap node into.
- 8
- Specify the
PublicSubnetIdsvalue from the output of the CloudFormation template for the VPC. - 9
- The master security group ID (for registering temporary rules)
- 10
- Specify the
MasterSecurityGroupIdvalue from the output of the CloudFormation template for the security group and roles. - 11
- The VPC created resources will belong to.
- 12
- Specify the
VpcIdvalue from the output of the CloudFormation template for the VPC. - 13
- Location to fetch bootstrap Ignition config file from.
- 14
- Specify the S3 bucket and file name in the form
s3://<bucket_name>/bootstrap.ign. - 15
- Whether or not to register a network load balancer (NLB).
- 16
- Specify
yesorno. If you specifyyes, you must provide a Lambda Amazon Resource Name (ARN) value. - 17
- The ARN for NLB IP target registration lambda group.
- 18
- Specify the
RegisterNlbIpTargetsLambdavalue from the output of the CloudFormation template for DNS and load balancing. Usearn:aws-us-govif deploying the cluster to an AWS GovCloud region. - 19
- The ARN for external API load balancer target group.
- 20
- Specify the
ExternalApiTargetGroupArnvalue from the output of the CloudFormation template for DNS and load balancing. Usearn:aws-us-govif deploying the cluster to an AWS GovCloud region. - 21
- The ARN for internal API load balancer target group.
- 22
- Specify the
InternalApiTargetGroupArnvalue from the output of the CloudFormation template for DNS and load balancing. Usearn:aws-us-govif deploying the cluster to an AWS GovCloud region. - 23
- The ARN for internal service load balancer target group.
- 24
- Specify the
InternalServiceTargetGroupArnvalue from the output of the CloudFormation template for DNS and load balancing. Usearn:aws-us-govif deploying the cluster to an AWS GovCloud region.
- Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires.
Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node:
Important: You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4
- 1
- <name> is the name for the CloudFormation stack, such as cluster-bootstrap. You need the name of this stack if you remove the cluster.
- 2
- <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
- 3
- <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
- 4
- You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.
Example output
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:
BootstrapInstanceId: The bootstrap Instance ID.
BootstrapPublicIp: The bootstrap node public IP address.
BootstrapPrivateIp: The bootstrap node private IP address.
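If the bootstrap process stalls, one common first step is to connect to the bootstrap node over SSH and inspect its logs. This is a minimal sketch, assuming the stack name cluster-bootstrap and that your SSH key was included in the Ignition configs:
$ ssh core@$(aws cloudformation describe-stacks --stack-name cluster-bootstrap \
     --query 'Stacks[0].Outputs[?OutputKey==`BootstrapPublicIp`].OutputValue' --output text)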
4.11.14.1. CloudFormation template for the bootstrap machine
You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster.
Example 4.38. CloudFormation template for the bootstrap machine
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM)
Parameters:
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
Type: String
RhcosAmi:
Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
Type: AWS::EC2::Image::Id
AllowedBootstrapSshCidr:
AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))$
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32.
Default: 0.0.0.0/0
Description: CIDR block to allow SSH access to the bootstrap node.
Type: String
PublicSubnet:
Description: The public subnet to launch the bootstrap node into.
Type: AWS::EC2::Subnet::Id
MasterSecurityGroupId:
Description: The master security group ID for registering temporary rules.
Type: AWS::EC2::SecurityGroup::Id
VpcId:
Description: The VPC-scoped resources will belong to this VPC.
Type: AWS::EC2::VPC::Id
BootstrapIgnitionLocation:
Default: s3://my-s3-bucket/bootstrap.ign
Description: Ignition config file location.
Type: String
AutoRegisterELB:
Default: "yes"
AllowedValues:
- "yes"
- "no"
Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
Type: String
RegisterNlbIpTargetsLambdaArn:
Description: ARN for NLB IP target registration lambda.
Type: String
ExternalApiTargetGroupArn:
Description: ARN for external API load balancer target group.
Type: String
InternalApiTargetGroupArn:
Description: ARN for internal API load balancer target group.
Type: String
InternalServiceTargetGroupArn:
Description: ARN for internal service load balancer target group.
Type: String
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- InfrastructureName
- Label:
default: "Host Information"
Parameters:
- RhcosAmi
- BootstrapIgnitionLocation
- MasterSecurityGroupId
- Label:
default: "Network Configuration"
Parameters:
- VpcId
- AllowedBootstrapSshCidr
- PublicSubnet
- Label:
default: "Load Balancer Automation"
Parameters:
- AutoRegisterELB
- RegisterNlbIpTargetsLambdaArn
- ExternalApiTargetGroupArn
- InternalApiTargetGroupArn
- InternalServiceTargetGroupArn
ParameterLabels:
InfrastructureName:
default: "Infrastructure Name"
VpcId:
default: "VPC ID"
AllowedBootstrapSshCidr:
default: "Allowed SSH Source"
PublicSubnet:
default: "Public Subnet"
RhcosAmi:
default: "Red Hat Enterprise Linux CoreOS AMI ID"
BootstrapIgnitionLocation:
default: "Bootstrap Ignition Source"
MasterSecurityGroupId:
default: "Master Security Group ID"
AutoRegisterELB:
default: "Use Provided ELB Automation"
Conditions:
DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
Resources:
BootstrapIamRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "ec2.amazonaws.com"
Action:
- "sts:AssumeRole"
Path: "/"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action: "ec2:Describe*"
Resource: "*"
- Effect: "Allow"
Action: "ec2:AttachVolume"
Resource: "*"
- Effect: "Allow"
Action: "ec2:DetachVolume"
Resource: "*"
- Effect: "Allow"
Action: "s3:GetObject"
Resource: "*"
BootstrapInstanceProfile:
Type: "AWS::IAM::InstanceProfile"
Properties:
Path: "/"
Roles:
- Ref: "BootstrapIamRole"
BootstrapSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Cluster Bootstrap Security Group
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: !Ref AllowedBootstrapSshCidr
- IpProtocol: tcp
FromPort: 19531
ToPort: 19531
CidrIp: 0.0.0.0/0
VpcId: !Ref VpcId
BootstrapInstance:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
IamInstanceProfile: !Ref BootstrapInstanceProfile
InstanceType: "i3.large"
NetworkInterfaces:
- AssociatePublicIpAddress: "true"
DeviceIndex: "0"
GroupSet:
- !Ref "BootstrapSecurityGroup"
- !Ref "MasterSecurityGroupId"
SubnetId: !Ref "PublicSubnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"replace":{"source":"${S3Loc}"}},"version":"3.1.0"}}'
- {
S3Loc: !Ref BootstrapIgnitionLocation
}
RegisterBootstrapApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref ExternalApiTargetGroupArn
TargetIp: !GetAtt BootstrapInstance.PrivateIp
RegisterBootstrapInternalApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalApiTargetGroupArn
TargetIp: !GetAtt BootstrapInstance.PrivateIp
RegisterBootstrapInternalServiceTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalServiceTargetGroupArn
TargetIp: !GetAtt BootstrapInstance.PrivateIp
Outputs:
BootstrapInstanceId:
Description: Bootstrap Instance ID.
Value: !Ref BootstrapInstance
BootstrapPublicIp:
Description: The bootstrap node public IP address.
Value: !GetAtt BootstrapInstance.PublicIp
BootstrapPrivateIp:
Description: The bootstrap node private IP address.
Value: !GetAtt BootstrapInstance.PrivateIp
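If you modify this template, you can sanity-check your copy before you launch a stack from it. A minimal sketch, not part of the official procedure; the file name bootstrap.yaml is an assumption:

$ aws cloudformation validate-template --template-body file://bootstrap.yaml

On success, the command echoes the template's parameters and required capabilities; on failure, it returns a descriptive error that points at the malformed YAML.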
4.11.15. Creating the control plane machines in AWS
You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes.
The CloudFormation template creates a stack that represents three control plane nodes.
If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- You configured an AWS account.
- You added your AWS keys and region to your local AWS profile by running aws configure.
- You generated the Ignition config files for your cluster.
- You created and configured a VPC and associated subnets in AWS.
- You created and configured DNS, load balancers, and listeners in AWS.
- You created the security groups and roles required for your cluster in AWS.
- You created the bootstrap machine.
Procedure
Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "InfrastructureName",1 "ParameterValue": "mycluster-<random_string>"2 }, { "ParameterKey": "RhcosAmi",3 "ParameterValue": "ami-<random_string>"4 }, { "ParameterKey": "AutoRegisterDNS",5 "ParameterValue": "yes"6 }, { "ParameterKey": "PrivateHostedZoneId",7 "ParameterValue": "<random_string>"8 }, { "ParameterKey": "PrivateHostedZoneName",9 "ParameterValue": "mycluster.example.com"10 }, { "ParameterKey": "Master0Subnet",11 "ParameterValue": "subnet-<random_string>"12 }, { "ParameterKey": "Master1Subnet",13 "ParameterValue": "subnet-<random_string>"14 }, { "ParameterKey": "Master2Subnet",15 "ParameterValue": "subnet-<random_string>"16 }, { "ParameterKey": "MasterSecurityGroupId",17 "ParameterValue": "sg-<random_string>"18 }, { "ParameterKey": "IgnitionLocation",19 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master"20 }, { "ParameterKey": "CertificateAuthorities",21 "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz=="22 }, { "ParameterKey": "MasterInstanceProfileName",23 "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>"24 }, { "ParameterKey": "MasterInstanceType",25 "ParameterValue": "m5.xlarge"26 }, { "ParameterKey": "AutoRegisterELB",27 "ParameterValue": "yes"28 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn",29 "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>"30 }, { "ParameterKey": "ExternalApiTargetGroupArn",31 "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>"32 }, { "ParameterKey": "InternalApiTargetGroupArn",33 "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>"34 }, { "ParameterKey": "InternalServiceTargetGroupArn",35 "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>"36 } ]- 1
- The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
- 2
- Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format
<cluster-name>-<random-string>. - 3
- CurrentRed Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines.
- 4
- Specify an
AWS::EC2::Image::Idvalue. - 5
- Whether or not to perform DNS etcd registration.
- 6
- Specify
yesorno. If you specifyyes, you must provide hosted zone information. - 7
- The Route 53 private zone ID to register the etcd targets with.
- 8
- Specify the
PrivateHostedZoneIdvalue from the output of the CloudFormation template for DNS and load balancing. - 9
- The Route 53 zone to register the targets with.
- 10
- Specify
<cluster_name>.<domain_name>where<domain_name>is the Route 53 base domain that you used when you generatedinstall-config.yamlfile for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. - 11 13 15
- A subnet, preferably private, to launch the control plane machines on.
- 12 14 16
- Specify a subnet from the
PrivateSubnetsvalue from the output of the CloudFormation template for DNS and load balancing. - 17
- The master security group ID to associate with control plane nodes (also known as the master nodes).
- 18
- Specify the
MasterSecurityGroupIdvalue from the output of the CloudFormation template for the security group and roles. - 19
- The location to fetch control plane Ignition config file from.
- 20
- Specify the generated Ignition config file location,
https://api-int.<cluster_name>.<domain_name>:22623/config/master. - 21
- The base64 encoded certificate authority string to use.
- 22
- Specify the value from the
master.ignfile that is in the installation directory. This value is the long string with the formatdata:text/plain;charset=utf-8;base64,ABC…xYz==. - 23
- The IAM profile to associate with control plane nodes.
- 24
- Specify the
MasterInstanceProfileparameter value from the output of the CloudFormation template for the security group and roles. - 25
- The type of AWS instance to use for the control plane machines.
- 26
- Allowed values:
-
m4.xlarge -
m4.2xlarge -
m4.4xlarge -
m4.10xlarge -
m4.16xlarge -
m5.xlarge -
m5.2xlarge -
m5.4xlarge -
m5.8xlarge -
m5.12xlarge -
m5.16xlarge -
m5a.xlarge -
m5a.2xlarge -
m5a.4xlarge -
m5a.8xlarge -
m5a.12xlarge -
m5a.16xlarge -
c4.2xlarge -
c4.4xlarge -
c4.8xlarge -
c5.2xlarge -
c5.4xlarge -
c5.9xlarge -
c5.12xlarge -
c5.18xlarge -
c5.24xlarge -
c5a.2xlarge -
c5a.4xlarge -
c5a.8xlarge -
c5a.12xlarge -
c5a.16xlarge -
c5a.24xlarge -
r4.xlarge -
r4.2xlarge -
r4.4xlarge -
r4.8xlarge -
r4.16xlarge -
r5.xlarge -
r5.2xlarge -
r5.4xlarge -
r5.8xlarge -
r5.12xlarge -
r5.16xlarge -
r5.24xlarge -
r5a.xlarge -
r5a.2xlarge -
r5a.4xlarge -
r5a.8xlarge -
r5a.12xlarge -
r5a.16xlarge -
r5a.24xlarge
-
- 27
- Whether or not to register a network load balancer (NLB).
- 28
- Specify
yesorno. If you specifyyes, you must provide a Lambda Amazon Resource Name (ARN) value. - 29
- The ARN for NLB IP target registration lambda group.
- 30
- Specify the
RegisterNlbIpTargetsLambdavalue from the output of the CloudFormation template for DNS and load balancing. Usearn:aws-us-govif deploying the cluster to an AWS GovCloud region. - 31
- The ARN for external API load balancer target group.
- 32
- Specify the
ExternalApiTargetGroupArnvalue from the output of the CloudFormation template for DNS and load balancing. Usearn:aws-us-govif deploying the cluster to an AWS GovCloud region. - 33
- The ARN for internal API load balancer target group.
- 34
- Specify the
InternalApiTargetGroupArnvalue from the output of the CloudFormation template for DNS and load balancing. Usearn:aws-us-govif deploying the cluster to an AWS GovCloud region. - 35
- The ARN for internal service load balancer target group.
- 36
- Specify the
InternalServiceTargetGroupArnvalue from the output of the CloudFormation template for DNS and load balancing. Usearn:aws-us-govif deploying the cluster to an AWS GovCloud region.
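The infrastructure name (callout 2) and the certificate authority string (callout 22) both come from files in your installation directory. The following is a minimal sketch of how you might read them with jq, offered as a convenience rather than as part of the official procedure; the directory name ./install_dir is an assumption:

$ jq -r .infraID ./install_dir/metadata.json
$ jq -r '.ignition.security.tls.certificateAuthorities[0].source' ./install_dir/master.ign

The first command prints the <cluster-name>-<random-string> infrastructure name; the second prints the data:text/plain;charset=utf-8;base64,... string from the master Ignition config.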
- Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires.
- If you specified an m5 instance type as the value for MasterInstanceType, add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template.

Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes:
Important: You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> \ 1
     --template-body file://<template>.yaml \ 2
     --parameters file://<parameters>.json 3

1. <name> is the name for the CloudFormation stack, such as cluster-control-plane. You need the name of this stack if you remove the cluster.
2. <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
3. <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
Example output
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b

Note: The CloudFormation template creates a stack that represents three control plane nodes.
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
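If you prefer to block until the stack finishes rather than polling describe-stacks, the AWS CLI provides a waiter. A minimal sketch; the stack name cluster-control-plane is an assumption:

$ aws cloudformation wait stack-create-complete --stack-name cluster-control-plane
$ aws cloudformation describe-stacks --stack-name cluster-control-plane \
     --query 'Stacks[0].Outputs' --output table

The wait command returns once the stack reaches CREATE_COMPLETE, or exits non-zero if creation fails or times out.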
4.11.15.1. CloudFormation template for control plane machines
You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster.
Example 4.39. CloudFormation template for control plane machines
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 master instances)
Parameters:
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
Type: String
RhcosAmi:
Description: Current Red Hat Enterprise Linux CoreOS AMI to use for the control plane machines.
Type: AWS::EC2::Image::Id
AutoRegisterDNS:
Default: "yes"
AllowedValues:
- "yes"
- "no"
Description: Do you want to invoke DNS etcd registration, which requires Hosted Zone information?
Type: String
PrivateHostedZoneId:
Description: The Route53 private zone ID to register the etcd targets with, such as Z21IXYZABCZ2A4.
Type: String
PrivateHostedZoneName:
Description: The Route53 zone to register the targets with, such as cluster.example.com. Omit the trailing period.
Type: String
Master0Subnet:
Description: The subnet, preferably private, to launch the master nodes into.
Type: AWS::EC2::Subnet::Id
Master1Subnet:
Description: The subnet, preferably private, to launch the master nodes into.
Type: AWS::EC2::Subnet::Id
Master2Subnet:
Description: The subnet, preferably private, to launch the master nodes into.
Type: AWS::EC2::Subnet::Id
MasterSecurityGroupId:
Description: The master security group ID to associate with master nodes.
Type: AWS::EC2::SecurityGroup::Id
IgnitionLocation:
Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/master
Description: Ignition config file location.
Type: String
CertificateAuthorities:
Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
Description: Base64 encoded certificate authority string to use.
Type: String
MasterInstanceProfileName:
Description: IAM profile to associate with master nodes.
Type: String
MasterInstanceType:
Default: m5.xlarge
Type: String
AllowedValues:
- "m4.xlarge"
- "m4.2xlarge"
- "m4.4xlarge"
- "m4.10xlarge"
- "m4.16xlarge"
- "m5.xlarge"
- "m5.2xlarge"
- "m5.4xlarge"
- "m5.8xlarge"
- "m5.12xlarge"
- "m5.16xlarge"
- "m5a.xlarge"
- "m5a.2xlarge"
- "m5a.4xlarge"
- "m5a.8xlarge"
- "m5a.12xlarge"
- "m5a.16xlarge"
- "c4.2xlarge"
- "c4.4xlarge"
- "c4.8xlarge"
- "c5.2xlarge"
- "c5.4xlarge"
- "c5.9xlarge"
- "c5.12xlarge"
- "c5.18xlarge"
- "c5.24xlarge"
- "c5a.2xlarge"
- "c5a.4xlarge"
- "c5a.8xlarge"
- "c5a.12xlarge"
- "c5a.16xlarge"
- "c5a.24xlarge"
- "r4.xlarge"
- "r4.2xlarge"
- "r4.4xlarge"
- "r4.8xlarge"
- "r4.16xlarge"
- "r5.xlarge"
- "r5.2xlarge"
- "r5.4xlarge"
- "r5.8xlarge"
- "r5.12xlarge"
- "r5.16xlarge"
- "r5.24xlarge"
- "r5a.xlarge"
- "r5a.2xlarge"
- "r5a.4xlarge"
- "r5a.8xlarge"
- "r5a.12xlarge"
- "r5a.16xlarge"
- "r5a.24xlarge"
AutoRegisterELB:
Default: "yes"
AllowedValues:
- "yes"
- "no"
Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
Type: String
RegisterNlbIpTargetsLambdaArn:
Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
Type: String
ExternalApiTargetGroupArn:
Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
Type: String
InternalApiTargetGroupArn:
Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
Type: String
InternalServiceTargetGroupArn:
Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
Type: String
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- InfrastructureName
- Label:
default: "Host Information"
Parameters:
- MasterInstanceType
- RhcosAmi
- IgnitionLocation
- CertificateAuthorities
- MasterSecurityGroupId
- MasterInstanceProfileName
- Label:
default: "Network Configuration"
Parameters:
- Master0Subnet
- Master1Subnet
- Master2Subnet
- Label:
default: "DNS"
Parameters:
- AutoRegisterDNS
- PrivateHostedZoneName
- PrivateHostedZoneId
- Label:
default: "Load Balancer Automation"
Parameters:
- AutoRegisterELB
- RegisterNlbIpTargetsLambdaArn
- ExternalApiTargetGroupArn
- InternalApiTargetGroupArn
- InternalServiceTargetGroupArn
ParameterLabels:
InfrastructureName:
default: "Infrastructure Name"
Master0Subnet:
default: "Master-0 Subnet"
Master1Subnet:
default: "Master-1 Subnet"
Master2Subnet:
default: "Master-2 Subnet"
MasterInstanceType:
default: "Master Instance Type"
MasterInstanceProfileName:
default: "Master Instance Profile Name"
RhcosAmi:
default: "Red Hat Enterprise Linux CoreOS AMI ID"
IgnitionLocation:
default: "Master Ignition Source"
CertificateAuthorities:
default: "Ignition CA String"
MasterSecurityGroupId:
default: "Master Security Group ID"
AutoRegisterDNS:
default: "Use Provided DNS Automation"
AutoRegisterELB:
default: "Use Provided ELB Automation"
PrivateHostedZoneName:
default: "Private Hosted Zone Name"
PrivateHostedZoneId:
default: "Private Hosted Zone ID"
Conditions:
DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
DoDns: !Equals ["yes", !Ref AutoRegisterDNS]
Resources:
Master0:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize: "120"
VolumeType: "gp2"
IamInstanceProfile: !Ref MasterInstanceProfileName
InstanceType: !Ref MasterInstanceType
NetworkInterfaces:
- AssociatePublicIpAddress: "false"
DeviceIndex: "0"
GroupSet:
- !Ref "MasterSecurityGroupId"
SubnetId: !Ref "Master0Subnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
- {
SOURCE: !Ref IgnitionLocation,
CA_BUNDLE: !Ref CertificateAuthorities,
}
Tags:
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "shared"
RegisterMaster0:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref ExternalApiTargetGroupArn
TargetIp: !GetAtt Master0.PrivateIp
RegisterMaster0InternalApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalApiTargetGroupArn
TargetIp: !GetAtt Master0.PrivateIp
RegisterMaster0InternalServiceTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalServiceTargetGroupArn
TargetIp: !GetAtt Master0.PrivateIp
Master1:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize: "120"
VolumeType: "gp2"
IamInstanceProfile: !Ref MasterInstanceProfileName
InstanceType: !Ref MasterInstanceType
NetworkInterfaces:
- AssociatePublicIpAddress: "false"
DeviceIndex: "0"
GroupSet:
- !Ref "MasterSecurityGroupId"
SubnetId: !Ref "Master1Subnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
- {
SOURCE: !Ref IgnitionLocation,
CA_BUNDLE: !Ref CertificateAuthorities,
}
Tags:
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "shared"
RegisterMaster1:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref ExternalApiTargetGroupArn
TargetIp: !GetAtt Master1.PrivateIp
RegisterMaster1InternalApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalApiTargetGroupArn
TargetIp: !GetAtt Master1.PrivateIp
RegisterMaster1InternalServiceTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalServiceTargetGroupArn
TargetIp: !GetAtt Master1.PrivateIp
Master2:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize: "120"
VolumeType: "gp2"
IamInstanceProfile: !Ref MasterInstanceProfileName
InstanceType: !Ref MasterInstanceType
NetworkInterfaces:
- AssociatePublicIpAddress: "false"
DeviceIndex: "0"
GroupSet:
- !Ref "MasterSecurityGroupId"
SubnetId: !Ref "Master2Subnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
- {
SOURCE: !Ref IgnitionLocation,
CA_BUNDLE: !Ref CertificateAuthorities,
}
Tags:
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "shared"
RegisterMaster2:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref ExternalApiTargetGroupArn
TargetIp: !GetAtt Master2.PrivateIp
RegisterMaster2InternalApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalApiTargetGroupArn
TargetIp: !GetAtt Master2.PrivateIp
RegisterMaster2InternalServiceTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalServiceTargetGroupArn
TargetIp: !GetAtt Master2.PrivateIp
EtcdSrvRecords:
Condition: DoDns
Type: AWS::Route53::RecordSet
Properties:
HostedZoneId: !Ref PrivateHostedZoneId
Name: !Join [".", ["_etcd-server-ssl._tcp", !Ref PrivateHostedZoneName]]
ResourceRecords:
- !Join [
" ",
["0 10 2380", !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]],
]
- !Join [
" ",
["0 10 2380", !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]],
]
- !Join [
" ",
["0 10 2380", !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]],
]
TTL: 60
Type: SRV
Etcd0Record:
Condition: DoDns
Type: AWS::Route53::RecordSet
Properties:
HostedZoneId: !Ref PrivateHostedZoneId
Name: !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]
ResourceRecords:
- !GetAtt Master0.PrivateIp
TTL: 60
Type: A
Etcd1Record:
Condition: DoDns
Type: AWS::Route53::RecordSet
Properties:
HostedZoneId: !Ref PrivateHostedZoneId
Name: !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]
ResourceRecords:
- !GetAtt Master1.PrivateIp
TTL: 60
Type: A
Etcd2Record:
Condition: DoDns
Type: AWS::Route53::RecordSet
Properties:
HostedZoneId: !Ref PrivateHostedZoneId
Name: !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]
ResourceRecords:
- !GetAtt Master2.PrivateIp
TTL: 60
Type: A
Outputs:
PrivateIPs:
Description: The control-plane node private IP addresses.
Value:
!Join [
",",
[!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp]
]
4.11.16. Creating the worker nodes in AWS
You can create worker nodes in Amazon Web Services (AWS) for your cluster to use.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node.
The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node.
If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- You configured an AWS account.
- You added your AWS keys and region to your local AWS profile by running aws configure.
- You generated the Ignition config files for your cluster.
- You created and configured a VPC and associated subnets in AWS.
- You created and configured DNS, load balancers, and listeners in AWS.
- You created the security groups and roles required for your cluster in AWS.
- You created the bootstrap machine.
- You created the control plane machines.
Procedure
Create a JSON file that contains the parameter values that the CloudFormation template requires:
[ { "ParameterKey": "InfrastructureName",1 "ParameterValue": "mycluster-<random_string>"2 }, { "ParameterKey": "RhcosAmi",3 "ParameterValue": "ami-<random_string>"4 }, { "ParameterKey": "Subnet",5 "ParameterValue": "subnet-<random_string>"6 }, { "ParameterKey": "WorkerSecurityGroupId",7 "ParameterValue": "sg-<random_string>"8 }, { "ParameterKey": "IgnitionLocation",9 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker"10 }, { "ParameterKey": "CertificateAuthorities",11 "ParameterValue": ""12 }, { "ParameterKey": "WorkerInstanceProfileName",13 "ParameterValue": ""14 }, { "ParameterKey": "WorkerInstanceType",15 "ParameterValue": "m4.2xlarge"16 } ]- 1
- The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
- 2
- Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format
<cluster-name>-<random-string>. - 3
- Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes.
- 4
- Specify an
AWS::EC2::Image::Idvalue. - 5
- A subnet, preferably private, to start the worker nodes on.
- 6
- Specify a subnet from the
PrivateSubnetsvalue from the output of the CloudFormation template for DNS and load balancing. - 7
- The worker security group ID to associate with worker nodes.
- 8
- Specify the
WorkerSecurityGroupIdvalue from the output of the CloudFormation template for the security group and roles. - 9
- The location to fetch the bootstrap Ignition config file from.
- 10
- Specify the generated Ignition config location,
https://api-int.<cluster_name>.<domain_name>:22623/config/worker. - 11
- Base64 encoded certificate authority string to use.
- 12
- Specify the value from the
worker.ignfile that is in the installation directory. This value is the long string with the formatdata:text/plain;charset=utf-8;base64,ABC…xYz==. - 13
- The IAM profile to associate with worker nodes.
- 14
- Specify the
WorkerInstanceProfileparameter value from the output of the CloudFormation template for the security group and roles. - 15
- The type of AWS instance to use for the control plane machines.
- 16
- Allowed values:
-
m4.large -
m4.xlarge -
m4.2xlarge -
m4.4xlarge -
m4.10xlarge -
m4.16xlarge -
m5.large -
m5.xlarge -
m5.2xlarge -
m5.4xlarge -
m5.8xlarge -
m5.12xlarge -
m5.16xlarge -
m5a.large -
m5a.xlarge -
m5a.2xlarge -
m5a.4xlarge -
m5a.8xlarge -
m5a.12xlarge -
m5a.16xlarge -
c4.large -
c4.xlarge -
c4.2xlarge -
c4.4xlarge -
c4.8xlarge -
c5.large -
c5.xlarge -
c5.2xlarge -
c5.4xlarge -
c5.9xlarge -
c5.12xlarge -
c5.18xlarge -
c5.24xlarge -
c5a.large -
c5a.xlarge -
c5a.2xlarge -
c5a.4xlarge -
c5a.8xlarge -
c5a.12xlarge -
c5a.16xlarge -
c5a.24xlarge -
r4.large -
r4.xlarge -
r4.2xlarge -
r4.4xlarge -
r4.8xlarge -
r4.16xlarge -
r5.large -
r5.xlarge -
r5.2xlarge -
r5.4xlarge -
r5.8xlarge -
r5.12xlarge -
r5.16xlarge -
r5.24xlarge -
r5a.large -
r5a.xlarge -
r5a.2xlarge -
r5a.4xlarge -
r5a.8xlarge -
r5a.12xlarge -
r5a.16xlarge -
r5a.24xlarge -
t3.large -
t3.xlarge -
t3.2xlarge -
t3a.large -
t3a.xlarge -
t3a.2xlarge
-
- Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires.
- Optional: If you specified an m5 instance type as the value for WorkerInstanceType, add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template.
- Optional: If you are deploying with an AWS Marketplace image, update the Worker0.type.properties.ImageID parameter with the AMI ID that you obtained from your subscription.

Launch the CloudFormation template to create a stack of AWS resources that represent a worker node:
Important: You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> \ 1
     --template-body file://<template>.yaml \ 2
     --parameters file://<parameters>.json 3

1. <name> is the name for the CloudFormation stack, such as cluster-worker-1. You need the name of this stack if you remove the cluster.
2. <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
3. <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
Example output
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59

Note: The CloudFormation template creates a stack that represents one worker node.
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>

Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name, as shown in the sketch after the following note.
Important: You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template.
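Because each stack creates exactly one worker, scripting the loop saves repetition. A minimal sketch, assuming the template is saved as worker.yaml, the parameter file as worker-params.json, and three workers are wanted; all three names and the count are illustrative, not fixed by the procedure:

$ for i in 0 1 2; do
     aws cloudformation create-stack --stack-name "cluster-worker-${i}" \
         --template-body file://worker.yaml \
         --parameters file://worker-params.json
  done

If each worker must land in a different subnet, generate a per-worker parameter file before each create-stack call instead of reusing one file.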
4.11.16.1. CloudFormation template for worker machines
You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster.
Example 4.40. CloudFormation template for worker machines
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 worker instance)
Parameters:
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
Type: String
RhcosAmi:
Description: Current Red Hat Enterprise Linux CoreOS AMI to use for the worker nodes.
Type: AWS::EC2::Image::Id
Subnet:
Description: The subnet, preferably private, to launch the worker nodes into.
Type: AWS::EC2::Subnet::Id
WorkerSecurityGroupId:
Description: The worker security group ID to associate with worker nodes.
Type: AWS::EC2::SecurityGroup::Id
IgnitionLocation:
Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/worker
Description: Ignition config file location.
Type: String
CertificateAuthorities:
Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
Description: Base64 encoded certificate authority string to use.
Type: String
WorkerInstanceProfileName:
Description: IAM profile to associate with worker nodes.
Type: String
WorkerInstanceType:
Default: m5.large
Type: String
AllowedValues:
- "m4.large"
- "m4.xlarge"
- "m4.2xlarge"
- "m4.4xlarge"
- "m4.10xlarge"
- "m4.16xlarge"
- "m5.large"
- "m5.xlarge"
- "m5.2xlarge"
- "m5.4xlarge"
- "m5.8xlarge"
- "m5.12xlarge"
- "m5.16xlarge"
- "m5a.large"
- "m5a.xlarge"
- "m5a.2xlarge"
- "m5a.4xlarge"
- "m5a.8xlarge"
- "m5a.12xlarge"
- "m5a.16xlarge"
- "c4.large"
- "c4.xlarge"
- "c4.2xlarge"
- "c4.4xlarge"
- "c4.8xlarge"
- "c5.large"
- "c5.xlarge"
- "c5.2xlarge"
- "c5.4xlarge"
- "c5.9xlarge"
- "c5.12xlarge"
- "c5.18xlarge"
- "c5.24xlarge"
- "c5a.large"
- "c5a.xlarge"
- "c5a.2xlarge"
- "c5a.4xlarge"
- "c5a.8xlarge"
- "c5a.12xlarge"
- "c5a.16xlarge"
- "c5a.24xlarge"
- "r4.large"
- "r4.xlarge"
- "r4.2xlarge"
- "r4.4xlarge"
- "r4.8xlarge"
- "r4.16xlarge"
- "r5.large"
- "r5.xlarge"
- "r5.2xlarge"
- "r5.4xlarge"
- "r5.8xlarge"
- "r5.12xlarge"
- "r5.16xlarge"
- "r5.24xlarge"
- "r5a.large"
- "r5a.xlarge"
- "r5a.2xlarge"
- "r5a.4xlarge"
- "r5a.8xlarge"
- "r5a.12xlarge"
- "r5a.16xlarge"
- "r5a.24xlarge"
- "t3.large"
- "t3.xlarge"
- "t3.2xlarge"
- "t3a.large"
- "t3a.xlarge"
- "t3a.2xlarge"
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- InfrastructureName
- Label:
default: "Host Information"
Parameters:
- WorkerInstanceType
- RhcosAmi
- IgnitionLocation
- CertificateAuthorities
- WorkerSecurityGroupId
- WorkerInstanceProfileName
- Label:
default: "Network Configuration"
Parameters:
- Subnet
ParameterLabels:
Subnet:
default: "Subnet"
InfrastructureName:
default: "Infrastructure Name"
WorkerInstanceType:
default: "Worker Instance Type"
WorkerInstanceProfileName:
default: "Worker Instance Profile Name"
RhcosAmi:
default: "Red Hat Enterprise Linux CoreOS AMI ID"
IgnitionLocation:
default: "Worker Ignition Source"
CertificateAuthorities:
default: "Ignition CA String"
WorkerSecurityGroupId:
default: "Worker Security Group ID"
Resources:
Worker0:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize: "120"
VolumeType: "gp2"
IamInstanceProfile: !Ref WorkerInstanceProfileName
InstanceType: !Ref WorkerInstanceType
NetworkInterfaces:
- AssociatePublicIpAddress: "false"
DeviceIndex: "0"
GroupSet:
- !Ref "WorkerSecurityGroupId"
SubnetId: !Ref "Subnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
- {
SOURCE: !Ref IgnitionLocation,
CA_BUNDLE: !Ref CertificateAuthorities,
}
Tags:
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "shared"
Outputs:
PrivateIP:
Description: The compute node private IP address.
Value: !GetAtt Worker0.PrivateIp
4.11.17. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure
After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane.
Prerequisites
- You configured an AWS account.
- You added your AWS keys and region to your local AWS profile by running aws configure.
- You generated the Ignition config files for your cluster.
- You created and configured a VPC and associated subnets in AWS.
- You created and configured DNS, load balancers, and listeners in AWS.
- You created the security groups and roles required for your cluster in AWS.
- You created the bootstrap machine.
- You created the control plane machines.
- You created the worker nodes.
Procedure
Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane:
$ ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1
     --log-level=info 2

1. For <installation_directory>, specify the path to the directory that you stored the installation files in.
2. To view different installation details, specify warn, debug, or error instead of info.

Example output

INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443...
INFO API v1.19.0+9f84db3 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources
INFO Time elapsed: 1s

If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized.

Note: After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators.
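If the bootstrap process stalls or the command exits with a FATAL error, you can collect logs from the bootstrap and control plane hosts before tearing anything down. The installation program provides a gather subcommand for this; on user-provisioned infrastructure you typically pass the node addresses explicitly. A minimal sketch, with the IP placeholders as assumptions:

$ ./openshift-install gather bootstrap --dir <installation_directory> \
     --bootstrap <bootstrap_ip> --master <master_ip>

The command connects over SSH by using the key from your installation configuration and writes a log bundle that you can attach to a support case.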
4.11.18. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
Unpack the archive:
$ tar xvzf <file>

Place the oc binary in a directory that is on your PATH.

To check your PATH, execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
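As one concrete way to finish the Linux steps, assuming the downloaded archive is named openshift-client-linux.tar.gz and that /usr/local/bin is on your PATH (both assumptions, not fixed by this procedure):

$ tar xvzf openshift-client-linux.tar.gz oc kubectl
$ sudo mv oc kubectl /usr/local/bin/
$ oc version --client

The last command prints the client version only, so it works even before you have a cluster to connect to.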
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.

To check your PATH, open the command prompt and execute the following command:

C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.

To check your PATH, open a terminal and execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
4.11.19. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1. For <installation_directory>, specify the path to the directory that you stored the installation files in.

Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output
system:admin
4.11.20. Approving the certificate signing requests for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes

Example output

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   63m   v1.21.0
master-1   Ready    master   63m   v1.21.0
master-2   Ready    master   64m   v1.21.0

The output lists all of the machines that you created.
Note: The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

$ oc get csr

Example output

NAME        AGE   REQUESTOR                                                                   CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
...

In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:

Note: Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.

Note: For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1

1. <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

Note: Some Operators might not become available until some CSRs are approved.
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
$ oc get csr

Example output

NAME        AGE     REQUESTOR                                                CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal   Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal   Pending
...

If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name> 1

1. <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

$ oc get nodes

Example output

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.21.0
master-1   Ready    master   73m   v1.21.0
master-2   Ready    master   74m   v1.21.0
worker-0   Ready    worker   11m   v1.21.0
worker-1   Ready    worker   11m   v1.21.0

Note: It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
Additional information
- For more information on CSRs, see Certificate Signing Requests.
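On user-provisioned infrastructure, serving CSRs can trickle in as each node joins, so some installers run a watch-and-approve loop during installation. The following rough shell loop is only a sketch of that pattern; review CSRs before approving them in any real environment, and note that the 30-second interval is arbitrary:

$ while true; do
     oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
         | xargs --no-run-if-empty oc adm certificate approve
     sleep 30
  done

Stop the loop with Ctrl+C once all nodes report Ready.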
4.11.21. Initial Operator configuration
After the control plane initializes, you must immediately configure some Operators so that they all become available.
Prerequisites
- Your control plane has initialized.
Procedure
Watch the cluster components come online:
$ watch -n5 oc get clusteroperators

Example output

NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.8.2     True        False         False      19m
baremetal                                  4.8.2     True        False         False      37m
cloud-credential                           4.8.2     True        False         False      40m
cluster-autoscaler                         4.8.2     True        False         False      37m
config-operator                            4.8.2     True        False         False      38m
console                                    4.8.2     True        False         False      26m
csi-snapshot-controller                    4.8.2     True        False         False      37m
dns                                        4.8.2     True        False         False      37m
etcd                                       4.8.2     True        False         False      36m
image-registry                             4.8.2     True        False         False      31m
ingress                                    4.8.2     True        False         False      30m
insights                                   4.8.2     True        False         False      31m
kube-apiserver                             4.8.2     True        False         False      26m
kube-controller-manager                    4.8.2     True        False         False      36m
kube-scheduler                             4.8.2     True        False         False      36m
kube-storage-version-migrator              4.8.2     True        False         False      37m
machine-api                                4.8.2     True        False         False      29m
machine-approver                           4.8.2     True        False         False      37m
machine-config                             4.8.2     True        False         False      36m
marketplace                                4.8.2     True        False         False      37m
monitoring                                 4.8.2     True        False         False      29m
network                                    4.8.2     True        False         False      38m
node-tuning                                4.8.2     True        False         False      37m
openshift-apiserver                        4.8.2     True        False         False      32m
openshift-controller-manager               4.8.2     True        False         False      30m
openshift-samples                          4.8.2     True        False         False      32m
operator-lifecycle-manager                 4.8.2     True        False         False      37m
operator-lifecycle-manager-catalog         4.8.2     True        False         False      37m
operator-lifecycle-manager-packageserver   4.8.2     True        False         False      32m
service-ca                                 4.8.2     True        False         False      38m
storage                                    4.8.2     True        False         False      37m

- Configure the Operators that are not available.
4.11.21.1. Image registry storage configuration
Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage.
Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.
Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.
You can configure registry storage for user-provisioned infrastructure in AWS to deploy OpenShift Container Platform to hidden regions. See Configuring the registry for AWS user-provisioned infrastructure for more information.
4.11.21.1.1. Configuring registry storage for AWS with user-provisioned infrastructure
During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage.
If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure.
Prerequisites
- You have a cluster on AWS with user-provisioned infrastructure.
For Amazon S3 storage, the secret is expected to contain two keys:
- REGISTRY_STORAGE_S3_ACCESSKEY
- REGISTRY_STORAGE_S3_SECRETKEY
Procedure
Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage.
- Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old.
Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster:

$ oc edit configs.imageregistry.operator.openshift.io/cluster

Example configuration

storage:
  s3:
    bucket: <bucket-name>
    region: <region-name>
To secure your registry images in AWS, block public access to the S3 bucket.
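If you need to create the bucket yourself, the AWS CLI covers all three steps mentioned here: bucket creation, the lifecycle rule that aborts day-old incomplete multipart uploads, and blocking public access. A minimal sketch; the bucket name my-registry-bucket and the region us-east-1 are assumptions:

$ aws s3api create-bucket --bucket my-registry-bucket --region us-east-1
$ aws s3api put-bucket-lifecycle-configuration --bucket my-registry-bucket \
     --lifecycle-configuration '{"Rules":[{"ID":"abort-incomplete-mpu","Status":"Enabled","Filter":{"Prefix":""},"AbortIncompleteMultipartUpload":{"DaysAfterInitiation":1}}]}'
$ aws s3api put-public-access-block --bucket my-registry-bucket \
     --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

For regions other than us-east-1, create-bucket additionally requires --create-bucket-configuration LocationConstraint=<region>.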
4.11.21.1.2. Configuring storage for the image registry in non-production clusters
You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.
Procedure
To set the image registry storage to an empty directory:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

Warning: Configure this option for only non-production clusters.

If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error:

Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found

Wait a few minutes and run the command again.
4.11.22. Deleting the bootstrap resources
After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS).
Prerequisites
- You completed the initial Operator configuration for your cluster.
Procedure
Delete the bootstrap resources. If you used the CloudFormation template, delete its stack:
Delete the stack by using the AWS CLI:
$ aws cloudformation delete-stack --stack-name <name> 1

1. <name> is the name of your bootstrap stack.
- Delete the stack by using the AWS CloudFormation console.
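Deletion is asynchronous. If subsequent automation depends on the bootstrap stack being gone, you can block on the CloudFormation waiter. A minimal sketch, with the stack name cluster-bootstrap as an assumption:

$ aws cloudformation delete-stack --stack-name cluster-bootstrap
$ aws cloudformation wait stack-delete-complete --stack-name cluster-bootstrap

The wait command returns once the stack no longer exists, or exits non-zero if deletion fails.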
4.11.23. Creating the Ingress DNS Records
If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias.
Prerequisites
- You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned.
- You installed the OpenShift CLI (oc).
- You installed the jq package.
- You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix).
Procedure
Determine the routes to create.
- To create a wildcard record, use *.apps.<cluster_name>.<domain_name>, where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster.
- To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command:

$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes

Example output

oauth-openshift.apps.<cluster_name>.<domain_name>
console-openshift-console.apps.<cluster_name>.<domain_name>
downloads-openshift-console.apps.<cluster_name>.<domain_name>
alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name>
grafana-openshift-monitoring.apps.<cluster_name>.<domain_name>
prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>
Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column:

$ oc -n openshift-ingress get service router-default

Example output

NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)                      AGE
router-default   LoadBalancer   172.30.62.215   ab3...28.us-east-2.elb.amazonaws.com   80:31499/TCP,443:30693/TCP   5m

Locate the hosted zone ID for the load balancer:
$ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 1

1. For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer that you obtained.

Example output

Z3AADJGX6KTTL2

The output of this command is the load balancer hosted zone ID.
Obtain the public hosted zone ID for your cluster’s domain:
$ aws route53 list-hosted-zones-by-name \
     --dns-name "<domain_name>" \ 1
     --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' \ 2
     --output text

1. For <domain_name>, specify the Route 53 base domain for your OpenShift Container Platform cluster.
2. For <domain_name>, specify the Route 53 base domain, including the trailing period (.).

Example output

/hostedzone/Z3URY6TWQ91KVV

The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV.

Add the alias records to your private zone:
$ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 1
>   "Changes": [
>     {
>       "Action": "CREATE",
>       "ResourceRecordSet": {
>         "Name": "\\052.apps.<cluster_domain>", 2
>         "Type": "A",
>         "AliasTarget":{
>           "HostedZoneId": "<hosted_zone_id>", 3
>           "DNSName": "<external_ip>.", 4
>           "EvaluateTargetHealth": false
>         }
>       }
>     }
>   ]
> }'

1. For <private_hosted_zone_id>, specify the value from the output of the CloudFormation template for DNS and load balancing.
2. For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.
3. For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.
4. For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.
Add the records to your public zone:
$ aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{ 1
>   "Changes": [
>     {
>       "Action": "CREATE",
>       "ResourceRecordSet": {
>         "Name": "\\052.apps.<cluster_domain>", 2
>         "Type": "A",
>         "AliasTarget":{
>           "HostedZoneId": "<hosted_zone_id>", 3
>           "DNSName": "<external_ip>.", 4
>           "EvaluateTargetHealth": false
>         }
>       }
>     }
>   ]
> }'

1. For <public_hosted_zone_id>, specify the public hosted zone for your domain.
2. For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.
3. For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.
4. For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.
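Once the records propagate, a quick resolution check confirms that the wildcard points at the load balancer. A minimal sketch using dig; the hostname is an assumption built from your cluster domain:

$ dig +short console-openshift-console.apps.<cluster_name>.<domain_name>

The answer should be one or more IP addresses belonging to the Ingress load balancer.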
4.11.24. Completing an AWS installation on user-provisioned infrastructure
After you start the OpenShift Container Platform installation on Amazon Web Service (AWS) user-provisioned infrastructure, monitor the deployment to completion.
Prerequisites
- You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure.
- You installed the oc CLI.
Procedure
From the directory that contains the installation program, complete the cluster installation:
$ ./openshift-install --dir <installation_directory> wait-for install-complete 1

1. For <installation_directory>, specify the path to the directory that you stored the installation files in.
Example output
INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Fe5en-ymBEc-Wt6NL"
INFO Time elapsed: 1s

Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
4.11.25. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

$ cat <installation_directory>/auth/kubeadmin-password

Note: Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

List the OpenShift Container Platform web console route:

$ oc get routes -n openshift-console | grep 'console-openshift'

Note: Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

Example output

console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
Navigate to the route detailed in the output of the preceding command in a web browser and log in as the
kubeadminuser.
4.11.26. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
4.11.28. Next steps
- Validating an installation.
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
- If necessary, you can remove cloud provider credentials.
4.12. Installing a cluster on AWS in a restricted network with user-provisioned infrastructure
In OpenShift Container Platform version 4.8, you can install a cluster on Amazon Web Services (AWS) using infrastructure that you provide and an internal mirror of the installation release content.
While you can install an OpenShift Container Platform cluster by using mirrored installation release content, your cluster still requires internet access to use the AWS APIs.
One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company’s policies.
The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example.
4.12.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
You created a mirror registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform.
Important
Because the installation media is on the mirror host, you can use that computer to complete all installation steps.
You configured an AWS account to host the cluster.
Important
If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
- You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation.
If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to.
Note
Be sure to also review this site list if you are configuring a proxy.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
4.12.2. About installations in restricted networks
In OpenShift Container Platform 4.8, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.
If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere.
To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions.
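For orientation only, populating such a mirror generally uses the oc adm release mirror command; the registry host, repository path, release tag, and pull secret path below are placeholders rather than values from this procedure:
$ oc adm release mirror -a <pull_secret_file> \
    --from=quay.io/openshift-release-dev/ocp-release:<release_tag> \
    --to=<local_registry>/<local_repository_name>/release \
    --to-release-image=<local_registry>/<local_repository_name>/release:<release_tag>
The command prints an imageContentSources stanza that is used when you edit the install-config.yaml file later in this section.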
Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network.
4.12.2.1. Additional limits
Clusters in restricted networks have the following additional limitations and restrictions:
- The ClusterVersion status includes an Unable to retrieve available updates error.
- By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.
4.12.3. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to obtain the images that are necessary to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
4.12.4. Required AWS infrastructure components
To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure.
For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page.
By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components:
- An AWS Virtual Private Cloud (VPC)
- Networking and load balancing components
- Security groups and roles
- An OpenShift Container Platform bootstrap node
- OpenShift Container Platform control plane nodes
- An OpenShift Container Platform compute node
Alternatively, you can manually create the components or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate.
4.12.4.1. Other infrastructure components
- A VPC
- DNS entries
- Load balancers (classic or network) and listeners
- A public and a private Route 53 zone
- Security groups
- IAM roles
- S3 buckets
If you are working in a disconnected environment or use a proxy, you cannot reach the public IP addresses for EC2 and ELB endpoints. To reach these endpoints, you must create a VPC endpoint and attach it to the subnets that the cluster uses. Create the following endpoints (example commands follow the list):
- ec2.<region>.amazonaws.com
- elasticloadbalancing.<region>.amazonaws.com
- s3.<region>.amazonaws.com
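As a rough sketch with placeholder IDs, the EC2 and Elastic Load Balancing endpoints are interface endpoints, while the S3 endpoint is typically a gateway endpoint attached to route tables:
$ aws ec2 create-vpc-endpoint \
    --vpc-id <vpc_id> \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.<region>.ec2 \
    --subnet-ids <subnet_id> \
    --security-group-ids <security_group_id>
$ aws ec2 create-vpc-endpoint \
    --vpc-id <vpc_id> \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.<region>.s3 \
    --route-table-ids <route_table_id>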
Required VPC components
You must provide a suitable VPC and subnets that allow communication to your machines.
| Component | AWS type | Description |
|---|---|---|
| VPC | AWS::EC2::VPC, AWS::EC2::VPCEndpoint | You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. |
| Public subnets | AWS::EC2::Subnet, AWS::EC2::SubnetNetworkAclAssociation | Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. |
| Internet gateway | AWS::EC2::InternetGateway, AWS::EC2::VPCGatewayAttachment, AWS::EC2::RouteTable, AWS::EC2::Route, AWS::EC2::NatGateway, AWS::EC2::EIP | You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. |
| Network access control | AWS::EC2::NetworkAcl, AWS::EC2::NetworkAclEntry | You must allow the VPC to access the ports listed in the following table. |
| Private subnets | AWS::EC2::Subnet, AWS::EC2::RouteTable, AWS::EC2::SubnetRouteTableAssociation | Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. |

| Port | Reason |
|---|---|
| 80 | Inbound HTTP traffic |
| 443 | Inbound HTTPS traffic |
| 22 | Inbound SSH traffic |
| 1024 - 65535 | Inbound ephemeral traffic |
| 0 - 65535 | Outbound ephemeral traffic |
Required DNS and load balancing components
Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster’s infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer.
The cluster also requires load balancers and listeners for port 6443, which is required for the Kubernetes API and its extensions, and port 22623, which is required for the Ignition config files for new machines. The targets will be the control plane nodes (also known as the master nodes). Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster.
| Component | AWS type | Description |
|---|---|---|
| DNS | AWS::Route53::HostedZone | The hosted zone for your internal DNS. |
| etcd record sets | AWS::Route53::RecordSet | The registration records for etcd for your control plane machines. |
| Public load balancer | AWS::ElasticLoadBalancingV2::LoadBalancer | The load balancer for your public subnets. |
| External API server record | AWS::Route53::RecordSetGroup | Alias records for the external API server. |
| External listener | AWS::ElasticLoadBalancingV2::Listener | A listener on port 6443 for the external load balancer. |
| External target group | AWS::ElasticLoadBalancingV2::TargetGroup | The target group for the external load balancer. |
| Private load balancer | AWS::ElasticLoadBalancingV2::LoadBalancer | The load balancer for your private subnets. |
| Internal API server record | AWS::Route53::RecordSetGroup | Alias records for the internal API server. |
| Internal listener | AWS::ElasticLoadBalancingV2::Listener | A listener on port 22623 for the internal load balancer. |
| Internal target group | AWS::ElasticLoadBalancingV2::TargetGroup | The target group for the internal load balancer. |
| Internal listener | AWS::ElasticLoadBalancingV2::Listener | A listener on port 6443 for the internal load balancer. |
| Internal target group | AWS::ElasticLoadBalancingV2::TargetGroup | The target group for the internal load balancer. |
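As an informal check that is not part of the documented procedure, you can confirm that the api and api-int records resolve and that the API listener answers; the host names below are placeholders:
$ dig +short api.<cluster_name>.<domain>
$ dig +short api-int.<cluster_name>.<domain>
# The external API listener should respond on port 6443:
$ curl -k https://api.<cluster_name>.<domain>:6443/version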
Security groups
The control plane and worker machines require access to the following ports:
| Group | Type | IP Protocol | Port range |
|---|---|---|---|
| MasterSecurityGroup | AWS::EC2::SecurityGroup | icmp | 0 |
| | | tcp | 22 |
| | | tcp | 6443 |
| | | tcp | 22623 |
| WorkerSecurityGroup | AWS::EC2::SecurityGroup | icmp | 0 |
| | | tcp | 22 |
| BootstrapSecurityGroup | AWS::EC2::SecurityGroup | tcp | 22 |
| | | tcp | 19531 |
Control plane Ingress
The control plane machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.
| Ingress group | Description | IP protocol | Port range |
|---|---|---|---|
| MasterIngressEtcd | etcd | tcp | 2379 - 2380 |
| MasterIngressVxlan | Vxlan packets | udp | 4789 |
| MasterIngressWorkerVxlan | Vxlan packets | udp | 4789 |
| MasterIngressInternal | Internal cluster communication and Kubernetes proxy metrics | tcp | 9000 - 9999 |
| MasterIngressWorkerInternal | Internal cluster communication | tcp | 9000 - 9999 |
| MasterIngressKube | Kubernetes kubelet, scheduler and controller manager | tcp | 10250 - 10259 |
| MasterIngressWorkerKube | Kubernetes kubelet, scheduler and controller manager | tcp | 10250 - 10259 |
| MasterIngressIngressServices | Kubernetes Ingress services | tcp | 30000 - 32767 |
| MasterIngressWorkerIngressServices | Kubernetes Ingress services | tcp | 30000 - 32767 |
| MasterIngressGeneve | Geneve packets | udp | 6081 |
| MasterIngressWorkerGeneve | Geneve packets | udp | 6081 |
| MasterIngressIpsecIke | IPsec IKE packets | udp | 500 |
| MasterIngressWorkerIpsecIke | IPsec IKE packets | udp | 500 |
| MasterIngressIpsecNat | IPsec NAT-T packets | udp | 4500 |
| MasterIngressWorkerIpsecNat | IPsec NAT-T packets | udp | 4500 |
| MasterIngressIpsecEsp | IPsec ESP packets | 50 | All |
| MasterIngressWorkerIpsecEsp | IPsec ESP packets | 50 | All |
| MasterIngressInternalUDP | Internal cluster communication | udp | 9000 - 9999 |
| MasterIngressWorkerInternalUDP | Internal cluster communication | udp | 9000 - 9999 |
| MasterIngressIngressServicesUDP | Kubernetes Ingress services | udp | 30000 - 32767 |
| MasterIngressWorkerIngressServicesUDP | Kubernetes Ingress services | udp | 30000 - 32767 |
Worker Ingress
The worker machines require the following Ingress groups. Each Ingress group is an AWS::EC2::SecurityGroupIngress resource.
| Ingress group | Description | IP protocol | Port range |
|---|---|---|---|
| WorkerIngressVxlan | Vxlan packets | udp | 4789 |
| WorkerIngressMasterVxlan | Vxlan packets | udp | 4789 |
| WorkerIngressInternal | Internal cluster communication | tcp | 9000 - 9999 |
| WorkerIngressMasterInternal | Internal cluster communication | tcp | 9000 - 9999 |
| WorkerIngressKube | Kubernetes kubelet, scheduler, and controller manager | tcp | 10250 |
| WorkerIngressMasterKube | Kubernetes kubelet, scheduler, and controller manager | tcp | 10250 |
| WorkerIngressIngressServices | Kubernetes Ingress services | tcp | 30000 - 32767 |
| WorkerIngressMasterIngressServices | Kubernetes Ingress services | tcp | 30000 - 32767 |
| WorkerIngressGeneve | Geneve packets | udp | 6081 |
| WorkerIngressMasterGeneve | Geneve packets | udp | 6081 |
| WorkerIngressIpsecIke | IPsec IKE packets | udp | 500 |
| WorkerIngressMasterIpsecIke | IPsec IKE packets | udp | 500 |
| WorkerIngressIpsecNat | IPsec NAT-T packets | udp | 4500 |
| WorkerIngressMasterIpsecNat | IPsec NAT-T packets | udp | 4500 |
| WorkerIngressIpsecEsp | IPsec ESP packets | 50 | All |
| WorkerIngressMasterIpsecEsp | IPsec ESP packets | 50 | All |
| WorkerIngressInternalUDP | Internal cluster communication | udp | 9000 - 9999 |
| WorkerIngressMasterInternalUDP | Internal cluster communication | udp | 9000 - 9999 |
| WorkerIngressIngressServicesUDP | Kubernetes Ingress services | udp | 30000 - 32767 |
| WorkerIngressMasterIngressServicesUDP | Kubernetes Ingress services | udp | 30000 - 32767 |
Roles and instance profiles
You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide an AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions. A sketch of one such role policy follows the table.
| Role | Effect | Action | Resource |
|---|---|---|---|
| Master | Allow | ec2:* | * |
| | Allow | elasticloadbalancing:* | * |
| | Allow | iam:PassRole | * |
| | Allow | s3:GetObject | * |
| Worker | Allow | ec2:Describe* | * |
| Bootstrap | Allow | ec2:Describe* | * |
| | Allow | ec2:AttachVolume | * |
| | Allow | ec2:DetachVolume | * |
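For illustration, the Worker row above corresponds to an inline role policy like the following sketch; the role name and policy name are placeholders, not values defined by this procedure:
$ aws iam put-role-policy \
    --role-name <worker_role_name> \
    --policy-name <worker_policy_name> \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*"}
        ]
    }'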
4.12.4.2. Cluster machines
You need AWS::EC2::Instance objects for the following machines:
- A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys.
- Three control plane machines. The control plane machines are not governed by a machine set.
- Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a machine set.
4.12.4.3. Certificate signing requests management
Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
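In practice, reviewing and approving these CSRs is done with the oc client. The CSR name below is a placeholder, and the bulk-approval one-liner is a common pattern rather than a required step:
$ oc get csr
$ oc adm certificate approve <csr_name>
# Approve every CSR that does not yet have a status:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve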
4.12.4.4. Supported AWS machine types
The following Amazon Web Services (AWS) instance types are supported with OpenShift Container Platform.
Example 4.41. Instance types for machines
[The entries in this table could not be recovered: the instance type names are missing, leaving only the markers that indicate whether each supported instance type can be used for bootstrap, control plane, or compute machines.]
4.12.4.5. Required AWS permissions for the IAM user
Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region.
When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions:
Example 4.42. Required EC2 permissions for installation
- ec2:AuthorizeSecurityGroupEgress
- ec2:AuthorizeSecurityGroupIngress
- ec2:CopyImage
- ec2:CreateNetworkInterface
- ec2:AttachNetworkInterface
- ec2:CreateSecurityGroup
- ec2:CreateTags
- ec2:CreateVolume
- ec2:DeleteSecurityGroup
- ec2:DeleteSnapshot
- ec2:DeleteTags
- ec2:DeregisterImage
- ec2:DescribeAccountAttributes
- ec2:DescribeAddresses
- ec2:DescribeAvailabilityZones
- ec2:DescribeDhcpOptions
- ec2:DescribeImages
- ec2:DescribeInstanceAttribute
- ec2:DescribeInstanceCreditSpecifications
- ec2:DescribeInstances
- ec2:DescribeInstanceTypes
- ec2:DescribeInternetGateways
- ec2:DescribeKeyPairs
- ec2:DescribeNatGateways
- ec2:DescribeNetworkAcls
- ec2:DescribeNetworkInterfaces
- ec2:DescribePrefixLists
- ec2:DescribeRegions
- ec2:DescribeRouteTables
- ec2:DescribeSecurityGroups
- ec2:DescribeSubnets
- ec2:DescribeTags
- ec2:DescribeVolumes
- ec2:DescribeVpcAttribute
- ec2:DescribeVpcClassicLink
- ec2:DescribeVpcClassicLinkDnsSupport
- ec2:DescribeVpcEndpoints
- ec2:DescribeVpcs
- ec2:GetEbsDefaultKmsKeyId
- ec2:ModifyInstanceAttribute
- ec2:ModifyNetworkInterfaceAttribute
- ec2:RevokeSecurityGroupEgress
- ec2:RevokeSecurityGroupIngress
- ec2:RunInstances
- ec2:TerminateInstances
Example 4.43. Required permissions for creating network resources during installation
- ec2:AllocateAddress
- ec2:AssociateAddress
- ec2:AssociateDhcpOptions
- ec2:AssociateRouteTable
- ec2:AttachInternetGateway
- ec2:CreateDhcpOptions
- ec2:CreateInternetGateway
- ec2:CreateNatGateway
- ec2:CreateRoute
- ec2:CreateRouteTable
- ec2:CreateSubnet
- ec2:CreateVpc
- ec2:CreateVpcEndpoint
- ec2:ModifySubnetAttribute
- ec2:ModifyVpcAttribute
If you use an existing VPC, your account does not require these permissions for creating network resources.
Example 4.44. Required Elastic Load Balancing permissions (ELB) for installation
- elasticloadbalancing:AddTags
- elasticloadbalancing:ApplySecurityGroupsToLoadBalancer
- elasticloadbalancing:AttachLoadBalancerToSubnets
- elasticloadbalancing:ConfigureHealthCheck
- elasticloadbalancing:CreateLoadBalancer
- elasticloadbalancing:CreateLoadBalancerListeners
- elasticloadbalancing:DeleteLoadBalancer
- elasticloadbalancing:DeregisterInstancesFromLoadBalancer
- elasticloadbalancing:DescribeInstanceHealth
- elasticloadbalancing:DescribeLoadBalancerAttributes
- elasticloadbalancing:DescribeLoadBalancers
- elasticloadbalancing:DescribeTags
- elasticloadbalancing:ModifyLoadBalancerAttributes
- elasticloadbalancing:RegisterInstancesWithLoadBalancer
- elasticloadbalancing:SetLoadBalancerPoliciesOfListener
Example 4.45. Required Elastic Load Balancing permissions (ELBv2) for installation
- elasticloadbalancing:AddTags
- elasticloadbalancing:CreateListener
- elasticloadbalancing:CreateLoadBalancer
- elasticloadbalancing:CreateTargetGroup
- elasticloadbalancing:DeleteLoadBalancer
- elasticloadbalancing:DeregisterTargets
- elasticloadbalancing:DescribeListeners
- elasticloadbalancing:DescribeLoadBalancerAttributes
- elasticloadbalancing:DescribeLoadBalancers
- elasticloadbalancing:DescribeTargetGroupAttributes
- elasticloadbalancing:DescribeTargetHealth
- elasticloadbalancing:ModifyLoadBalancerAttributes
- elasticloadbalancing:ModifyTargetGroup
- elasticloadbalancing:ModifyTargetGroupAttributes
- elasticloadbalancing:RegisterTargets
Example 4.46. Required IAM permissions for installation
- iam:AddRoleToInstanceProfile
- iam:CreateInstanceProfile
- iam:CreateRole
- iam:DeleteInstanceProfile
- iam:DeleteRole
- iam:DeleteRolePolicy
- iam:GetInstanceProfile
- iam:GetRole
- iam:GetRolePolicy
- iam:GetUser
- iam:ListInstanceProfilesForRole
- iam:ListRoles
- iam:ListUsers
- iam:PassRole
- iam:PutRolePolicy
- iam:RemoveRoleFromInstanceProfile
- iam:SimulatePrincipalPolicy
- iam:TagRole
If you have not created an elastic load balancer (ELB) in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission.
Example 4.47. Required Route 53 permissions for installation
- route53:ChangeResourceRecordSets
- route53:ChangeTagsForResource
- route53:CreateHostedZone
- route53:DeleteHostedZone
- route53:GetChange
- route53:GetHostedZone
- route53:ListHostedZones
- route53:ListHostedZonesByName
- route53:ListResourceRecordSets
- route53:ListTagsForResource
- route53:UpdateHostedZoneComment
Example 4.48. Required S3 permissions for installation
- s3:CreateBucket
- s3:DeleteBucket
- s3:GetAccelerateConfiguration
- s3:GetBucketAcl
- s3:GetBucketCors
- s3:GetBucketLocation
- s3:GetBucketLogging
- s3:GetBucketObjectLockConfiguration
- s3:GetBucketReplication
- s3:GetBucketRequestPayment
- s3:GetBucketTagging
- s3:GetBucketVersioning
- s3:GetBucketWebsite
- s3:GetEncryptionConfiguration
- s3:GetLifecycleConfiguration
- s3:GetReplicationConfiguration
- s3:ListBucket
- s3:PutBucketAcl
- s3:PutBucketTagging
- s3:PutEncryptionConfiguration
Example 4.49. S3 permissions that cluster Operators require
- s3:DeleteObject
- s3:GetObject
- s3:GetObjectAcl
- s3:GetObjectTagging
- s3:GetObjectVersion
- s3:PutObject
- s3:PutObjectAcl
- s3:PutObjectTagging
Example 4.50. Required permissions to delete base cluster resources
- autoscaling:DescribeAutoScalingGroups
- ec2:DeleteNetworkInterface
- ec2:DeleteVolume
- elasticloadbalancing:DeleteTargetGroup
- elasticloadbalancing:DescribeTargetGroups
- iam:DeleteAccessKey
- iam:DeleteUser
- iam:ListAttachedRolePolicies
- iam:ListInstanceProfiles
- iam:ListRolePolicies
- iam:ListUserPolicies
- s3:DeleteObject
- s3:ListBucketVersions
- tag:GetResources
Example 4.51. Required permissions to delete network resources
- ec2:DeleteDhcpOptions
- ec2:DeleteInternetGateway
- ec2:DeleteNatGateway
- ec2:DeleteRoute
- ec2:DeleteRouteTable
- ec2:DeleteSubnet
- ec2:DeleteVpc
- ec2:DeleteVpcEndpoints
- ec2:DetachInternetGateway
- ec2:DisassociateRouteTable
- ec2:ReleaseAddress
- ec2:ReplaceRouteTableAssociation
If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources.
Example 4.52. Required permissions to delete a cluster with shared instance roles
- iam:UntagRole
Example 4.53. Additional IAM and S3 permissions that are required to create manifests
- iam:DeleteAccessKey
- iam:DeleteUser
- iam:DeleteUserPolicy
- iam:GetUserPolicy
- iam:ListAccessKeys
- iam:PutUserPolicy
- iam:TagUser
- s3:PutBucketPublicAccessBlock
- s3:GetBucketPublicAccessBlock
- s3:PutLifecycleConfiguration
- s3:HeadBucket
- s3:ListBucketMultipartUploads
- s3:AbortMultipartUpload
If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions.
Example 4.54. Optional permissions for instance and quota checks for installation
- ec2:DescribeInstanceTypeOfferings
- servicequotas:ListAWSDefaultServiceQuotas
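As a sketch of the simplest approach described at the start of this section, you can attach the AdministratorAccess managed policy with the AWS CLI; the user name is a placeholder:
$ aws iam attach-user-policy \
    --user-name <iam_user> \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess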
4.12.5. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
- 1
- Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note
If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
- 1
- Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program.
4.12.6. Creating the installation files for AWS
To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation.
4.12.6.1. Optional: Creating a separate /var partition
It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.
OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example:
- /var/lib/containers: Holds container-related content that can grow as more images and containers are added to a system.
- /var/lib/etcd: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.
- /var: Holds data that you might want to keep separate for purposes such as auditing.
Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.
Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.
If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section.
Procedure
Create a directory to hold the OpenShift Container Platform installation files:
$ mkdir $HOME/clusterconfig
Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted:
$ openshift-install create manifests --dir $HOME/clusterconfig
Example output
? SSH Public Key ...
INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials"
INFO Consuming Install Config from target directory
INFO Manifests created in: $HOME/clusterconfig/manifests and $HOME/clusterconfig/openshift
Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory:
$ ls $HOME/clusterconfig/openshift/
Example output
99_kubeadmin-password-secret.yaml
99_openshift-cluster-api_master-machines-0.yaml
99_openshift-cluster-api_master-machines-1.yaml
99_openshift-cluster-api_master-machines-2.yaml
...
Create a Butane config that configures the additional partition. For example, name the file $HOME/clusterconfig/98-var-partition.bu, change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition:
variant: openshift
version: 4.8.0
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-partition
storage:
  disks:
  - device: /dev/<device_name> 1
    partitions:
    - label: var
      start_mib: <partition_start_offset> 2
      size_mib: <partition_size> 3
  filesystems:
  - device: /dev/disk/by-partlabel/var
    path: /var
    format: xfs
    mount_options: [defaults, prjquota] 4
    with_mount_unit: true
- 1
- The storage device name of the disk that you want to partition.
- 2
- When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.
- 3
- The size of the data partition in mebibytes.
- 4
- The prjquota mount option must be enabled for filesystems used for container storage.
Note
When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name.
Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command:
$ butane $HOME/clusterconfig/98-var-partition.bu -o $HOME/clusterconfig/openshift/98-var-partition.yaml
Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories:
$ openshift-install create ignition-configs --dir $HOME/clusterconfig
$ ls $HOME/clusterconfig/
auth bootstrap.ign master.ign metadata.json worker.ign
Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.
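As an optional sanity check, assuming the jq package is installed, you can confirm that the partition and file system appear in the rendered worker Ignition config:
$ jq '.storage.disks, .storage.filesystems' $HOME/clusterconfig/worker.ign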
4.12.6.2. Creating the installation configuration file
Generate and customize the installation configuration file that the installation program needs to deploy your cluster.
Prerequisites
- You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host.
- You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> 1
- 1
- For <installation_directory>, specify the directory name to store the files that the installation program creates.
Important
Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- Select aws as the platform to target.
If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
Note
The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.
- Select the AWS region to deploy the cluster to.
- Select the base domain for the Route 53 service that you configured for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network.
Update the pullSecret value to contain the authentication information for your registry:
pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "you@example.com"}}}'
For <local_registry>, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000. For <credentials>, specify the base64-encoded user name and password for your mirror registry.
Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry, which can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry.
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
Add the image content resources:
imageContentSources:
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
Use the imageContentSources section from the output of the command to mirror the repository or the values that you used when you mirrored the content from the media that you brought into your restricted network.
Optional: Set the publishing strategy to Internal:
publish: Internal
By setting this option, you create an internal Ingress Controller and a private load balancer.
Optional: Back up the install-config.yaml file.
Important
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
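A plain copy is sufficient for the backup; the backup file name here is arbitrary:
$ cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.backup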
4.12.6.3. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
- If your cluster is on AWS, you added the ec2.<region>.amazonaws.com, elasticloadbalancing.<region>.amazonaws.com, and s3.<region>.amazonaws.com endpoints to your VPC endpoint. These endpoints are required to complete requests from the nodes to the AWS EC2 API. Because the proxy works on the container level, not the node level, you must route these requests to the AWS EC2 API through the AWS private network. Adding the public IP address of the EC2 API to your allowlist in your proxy server is not sufficient.
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...
- 1
- A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2
- A proxy URL to use for creating HTTPS connections outside the cluster.
- 3
- A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4
- If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Note
The installation program does not support the proxy readinessEndpoints field.
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
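After the cluster is running, you can inspect the resulting object as an optional check:
$ oc get proxy cluster -o yaml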
4.12.6.4. Creating the Kubernetes manifest and Ignition config files
Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines.
The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines.
Important
- The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Prerequisites
- You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host.
- You created the install-config.yaml installation configuration file.
Procedure
Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster:
$ ./openshift-install create manifests --dir <installation_directory> 1
- 1
- For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.
Remove the Kubernetes manifest files that define the control plane machines:
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml
By removing these files, you prevent the cluster from automatically generating control plane machines.
Remove the Kubernetes manifest files that define the worker machines:
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml
Because you create and manage the worker machines yourself, you do not need to initialize these machines.
Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:
- Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.
- Locate the mastersSchedulable parameter and ensure that it is set to false.
- Save and exit the file.
Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: null
  name: cluster
spec:
  baseDomain: example.openshift.com
  privateZone: 1
    id: mycluster-100419-private-zone
  publicZone: 2
    id: example.openshift.com
status: {}
If you do so, you must add ingress DNS records manually in a later step.
To create the Ignition configuration files, run the following command from the directory that contains the installation program:
$ ./openshift-install create ignition-configs --dir <installation_directory> 1
- 1
- For <installation_directory>, specify the same installation directory.
Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory:
.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
4.12.7. Extracting the infrastructure name
The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it.
Prerequisites
- You obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
- You generated the Ignition config files for your cluster.
- You installed the jq package.
Procedure
To extract and view the infrastructure name from the Ignition config file metadata, run the following command:
$ jq -r .infraID <installation_directory>/metadata.json 1
- 1
- For <installation_directory>, specify the path to the directory that you stored the installation files in.
Example output
openshift-vw9j6 1
- 1
- The output of this command is your cluster name and a random string.
4.12.8. Creating a VPC in AWS
You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC.
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- You configured an AWS account.
- You added your AWS keys and region to your local AWS profile by running aws configure.
- You generated the Ignition config files for your cluster.
Procedure
Create a JSON file that contains the parameter values that the template requires:
[
  {
    "ParameterKey": "VpcCidr", 1
    "ParameterValue": "10.0.0.0/16" 2
  },
  {
    "ParameterKey": "AvailabilityZoneCount", 3
    "ParameterValue": "1" 4
  },
  {
    "ParameterKey": "SubnetBits", 5
    "ParameterValue": "12" 6
  }
]
- 1
- The CIDR block for the VPC.
- 2
- Specify a CIDR block in the format x.x.x.x/16-24.
- 3
- The number of availability zones to deploy the VPC in.
- 4
- Specify an integer between 1 and 3.
- 5
- The size of each subnet in each availability zone.
- 6
- Specify an integer between 5 and 13, where 5 is /27 and 13 is /19.
- Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires.
Launch the CloudFormation template to create a stack of AWS resources that represent the VPC:
Important
You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3
- 1
- <name> is the name for the CloudFormation stack, such as cluster-vpc. You need the name of this stack if you remove the cluster.
- 2
- <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
- 3
- <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
Example output
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:
VpcId
The ID of your VPC.
PublicSubnetIds
The IDs of the new public subnets.
PrivateSubnetIds
The IDs of the new private subnets.
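To view those output values directly, you can filter the describe-stacks response with the AWS CLI --query option:
$ aws cloudformation describe-stacks --stack-name <name> \
    --query 'Stacks[0].Outputs' --output table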
4.12.8.1. CloudFormation template for the VPC
You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster.
Example 4.55. CloudFormation template for the VPC
AWSTemplateFormatVersion: 2010-09-09
Description: Template for Best Practice VPC with 1-3 AZs
Parameters:
VpcCidr:
AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
Default: 10.0.0.0/16
Description: CIDR block for VPC.
Type: String
AvailabilityZoneCount:
ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)"
MinValue: 1
MaxValue: 3
Default: 1
Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)"
Type: Number
SubnetBits:
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27.
MinValue: 5
MaxValue: 13
Default: 12
Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)"
Type: Number
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Network Configuration"
Parameters:
- VpcCidr
- SubnetBits
- Label:
default: "Availability Zones"
Parameters:
- AvailabilityZoneCount
ParameterLabels:
AvailabilityZoneCount:
default: "Availability Zone Count"
VpcCidr:
default: "VPC CIDR"
SubnetBits:
default: "Bits Per Subnet"
Conditions:
DoAz3: !Equals [3, !Ref AvailabilityZoneCount]
DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3]
Resources:
VPC:
Type: "AWS::EC2::VPC"
Properties:
EnableDnsSupport: "true"
EnableDnsHostnames: "true"
CidrBlock: !Ref VpcCidr
PublicSubnet:
Type: "AWS::EC2::Subnet"
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 0
- Fn::GetAZs: !Ref "AWS::Region"
PublicSubnet2:
Type: "AWS::EC2::Subnet"
Condition: DoAz2
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 1
- Fn::GetAZs: !Ref "AWS::Region"
PublicSubnet3:
Type: "AWS::EC2::Subnet"
Condition: DoAz3
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 2
- Fn::GetAZs: !Ref "AWS::Region"
InternetGateway:
Type: "AWS::EC2::InternetGateway"
GatewayToInternet:
Type: "AWS::EC2::VPCGatewayAttachment"
Properties:
VpcId: !Ref VPC
InternetGatewayId: !Ref InternetGateway
PublicRouteTable:
Type: "AWS::EC2::RouteTable"
Properties:
VpcId: !Ref VPC
PublicRoute:
Type: "AWS::EC2::Route"
DependsOn: GatewayToInternet
Properties:
RouteTableId: !Ref PublicRouteTable
DestinationCidrBlock: 0.0.0.0/0
GatewayId: !Ref InternetGateway
PublicSubnetRouteTableAssociation:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PublicSubnet
RouteTableId: !Ref PublicRouteTable
PublicSubnetRouteTableAssociation2:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Condition: DoAz2
Properties:
SubnetId: !Ref PublicSubnet2
RouteTableId: !Ref PublicRouteTable
PublicSubnetRouteTableAssociation3:
Condition: DoAz3
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PublicSubnet3
RouteTableId: !Ref PublicRouteTable
PrivateSubnet:
Type: "AWS::EC2::Subnet"
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 0
- Fn::GetAZs: !Ref "AWS::Region"
PrivateRouteTable:
Type: "AWS::EC2::RouteTable"
Properties:
VpcId: !Ref VPC
PrivateSubnetRouteTableAssociation:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Properties:
SubnetId: !Ref PrivateSubnet
RouteTableId: !Ref PrivateRouteTable
NAT:
DependsOn:
- GatewayToInternet
Type: "AWS::EC2::NatGateway"
Properties:
AllocationId:
"Fn::GetAtt":
- EIP
- AllocationId
SubnetId: !Ref PublicSubnet
EIP:
Type: "AWS::EC2::EIP"
Properties:
Domain: vpc
Route:
Type: "AWS::EC2::Route"
Properties:
RouteTableId:
Ref: PrivateRouteTable
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId:
Ref: NAT
PrivateSubnet2:
Type: "AWS::EC2::Subnet"
Condition: DoAz2
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 1
- Fn::GetAZs: !Ref "AWS::Region"
PrivateRouteTable2:
Type: "AWS::EC2::RouteTable"
Condition: DoAz2
Properties:
VpcId: !Ref VPC
PrivateSubnetRouteTableAssociation2:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Condition: DoAz2
Properties:
SubnetId: !Ref PrivateSubnet2
RouteTableId: !Ref PrivateRouteTable2
NAT2:
DependsOn:
- GatewayToInternet
Type: "AWS::EC2::NatGateway"
Condition: DoAz2
Properties:
AllocationId:
"Fn::GetAtt":
- EIP2
- AllocationId
SubnetId: !Ref PublicSubnet2
EIP2:
Type: "AWS::EC2::EIP"
Condition: DoAz2
Properties:
Domain: vpc
Route2:
Type: "AWS::EC2::Route"
Condition: DoAz2
Properties:
RouteTableId:
Ref: PrivateRouteTable2
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId:
Ref: NAT2
PrivateSubnet3:
Type: "AWS::EC2::Subnet"
Condition: DoAz3
Properties:
VpcId: !Ref VPC
CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]]
AvailabilityZone: !Select
- 2
- Fn::GetAZs: !Ref "AWS::Region"
PrivateRouteTable3:
Type: "AWS::EC2::RouteTable"
Condition: DoAz3
Properties:
VpcId: !Ref VPC
PrivateSubnetRouteTableAssociation3:
Type: "AWS::EC2::SubnetRouteTableAssociation"
Condition: DoAz3
Properties:
SubnetId: !Ref PrivateSubnet3
RouteTableId: !Ref PrivateRouteTable3
NAT3:
DependsOn:
- GatewayToInternet
Type: "AWS::EC2::NatGateway"
Condition: DoAz3
Properties:
AllocationId:
"Fn::GetAtt":
- EIP3
- AllocationId
SubnetId: !Ref PublicSubnet3
EIP3:
Type: "AWS::EC2::EIP"
Condition: DoAz3
Properties:
Domain: vpc
Route3:
Type: "AWS::EC2::Route"
Condition: DoAz3
Properties:
RouteTableId:
Ref: PrivateRouteTable3
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId:
Ref: NAT3
S3Endpoint:
Type: AWS::EC2::VPCEndpoint
Properties:
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Principal: '*'
Action:
- '*'
Resource:
- '*'
RouteTableIds:
- !Ref PublicRouteTable
- !Ref PrivateRouteTable
- !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"]
- !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"]
ServiceName: !Join
- ''
- - com.amazonaws.
- !Ref 'AWS::Region'
- .s3
VpcId: !Ref VPC
Outputs:
VpcId:
Description: ID of the new VPC.
Value: !Ref VPC
PublicSubnetIds:
Description: Subnet IDs of the public subnets.
Value:
!Join [
",",
[!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]]
]
PrivateSubnetIds:
Description: Subnet IDs of the private subnets.
Value:
!Join [
",",
[!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]]
]
4.12.9. Creating networking and load balancing components in AWS
You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags.
You can run the template multiple times within a single Virtual Private Cloud (VPC).
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- You configured an AWS account.
- You added your AWS keys and region to your local AWS profile by running aws configure.
- You generated the Ignition config files for your cluster.
- You created and configured a VPC and associated subnets in AWS.
Procedure
Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command:

$ aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1

1  For the <route53_domain>, specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster.

Example output

mycluster.example.com. False 100
HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10

In the example output, the hosted zone ID is Z21IXYZABCZ2A4.

Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "ClusterName",1 "ParameterValue": "mycluster"2 }, { "ParameterKey": "InfrastructureName",3 "ParameterValue": "mycluster-<random_string>"4 }, { "ParameterKey": "HostedZoneId",5 "ParameterValue": "<random_string>"6 }, { "ParameterKey": "HostedZoneName",7 "ParameterValue": "example.com"8 }, { "ParameterKey": "PublicSubnets",9 "ParameterValue": "subnet-<random_string>"10 }, { "ParameterKey": "PrivateSubnets",11 "ParameterValue": "subnet-<random_string>"12 }, { "ParameterKey": "VpcId",13 "ParameterValue": "vpc-<random_string>"14 } ]- 1
- A short, representative cluster name to use for hostnames, etc.
- 2
- Specify the cluster name that you used when you generated the
install-config.yamlfile for the cluster. - 3
- The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
- 4
- Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format
<cluster-name>-<random-string>. - 5
- The Route 53 public zone ID to register the targets with.
- 6
- Specify the Route 53 public zone ID, which as a format similar to
Z21IXYZABCZ2A4. You can obtain this value from the AWS console. - 7
- The Route 53 zone to register the targets with.
- 8
- Specify the Route 53 base domain that you used when you generated the
install-config.yamlfile for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. - 9
- The public subnets that you created for your VPC.
- 10
- Specify the
PublicSubnetIdsvalue from the output of the CloudFormation template for the VPC. - 11
- The private subnets that you created for your VPC.
- 12
- Specify the
PrivateSubnetIdsvalue from the output of the CloudFormation template for the VPC. - 13
- The VPC that you created for the cluster.
- 14
- Specify the
VpcIdvalue from the output of the CloudFormation template for the VPC.
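Rather than copying each value by hand, you can gather them with the AWS CLI. The following is a minimal sketch, not part of the official procedure; it assumes that the VPC stack from the previous section is named cluster-vpc and that example.com is your base domain:

# Read the hosted zone ID for the base domain (strip the /hostedzone/ prefix).
HOSTED_ZONE_ID=$(aws route53 list-hosted-zones-by-name \
  --dns-name example.com \
  --query 'HostedZones[0].Id' --output text | cut -d/ -f3)

# Read one output value from a CloudFormation stack by its output key.
stack_output() {
  aws cloudformation describe-stacks --stack-name "$1" \
    --query "Stacks[0].Outputs[?OutputKey=='$2'].OutputValue" --output text
}

VPC_ID=$(stack_output cluster-vpc VpcId)
PUBLIC_SUBNETS=$(stack_output cluster-vpc PublicSubnetIds)
PRIVATE_SUBNETS=$(stack_output cluster-vpc PrivateSubnetIds)

You can then substitute these shell variables into the parameter file above.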
Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires.

Important: If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions.

Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components:

Important: You must enter the command on a single line.

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3
     --capabilities CAPABILITY_NAMED_IAM 4

1  <name> is the name for the CloudFormation stack, such as cluster-dns. You need the name of this stack if you remove the cluster.
2  <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
3  <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
4  You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources.

Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183

Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

PrivateHostedZoneId
    Hosted zone ID for the private DNS.
ExternalApiLoadBalancerName
    Full name of the external API load balancer.
InternalApiLoadBalancerName
    Full name of the internal API load balancer.
ApiServerDnsName
    Full hostname of the API server.
RegisterNlbIpTargetsLambda
    Lambda ARN useful to help register/deregister IP targets for these load balancers.
ExternalApiTargetGroupArn
    ARN of the external API target group.
InternalApiTargetGroupArn
    ARN of the internal API target group.
InternalServiceTargetGroupArn
    ARN of the internal service target group.
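If you script the installation, you can block until the stack finishes instead of polling describe-stacks yourself. A brief sketch, using cluster-dns as the example stack name from above:

# Wait until the stack reaches CREATE_COMPLETE; the command exits non-zero on failure.
aws cloudformation wait stack-create-complete --stack-name cluster-dns

# Print all stack outputs in a readable table.
aws cloudformation describe-stacks --stack-name cluster-dns \
  --query 'Stacks[0].Outputs' --output table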
4.12.9.1. CloudFormation template for the network and load balancers
You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster.
Example 4.56. CloudFormation template for the network and load balancers
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Network Elements (Route53 & LBs)
Parameters:
ClusterName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
Description: A short, representative cluster name to use for host names and other identifying names.
Type: String
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
Type: String
HostedZoneId:
Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4.
Type: String
HostedZoneName:
Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period.
Type: String
Default: "example.com"
PublicSubnets:
Description: The internet-facing subnets.
Type: List<AWS::EC2::Subnet::Id>
PrivateSubnets:
Description: The internal subnets.
Type: List<AWS::EC2::Subnet::Id>
VpcId:
Description: The VPC-scoped resources will belong to this VPC.
Type: AWS::EC2::VPC::Id
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- ClusterName
- InfrastructureName
- Label:
default: "Network Configuration"
Parameters:
- VpcId
- PublicSubnets
- PrivateSubnets
- Label:
default: "DNS"
Parameters:
- HostedZoneName
- HostedZoneId
ParameterLabels:
ClusterName:
default: "Cluster Name"
InfrastructureName:
default: "Infrastructure Name"
VpcId:
default: "VPC ID"
PublicSubnets:
default: "Public Subnets"
PrivateSubnets:
default: "Private Subnets"
HostedZoneName:
default: "Public Hosted Zone Name"
HostedZoneId:
default: "Public Hosted Zone ID"
Resources:
ExtApiElb:
Type: AWS::ElasticLoadBalancingV2::LoadBalancer
Properties:
Name: !Join ["-", [!Ref InfrastructureName, "ext"]]
IpAddressType: ipv4
Subnets: !Ref PublicSubnets
Type: network
IntApiElb:
Type: AWS::ElasticLoadBalancingV2::LoadBalancer
Properties:
Name: !Join ["-", [!Ref InfrastructureName, "int"]]
Scheme: internal
IpAddressType: ipv4
Subnets: !Ref PrivateSubnets
Type: network
IntDns:
Type: "AWS::Route53::HostedZone"
Properties:
HostedZoneConfig:
Comment: "Managed by CloudFormation"
Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]]
HostedZoneTags:
- Key: Name
Value: !Join ["-", [!Ref InfrastructureName, "int"]]
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "owned"
VPCs:
- VPCId: !Ref VpcId
VPCRegion: !Ref "AWS::Region"
ExternalApiServerRecord:
Type: AWS::Route53::RecordSetGroup
Properties:
Comment: Alias record for the API server
HostedZoneId: !Ref HostedZoneId
RecordSets:
- Name:
!Join [
".",
["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
]
Type: A
AliasTarget:
HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID
DNSName: !GetAtt ExtApiElb.DNSName
InternalApiServerRecord:
Type: AWS::Route53::RecordSetGroup
Properties:
Comment: Alias record for the API server
HostedZoneId: !Ref IntDns
RecordSets:
- Name:
!Join [
".",
["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
]
Type: A
AliasTarget:
HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
DNSName: !GetAtt IntApiElb.DNSName
- Name:
!Join [
".",
["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]],
]
Type: A
AliasTarget:
HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID
DNSName: !GetAtt IntApiElb.DNSName
ExternalApiListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- Type: forward
TargetGroupArn:
Ref: ExternalApiTargetGroup
LoadBalancerArn:
Ref: ExtApiElb
Port: 6443
Protocol: TCP
ExternalApiTargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
HealthCheckIntervalSeconds: 10
HealthCheckPath: "/readyz"
HealthCheckPort: 6443
HealthCheckProtocol: HTTPS
HealthyThresholdCount: 2
UnhealthyThresholdCount: 2
Port: 6443
Protocol: TCP
TargetType: ip
VpcId:
Ref: VpcId
TargetGroupAttributes:
- Key: deregistration_delay.timeout_seconds
Value: 60
InternalApiListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- Type: forward
TargetGroupArn:
Ref: InternalApiTargetGroup
LoadBalancerArn:
Ref: IntApiElb
Port: 6443
Protocol: TCP
InternalApiTargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
HealthCheckIntervalSeconds: 10
HealthCheckPath: "/readyz"
HealthCheckPort: 6443
HealthCheckProtocol: HTTPS
HealthyThresholdCount: 2
UnhealthyThresholdCount: 2
Port: 6443
Protocol: TCP
TargetType: ip
VpcId:
Ref: VpcId
TargetGroupAttributes:
- Key: deregistration_delay.timeout_seconds
Value: 60
InternalServiceInternalListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- Type: forward
TargetGroupArn:
Ref: InternalServiceTargetGroup
LoadBalancerArn:
Ref: IntApiElb
Port: 22623
Protocol: TCP
InternalServiceTargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
HealthCheckIntervalSeconds: 10
HealthCheckPath: "/healthz"
HealthCheckPort: 22623
HealthCheckProtocol: HTTPS
HealthyThresholdCount: 2
UnhealthyThresholdCount: 2
Port: 22623
Protocol: TCP
TargetType: ip
VpcId:
Ref: VpcId
TargetGroupAttributes:
- Key: deregistration_delay.timeout_seconds
Value: 60
RegisterTargetLambdaIamRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]]
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "lambda.amazonaws.com"
Action:
- "sts:AssumeRole"
Path: "/"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action:
[
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:DeregisterTargets",
]
Resource: !Ref InternalApiTargetGroup
- Effect: "Allow"
Action:
[
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:DeregisterTargets",
]
Resource: !Ref InternalServiceTargetGroup
- Effect: "Allow"
Action:
[
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:DeregisterTargets",
]
Resource: !Ref ExternalApiTargetGroup
RegisterNlbIpTargets:
Type: "AWS::Lambda::Function"
Properties:
Handler: "index.handler"
Role:
Fn::GetAtt:
- "RegisterTargetLambdaIamRole"
- "Arn"
Code:
ZipFile: |
import json
import boto3
import cfnresponse
def handler(event, context):
elb = boto3.client('elbv2')
if event['RequestType'] == 'Delete':
elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
elif event['RequestType'] == 'Create':
elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}])
responseData = {}
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp'])
Runtime: "python3.7"
Timeout: 120
RegisterSubnetTagsLambdaIamRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]]
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "lambda.amazonaws.com"
Action:
- "sts:AssumeRole"
Path: "/"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action:
[
"ec2:DeleteTags",
"ec2:CreateTags"
]
Resource: "arn:aws:ec2:*:*:subnet/*"
- Effect: "Allow"
Action:
[
"ec2:DescribeSubnets",
"ec2:DescribeTags"
]
Resource: "*"
RegisterSubnetTags:
Type: "AWS::Lambda::Function"
Properties:
Handler: "index.handler"
Role:
Fn::GetAtt:
- "RegisterSubnetTagsLambdaIamRole"
- "Arn"
Code:
ZipFile: |
import json
import boto3
import cfnresponse
def handler(event, context):
ec2_client = boto3.client('ec2')
if event['RequestType'] == 'Delete':
for subnet_id in event['ResourceProperties']['Subnets']:
ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]);
elif event['RequestType'] == 'Create':
for subnet_id in event['ResourceProperties']['Subnets']:
ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]);
responseData = {}
cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0])
Runtime: "python3.7"
Timeout: 120
RegisterPublicSubnetTags:
Type: Custom::SubnetRegister
Properties:
ServiceToken: !GetAtt RegisterSubnetTags.Arn
InfrastructureName: !Ref InfrastructureName
Subnets: !Ref PublicSubnets
RegisterPrivateSubnetTags:
Type: Custom::SubnetRegister
Properties:
ServiceToken: !GetAtt RegisterSubnetTags.Arn
InfrastructureName: !Ref InfrastructureName
Subnets: !Ref PrivateSubnets
Outputs:
PrivateHostedZoneId:
Description: Hosted zone ID for the private DNS, which is required for private records.
Value: !Ref IntDns
ExternalApiLoadBalancerName:
Description: Full name of the external API load balancer.
Value: !GetAtt ExtApiElb.LoadBalancerFullName
InternalApiLoadBalancerName:
Description: Full name of the internal API load balancer.
Value: !GetAtt IntApiElb.LoadBalancerFullName
ApiServerDnsName:
Description: Full hostname of the API server, which is required for the Ignition config files.
Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]]
RegisterNlbIpTargetsLambda:
Description: Lambda ARN useful to help register or deregister IP targets for these load balancers.
Value: !GetAtt RegisterNlbIpTargets.Arn
ExternalApiTargetGroupArn:
Description: ARN of the external API target group.
Value: !Ref ExternalApiTargetGroup
InternalApiTargetGroupArn:
Description: ARN of the internal API target group.
Value: !Ref InternalApiTargetGroup
InternalServiceTargetGroupArn:
Description: ARN of the internal service target group.
Value: !Ref InternalServiceTargetGroup
If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example:
Type: CNAME
TTL: 10
ResourceRecords:
- !GetAtt IntApiElb.DNSName
4.12.10. Creating security groups and roles in AWS
You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires.
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- You configured an AWS account.
- You added your AWS keys and region to your local AWS profile by running aws configure.
- You generated the Ignition config files for your cluster.
- You created and configured a VPC and associated subnets in AWS.
Procedure
Create a JSON file that contains the parameter values that the template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "VpcCidr", 3
    "ParameterValue": "10.0.0.0/16" 4
  },
  {
    "ParameterKey": "PrivateSubnets", 5
    "ParameterValue": "subnet-<random_string>" 6
  },
  {
    "ParameterKey": "VpcId", 7
    "ParameterValue": "vpc-<random_string>" 8
  }
]

1  The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
2  Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
3  The CIDR block for the VPC.
4  Specify the CIDR block parameter that you used for the VPC, in the form x.x.x.x/16-24.
5  The private subnets that you created for your VPC.
6  Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.
7  The VPC that you created for the cluster.
8  Specify the VpcId value from the output of the CloudFormation template for the VPC.
Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires.

Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles:

Important: You must enter the command on a single line.

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3
     --capabilities CAPABILITY_NAMED_IAM 4

1  <name> is the name for the CloudFormation stack, such as cluster-sec. You need the name of this stack if you remove the cluster.
2  <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
3  <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
4  You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.

Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db

Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

MasterSecurityGroupId
    Master Security Group ID
WorkerSecurityGroupId
    Worker Security Group ID
MasterInstanceProfile
    Master IAM Instance Profile
WorkerInstanceProfile
    Worker IAM Instance Profile
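As with the other stacks, you can capture these outputs programmatically instead of copying them from the console. A minimal sketch, assuming the example stack name cluster-sec used above:

# Capture the security stack outputs for use in later parameter files.
MASTER_SG=$(aws cloudformation describe-stacks --stack-name cluster-sec \
  --query "Stacks[0].Outputs[?OutputKey=='MasterSecurityGroupId'].OutputValue" \
  --output text)
MASTER_PROFILE=$(aws cloudformation describe-stacks --stack-name cluster-sec \
  --query "Stacks[0].Outputs[?OutputKey=='MasterInstanceProfile'].OutputValue" \
  --output text)
echo "MasterSecurityGroupId: ${MASTER_SG}"
echo "MasterInstanceProfile: ${MASTER_PROFILE}"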
4.12.10.1. CloudFormation template for security objects
You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster.
Example 4.57. CloudFormation template for security objects
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM)
Parameters:
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
Type: String
VpcCidr:
AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24.
Default: 10.0.0.0/16
Description: CIDR block for VPC.
Type: String
VpcId:
Description: The VPC-scoped resources will belong to this VPC.
Type: AWS::EC2::VPC::Id
PrivateSubnets:
Description: The internal subnets.
Type: List<AWS::EC2::Subnet::Id>
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- InfrastructureName
- Label:
default: "Network Configuration"
Parameters:
- VpcId
- VpcCidr
- PrivateSubnets
ParameterLabels:
InfrastructureName:
default: "Infrastructure Name"
VpcId:
default: "VPC ID"
VpcCidr:
default: "VPC CIDR"
PrivateSubnets:
default: "Private Subnets"
Resources:
MasterSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Cluster Master Security Group
SecurityGroupIngress:
- IpProtocol: icmp
FromPort: 0
ToPort: 0
CidrIp: !Ref VpcCidr
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: !Ref VpcCidr
- IpProtocol: tcp
ToPort: 6443
FromPort: 6443
CidrIp: !Ref VpcCidr
- IpProtocol: tcp
FromPort: 22623
ToPort: 22623
CidrIp: !Ref VpcCidr
VpcId: !Ref VpcId
WorkerSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Cluster Worker Security Group
SecurityGroupIngress:
- IpProtocol: icmp
FromPort: 0
ToPort: 0
CidrIp: !Ref VpcCidr
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: !Ref VpcCidr
VpcId: !Ref VpcId
MasterIngressEtcd:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: etcd
FromPort: 2379
ToPort: 2380
IpProtocol: tcp
MasterIngressVxlan:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Vxlan packets
FromPort: 4789
ToPort: 4789
IpProtocol: udp
MasterIngressWorkerVxlan:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Vxlan packets
FromPort: 4789
ToPort: 4789
IpProtocol: udp
MasterIngressGeneve:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Geneve packets
FromPort: 6081
ToPort: 6081
IpProtocol: udp
MasterIngressWorkerGeneve:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Geneve packets
FromPort: 6081
ToPort: 6081
IpProtocol: udp
MasterIngressIpsecIke:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: IPsec IKE packets
FromPort: 500
ToPort: 500
IpProtocol: udp
MasterIngressIpsecNat:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: IPsec NAT-T packets
FromPort: 4500
ToPort: 4500
IpProtocol: udp
MasterIngressIpsecEsp:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: IPsec ESP packets
IpProtocol: 50
MasterIngressWorkerIpsecIke:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: IPsec IKE packets
FromPort: 500
ToPort: 500
IpProtocol: udp
MasterIngressWorkerIpsecNat:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: IPsec NAT-T packets
FromPort: 4500
ToPort: 4500
IpProtocol: udp
MasterIngressWorkerIpsecEsp:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: IPsec ESP packets
IpProtocol: 50
MasterIngressInternal:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: tcp
MasterIngressWorkerInternal:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: tcp
MasterIngressInternalUDP:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: udp
MasterIngressWorkerInternalUDP:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: udp
MasterIngressKube:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Kubernetes kubelet, scheduler and controller manager
FromPort: 10250
ToPort: 10259
IpProtocol: tcp
MasterIngressWorkerKube:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes kubelet, scheduler and controller manager
FromPort: 10250
ToPort: 10259
IpProtocol: tcp
MasterIngressIngressServices:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: tcp
MasterIngressWorkerIngressServices:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: tcp
MasterIngressIngressServicesUDP:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: udp
MasterIngressWorkerIngressServicesUDP:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt MasterSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: udp
WorkerIngressVxlan:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Vxlan packets
FromPort: 4789
ToPort: 4789
IpProtocol: udp
WorkerIngressMasterVxlan:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Vxlan packets
FromPort: 4789
ToPort: 4789
IpProtocol: udp
WorkerIngressGeneve:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Geneve packets
FromPort: 6081
ToPort: 6081
IpProtocol: udp
WorkerIngressMasterGeneve:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Geneve packets
FromPort: 6081
ToPort: 6081
IpProtocol: udp
WorkerIngressIpsecIke:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: IPsec IKE packets
FromPort: 500
ToPort: 500
IpProtocol: udp
WorkerIngressIpsecNat:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: IPsec NAT-T packets
FromPort: 4500
ToPort: 4500
IpProtocol: udp
WorkerIngressIpsecEsp:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: IPsec ESP packets
IpProtocol: 50
WorkerIngressMasterIpsecIke:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: IPsec IKE packets
FromPort: 500
ToPort: 500
IpProtocol: udp
WorkerIngressMasterIpsecNat:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: IPsec NAT-T packets
FromPort: 4500
ToPort: 4500
IpProtocol: udp
WorkerIngressMasterIpsecEsp:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: IPsec ESP packets
IpProtocol: 50
WorkerIngressInternal:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: tcp
WorkerIngressMasterInternal:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: tcp
WorkerIngressInternalUDP:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: udp
WorkerIngressMasterInternalUDP:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Internal cluster communication
FromPort: 9000
ToPort: 9999
IpProtocol: udp
WorkerIngressKube:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes secure kubelet port
FromPort: 10250
ToPort: 10250
IpProtocol: tcp
WorkerIngressWorkerKube:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Internal Kubernetes communication
FromPort: 10250
ToPort: 10250
IpProtocol: tcp
WorkerIngressIngressServices:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: tcp
WorkerIngressMasterIngressServices:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: tcp
WorkerIngressIngressServicesUDP:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: udp
WorkerIngressMasterIngressServicesUDP:
Type: AWS::EC2::SecurityGroupIngress
Properties:
GroupId: !GetAtt WorkerSecurityGroup.GroupId
SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId
Description: Kubernetes ingress services
FromPort: 30000
ToPort: 32767
IpProtocol: udp
MasterIamRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "ec2.amazonaws.com"
Action:
- "sts:AssumeRole"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action:
- "ec2:AttachVolume"
- "ec2:AuthorizeSecurityGroupIngress"
- "ec2:CreateSecurityGroup"
- "ec2:CreateTags"
- "ec2:CreateVolume"
- "ec2:DeleteSecurityGroup"
- "ec2:DeleteVolume"
- "ec2:Describe*"
- "ec2:DetachVolume"
- "ec2:ModifyInstanceAttribute"
- "ec2:ModifyVolume"
- "ec2:RevokeSecurityGroupIngress"
- "elasticloadbalancing:AddTags"
- "elasticloadbalancing:AttachLoadBalancerToSubnets"
- "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer"
- "elasticloadbalancing:CreateListener"
- "elasticloadbalancing:CreateLoadBalancer"
- "elasticloadbalancing:CreateLoadBalancerPolicy"
- "elasticloadbalancing:CreateLoadBalancerListeners"
- "elasticloadbalancing:CreateTargetGroup"
- "elasticloadbalancing:ConfigureHealthCheck"
- "elasticloadbalancing:DeleteListener"
- "elasticloadbalancing:DeleteLoadBalancer"
- "elasticloadbalancing:DeleteLoadBalancerListeners"
- "elasticloadbalancing:DeleteTargetGroup"
- "elasticloadbalancing:DeregisterInstancesFromLoadBalancer"
- "elasticloadbalancing:DeregisterTargets"
- "elasticloadbalancing:Describe*"
- "elasticloadbalancing:DetachLoadBalancerFromSubnets"
- "elasticloadbalancing:ModifyListener"
- "elasticloadbalancing:ModifyLoadBalancerAttributes"
- "elasticloadbalancing:ModifyTargetGroup"
- "elasticloadbalancing:ModifyTargetGroupAttributes"
- "elasticloadbalancing:RegisterInstancesWithLoadBalancer"
- "elasticloadbalancing:RegisterTargets"
- "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer"
- "elasticloadbalancing:SetLoadBalancerPoliciesOfListener"
- "kms:DescribeKey"
Resource: "*"
MasterInstanceProfile:
Type: "AWS::IAM::InstanceProfile"
Properties:
Roles:
- Ref: "MasterIamRole"
WorkerIamRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "ec2.amazonaws.com"
Action:
- "sts:AssumeRole"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action:
- "ec2:DescribeInstances"
- "ec2:DescribeRegions"
Resource: "*"
WorkerInstanceProfile:
Type: "AWS::IAM::InstanceProfile"
Properties:
Roles:
- Ref: "WorkerIamRole"
Outputs:
MasterSecurityGroupId:
Description: Master Security Group ID
Value: !GetAtt MasterSecurityGroup.GroupId
WorkerSecurityGroupId:
Description: Worker Security Group ID
Value: !GetAtt WorkerSecurityGroup.GroupId
MasterInstanceProfile:
Description: Master IAM Instance Profile
Value: !Ref MasterInstanceProfile
WorkerInstanceProfile:
Description: Worker IAM Instance Profile
Value: !Ref WorkerInstanceProfile
4.12.11. Accessing RHCOS AMIs with stream metadata
In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation.
You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format.
For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI.
Procedure
To parse the stream metadata, use one of the following methods:

- From a Go program, use the official stream-metadata-go library at https://github.com/coreos/stream-metadata-go. You can also view example code in the library.
- From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language.
- From a command-line utility that handles JSON data, such as jq, print the current x86_64 AMI for an AWS region, such as us-west-1:

$ openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image'

Example output

ami-0d3e625f84626bbda

The output of this command is the AWS AMI ID for the us-west-1 region. The AMI must belong to the same region as the cluster.
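The stream metadata also records which regions have a published AMI at all. For example, the following sketch lists every region with a published x86_64 AMI; the jq expression follows the same structure as the command above:

$ openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions | keys[]'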
4.12.12. RHCOS AMIs for the AWS infrastructure
Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions that you can manually specify for your OpenShift Container Platform nodes.
By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI.
| AWS zone | AWS AMI |
|---|---|

[Table: the RHCOS AMI ID for each AWS zone; the AMI values are specific to this release.]
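Because the AMI IDs in this table are release-specific, you can also recover the current value for any region with the stream metadata command from the previous section. For example, for us-east-1:

$ openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-east-1"].image'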
4.12.13. Creating the bootstrap node in AWS
You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires.
If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- You configured an AWS account.
- You added your AWS keys and region to your local AWS profile by running aws configure.
- You generated the Ignition config files for your cluster.
- You created and configured a VPC and associated subnets in AWS.
- You created and configured DNS, load balancers, and listeners in AWS.
- You created the security groups and roles required for your cluster in AWS.
Procedure
Provide a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. One way to do this is to create an S3 bucket in your cluster’s region and upload the Ignition config file to it.

Important: The provided CloudFormation template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates.

Important: If you are deploying to a region that has endpoints that differ from the AWS SDK, or you are providing your own custom endpoints, you must use a presigned URL for your S3 bucket instead of the s3:// schema.

Note: The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach.

Create the bucket:

$ aws s3 mb s3://<cluster-name>-infra 1

1  <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster.

Upload the bootstrap.ign Ignition config file to the bucket:

$ aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1

1  For <installation_directory>, specify the path to the directory that you stored the installation files in.

Verify that the file uploaded:

$ aws s3 ls s3://<cluster-name>-infra/

Example output

2019-04-03 16:15:16     314878 bootstrap.ign
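If your region requires a presigned URL instead of the s3:// schema, as noted in the second Important admonition above, you can generate one with the AWS CLI. A sketch; the one-hour expiry is an arbitrary example value, so choose one long enough to cover bootstrap provisioning:

$ aws s3 presign s3://<cluster-name>-infra/bootstrap.ign --expires-in 3600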
Create a JSON file that contains the parameter values that the template requires:

[
  {
    "ParameterKey": "InfrastructureName", 1
    "ParameterValue": "mycluster-<random_string>" 2
  },
  {
    "ParameterKey": "RhcosAmi", 3
    "ParameterValue": "ami-<random_string>" 4
  },
  {
    "ParameterKey": "AllowedBootstrapSshCidr", 5
    "ParameterValue": "0.0.0.0/0" 6
  },
  {
    "ParameterKey": "PublicSubnet", 7
    "ParameterValue": "subnet-<random_string>" 8
  },
  {
    "ParameterKey": "MasterSecurityGroupId", 9
    "ParameterValue": "sg-<random_string>" 10
  },
  {
    "ParameterKey": "VpcId", 11
    "ParameterValue": "vpc-<random_string>" 12
  },
  {
    "ParameterKey": "BootstrapIgnitionLocation", 13
    "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14
  },
  {
    "ParameterKey": "AutoRegisterELB", 15
    "ParameterValue": "yes" 16
  },
  {
    "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17
    "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18
  },
  {
    "ParameterKey": "ExternalApiTargetGroupArn", 19
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20
  },
  {
    "ParameterKey": "InternalApiTargetGroupArn", 21
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22
  },
  {
    "ParameterKey": "InternalServiceTargetGroupArn", 23
    "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24
  }
]

1  The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
2  Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
3  Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node.
4  Specify a valid AWS::EC2::Image::Id value.
5  CIDR block to allow SSH access to the bootstrap node.
6  Specify a CIDR block in the format x.x.x.x/16-24.
7  The public subnet that is associated with your VPC to launch the bootstrap node into.
8  Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.
9  The master security group ID, which is used for registering temporary rules.
10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
11 The VPC that the created resources will belong to.
12 Specify the VpcId value from the output of the CloudFormation template for the VPC.
13 The location to fetch the bootstrap Ignition config file from.
14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign.
15 Whether or not to register a network load balancer (NLB).
16 Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.
17 The ARN for the NLB IP target registration lambda group.
18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
19 The ARN for the external API load balancer target group.
20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
21 The ARN for the internal API load balancer target group.
22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
23 The ARN for the internal service load balancer target group.
24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires.

Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node:

Important: You must enter the command on a single line.

$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3
     --capabilities CAPABILITY_NAMED_IAM 4

1  <name> is the name for the CloudFormation stack, such as cluster-bootstrap. You need the name of this stack if you remove the cluster.
2  <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
3  <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
4  You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.

Example output

arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83

Confirm that the template components exist:

$ aws cloudformation describe-stacks --stack-name <name>

After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

BootstrapInstanceId
    The bootstrap Instance ID.
BootstrapPublicIp
    The bootstrap node public IP address.
BootstrapPrivateIp
    The bootstrap node private IP address.
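The bootstrap template that follows opens TCP port 19531, which is served by the systemd journal gateway on the bootstrap node. If initialization stalls, you can read the bootstrap journal remotely; this is an illustrative sketch, substituting the BootstrapPublicIp output value for <bootstrap_public_ip>:

# Fetch journal entries for the current boot from systemd-journal-gatewayd.
$ curl -s "http://<bootstrap_public_ip>:19531/entries?boot" | tail -n 50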
4.12.13.1. CloudFormation template for the bootstrap machine
You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster.
Example 4.58. CloudFormation template for the bootstrap machine
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM)
Parameters:
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster.
Type: String
RhcosAmi:
Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap.
Type: AWS::EC2::Image::Id
AllowedBootstrapSshCidr:
AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))$
ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32.
Default: 0.0.0.0/0
Description: CIDR block to allow SSH access to the bootstrap node.
Type: String
PublicSubnet:
Description: The public subnet to launch the bootstrap node into.
Type: AWS::EC2::Subnet::Id
MasterSecurityGroupId:
Description: The master security group ID for registering temporary rules.
Type: AWS::EC2::SecurityGroup::Id
VpcId:
Description: The VPC-scoped resources will belong to this VPC.
Type: AWS::EC2::VPC::Id
BootstrapIgnitionLocation:
Default: s3://my-s3-bucket/bootstrap.ign
Description: Ignition config file location.
Type: String
AutoRegisterELB:
Default: "yes"
AllowedValues:
- "yes"
- "no"
Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
Type: String
RegisterNlbIpTargetsLambdaArn:
Description: ARN for NLB IP target registration lambda.
Type: String
ExternalApiTargetGroupArn:
Description: ARN for external API load balancer target group.
Type: String
InternalApiTargetGroupArn:
Description: ARN for internal API load balancer target group.
Type: String
InternalServiceTargetGroupArn:
Description: ARN for internal service load balancer target group.
Type: String
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- InfrastructureName
- Label:
default: "Host Information"
Parameters:
- RhcosAmi
- BootstrapIgnitionLocation
- MasterSecurityGroupId
- Label:
default: "Network Configuration"
Parameters:
- VpcId
- AllowedBootstrapSshCidr
- PublicSubnet
- Label:
default: "Load Balancer Automation"
Parameters:
- AutoRegisterELB
- RegisterNlbIpTargetsLambdaArn
- ExternalApiTargetGroupArn
- InternalApiTargetGroupArn
- InternalServiceTargetGroupArn
ParameterLabels:
InfrastructureName:
default: "Infrastructure Name"
VpcId:
default: "VPC ID"
AllowedBootstrapSshCidr:
default: "Allowed SSH Source"
PublicSubnet:
default: "Public Subnet"
RhcosAmi:
default: "Red Hat Enterprise Linux CoreOS AMI ID"
BootstrapIgnitionLocation:
default: "Bootstrap Ignition Source"
MasterSecurityGroupId:
default: "Master Security Group ID"
AutoRegisterELB:
default: "Use Provided ELB Automation"
Conditions:
DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
Resources:
BootstrapIamRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service:
- "ec2.amazonaws.com"
Action:
- "sts:AssumeRole"
Path: "/"
Policies:
- PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]]
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action: "ec2:Describe*"
Resource: "*"
- Effect: "Allow"
Action: "ec2:AttachVolume"
Resource: "*"
- Effect: "Allow"
Action: "ec2:DetachVolume"
Resource: "*"
- Effect: "Allow"
Action: "s3:GetObject"
Resource: "*"
BootstrapInstanceProfile:
Type: "AWS::IAM::InstanceProfile"
Properties:
Path: "/"
Roles:
- Ref: "BootstrapIamRole"
BootstrapSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Cluster Bootstrap Security Group
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: !Ref AllowedBootstrapSshCidr
- IpProtocol: tcp
ToPort: 19531
FromPort: 19531
CidrIp: 0.0.0.0/0
VpcId: !Ref VpcId
BootstrapInstance:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
IamInstanceProfile: !Ref BootstrapInstanceProfile
InstanceType: "i3.large"
NetworkInterfaces:
- AssociatePublicIpAddress: "true"
DeviceIndex: "0"
GroupSet:
- !Ref "BootstrapSecurityGroup"
- !Ref "MasterSecurityGroupId"
SubnetId: !Ref "PublicSubnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"replace":{"source":"${S3Loc}"}},"version":"3.1.0"}}'
- {
S3Loc: !Ref BootstrapIgnitionLocation
}
RegisterBootstrapApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref ExternalApiTargetGroupArn
TargetIp: !GetAtt BootstrapInstance.PrivateIp
RegisterBootstrapInternalApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalApiTargetGroupArn
TargetIp: !GetAtt BootstrapInstance.PrivateIp
RegisterBootstrapInternalServiceTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalServiceTargetGroupArn
TargetIp: !GetAtt BootstrapInstance.PrivateIp
Outputs:
BootstrapInstanceId:
Description: Bootstrap Instance ID.
Value: !Ref BootstrapInstance
BootstrapPublicIp:
Description: The bootstrap node public IP address.
Value: !GetAtt BootstrapInstance.PublicIp
BootstrapPrivateIp:
Description: The bootstrap node private IP address.
Value: !GetAtt BootstrapInstance.PrivateIp
4.12.14. Creating the control plane machines in AWS
You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes.
The CloudFormation template creates a stack that represents three control plane nodes.
If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- You configured an AWS account.
- You added your AWS keys and region to your local AWS profile by running aws configure.
- You generated the Ignition config files for your cluster.
- You created and configured a VPC and associated subnets in AWS.
- You created and configured DNS, load balancers, and listeners in AWS.
- You created the security groups and roles required for your cluster in AWS.
- You created the bootstrap machine.
Procedure
Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "InfrastructureName",1 "ParameterValue": "mycluster-<random_string>"2 }, { "ParameterKey": "RhcosAmi",3 "ParameterValue": "ami-<random_string>"4 }, { "ParameterKey": "AutoRegisterDNS",5 "ParameterValue": "yes"6 }, { "ParameterKey": "PrivateHostedZoneId",7 "ParameterValue": "<random_string>"8 }, { "ParameterKey": "PrivateHostedZoneName",9 "ParameterValue": "mycluster.example.com"10 }, { "ParameterKey": "Master0Subnet",11 "ParameterValue": "subnet-<random_string>"12 }, { "ParameterKey": "Master1Subnet",13 "ParameterValue": "subnet-<random_string>"14 }, { "ParameterKey": "Master2Subnet",15 "ParameterValue": "subnet-<random_string>"16 }, { "ParameterKey": "MasterSecurityGroupId",17 "ParameterValue": "sg-<random_string>"18 }, { "ParameterKey": "IgnitionLocation",19 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master"20 }, { "ParameterKey": "CertificateAuthorities",21 "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz=="22 }, { "ParameterKey": "MasterInstanceProfileName",23 "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>"24 }, { "ParameterKey": "MasterInstanceType",25 "ParameterValue": "m5.xlarge"26 }, { "ParameterKey": "AutoRegisterELB",27 "ParameterValue": "yes"28 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn",29 "ParameterValue": "arn:aws:lambda:<region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>"30 }, { "ParameterKey": "ExternalApiTargetGroupArn",31 "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>"32 }, { "ParameterKey": "InternalApiTargetGroupArn",33 "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>"34 }, { "ParameterKey": "InternalServiceTargetGroupArn",35 "ParameterValue": "arn:aws:elasticloadbalancing:<region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>"36 } ]- 1
- The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
- 2
- Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format
<cluster-name>-<random-string>. - 3
- CurrentRed Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines.
- 4
- Specify an
AWS::EC2::Image::Idvalue. - 5
- Whether or not to perform DNS etcd registration.
- 6
- Specify
yesorno. If you specifyyes, you must provide hosted zone information. - 7
- The Route 53 private zone ID to register the etcd targets with.
- 8
- Specify the
PrivateHostedZoneIdvalue from the output of the CloudFormation template for DNS and load balancing. - 9
- The Route 53 zone to register the targets with.
- 10
- Specify
<cluster_name>.<domain_name>where<domain_name>is the Route 53 base domain that you used when you generatedinstall-config.yamlfile for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. - 11 13 15
- A subnet, preferably private, to launch the control plane machines on.
- 12 14 16
- Specify a subnet from the
PrivateSubnetsvalue from the output of the CloudFormation template for DNS and load balancing. - 17
- The master security group ID to associate with control plane nodes (also known as the master nodes).
- 18
- Specify the
MasterSecurityGroupIdvalue from the output of the CloudFormation template for the security group and roles. - 19
- The location to fetch control plane Ignition config file from.
- 20
- Specify the generated Ignition config file location,
https://api-int.<cluster_name>.<domain_name>:22623/config/master. - 21
- The base64 encoded certificate authority string to use.
- 22
- Specify the value from the
master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC…xYz==. - 23
- The IAM profile to associate with control plane nodes.
- 24
- Specify the
MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. - 25
- The type of AWS instance to use for the control plane machines.
- 26
- Allowed values:
-
m4.xlarge -
m4.2xlarge -
m4.4xlarge -
m4.10xlarge -
m4.16xlarge -
m5.xlarge -
m5.2xlarge -
m5.4xlarge -
m5.8xlarge -
m5.12xlarge -
m5.16xlarge -
m5a.xlarge -
m5a.2xlarge -
m5a.4xlarge -
m5a.8xlarge -
m5a.12xlarge -
m5a.16xlarge -
c4.2xlarge -
c4.4xlarge -
c4.8xlarge -
c5.2xlarge -
c5.4xlarge -
c5.9xlarge -
c5.12xlarge -
c5.18xlarge -
c5.24xlarge -
c5a.2xlarge -
c5a.4xlarge -
c5a.8xlarge -
c5a.12xlarge -
c5a.16xlarge -
c5a.24xlarge -
r4.xlarge -
r4.2xlarge -
r4.4xlarge -
r4.8xlarge -
r4.16xlarge -
r5.xlarge -
r5.2xlarge -
r5.4xlarge -
r5.8xlarge -
r5.12xlarge -
r5.16xlarge -
r5.24xlarge -
r5a.xlarge -
r5a.2xlarge -
r5a.4xlarge -
r5a.8xlarge -
r5a.12xlarge -
r5a.16xlarge -
r5a.24xlarge
-
- 27
- Whether or not to register a network load balancer (NLB).
- 28
- Specify
yesorno. If you specifyyes, you must provide a Lambda Amazon Resource Name (ARN) value. - 29
- The ARN for NLB IP target registration lambda group.
- 30
- Specify the
RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. - 31
- The ARN for external API load balancer target group.
- 32
- Specify the
ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. - 33
- The ARN for internal API load balancer target group.
- 34
- Specify the
InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. - 35
- The ARN for internal service load balancer target group.
- 36
- Specify the
InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
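Several of these values can be read straight from files that the installation program already generated, rather than typed by hand. For example, assuming you have jq installed and that <installation_directory> is your installation directory, the following sketch prints the InfrastructureName value and the CertificateAuthorities value:
$ jq -r .infraID <installation_directory>/metadata.json
$ jq -r '.ignition.security.tls.certificateAuthorities[0].source' <installation_directory>/master.ign
The second command works because the master.ign file embeds the certificate authority in the same ignition.security.tls structure that the CloudFormation template writes into the instance user data.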
- Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires.
-
If you specified an
m5 instance type as the value for MasterInstanceType, add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template.
- Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes:
Important: You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3
- 1
<name> is the name for the CloudFormation stack, such as cluster-control-plane. You need the name of this stack if you remove the cluster.
<template> is the relative path to and name of the CloudFormation template YAML file that you saved.
<parameters> is the relative path to and name of the CloudFormation parameters JSON file.
Example output
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b
Note: The CloudFormation template creates a stack that represents three control plane nodes.
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
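If you prefer the command to block until the stack is finished rather than polling describe-stacks manually, the AWS CLI provides a waiter; a minimal sketch using the stack name from the previous step:
$ aws cloudformation wait stack-create-complete --stack-name <name>
$ aws cloudformation describe-stacks --stack-name <name> --query 'Stacks[0].StackStatus' --output text
The second command should print CREATE_COMPLETE once all three control plane instances exist.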
4.12.14.1. CloudFormation template for control plane machines
You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster.
Example 4.59. CloudFormation template for control plane machines
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 master instances)
Parameters:
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
Type: String
RhcosAmi:
Description: Current Red Hat Enterprise Linux CoreOS AMI to use for the control plane machines.
Type: AWS::EC2::Image::Id
AutoRegisterDNS:
Default: "yes"
AllowedValues:
- "yes"
- "no"
Description: Do you want to invoke DNS etcd registration, which requires Hosted Zone information?
Type: String
PrivateHostedZoneId:
Description: The Route53 private zone ID to register the etcd targets with, such as Z21IXYZABCZ2A4.
Type: String
PrivateHostedZoneName:
Description: The Route53 zone to register the targets with, such as cluster.example.com. Omit the trailing period.
Type: String
Master0Subnet:
Description: The subnet, preferably private, to launch the master nodes into.
Type: AWS::EC2::Subnet::Id
Master1Subnet:
Description: The subnet, preferably private, to launch the master nodes into.
Type: AWS::EC2::Subnet::Id
Master2Subnet:
Description: The subnet, preferably private, to launch the master nodes into.
Type: AWS::EC2::Subnet::Id
MasterSecurityGroupId:
Description: The master security group ID to associate with master nodes.
Type: AWS::EC2::SecurityGroup::Id
IgnitionLocation:
Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/master
Description: Ignition config file location.
Type: String
CertificateAuthorities:
Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
Description: Base64 encoded certificate authority string to use.
Type: String
MasterInstanceProfileName:
Description: IAM profile to associate with master nodes.
Type: String
MasterInstanceType:
Default: m5.xlarge
Type: String
AllowedValues:
- "m4.xlarge"
- "m4.2xlarge"
- "m4.4xlarge"
- "m4.10xlarge"
- "m4.16xlarge"
- "m5.xlarge"
- "m5.2xlarge"
- "m5.4xlarge"
- "m5.8xlarge"
- "m5.12xlarge"
- "m5.16xlarge"
- "m5a.xlarge"
- "m5a.2xlarge"
- "m5a.4xlarge"
- "m5a.8xlarge"
- "m5a.12xlarge"
- "m5a.16xlarge"
- "c4.2xlarge"
- "c4.4xlarge"
- "c4.8xlarge"
- "c5.2xlarge"
- "c5.4xlarge"
- "c5.9xlarge"
- "c5.12xlarge"
- "c5.18xlarge"
- "c5.24xlarge"
- "c5a.2xlarge"
- "c5a.4xlarge"
- "c5a.8xlarge"
- "c5a.12xlarge"
- "c5a.16xlarge"
- "c5a.24xlarge"
- "r4.xlarge"
- "r4.2xlarge"
- "r4.4xlarge"
- "r4.8xlarge"
- "r4.16xlarge"
- "r5.xlarge"
- "r5.2xlarge"
- "r5.4xlarge"
- "r5.8xlarge"
- "r5.12xlarge"
- "r5.16xlarge"
- "r5.24xlarge"
- "r5a.xlarge"
- "r5a.2xlarge"
- "r5a.4xlarge"
- "r5a.8xlarge"
- "r5a.12xlarge"
- "r5a.16xlarge"
- "r5a.24xlarge"
AutoRegisterELB:
Default: "yes"
AllowedValues:
- "yes"
- "no"
Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter?
Type: String
RegisterNlbIpTargetsLambdaArn:
Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
Type: String
ExternalApiTargetGroupArn:
Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
Type: String
InternalApiTargetGroupArn:
Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
Type: String
InternalServiceTargetGroupArn:
Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB.
Type: String
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- InfrastructureName
- Label:
default: "Host Information"
Parameters:
- MasterInstanceType
- RhcosAmi
- IgnitionLocation
- CertificateAuthorities
- MasterSecurityGroupId
- MasterInstanceProfileName
- Label:
default: "Network Configuration"
Parameters:
- Master0Subnet
- Master1Subnet
- Master2Subnet
- Label:
default: "DNS"
Parameters:
- AutoRegisterDNS
- PrivateHostedZoneName
- PrivateHostedZoneId
- Label:
default: "Load Balancer Automation"
Parameters:
- AutoRegisterELB
- RegisterNlbIpTargetsLambdaArn
- ExternalApiTargetGroupArn
- InternalApiTargetGroupArn
- InternalServiceTargetGroupArn
ParameterLabels:
InfrastructureName:
default: "Infrastructure Name"
Master0Subnet:
default: "Master-0 Subnet"
Master1Subnet:
default: "Master-1 Subnet"
Master2Subnet:
default: "Master-2 Subnet"
MasterInstanceType:
default: "Master Instance Type"
MasterInstanceProfileName:
default: "Master Instance Profile Name"
RhcosAmi:
default: "Red Hat Enterprise Linux CoreOS AMI ID"
IgnitionLocation:
default: "Master Ignition Source"
CertificateAuthorities:
default: "Ignition CA String"
MasterSecurityGroupId:
default: "Master Security Group ID"
AutoRegisterDNS:
default: "Use Provided DNS Automation"
AutoRegisterELB:
default: "Use Provided ELB Automation"
PrivateHostedZoneName:
default: "Private Hosted Zone Name"
PrivateHostedZoneId:
default: "Private Hosted Zone ID"
Conditions:
DoRegistration: !Equals ["yes", !Ref AutoRegisterELB]
DoDns: !Equals ["yes", !Ref AutoRegisterDNS]
Resources:
Master0:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize: "120"
VolumeType: "gp2"
IamInstanceProfile: !Ref MasterInstanceProfileName
InstanceType: !Ref MasterInstanceType
NetworkInterfaces:
- AssociatePublicIpAddress: "false"
DeviceIndex: "0"
GroupSet:
- !Ref "MasterSecurityGroupId"
SubnetId: !Ref "Master0Subnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
- {
SOURCE: !Ref IgnitionLocation,
CA_BUNDLE: !Ref CertificateAuthorities,
}
Tags:
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "shared"
RegisterMaster0:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref ExternalApiTargetGroupArn
TargetIp: !GetAtt Master0.PrivateIp
RegisterMaster0InternalApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalApiTargetGroupArn
TargetIp: !GetAtt Master0.PrivateIp
RegisterMaster0InternalServiceTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalServiceTargetGroupArn
TargetIp: !GetAtt Master0.PrivateIp
Master1:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize: "120"
VolumeType: "gp2"
IamInstanceProfile: !Ref MasterInstanceProfileName
InstanceType: !Ref MasterInstanceType
NetworkInterfaces:
- AssociatePublicIpAddress: "false"
DeviceIndex: "0"
GroupSet:
- !Ref "MasterSecurityGroupId"
SubnetId: !Ref "Master1Subnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
- {
SOURCE: !Ref IgnitionLocation,
CA_BUNDLE: !Ref CertificateAuthorities,
}
Tags:
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "shared"
RegisterMaster1:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref ExternalApiTargetGroupArn
TargetIp: !GetAtt Master1.PrivateIp
RegisterMaster1InternalApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalApiTargetGroupArn
TargetIp: !GetAtt Master1.PrivateIp
RegisterMaster1InternalServiceTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalServiceTargetGroupArn
TargetIp: !GetAtt Master1.PrivateIp
Master2:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize: "120"
VolumeType: "gp2"
IamInstanceProfile: !Ref MasterInstanceProfileName
InstanceType: !Ref MasterInstanceType
NetworkInterfaces:
- AssociatePublicIpAddress: "false"
DeviceIndex: "0"
GroupSet:
- !Ref "MasterSecurityGroupId"
SubnetId: !Ref "Master2Subnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
- {
SOURCE: !Ref IgnitionLocation,
CA_BUNDLE: !Ref CertificateAuthorities,
}
Tags:
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "shared"
RegisterMaster2:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref ExternalApiTargetGroupArn
TargetIp: !GetAtt Master2.PrivateIp
RegisterMaster2InternalApiTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalApiTargetGroupArn
TargetIp: !GetAtt Master2.PrivateIp
RegisterMaster2InternalServiceTarget:
Condition: DoRegistration
Type: Custom::NLBRegister
Properties:
ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn
TargetArn: !Ref InternalServiceTargetGroupArn
TargetIp: !GetAtt Master2.PrivateIp
EtcdSrvRecords:
Condition: DoDns
Type: AWS::Route53::RecordSet
Properties:
HostedZoneId: !Ref PrivateHostedZoneId
Name: !Join [".", ["_etcd-server-ssl._tcp", !Ref PrivateHostedZoneName]]
ResourceRecords:
- !Join [
" ",
["0 10 2380", !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]],
]
- !Join [
" ",
["0 10 2380", !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]],
]
- !Join [
" ",
["0 10 2380", !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]],
]
TTL: 60
Type: SRV
Etcd0Record:
Condition: DoDns
Type: AWS::Route53::RecordSet
Properties:
HostedZoneId: !Ref PrivateHostedZoneId
Name: !Join [".", ["etcd-0", !Ref PrivateHostedZoneName]]
ResourceRecords:
- !GetAtt Master0.PrivateIp
TTL: 60
Type: A
Etcd1Record:
Condition: DoDns
Type: AWS::Route53::RecordSet
Properties:
HostedZoneId: !Ref PrivateHostedZoneId
Name: !Join [".", ["etcd-1", !Ref PrivateHostedZoneName]]
ResourceRecords:
- !GetAtt Master1.PrivateIp
TTL: 60
Type: A
Etcd2Record:
Condition: DoDns
Type: AWS::Route53::RecordSet
Properties:
HostedZoneId: !Ref PrivateHostedZoneId
Name: !Join [".", ["etcd-2", !Ref PrivateHostedZoneName]]
ResourceRecords:
- !GetAtt Master2.PrivateIp
TTL: 60
Type: A
Outputs:
PrivateIPs:
Description: The control-plane node private IP addresses.
Value:
!Join [
",",
[!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp]
]
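After the stack is created, you can read the PrivateIPs output back from the AWS CLI instead of the console. A sketch, using the stack name that you chose earlier:
$ aws cloudformation describe-stacks --stack-name <name> \
     --query 'Stacks[0].Outputs[?OutputKey==`PrivateIPs`].OutputValue' --output text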
4.12.15. Creating the worker nodes in AWS
You can create worker nodes in Amazon Web Services (AWS) for your cluster to use.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node.
The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node.
If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- You configured an AWS account.
-
You added your AWS keys and region to your local AWS profile by running
aws configure.
- You generated the Ignition config files for your cluster.
- You created and configured a VPC and associated subnets in AWS.
- You created and configured DNS, load balancers, and listeners in AWS.
- You created the security groups and roles required for your cluster in AWS.
- You created the bootstrap machine.
- You created the control plane machines.
Procedure
Create a JSON file that contains the parameter values that the CloudFormation template requires:
[ { "ParameterKey": "InfrastructureName",1 "ParameterValue": "mycluster-<random_string>"2 }, { "ParameterKey": "RhcosAmi",3 "ParameterValue": "ami-<random_string>"4 }, { "ParameterKey": "Subnet",5 "ParameterValue": "subnet-<random_string>"6 }, { "ParameterKey": "WorkerSecurityGroupId",7 "ParameterValue": "sg-<random_string>"8 }, { "ParameterKey": "IgnitionLocation",9 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker"10 }, { "ParameterKey": "CertificateAuthorities",11 "ParameterValue": ""12 }, { "ParameterKey": "WorkerInstanceProfileName",13 "ParameterValue": ""14 }, { "ParameterKey": "WorkerInstanceType",15 "ParameterValue": "m4.2xlarge"16 } ]- 1
- The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
- 2
- Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format
<cluster-name>-<random-string>. - 3
- Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes.
- 4
- Specify an
AWS::EC2::Image::Id value. - 5
- A subnet, preferably private, to launch the worker nodes on.
- 6
- Specify a subnet from the
PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. - 7
- The worker security group ID to associate with worker nodes.
- 8
- Specify the
WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles. - 9
- The location to fetch the worker Ignition config file from.
- 10
- Specify the generated Ignition config location,
https://api-int.<cluster_name>.<domain_name>:22623/config/worker. - 11
- Base64 encoded certificate authority string to use.
- 12
- Specify the value from the
worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC…xYz==. - 13
- The IAM profile to associate with worker nodes.
- 14
- Specify the
WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. - 15
- The type of AWS instance to use for the worker machines.
- 16
- Allowed values:
-
m4.large -
m4.xlarge -
m4.2xlarge -
m4.4xlarge -
m4.10xlarge -
m4.16xlarge -
m5.large -
m5.xlarge -
m5.2xlarge -
m5.4xlarge -
m5.8xlarge -
m5.12xlarge -
m5.16xlarge -
m5a.large -
m5a.xlarge -
m5a.2xlarge -
m5a.4xlarge -
m5a.8xlarge -
m5a.12xlarge -
m5a.16xlarge -
c4.large -
c4.xlarge -
c4.2xlarge -
c4.4xlarge -
c4.8xlarge -
c5.large -
c5.xlarge -
c5.2xlarge -
c5.4xlarge -
c5.9xlarge -
c5.12xlarge -
c5.18xlarge -
c5.24xlarge -
c5a.large -
c5a.xlarge -
c5a.2xlarge -
c5a.4xlarge -
c5a.8xlarge -
c5a.12xlarge -
c5a.16xlarge -
c5a.24xlarge -
r4.large -
r4.xlarge -
r4.2xlarge -
r4.4xlarge -
r4.8xlarge -
r4.16xlarge -
r5.large -
r5.xlarge -
r5.2xlarge -
r5.4xlarge -
r5.8xlarge -
r5.12xlarge -
r5.16xlarge -
r5.24xlarge -
r5a.large -
r5a.xlarge -
r5a.2xlarge -
r5a.4xlarge -
r5a.8xlarge -
r5a.12xlarge -
r5a.16xlarge -
r5a.24xlarge -
t3.large -
t3.xlarge -
t3.2xlarge -
t3a.large -
t3a.xlarge -
t3a.2xlarge
-
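As with the control plane parameters, several of these values come from the outputs of stacks you created earlier. For example, assuming your security group and roles stack is named <roles_stack>, the following sketch lists its outputs so that you can copy the WorkerSecurityGroupId and WorkerInstanceProfile values into this file:
$ aws cloudformation describe-stacks --stack-name <roles_stack> --query 'Stacks[0].Outputs' --output table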
- Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires.
-
Optional: If you specified an
m5 instance type as the value for WorkerInstanceType, add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template.
Optional: If you are deploying with an AWS Marketplace image, update the
Worker0.type.properties.ImageID parameter with the AMI ID that you obtained from your subscription.
- Launch the CloudFormation template to create a stack of AWS resources that represent a worker node:
Important: You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> 1
     --template-body file://<template>.yaml 2
     --parameters file://<parameters>.json 3
- 1
<name> is the name for the CloudFormation stack, such as cluster-worker-1. You need the name of this stack if you remove the cluster.
<template> is the relative path to and name of the CloudFormation template YAML file that you saved.
<parameters> is the relative path to and name of the CloudFormation parameters JSON file.
Example output
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59
Note: The CloudFormation template creates a stack that represents one worker node.
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name.
Important: You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template.
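Because each stack creates exactly one worker node, a small shell loop is a convenient way to create several stacks from the same template and parameter files. A minimal sketch, assuming two workers and the file names used above:
$ for INDEX in 0 1; do
>   aws cloudformation create-stack \
>     --stack-name cluster-worker-${INDEX} \
>     --template-body file://<template>.yaml \
>     --parameters file://<parameters>.json
> done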
4.12.15.1. CloudFormation template for worker machines
You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster.
Example 4.60. CloudFormation template for worker machines
AWSTemplateFormatVersion: 2010-09-09
Description: Template for OpenShift Cluster Node Launch (EC2 worker instance)
Parameters:
InfrastructureName:
AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})$
MaxLength: 27
MinLength: 1
ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters.
Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider.
Type: String
RhcosAmi:
Description: Current Red Hat Enterprise Linux CoreOS AMI to use for the worker nodes.
Type: AWS::EC2::Image::Id
Subnet:
Description: The subnet, preferably private, to launch the worker nodes into.
Type: AWS::EC2::Subnet::Id
WorkerSecurityGroupId:
Description: The worker security group ID to associate with worker nodes.
Type: AWS::EC2::SecurityGroup::Id
IgnitionLocation:
Default: https://api-int.$CLUSTER_NAME.$DOMAIN:22623/config/worker
Description: Ignition config file location.
Type: String
CertificateAuthorities:
Default: data:text/plain;charset=utf-8;base64,ABC...xYz==
Description: Base64 encoded certificate authority string to use.
Type: String
WorkerInstanceProfileName:
Description: IAM profile to associate with worker nodes.
Type: String
WorkerInstanceType:
Default: m5.large
Type: String
AllowedValues:
- "m4.large"
- "m4.xlarge"
- "m4.2xlarge"
- "m4.4xlarge"
- "m4.10xlarge"
- "m4.16xlarge"
- "m5.large"
- "m5.xlarge"
- "m5.2xlarge"
- "m5.4xlarge"
- "m5.8xlarge"
- "m5.12xlarge"
- "m5.16xlarge"
- "m5a.large"
- "m5a.xlarge"
- "m5a.2xlarge"
- "m5a.4xlarge"
- "m5a.8xlarge"
- "m5a.12xlarge"
- "m5a.16xlarge"
- "c4.large"
- "c4.xlarge"
- "c4.2xlarge"
- "c4.4xlarge"
- "c4.8xlarge"
- "c5.large"
- "c5.xlarge"
- "c5.2xlarge"
- "c5.4xlarge"
- "c5.9xlarge"
- "c5.12xlarge"
- "c5.18xlarge"
- "c5.24xlarge"
- "c5a.large"
- "c5a.xlarge"
- "c5a.2xlarge"
- "c5a.4xlarge"
- "c5a.8xlarge"
- "c5a.12xlarge"
- "c5a.16xlarge"
- "c5a.24xlarge"
- "r4.large"
- "r4.xlarge"
- "r4.2xlarge"
- "r4.4xlarge"
- "r4.8xlarge"
- "r4.16xlarge"
- "r5.large"
- "r5.xlarge"
- "r5.2xlarge"
- "r5.4xlarge"
- "r5.8xlarge"
- "r5.12xlarge"
- "r5.16xlarge"
- "r5.24xlarge"
- "r5a.large"
- "r5a.xlarge"
- "r5a.2xlarge"
- "r5a.4xlarge"
- "r5a.8xlarge"
- "r5a.12xlarge"
- "r5a.16xlarge"
- "r5a.24xlarge"
- "t3.large"
- "t3.xlarge"
- "t3.2xlarge"
- "t3a.large"
- "t3a.xlarge"
- "t3a.2xlarge"
Metadata:
AWS::CloudFormation::Interface:
ParameterGroups:
- Label:
default: "Cluster Information"
Parameters:
- InfrastructureName
- Label:
default: "Host Information"
Parameters:
- WorkerInstanceType
- RhcosAmi
- IgnitionLocation
- CertificateAuthorities
- WorkerSecurityGroupId
- WorkerInstanceProfileName
- Label:
default: "Network Configuration"
Parameters:
- Subnet
ParameterLabels:
Subnet:
default: "Subnet"
InfrastructureName:
default: "Infrastructure Name"
WorkerInstanceType:
default: "Worker Instance Type"
WorkerInstanceProfileName:
default: "Worker Instance Profile Name"
RhcosAmi:
default: "Red Hat Enterprise Linux CoreOS AMI ID"
IgnitionLocation:
default: "Worker Ignition Source"
CertificateAuthorities:
default: "Ignition CA String"
WorkerSecurityGroupId:
default: "Worker Security Group ID"
Resources:
Worker0:
Type: AWS::EC2::Instance
Properties:
ImageId: !Ref RhcosAmi
BlockDeviceMappings:
- DeviceName: /dev/xvda
Ebs:
VolumeSize: "120"
VolumeType: "gp2"
IamInstanceProfile: !Ref WorkerInstanceProfileName
InstanceType: !Ref WorkerInstanceType
NetworkInterfaces:
- AssociatePublicIpAddress: "false"
DeviceIndex: "0"
GroupSet:
- !Ref "WorkerSecurityGroupId"
SubnetId: !Ref "Subnet"
UserData:
Fn::Base64: !Sub
- '{"ignition":{"config":{"merge":[{"source":"${SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"${CA_BUNDLE}"}]}},"version":"3.1.0"}}'
- {
SOURCE: !Ref IgnitionLocation,
CA_BUNDLE: !Ref CertificateAuthorities,
}
Tags:
- Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]]
Value: "shared"
Outputs:
PrivateIP:
Description: The compute node private IP address.
Value: !GetAtt Worker0.PrivateIp
4.12.16. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure
After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane.
Prerequisites
- You configured an AWS account.
-
You added your AWS keys and region to your local AWS profile by running
aws configure.
- You generated the Ignition config files for your cluster.
- You created and configured a VPC and associated subnets in AWS.
- You created and configured DNS, load balancers, and listeners in AWS.
- You created the security groups and roles required for your cluster in AWS.
- You created the bootstrap machine.
- You created the control plane machines.
- You created the worker nodes.
Procedure
Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane:
$ ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1
    --log-level=info 2
- 1
- For <installation_directory>, specify the path to the directory that you stored the installation files in.
- 2
- To view different installation details, specify warn, debug, or error instead of info.
Example output
INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443...
INFO API v1.19.0+9f84db3 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources
INFO Time elapsed: 1s
If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized.
Note: After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators.
4.12.17. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
-
You installed the
oc CLI.
Procedure
Export the
kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
- 1
- For
<installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run
oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
4.12.18. Approving the certificate signing requests for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes
Example output
NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   63m   v1.21.0
master-1   Ready    master   63m   v1.21.0
master-2   Ready    master   64m   v1.21.0
The output lists all of the machines that you created.
Note: The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
Review the pending CSRs and ensure that you see the client requests with the
Pending or Approved status for each machine that you added to the cluster:
$ oc get csr
Example output
NAME        AGE   REQUESTOR                                                                   CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in
Pending status, approve the CSRs for your cluster machines:
Note: Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the kubelet requests a new certificate with identical parameters.
Note: For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name>1 - 1
<csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Note: Some Operators might not become available until some CSRs are approved.
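While machines are joining the cluster, new CSRs appear every few minutes, so you might prefer to run the previous command in a loop instead of repeating it by hand. A rough sketch, suitable only while you are actively adding machines because it approves every pending CSR without inspecting it:
$ while true; do
>   oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
>     | xargs --no-run-if-empty oc adm certificate approve
>   sleep 30
> done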
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
$ oc get csr
Example output
NAME        AGE     REQUESTOR                                                 CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal    Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal    Pending
...
If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name>1 - 1
<csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
After all client and server CSRs have been approved, the machines have the
Ready status. Verify this by running the following command:
$ oc get nodes
Example output
NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.21.0
master-1   Ready    master   73m   v1.21.0
master-2   Ready    master   74m   v1.21.0
worker-0   Ready    worker   11m   v1.21.0
worker-1   Ready    worker   11m   v1.21.0
Note: It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
Additional information
- For more information on CSRs, see Certificate Signing Requests.
4.12.19. Initial Operator configuration
After the control plane initializes, you must immediately configure some Operators so that they all become available.
Prerequisites
- Your control plane has initialized.
Procedure
Watch the cluster components come online:
$ watch -n5 oc get clusteroperators
Example output
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.8.2     True        False         False      19m
baremetal                                  4.8.2     True        False         False      37m
cloud-credential                           4.8.2     True        False         False      40m
cluster-autoscaler                         4.8.2     True        False         False      37m
config-operator                            4.8.2     True        False         False      38m
console                                    4.8.2     True        False         False      26m
csi-snapshot-controller                    4.8.2     True        False         False      37m
dns                                        4.8.2     True        False         False      37m
etcd                                       4.8.2     True        False         False      36m
image-registry                             4.8.2     True        False         False      31m
ingress                                    4.8.2     True        False         False      30m
insights                                   4.8.2     True        False         False      31m
kube-apiserver                             4.8.2     True        False         False      26m
kube-controller-manager                    4.8.2     True        False         False      36m
kube-scheduler                             4.8.2     True        False         False      36m
kube-storage-version-migrator              4.8.2     True        False         False      37m
machine-api                                4.8.2     True        False         False      29m
machine-approver                           4.8.2     True        False         False      37m
machine-config                             4.8.2     True        False         False      36m
marketplace                                4.8.2     True        False         False      37m
monitoring                                 4.8.2     True        False         False      29m
network                                    4.8.2     True        False         False      38m
node-tuning                                4.8.2     True        False         False      37m
openshift-apiserver                        4.8.2     True        False         False      32m
openshift-controller-manager               4.8.2     True        False         False      30m
openshift-samples                          4.8.2     True        False         False      32m
operator-lifecycle-manager                 4.8.2     True        False         False      37m
operator-lifecycle-manager-catalog         4.8.2     True        False         False      37m
operator-lifecycle-manager-packageserver   4.8.2     True        False         False      32m
service-ca                                 4.8.2     True        False         False      38m
storage                                    4.8.2     True        False         False      37m
- Configure the Operators that are not available.
4.12.19.1. Disabling the default OperatorHub sources
Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator.
Procedure
Disable the sources for the default catalogs by adding
disableAllDefaultSources: true to the OperatorHub object:
$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Global Configuration → OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources.
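To confirm that the change took effect, you can inspect the OperatorHub object and the remaining catalog sources; for example:
$ oc get operatorhub cluster -o jsonpath='{.spec.disableAllDefaultSources}{"\n"}'
$ oc get catalogsources -n openshift-marketplace
The first command should print true, and the second should no longer list the default Red Hat and community sources.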
4.12.19.2. Image registry storage configuration
Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage.
Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.
Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.
4.12.19.2.1. Configuring registry storage for AWS with user-provisioned infrastructure
During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage.
If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure.
Prerequisites
- You have a cluster on AWS with user-provisioned infrastructure.
For Amazon S3 storage, the secret is expected to contain two keys:
-
REGISTRY_STORAGE_S3_ACCESSKEY -
REGISTRY_STORAGE_S3_SECRETKEY
-
Procedure
Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage.
- Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old.
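Both the bucket and the lifecycle policy can be created with the AWS CLI. A sketch, assuming <bucket-name> and <region-name> match the values that you use in the registry configuration below; note that for us-east-1 you must omit the --create-bucket-configuration argument:
$ aws s3api create-bucket --bucket <bucket-name> \
>   --region <region-name> \
>   --create-bucket-configuration LocationConstraint=<region-name>
$ aws s3api put-bucket-lifecycle-configuration --bucket <bucket-name> \
>   --lifecycle-configuration '{"Rules":[{"ID":"AbortIncompleteUploads","Status":"Enabled","Filter":{"Prefix":""},"AbortIncompleteMultipartUpload":{"DaysAfterInitiation":1}}]}'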
Fill in the storage configuration in
configs.imageregistry.operator.openshift.io/cluster:
$ oc edit configs.imageregistry.operator.openshift.io/cluster
Example configuration
storage:
  s3:
    bucket: <bucket-name>
    region: <region-name>
To secure your registry images in AWS, block public access to the S3 bucket.
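One way to block public access with the AWS CLI, enabling all four protections, is:
$ aws s3api put-public-access-block --bucket <bucket-name> \
>   --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true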
4.12.19.2.2. Configuring storage for the image registry in non-production clusters
You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.
Procedure
To set the image registry storage to an empty directory:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
Warning: Configure this option for only non-production clusters.
If you run this command before the Image Registry Operator initializes its components, the
oc patch command fails with the following error:
Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found
Wait a few minutes and run the command again.
4.12.20. Deleting the bootstrap resources
After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS).
Prerequisites
- You completed the initial Operator configuration for your cluster.
Procedure
Delete the bootstrap resources. If you used the CloudFormation template, delete its stack:
Delete the stack by using the AWS CLI:
$ aws cloudformation delete-stack --stack-name <name> 1
- 1
<name> is the name of your bootstrap stack.
- Delete the stack by using the AWS CloudFormation console.
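Stack deletion is asynchronous. If you want the command line to block until the bootstrap resources are actually gone, you can follow the delete with the corresponding waiter, as in this sketch:
$ aws cloudformation delete-stack --stack-name <name>
$ aws cloudformation wait stack-delete-complete --stack-name <name>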
4.12.21. Creating the Ingress DNS Records
If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias.
Prerequisites
- You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned.
-
You installed the OpenShift CLI (oc).
- You installed the jq package.
- You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix).
Procedure
Determine the routes to create.
-
To create a wildcard record, use
*.apps.<cluster_name>.<domain_name>, where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster.
- To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command:
$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes
Example output
oauth-openshift.apps.<cluster_name>.<domain_name>
console-openshift-console.apps.<cluster_name>.<domain_name>
downloads-openshift-console.apps.<cluster_name>.<domain_name>
alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name>
grafana-openshift-monitoring.apps.<cluster_name>.<domain_name>
prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>
Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the
EXTERNAL-IP column:
$ oc -n openshift-ingress get service router-default
Example output
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)                      AGE
router-default   LoadBalancer   172.30.62.215   ab3...28.us-east-2.elb.amazonaws.com   80:31499/TCP,443:30693/TCP   5m
Locate the hosted zone ID for the load balancer:
$ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID'1 - 1
- For
<external_ip>, specify the value of the external IP address of the Ingress Operator load balancer that you obtained.
Example output
Z3AADJGX6KTTL2
The output of this command is the load balancer hosted zone ID.
Obtain the public hosted zone ID for your cluster’s domain:
$ aws route53 list-hosted-zones-by-name \
     --dns-name "<domain_name>" \ 1
     --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' \ 2
     --output text
Example output
/hostedzone/Z3URY6TWQ91KVV
The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV.
Add the alias records to your private zone:
$ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>",2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>",3 > "DNSName": "<external_ip>.",4 > "EvaluateTargetHealth": false > } > } > } > ] > }'- 1
- For
<private_hosted_zone_id>, specify the value from the output of the CloudFormation template for DNS and load balancing. - 2
- For
<cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster. - 3
- For
<hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained. - 4
- For
<external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.
Add the records to your public zone:
$ aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>"" --change-batch '{1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>",2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>",3 > "DNSName": "<external_ip>.",4 > "EvaluateTargetHealth": false > } > } > } > ] > }'- 1
- For
<public_hosted_zone_id>, specify the public hosted zone for your domain. - 2
- For
<cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster. - 3
- For
<hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained. - 4
- For
<external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.
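Once the records have propagated, you can spot-check them from a workstation. With a wildcard record in place, any apps host name should resolve to the Ingress load balancer; for example:
$ dig +short console-openshift-console.apps.<cluster_name>.<domain_name>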
4.12.22. Completing an AWS installation on user-provisioned infrastructure
After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, monitor the deployment to completion.
Prerequisites
- You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure.
-
You installed the
oc CLI.
Procedure
From the directory that contains the installation program, complete the cluster installation:
$ ./openshift-install --dir <installation_directory> wait-for install-complete 1
- 1
- For
<installation_directory>, specify the path to the directory that you stored the installation files in.
Example output
INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Fe5en-ymBEc-Wt6NL"
INFO Time elapsed: 1s
Important:
-
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending
node-bootstrappercertificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. - It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
- Register your cluster on the Cluster registration page.
4.12.23. Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites
- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
Procedure
Obtain the password for the
kubeadmin user from the kubeadmin-password file on the installation host:
$ cat <installation_directory>/auth/kubeadmin-password
Note: Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.
List the OpenShift Container Platform web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'
Note: Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.
Example output
console    console-openshift-console.apps.<cluster_name>.<base_domain>    console    https    reencrypt/Redirect    None
-
Navigate to the route detailed in the output of the preceding command in a web browser and log in as the
kubeadmin user.
4.12.24. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
4.12.26. Next steps
- Validate an installation.
- Customize your cluster.
-
Configure image streams for the Cluster Samples Operator and the
must-gather tool.
- Learn how to use Operator Lifecycle Manager (OLM) on restricted networks.
- If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores.
- If necessary, you can opt out of remote health reporting.
- If necessary, you can remove cloud provider credentials.
4.13. Uninstalling a cluster on AWS
You can remove a cluster that you deployed to Amazon Web Services (AWS).
4.13.1. Removing a cluster that uses installer-provisioned infrastructure
You can remove a cluster that uses installer-provisioned infrastructure from your cloud.
After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access.
Prerequisites
- Have a copy of the installation program that you used to deploy the cluster.
- Have the files that the installation program generated when you created your cluster.
Procedure
From the directory that contains the installation program on the computer that you used to install the cluster, run the following command:
$ ./openshift-install destroy cluster \
--dir <installation_directory> --log-level info 1 2
- 1
- For <installation_directory>, specify the path to the directory that you stored the installation files in.
- 2
- To view different details, specify warn, debug, or error instead of info.
Note: You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster.
Optional: Delete the
<installation_directory> directory and the OpenShift Container Platform installation program.
4.13.2. Deleting AWS resources with the Cloud Credential Operator utility
To clean up resources after uninstalling an OpenShift Container Platform cluster with the Cloud Credential Operator (CCO) in manual mode with STS, you can use the CCO utility (ccoctl) to remove the AWS resources that ccoctl created during installation.
Prerequisites
-
Extract and prepare the
ccoctl binary.
- Install an OpenShift Container Platform cluster with the CCO in manual mode with STS.
Procedure
Delete the AWS resources that
ccoctl created:
$ ccoctl aws delete --name=<name> --region=<aws_region>
where:
-
<name> matches the name used to originally create and tag the cloud resources.
- <aws_region> is the AWS region in which cloud resources will be deleted.
Example output:
2021/04/08 17:50:41 Identity Provider object .well-known/openid-configuration deleted from the bucket <name>-oidc
2021/04/08 17:50:42 Identity Provider object keys.json deleted from the bucket <name>-oidc
2021/04/08 17:50:43 Identity Provider bucket <name>-oidc deleted
2021/04/08 17:51:05 Policy <name>-openshift-cloud-credential-operator-cloud-credential-o associated with IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted
2021/04/08 17:51:05 IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted
2021/04/08 17:51:07 Policy <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials associated with IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted
2021/04/08 17:51:07 IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted
2021/04/08 17:51:08 Policy <name>-openshift-image-registry-installer-cloud-credentials associated with IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted
2021/04/08 17:51:08 IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted
2021/04/08 17:51:09 Policy <name>-openshift-ingress-operator-cloud-credentials associated with IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted
2021/04/08 17:51:10 IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted
2021/04/08 17:51:11 Policy <name>-openshift-machine-api-aws-cloud-credentials associated with IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted
2021/04/08 17:51:11 IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted
2021/04/08 17:51:39 Identity Provider with ARN arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com deleted
-
Verification
You can verify that the resources are deleted by querying AWS. For more information, refer to AWS documentation.
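For example, assuming that all of the IAM roles that ccoctl created share the <name> prefix shown in the output above, a query such as the following should return no results after a successful cleanup:
$ aws iam list-roles --query 'Roles[?starts_with(RoleName, `<name>`)].RoleName' --output text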
Chapter 5. Installing on Azure
5.1. Preparing to install on Azure
5.1.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
5.1.2. Requirements for installing OpenShift Container Platform on Azure
Before installing OpenShift Container Platform on Microsoft Azure, you must configure an Azure account. See Configuring an Azure account for details about account configuration, account limits, public DNS zone configuration, required roles, creating service principals, and supported Azure regions.
If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, see Manually creating IAM for Azure for other options.
5.1.3. Choosing a method to install OpenShift Container Platform on Azure
You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself.
See Installation process for more information about installer-provisioned and user-provisioned installation processes.
5.1.3.1. Installing a cluster on installer-provisioned infrastructure
You can install a cluster on Azure infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods:
- Installing a cluster quickly on Azure: You can install OpenShift Container Platform on Azure infrastructure that is provisioned by the OpenShift Container Platform installation program. You can install a cluster quickly by using the default configuration options.
- Installing a customized cluster on Azure: You can install a customized cluster on Azure infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation.
- Installing a cluster on Azure with network customizations: You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements.
- Installing a cluster on Azure into an existing VNet: You can install OpenShift Container Platform on an existing Azure Virtual Network (VNet) on Azure. You can use this installation method if you have constraints set by the guidelines of your company, such as limits when creating new accounts or infrastructure.
- Installing a private cluster on Azure: You can install a private cluster into an existing Azure Virtual Network (VNet) on Azure. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet.
- Installing a cluster on Azure into a government region: OpenShift Container Platform can be deployed into Microsoft Azure Government (MAG) regions that are specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads on Azure.
5.1.3.2. Installing a cluster on user-provisioned infrastructure
You can install a cluster on Azure infrastructure that you provision, by using the following method:
- Installing a cluster on Azure using ARM templates: You can install OpenShift Container Platform on Azure by using infrastructure that you provide. You can use the provided Azure Resource Manager (ARM) templates to assist with an installation.
5.1.4. Next steps
5.2. Configuring an Azure account
Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account.
All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.
5.2.1. Azure account limits
The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters.
Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores.
Check the limits for your subscription type and, if necessary, increase the quota limits for your account before you install a default cluster on Azure.
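If you have the Azure CLI installed, you can informally compare your current vCPU consumption against the regional limit before filing a quota request; this check is illustrative rather than part of the official procedure, and the region name is an example:

$ az vm list-usage --location centralus --output table

In the output, find the Total Regional vCPUs row and confirm that the difference between Limit and CurrentValue leaves at least the 40 vCPUs that a default cluster requires.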
The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters.
| Component | Number of components required by default | Default Azure limit | Description |
|---|---|---|---|
| vCPU | 40 | 20 per region | A default cluster requires 40 vCPUs, so you must increase the account limit. By default, each cluster creates one bootstrap machine, which is removed after installation, three control plane machines, and three compute machines. Because the bootstrap and compute machines use `Standard_D4s_v3` virtual machines, which use 4 vCPUs, and the control plane machines use `Standard_D8s_v3` virtual machines, which use 8 vCPUs, a default cluster requires 40 vCPUs. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. By default, the installation program distributes control plane and compute machines across all availability zones within a region. To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. |
| OS Disk | 7 | | VM OS disk must be able to sustain a minimum throughput of 5000 IOPS / 200MBps. This throughput can be provided by having a minimum of 1 TiB Premium SSD (P30). In Azure, disk performance is directly dependent on SSD disk sizes, so to achieve the throughput supported by a `Standard_D8s_v3` VM, or other similar machine types, and the target of 5000 IOPS, a P30 disk is required. Host caching must be set to `ReadOnly` for low read latency and high read IOPS and throughput. |
| VNet | 1 | 1000 per region | Each default cluster requires one Virtual Network (VNet), which contains two subnets. |
| Network interfaces | 6 | 65,536 per region | Each default cluster requires six network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. |
| Network security groups | 2 | 5000 | Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: `controlplane` allows the control plane machines to be reached on port 6443 from anywhere, and `node` allows worker machines to be reached from the internet on ports 80 and 443. |
| Network load balancers | 3 | 1000 per region | Each cluster creates the following load balancers: `default`, a public IP address that load balances requests to ports 80 and 443 across worker machines; `internal`, a private IP address that load balances requests to ports 6443 and 22623 across control plane machines; and `external`, a public IP address that load balances requests to port 6443 across control plane machines. If your applications create more Kubernetes `LoadBalancer` service objects, your cluster uses more load balancers. |
| Public IP addresses | 3 | | Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. |
| Private IP addresses | 7 | | The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. |
| Spot VM vCPUs (optional) | 0. If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. | 20 per region | This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note: Using spot VMs for control plane nodes is not recommended. |
5.2.2. Configuring a public DNS zone in Azure
To install OpenShift Container Platform, your Microsoft Azure account must have a dedicated public hosted DNS zone. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster.
Procedure
Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source.
Note: For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation.
- If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation.
Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses.
Use an appropriate root domain, such as `openshiftcorp.com`, or subdomain, such as `clusters.openshiftcorp.com`.
- If you use a subdomain, follow your company’s procedures to add its delegation records to the parent domain.
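The linked tutorial uses the Azure portal. As a hedged sketch, the same public zone creation and name server extraction can also be done with the Azure CLI; the resource group and domain names here are placeholders:

$ az network dns zone create --resource-group <resource_group> --name clusters.openshiftcorp.com
$ az network dns zone show --resource-group <resource_group> --name clusters.openshiftcorp.com --query nameServers

You then configure the returned name servers at your registrar or, for a subdomain, in the parent zone's delegation records.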
5.2.3. Increasing Azure account limits
To increase an account limit, file a support request on the Azure portal.
You can increase only one type of quota per support request.
Procedure
- From the Azure portal, click Help + support in the lower left corner.
Click New support request and then select the required values:
- From the Issue type list, select Service and subscription limits (quotas).
- From the Subscription list, select the subscription to modify.
- From the Quota type list, select the quota to increase. For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster.
- Click Next: Solutions.
On the Problem Details page, provide the required information for your quota increase:
- Click Provide details and provide the required details in the Quota details window.
- In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details.
- Click Next: Review + create and then click Create.
5.2.4. Required Azure roles
OpenShift Container Platform needs a service principal so it can manage Microsoft Azure resources. Before you can create a service principal, your Azure account subscription must have the following roles:
- `User Access Administrator`
- `Owner`
To set roles on the Azure portal, see Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation.
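As an informal verification, not part of the official procedure, you can list the role assignments for your account with the Azure CLI and confirm that both roles appear; the assignee value is a placeholder:

$ az role assignment list --assignee <your_account_id> --query "[].roleDefinitionName" --output tsv

The output should include both Owner and User Access Administrator.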
5.2.5. Creating a service principal
Because OpenShift Container Platform and its installation program must create Microsoft Azure resources through Azure Resource Manager, you must create a service principal to represent it.
Prerequisites
- Install or update the Azure CLI.
- Install the `jq` package.
- Your Azure account has the required roles for the subscription that you use.
Procedure
Log in to the Azure CLI:
$ az login

Log in to Azure in the web console by using your credentials.
If your Azure account uses subscriptions, ensure that you are using the right subscription.
View the list of available accounts and record the `tenantId` value for the subscription you want to use for your cluster:

$ az account list --refresh

Example output

[
  {
    "cloudName": "AzureCloud",
    "id": "9bab1460-96d5-40b3-a78e-17b15e978a80",
    "isDefault": true,
    "name": "Subscription Name",
    "state": "Enabled",
    "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee",
    "user": {
      "name": "you@example.com",
      "type": "user"
    }
  }
]

View your active account details and confirm that the `tenantId` value matches the subscription you want to use:

$ az account show

Example output

{
  "environmentName": "AzureCloud",
  "id": "9bab1460-96d5-40b3-a78e-17b15e978a80",
  "isDefault": true,
  "name": "Subscription Name",
  "state": "Enabled",
  "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee",
  "user": {
    "name": "you@example.com",
    "type": "user"
  }
}

Ensure that the value of the `tenantId` parameter is the UUID of the correct subscription.
If you are not using the right subscription, change the active subscription:

$ az account set -s <id>

Substitute the `id` value of the subscription that you want to use for `<id>`.
If you changed the active subscription, display your account information again:

$ az account show

Example output

{
  "environmentName": "AzureCloud",
  "id": "33212d16-bdf6-45cb-b038-f6565b61edda",
  "isDefault": true,
  "name": "Subscription Name",
  "state": "Enabled",
  "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee",
  "user": {
    "name": "you@example.com",
    "type": "user"
  }
}
Record the values of the `tenantId` and `id` parameters from the previous output. You need these values during OpenShift Container Platform installation.

Create the service principal for your account:

$ az ad sp create-for-rbac --role Contributor --name <service_principal>

Replace `<service_principal>` with the name to assign to the service principal.

Example output

Changing "<service_principal>" to a valid URI of "http://<service_principal>", which is the required format used for service principal names
Retrying role assignment creation: 1/36
Retrying role assignment creation: 2/36
Retrying role assignment creation: 3/36
Retrying role assignment creation: 4/36
{
  "appId": "8bd0d04d-0ac2-43a8-928d-705c598c6956",
  "displayName": "<service_principal>",
  "name": "http://<service_principal>",
  "password": "ac461d78-bf4b-4387-ad16-7e32e328aec6",
  "tenant": "6048c7e9-b2ad-488d-a54e-dc3f6be6a7ee"
}
Record the values of the `appId` and `password` parameters from the previous output. You need these values during OpenShift Container Platform installation.

Grant additional permissions to the service principal:
- You must always add the `Contributor` and `User Access Administrator` roles to the app registration service principal so the cluster can assign credentials for its components.
- To operate the Cloud Credential Operator (CCO) in mint mode, the app registration service principal also requires the `Azure Active Directory Graph/Application.ReadWrite.OwnedBy` API permission.
- To operate the CCO in passthrough mode, the app registration service principal does not require additional API permissions.
For more information about CCO modes, see "About the Cloud Credential Operator" in the "Managing cloud provider credentials" section of the Authentication and authorization guide.
Note: If you limit the service principal scope of the OpenShift Container Platform installation program to an already existing Azure resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying a cluster using the installation program deletes this resource group.
To assign the `User Access Administrator` role, run the following command:

$ az role assignment create --role "User Access Administrator" \
    --assignee-object-id $(az ad sp list --filter "appId eq '<appId>'" \
    | jq '.[0].id' -r)

Replace `<appId>` with the `appId` parameter value for your service principal.
To assign the `Azure Active Directory Graph` permission, run the following command:

$ az ad app permission add --id <appId> \
    --api 00000002-0000-0000-c000-000000000000 \
    --api-permissions 824c81eb-e3f8-4ee6-8f6d-de7f50d565b7=Role

Replace `<appId>` with the `appId` parameter value for your service principal.

Example output

Invoking "az ad app permission grant --id 46d33abc-b8a3-46d8-8c84-f0fd58177435 --api 00000002-0000-0000-c000-000000000000" is needed to make the change effective

For more information about the specific permissions that you grant with this command, see the GUID Table for Windows Azure Active Directory Permissions.
Approve the permissions request. If your account does not have the Azure Active Directory tenant administrator role, follow the guidelines for your organization to request that the tenant administrator approve your permissions request.

$ az ad app permission grant --id <appId> \
    --api 00000002-0000-0000-c000-000000000000

Replace `<appId>` with the `appId` parameter value for your service principal.
5.2.6. Supported Azure Marketplace regions
Installing a cluster by using the Azure Marketplace image is available to customers who purchase the offer in North America and EMEA.
While the offer must be purchased in North America or EMEA, you can deploy the cluster to any of the Azure public partitions that OpenShift Container Platform supports.
Deploying a cluster by using the Azure Marketplace image is not supported for the Azure Government regions.
5.2.7. Supported Azure regions
The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription.
Supported Azure public regions
- `australiacentral` (Australia Central)
- `australiaeast` (Australia East)
- `australiasoutheast` (Australia South East)
- `brazilsouth` (Brazil South)
- `canadacentral` (Canada Central)
- `canadaeast` (Canada East)
- `centralindia` (Central India)
- `centralus` (Central US)
- `eastasia` (East Asia)
- `eastus` (East US)
- `eastus2` (East US 2)
- `francecentral` (France Central)
- `germanywestcentral` (Germany West Central)
- `japaneast` (Japan East)
- `japanwest` (Japan West)
- `koreacentral` (Korea Central)
- `koreasouth` (Korea South)
- `northcentralus` (North Central US)
- `northeurope` (North Europe)
- `norwayeast` (Norway East)
- `southafricanorth` (South Africa North)
- `southcentralus` (South Central US)
- `southeastasia` (Southeast Asia)
- `southindia` (South India)
- `switzerlandnorth` (Switzerland North)
- `uaenorth` (UAE North)
- `uksouth` (UK South)
- `ukwest` (UK West)
- `westcentralus` (West Central US)
- `westeurope` (West Europe)
- `westindia` (West India)
- `westus` (West US)
- `westus2` (West US 2)
Supported Azure Government regions
Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6:
- `usgovtexas` (US Gov Texas)
- `usgovvirginia` (US Gov Virginia)
You can reference all available MAG regions in the Azure documentation. Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested.
5.2.8. Next steps
- Install an OpenShift Container Platform cluster on Azure. You can install a customized cluster or quickly install a cluster with default options.
5.3. Manually creating IAM for Azure
In environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace, you can put the Cloud Credential Operator (CCO) into manual mode before you install the cluster.
5.3.1. Alternatives to storing administrator-level secrets in the kube-system project
The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file.
If you prefer not to store an administrator-level credential secret in the cluster kube-system project, you can set the credentialsMode parameter for the CCO to Manual when installing OpenShift Container Platform and manage your cloud credentials manually.
Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them.
5.3.2. Manually create IAM
The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace.
Procedure
Change to the directory that contains the installation program and create the `install-config.yaml` file:

$ openshift-install create install-config --dir <installation_directory>

where `<installation_directory>` is the directory in which the installation program creates files.

Edit the `install-config.yaml` configuration file so that it contains the `credentialsMode` parameter set to `Manual`.

Example `install-config.yaml` configuration file

apiVersion: v1
baseDomain: cluster1.example.com
credentialsMode: Manual
compute:
- architecture: amd64
  hyperthreading: Enabled
...

The `credentialsMode: Manual` line is added to set the `credentialsMode` parameter to `Manual`.
To generate the manifests, run the following command from the directory that contains the installation program:

$ openshift-install create manifests --dir <installation_directory>

From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your `openshift-install` binary is built to use:

$ openshift-install version

Example output

release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64

Locate all `CredentialsRequest` objects in this release image that target the cloud you are deploying on:

$ oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure

This command creates a YAML file for each `CredentialsRequest` object.

Sample `CredentialsRequest` object

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: openshift-image-registry-azure
  namespace: openshift-cloud-credential-operator
spec:
  secretRef:
    name: installer-cloud-credentials
    namespace: openshift-image-registry
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor

Create YAML files for secrets in the `openshift-install` manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the `spec.secretRef` for each `CredentialsRequest` object. The format for the secret data varies for each cloud provider; an illustrative Azure sketch follows.
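For Azure, the secret data typically consists of a set of `azure_*` keys populated from your service principal and cluster details. The following is a sketch only, keyed to the sample `CredentialsRequest` object above; the key names and placeholder values are assumptions that you should verify against the `CredentialsRequest` objects extracted from your release image:

apiVersion: v1
kind: Secret
metadata:
  name: installer-cloud-credentials
  namespace: openshift-image-registry
stringData:
  azure_subscription_id: <subscription_id>
  azure_client_id: <client_id>
  azure_client_secret: <client_secret>
  azure_tenant_id: <tenant_id>
  azure_resource_prefix: <cluster_infra_id>
  azure_resourcegroup: <resource_group_name>
  azure_region: <region>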
From the directory that contains the installation program, proceed with your cluster creation:

$ openshift-install create cluster --dir <installation_directory>

Important: Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. For details, see the "Upgrading clusters with manually maintained credentials" section of the installation content for your cloud provider.
5.3.3. Upgrading clusters with manually maintained credentials
The Cloud Credential Operator (CCO) Upgradable status for a cluster with manually maintained credentials is False by default.
- For minor releases, for example, from 4.7 to 4.8, this status prevents you from upgrading until you have addressed any updated permissions and annotated the `CloudCredential` resource to indicate that the permissions are updated as needed for the next version. This annotation changes the `Upgradable` status to `True`.
- For z-stream releases, for example, from 4.8.9 to 4.8.10, no permissions are added or changed, so the upgrade is not blocked.
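One way to inspect the current Upgradeable condition from the command line, assuming you are logged in with `cluster-admin` privileges, is to query the `cloud-credential` Cluster Operator:

$ oc get clusteroperator cloud-credential -o jsonpath='{.status.conditions[?(@.type=="Upgradeable")].status}{"\n"}'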
Before upgrading a cluster with manually maintained credentials, you must create any new credentials for the release image that you are upgrading to. Additionally, you must review the required permissions for existing credentials and accommodate any new permissions requirements in the new release for those components.
Procedure
Extract and examine the `CredentialsRequest` custom resource for the new release.

The "Manually creating IAM" section of the installation content for your cloud provider explains how to obtain and use the credentials required for your cloud.
Update the manually maintained credentials on your cluster:
- Create new secrets for any `CredentialsRequest` custom resources that are added by the new release image.
- If the `CredentialsRequest` custom resources for any existing credentials that are stored in secrets have changed their permissions requirements, update the permissions as required.
When all of the secrets are correct for the new release, indicate that the cluster is ready to upgrade:

- Log in to the OpenShift Container Platform CLI as a user with the `cluster-admin` role.
- Edit the `CloudCredential` resource to add an `upgradeable-to` annotation within the `metadata` field:

$ oc edit cloudcredential cluster

Text to add

...
  metadata:
    annotations:
      cloudcredential.openshift.io/upgradeable-to: <version_number>
...

Where `<version_number>` is the version you are upgrading to, in the format `x.y.z`. For example, `4.8.2` for OpenShift Container Platform 4.8.2.

It may take several minutes after adding the annotation for the upgradeable status to change.
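If you prefer a non-interactive alternative to `oc edit`, the same annotation can be applied with `oc patch`; the version number here is an example only:

$ oc patch cloudcredential cluster --type=merge \
    --patch '{"metadata":{"annotations":{"cloudcredential.openshift.io/upgradeable-to":"4.8.2"}}}'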
Verify that the CCO is upgradeable:
- In the Administrator perspective of the web console, navigate to Administration → Cluster Settings.
- To view the CCO status details, click cloud-credential in the Cluster Operators list.
- If the Upgradeable status in the Conditions section is False, verify that the `upgradeable-to` annotation is free of typographical errors.
When the Upgradeable status in the Conditions section is True, you can begin the OpenShift Container Platform upgrade.
5.3.4. Next steps
Install an OpenShift Container Platform cluster:
- Install a cluster quickly on Azure with default options on installer-provisioned infrastructure
- Install a cluster with cloud customizations on installer-provisioned infrastructure
- Install a cluster with network customizations on installer-provisioned infrastructure
5.4. Installing a cluster quickly on Azure
In OpenShift Container Platform version 4.8, you can install a cluster on Microsoft Azure that uses the default configuration options.
5.4.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the `kube-system` namespace, you can manually create and maintain IAM credentials.
5.4.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
5.4.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>

Specify the path and file name, such as `~/.ssh/id_ed25519`, of the new SSH key. If you have an existing key pair, ensure your public key is in your `~/.ssh` directory.

Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the `x86_64` architecture, do not create a key that uses the `ed25519` algorithm. Instead, create a key that uses the `rsa` or `ecdsa` algorithm.

View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the `~/.ssh/id_ed25519.pub` public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the `./openshift-install gather` command.

Note: On some distributions, default SSH private key identities such as `~/.ssh/id_rsa` and `~/.ssh/id_dsa` are managed automatically.

If the `ssh-agent` process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

Add your SSH private key to the `ssh-agent`:

$ ssh-add <path>/<file_name>

Specify the path and file name for your SSH private key, such as `~/.ssh/id_ed25519`.

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
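As an informal check that the agent now holds your identity, you can list the loaded keys and, after the cluster is up, use the key to reach a node as the core user; the node address is a placeholder:

$ ssh-add -l
$ ssh core@<node_ip_or_hostname>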
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
5.4.4. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf openshift-install-linux.tar.gz

- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
5.4.5. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \
    --log-level=info

For `<installation_directory>`, specify the directory name to store the files that the installation program creates. To view different installation details, specify `warn`, `debug`, or `error` instead of `info`.

Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Provide values at the prompts:
Optional: Select an SSH key to use to access your cluster machines.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses.

- Select azure as the platform to target.
- If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal:
  - azure subscription id: The subscription ID to use for the cluster. Specify the `id` value in your account output.
  - azure tenant id: The tenant ID. Specify the `tenantId` value in your account output.
  - azure service principal client id: The value of the `appId` parameter for the service principal.
  - azure service principal client secret: The value of the `password` parameter for the service principal.
- Select the region to deploy the cluster to.
- Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.
Enter a descriptive name for your cluster.
Important: All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the `kubeadmin` user, display in your terminal.

Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

Note: The cluster access and credential information also outputs to `<installation_directory>/.openshift_install.log` when an installation succeeds.

Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending `node-bootstrapper` certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
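If you do need to recover kubelet certificates after such a shutdown, the pending CSRs mentioned above can be listed and approved with standard oc commands; the CSR name is a placeholder:

$ oc get csr
$ oc adm certificate approve <csr_name>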
Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
5.4.6. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
Unpack the archive:
$ tar xvzf <file>

Place the `oc` binary in a directory that is on your `PATH`.

To check your `PATH`, execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the `oc` binary to a directory that is on your `PATH`.

To check your `PATH`, open the command prompt and execute the following command:

C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the `oc` binary to a directory on your `PATH`.

To check your `PATH`, open a terminal and execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
5.4.7. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the `oc` CLI.
Procedure
Export the `kubeadmin` credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

For `<installation_directory>`, specify the path to the directory that you stored the installation files in.
Verify you can run `oc` commands successfully using the exported configuration:

$ oc whoami

Example output
system:admin
5.4.8. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
5.4.9. Next steps
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
5.5. Installing a cluster on Azure with customizations
In OpenShift Container Platform version 4.8, you can install a customized cluster on infrastructure that the installation program provisions on Microsoft Azure. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.
5.5.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the `kube-system` namespace, you can manually create and maintain IAM credentials.
5.5.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
5.5.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>

Specify the path and file name, such as `~/.ssh/id_ed25519`, of the new SSH key. If you have an existing key pair, ensure your public key is in your `~/.ssh` directory.

Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the `x86_64` architecture, do not create a key that uses the `ed25519` algorithm. Instead, create a key that uses the `rsa` or `ecdsa` algorithm.

View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the `~/.ssh/id_ed25519.pub` public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the `./openshift-install gather` command.

Note: On some distributions, default SSH private key identities such as `~/.ssh/id_rsa` and `~/.ssh/id_dsa` are managed automatically.

If the `ssh-agent` process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

Add your SSH private key to the `ssh-agent`:

$ ssh-add <path>/<file_name>

Specify the path and file name for your SSH private key, such as `~/.ssh/id_ed25519`.

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
5.5.4. Selecting an Azure Marketplace image
If you are deploying an OpenShift Container Platform cluster using the Azure Marketplace offering, you must first obtain the Azure Marketplace image. The installation program uses this image to deploy worker nodes. When obtaining your image, consider the following:
- While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify `redhat` as the publisher. If you are located in EMEA, specify `redhat-limited` as the publisher.
- The offer includes a `rh-ocp-worker` SKU and a `rh-ocp-worker-gen1` SKU. The `rh-ocp-worker` SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you are going to use an instance type that is only version 1 compatible, use the image associated with the `rh-ocp-worker-gen1` SKU. The `rh-ocp-worker-gen1` SKU represents a Hyper-V version 1 VM image.
Prerequisites
- You have installed the Azure CLI client (`az`).
- Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client.
Procedure
Display all of the available OpenShift Container Platform images by running one of the following commands:
North America:

$ az vm image list --all --offer rh-ocp-worker --publisher redhat -o table

Example output

Offer          Publisher  Sku                 Urn                                                     Version
-------------  ---------  ------------------  ------------------------------------------------------  --------------
rh-ocp-worker  RedHat     rh-ocp-worker       RedHat:rh-ocp-worker:rh-ocp-worker:4.8.2021122100       4.8.2021122100
rh-ocp-worker  RedHat     rh-ocp-worker-gen1  RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100  4.8.2021122100

EMEA:

$ az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table

Example output

Offer          Publisher       Sku                 Urn                                                              Version
-------------  --------------  ------------------  ---------------------------------------------------------------  --------------
rh-ocp-worker  redhat-limited  rh-ocp-worker       redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100        4.8.2021122100
rh-ocp-worker  redhat-limited  rh-ocp-worker-gen1  redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100   4.8.2021122100
Note: Regardless of the version of OpenShift Container Platform that you install, the correct version of the Azure Marketplace image to use is 4.8.x. If required, your VMs are automatically upgraded as part of the installation process.
Inspect the image for your offer by running one of the following commands:
North America:
$ az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>

EMEA:

$ az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
Review the terms of the offer by running one of the following commands:
North America:
$ az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>

EMEA:

$ az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
Accept the terms of the offering by running one of the following commands:
North America:
$ az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>

EMEA:

$ az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
- Record the image details of your offer. You must update the `99_openshift-cluster-api_worker-machineset-[0-2].yaml` files in the section titled "Updating Manifests for Marketplace Installation" before completing the installation.
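Although the manifest updates are described in the referenced section, the recorded offer details generally land in an image stanza within each worker machine set's providerSpec. The following sketch uses the North America values from above and assumes the field names used by recent machine-set manifests; confirm them against your generated files:

providerSpec:
  value:
    image:
      offer: rh-ocp-worker
      publisher: redhat
      resourceID: ""
      sku: rh-ocp-worker
      type: MarketplaceWithPlan
      version: 4.8.2021122100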
5.5.5. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf openshift-install-linux.tar.gz

- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
5.5.6. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
- Obtain service principal permissions at the subscription level.
Procedure
Create the `install-config.yaml` file.

Change to the directory that contains the installation program and run the following command:

$ ./openshift-install create install-config --dir <installation_directory>

For `<installation_directory>`, specify the directory name to store the files that the installation program creates.

Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your `ssh-agent` process uses.

- Select azure as the platform to target.
- If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal:
  - azure subscription id: The subscription ID to use for the cluster. Specify the `id` value in your account output.
  - azure tenant id: The tenant ID. Specify the `tenantId` value in your account output.
  - azure service principal client id: The value of the `appId` parameter for the service principal.
  - azure service principal client secret: The value of the `password` parameter for the service principal.
- Select the region to deploy the cluster to.
- Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.
Enter a descriptive name for your cluster.
Important: All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
- Modify the `install-config.yaml` file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the `install-config.yaml` file so that you can use it to install multiple clusters.

Important: The `install-config.yaml` file is consumed during the installation process. If you want to reuse the file, you must back it up now.
5.5.6.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.
5.5.6.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| `apiVersion` | The API version for the `install-config.yaml` content. The current version is `v1`. The installation program may also support older API versions. | String |
| `baseDomain` | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the `baseDomain` and `metadata.name` parameter values that uses the `<metadata.name>.<baseDomain>` format. | A fully-qualified domain or subdomain name, such as `example.com`. |
| `metadata` | Kubernetes resource `ObjectMeta`, from which only the `name` parameter is consumed. | Object |
| `metadata.name` | The name of the cluster. DNS records for the cluster are all subdomains of `{{.metadata.name}}.{{.baseDomain}}`. | String of lowercase letters, hyphens (`-`), and periods (`.`), such as `dev`. |
| `platform` | The configuration for the specific platform upon which to perform the installation: `aws`, `baremetal`, `azure`, `openstack`, `ovirt`, `vsphere`. | Object |
| `pullSecret` | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | `{"auths": ...}` |
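Taken together, the required parameters form the skeleton of every install-config.yaml file. The following minimal sketch uses placeholder values; the platform.azure fields shown are examples and are described in the Azure-specific table later in this section:

apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
platform:
  azure:
    region: centralus
    baseDomainResourceGroupName: <resource_group>
pullSecret: '<pull_secret>'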
5.5.6.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| `networking` | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the `networking` object after installation. |
| `networking.networkType` | The cluster network provider Container Network Interface (CNI) plugin to install. | Either `OpenShiftSDN` or `OVNKubernetes`. The default value is `OpenShiftSDN`. |
| `networking.clusterNetwork` | The IP address blocks for pods. The default value is `10.128.0.0/14` with a host prefix of `/23`. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: `clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23` |
| `networking.clusterNetwork.cidr` | Required if you use `networking.clusterNetwork`. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between `0` and `32`. |
| `networking.clusterNetwork.hostPrefix` | The subnet prefix length to assign to each individual node. For example, if `hostPrefix` is set to `23` then each node is assigned a `/23` subnet out of the given `cidr`. A `hostPrefix` value of `23` provides 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is `23`. |
| `networking.serviceNetwork` | The IP address block for services. The default value is `172.30.0.0/16`. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: `serviceNetwork: - 172.30.0.0/16` |
| `networking.machineNetwork` | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: `machineNetwork: - cidr: 10.0.0.0/16` |
| `networking.machineNetwork.cidr` | Required if you use `networking.machineNetwork`. | An IP network block in CIDR notation. For example, `10.0.0.0/16`. Note: Set the `networking.machineNetwork` to match the CIDR that the preferred NIC resides in. |
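Combining the defaults listed in this table, a networking stanza that spells out the default values explicitly looks like the following sketch:

networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16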
5.5.6.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. For example: sshKey: <key1> <key2> <key3> |
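For orientation, a short sketch showing how a few of these optional parameters appear together in an install-config.yaml file; the values shown are the documented defaults and are illustrative only:

compute:
- name: worker
  hyperthreading: Enabled
  replicas: 3
controlPlane:
  name: master
  hyperthreading: Enabled
  replicas: 3
fips: false
publish: External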
5.5.6.1.4. Additional Azure configuration parameters
Additional Azure configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| compute.platform.azure.osDisk.diskSizeGB | The Azure disk size for the VM. | Integer that represents the size of the disk in GB. The default is 128. |
| compute.platform.azure.osDisk.diskType | Defines the type of disk. | standard_LRS, premium_LRS, or standardSSD_LRS. The default is premium_LRS. |
| controlPlane.platform.azure.osDisk.diskSizeGB | The Azure disk size for the VM. | Integer that represents the size of the disk in GB. The default is 1024. |
| controlPlane.platform.azure.osDisk.diskType | Defines the type of disk. | premium_LRS or standardSSD_LRS. The default is premium_LRS. |
| platform.azure.baseDomainResourceGroupName | The name of the resource group that contains the DNS zone for your base domain. | String, for example production_cluster. |
| platform.azure.resourceGroupName | The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. | String, for example existing_resource_group. |
| platform.azure.outboundType | The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. | LoadBalancer or UserDefinedRouting. The default is LoadBalancer. |
| platform.azure.region | The name of the Azure region that hosts your cluster. | Any valid region name, such as centralus. |
| platform.azure.zone | List of availability zones to place machines in. For high availability, specify at least two zones. | List of zones, for example ["1", "2", "3"]. |
| platform.azure.networkResourceGroupName | The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName. | String. |
| platform.azure.virtualNetwork | The name of the existing VNet that you want to deploy your cluster to. | String. |
| platform.azure.controlPlaneSubnet | The name of the existing subnet in your VNet that you want to deploy your control plane machines to. | Valid CIDR, for example 10.0.0.0/16. |
| platform.azure.computeSubnet | The name of the existing subnet in your VNet that you want to deploy your compute machines to. | Valid CIDR, for example 10.0.0.0/16. |
| platform.azure.cloudName | The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. | Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud. |
Note: You cannot customize Azure Availability Zones, and you cannot use tags to organize your Azure resources, with an Azure cluster.
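To illustrate how the existing-VNet parameters relate to one another, the following is a hedged sketch of a platform.azure stanza; all resource group, VNet, and subnet names are placeholders for objects that must already exist in your subscription:

platform:
  azure:
    region: centralus
    baseDomainResourceGroupName: dns_resource_group
    networkResourceGroupName: vnet_resource_group
    virtualNetwork: existing_vnet
    controlPlaneSubnet: control_plane_subnet
    computeSubnet: compute_subnet
    cloudName: AzurePublicCloud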
5.5.6.2. Sample customized install-config.yaml file for Azure
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2 3
  hyperthreading: Enabled 4
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
      type: Standard_D8s_v3
  replicas: 3
compute: 6 7
- hyperthreading: Enabled
  name: worker
  platform:
    azure:
      type: Standard_D2s_v3
      osDisk:
        diskSizeGB: 512 8
        diskType: Standard_LRS
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    baseDomainResourceGroupName: resource_group 11
    region: centralus 12
    resourceGroupName: existing_resource_group 13
    outboundType: Loadbalancer
    cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' 14
fips: false 15
sshKey: ssh-ed25519 AAAA... 16
- 1 10 12 14: Required. The installation program prompts you for this value.
- 2 6: If you do not provide these parameters and values, the installation program provides the default value.
- 3 7: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 4: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.
- 5 8: You can specify the size of the disk to use in GB. The minimum recommendation for control plane nodes (also known as the master nodes) is 1024 GB.
- 9: Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
- 11: Specify the name of the resource group that contains the DNS zone for your base domain.
- 13: Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
- 15: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- 16: You can optionally provide the sshKey value that you use to access the machines in your cluster. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
5.5.6.3. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...

- 1: A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2: A proxy URL to use for creating HTTPS connections outside the cluster.
- 3: A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4: If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.

Note: The installation program does not support the proxy readinessEndpoints field.

- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
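After the cluster is running, you can verify that your settings were applied by inspecting the cluster Proxy object, for example:

$ oc get proxy/cluster -o yaml

The spec section reflects the values from your install-config.yaml file, and the status section shows the effective configuration, including the populated noProxy list.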
5.5.7. Updating manifests for Marketplace installation
If you selected a Marketplace image for installation, you must create and modify the manifests to use the Marketplace image.
Prerequisites
- You created the install-config.yaml file and completed any modifications to it.
Procedure
Change to the directory that contains the installation program and create the manifests by running the following command:

$ openshift-install create manifests --dir <installation_dir>

Edit the .spec.template.spec.providerSpec.value.image property of the compute machine set definitions, replacing the offer, publisher, sku, and version values with the details gathered in the section titled "Selecting an Azure Marketplace image". These are the three files that must be updated:

- <installation_dir>/openshift/99_openshift-cluster-api_worker-machineset-0.yaml
- <installation_dir>/openshift/99_openshift-cluster-api_worker-machineset-1.yaml
- <installation_dir>/openshift/99_openshift-cluster-api_worker-machineset-2.yaml

In each file, replace the value of the .spec.template.spec.providerSpec.value.image.resourceID property with an empty value ("").

In each file, set the type property to MarketplaceWithPlan.

Using the first machine set file as an example, the .spec.template.spec.providerSpec.value.image section must look like the following example:

image:
  offer: rh-ocp-worker
  publisher: redhat
  resourceID: ""
  sku: rh-ocp-worker
  version: 4.8.2021122100
  type: MarketplaceWithPlan
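One quick way to confirm the edits, assuming a standard shell, is to print the image stanza from each machine set file, for example:

$ grep -A 6 'image:' <installation_dir>/openshift/99_openshift-cluster-api_worker-machineset-0.yaml

The output should match the example above, with resourceID empty and type set to MarketplaceWithPlan.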
5.5.8. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:

$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2

- 1: For <installation_directory>, specify the location of your customized ./install-config.yaml file.
- 2: To view different installation details, specify warn, debug, or error instead of info.

Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

Note: The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
5.5.9. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
Unpack the archive:

$ tar xvzf <file>

Place the oc binary in a directory that is on your PATH.

To check your PATH, execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.

To check your PATH, open the command prompt and execute the following command:

C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.

To check your PATH, open a terminal and execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
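On any of these platforms, a quick way to confirm that the binary is found on your PATH is to print the client version, for example:

$ oc version --client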
5.5.10. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

- 1: For <installation_directory>, specify the path to the directory that you stored the installation files in.

Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output
system:admin
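As an additional sanity check, you can list the cluster nodes to confirm that the API server is reachable with the exported credentials, for example:

$ oc get nodes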
5.5.11. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
5.5.12. Next steps
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
5.6. Installing a cluster on Azure with network customizations
In OpenShift Container Platform version 4.8, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Microsoft Azure. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.
You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.
5.6.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials. Manual mode can also be used in environments where the cloud IAM APIs are not reachable.
5.6.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
5.6.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
- If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

- 1: Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

- View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub

- Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

If the ssh-agent process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

- Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

- 1: Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
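After the cluster is installed, you can use the loaded key to open a session on a node as the core user; <node_ip_or_hostname> here is a placeholder for an address you determine from your environment:

$ ssh core@<node_ip_or_hostname>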
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
5.6.4. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
ImportantThe installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
ImportantDeleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf openshift-install-linux.tar.gz

- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
5.6.5. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
- Obtain service principal permissions at the subscription level.
Procedure
Create the install-config.yaml file.

Change to the directory that contains the installation program and run the following command:

$ ./openshift-install create install-config --dir <installation_directory> 1

- 1: For <installation_directory>, specify the directory name to store the files that the installation program creates.
Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.

Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

- Select azure as the platform to target.
If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal:

- azure subscription id: The subscription ID to use for the cluster. Specify the id value in your account output.
- azure tenant id: The tenant ID. Specify the tenantId value in your account output.
- azure service principal client id: The value of the appId parameter for the service principal.
- azure service principal client secret: The value of the password parameter for the service principal.
- Select the region to deploy the cluster to.
- Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.
Enter a descriptive name for your cluster.
Important: All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
- Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.

Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
5.6.5.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.
5.6.5.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | A JSON pull secret, for example: {"auths": ...} |
5.6.5.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the networking object after installation. |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IP address block. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16. | An IP network block in CIDR notation, for example 10.0.0.0/16. Note: Set networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
5.6.5.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. For example: sshKey: <key1> <key2> <key3> |
5.6.5.1.4. Additional Azure configuration parameters
Additional Azure configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| compute.platform.azure.osDisk.diskSizeGB | The Azure disk size for the VM. | Integer that represents the size of the disk in GB. The default is 128. |
| compute.platform.azure.osDisk.diskType | Defines the type of disk. | standard_LRS, premium_LRS, or standardSSD_LRS. The default is premium_LRS. |
| controlPlane.platform.azure.osDisk.diskSizeGB | The Azure disk size for the VM. | Integer that represents the size of the disk in GB. The default is 1024. |
| controlPlane.platform.azure.osDisk.diskType | Defines the type of disk. | premium_LRS or standardSSD_LRS. The default is premium_LRS. |
| platform.azure.baseDomainResourceGroupName | The name of the resource group that contains the DNS zone for your base domain. | String, for example production_cluster. |
| platform.azure.resourceGroupName | The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. | String, for example existing_resource_group. |
| platform.azure.outboundType | The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. | LoadBalancer or UserDefinedRouting. The default is LoadBalancer. |
| platform.azure.region | The name of the Azure region that hosts your cluster. | Any valid region name, such as centralus. |
| platform.azure.zone | List of availability zones to place machines in. For high availability, specify at least two zones. | List of zones, for example ["1", "2", "3"]. |
| platform.azure.networkResourceGroupName | The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName. | String. |
| platform.azure.virtualNetwork | The name of the existing VNet that you want to deploy your cluster to. | String. |
| platform.azure.controlPlaneSubnet | The name of the existing subnet in your VNet that you want to deploy your control plane machines to. | Valid CIDR, for example 10.0.0.0/16. |
| platform.azure.computeSubnet | The name of the existing subnet in your VNet that you want to deploy your compute machines to. | Valid CIDR, for example 10.0.0.0/16. |
| platform.azure.cloudName | The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. | Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud. |
Note: You cannot customize Azure Availability Zones, and you cannot use tags to organize your Azure resources, with an Azure cluster.
5.6.5.2. Sample customized install-config.yaml file for Azure
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2 3
  hyperthreading: Enabled 4
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
      type: Standard_D8s_v3
  replicas: 3
compute: 6 7
- hyperthreading: Enabled
  name: worker
  platform:
    azure:
      type: Standard_D2s_v3
      osDisk:
        diskSizeGB: 512 8
        diskType: Standard_LRS
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking: 11
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    baseDomainResourceGroupName: resource_group 12
    region: centralus 13
    resourceGroupName: existing_resource_group 14
    outboundType: Loadbalancer
    cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' 15
fips: false 16
sshKey: ssh-ed25519 AAAA... 17
- 1 10 13 15: Required. The installation program prompts you for this value.
- 2 6 11: If you do not provide these parameters and values, the installation program provides the default value.
- 3 7: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 4: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.
- 5 8: You can specify the size of the disk to use in GB. The minimum recommendation for control plane nodes (also known as the master nodes) is 1024 GB.
- 9: Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
- 12: Specify the name of the resource group that contains the DNS zone for your base domain.
- 14: Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
- 16: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- 17: You can optionally provide the sshKey value that you use to access the machines in your cluster. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
5.6.5.3. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.

Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...

- 1: A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2: A proxy URL to use for creating HTTPS connections outside the cluster.
- 3: A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4: If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.

Note: The installation program does not support the proxy readinessEndpoints field.

- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
5.6.6. Network configuration phases
There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration.
- Phase 1
You can customize the following network-related fields in the install-config.yaml file before you create the manifest files:

- networking.networkType
- networking.clusterNetwork
- networking.serviceNetwork
- networking.machineNetwork

For more information on these fields, refer to Installation configuration parameters.

Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.

- Phase 2
After creating the manifest files by running openshift-install create manifests, you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration.
You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2.
5.6.7. Specifying advanced network configuration
You can use advanced network configuration for your cluster network provider to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster.
Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported.
Prerequisites
- You have created the install-config.yaml file and completed any modifications to it.
Procedure
Change to the directory that contains the installation program and create the manifests:

$ ./openshift-install create manifests --dir <installation_directory> 1

- 1: <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster.

Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:

Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples:

Specify a different VXLAN port for the OpenShift SDN network provider

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    openshiftSDNConfig:
      vxlanPort: 4800

Enable IPsec for the OVN-Kubernetes network provider

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      ipsecConfig: {}

- Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files.
5.6.8. Cluster Network Operator configuration
The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster. The CR specifies the fields for the Network API in the operator.openshift.io API group.
The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed:
- clusterNetwork: IP address pools from which pod IP addresses are allocated.
- serviceNetwork: IP address pool for services.
- defaultNetwork.type: Cluster network provider, such as OpenShift SDN or OVN-Kubernetes.
You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.
5.6.8.1. Cluster Network Operator configuration object
The fields for the Cluster Network Operator (CNO) are described in the following table:
| Field | Type | Description |
|---|---|---|
| metadata.name | string | The name of the CNO object. This name is always cluster. |
| spec.clusterNetwork | array | A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23. You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. |
| spec.serviceNetwork | array | A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14. You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. |
| spec.defaultNetwork | object | Configures the Container Network Interface (CNI) cluster network provider for the cluster network. |
| spec.kubeProxyConfig | object | The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. |
defaultNetwork object configuration
The values for the defaultNetwork object are defined in the following table:
| Field | Type | Description |
|---|---|---|
| type | string | Either OpenShiftSDN or OVNKubernetes. The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note: OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. |
| openshiftSDNConfig | object | This object is only valid for the OpenShift SDN cluster network provider. |
| ovnKubernetesConfig | object | This object is only valid for the OVN-Kubernetes cluster network provider. |
Configuration for the OpenShift SDN CNI cluster network provider
The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider.
| Field | Type | Description |
|---|---|---|
| mode | string | Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy. The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x, but are not recommended. This value cannot be changed after cluster installation. |
| mtu | integer | The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1450. This value cannot be changed after cluster installation. |
| vxlanPort | integer | The port to use for all VXLAN packets. The default value is 4789. This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999. |
Example OpenShift SDN configuration
defaultNetwork:
  type: OpenShiftSDN
  openshiftSDNConfig:
    mode: NetworkPolicy
    mtu: 1450
    vxlanPort: 4789
Configuration for the OVN-Kubernetes CNI cluster network provider
The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider.
| Field | Type | Description |
|---|---|---|
| mtu | integer | The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001, and some have an MTU of 1500, you must set this value to 1400. This value cannot be changed after cluster installation. |
| genevePort | integer | The port to use for all Geneve packets. The default value is 6081. This value cannot be changed after cluster installation. |
| ipsecConfig | object | Specify an empty object to enable IPsec encryption. This value cannot be changed after cluster installation. |
| policyAuditConfig | object | Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. |
| Field | Type | Description |
|---|---|---|
| rateLimit | integer | The maximum number of messages to generate every second per node. The default value is 20 messages per second. |
| maxFileSize | integer | The maximum size for the audit log in bytes. The default value is 50000000, or 50 MB. |
| destination | string | One of the following additional audit log targets: libc (the libc syslog() function of the journald process on the host), udp:<host>:<port> (a syslog server), unix:<file> (a Unix Domain Socket file), or null (the default; does not send the audit logs to an additional target). |
| syslogFacility | string | The syslog facility, such as kern, as defined by RFC5424. The default value is local0. |
Example OVN-Kubernetes configuration
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    ipsecConfig: {}
kubeProxyConfig object configuration
The values for the kubeProxyConfig object are defined in the following table:
| Field | Type | Description |
|---|---|---|
| iptablesSyncPeriod | string | The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h and are described in the Go time package documentation. Note: Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. |
| proxyArguments.iptables-min-sync-period | array | The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h and are described in the Go time package. The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s |
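Putting the two fields together, a minimal sketch of a kubeProxyConfig stanza using the documented defaults looks like the following; as noted above, adjusting these values is rarely necessary:

kubeProxyConfig:
  iptablesSyncPeriod: 30s
  proxyArguments:
    iptables-min-sync-period:
    - 0s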
5.6.9. Configuring hybrid networking with OVN-Kubernetes
You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster.
You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process.
Prerequisites
- You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information.
Procedure
Change to the directory that contains the installation program and create the manifests:

$ ./openshift-install create manifests --dir <installation_directory>

where:
<installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster.

Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

$ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
EOF

where:
<installation_directory> specifies the directory name that contains the manifests/ directory for your cluster.

Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, such as in the following example:

Specify a hybrid networking configuration

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork: 1
        - cidr: 10.132.0.0/14
          hostPrefix: 23
        hybridOverlayVXLANPort: 9898 2

- 1: Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR cannot overlap with the clusterNetwork CIDR.
- 2: Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken.

Note: Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port.

- Save the cluster-network-03-config.yml file and quit the text editor.
- Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.
For more information on using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads.
5.6.10. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2

- 1
- For <installation_directory>, specify the location of your customized ./install-config.yaml file.
- 2
- To view different installation details, specify warn, debug, or error instead of info.

Note
If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

Note
The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

Important
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important
You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
5.6.11. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
Unpack the archive:
$ tar xvzf <file>

Place the oc binary in a directory that is on your PATH.

To check your PATH, execute the following command:

$ echo $PATH
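For example, a common choice on Linux, assuming /usr/local/bin is already on your PATH and you have sudo access:

$ sudo mv oc /usr/local/bin/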
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.

To check your PATH, open the command prompt and execute the following command:

C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.

To check your PATH, open a terminal and execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
5.6.12. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

- 1
- For <installation_directory>, specify the path to the directory that you stored the installation files in.

Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output
system:admin
5.6.13. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
5.6.14. Next steps
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
5.7. Installing a cluster on Azure into an existing VNet
In OpenShift Container Platform version 4.8, you can install a cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.
5.7.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
5.7.2. About reusing a VNet for your OpenShift Container Platform cluster
In OpenShift Container Platform 4.8, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules.
By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet.
5.7.2.1. Requirements for using your VNet
When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet:
- Subnets
- Route tables
- VNets
- Network Security Groups
The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.
If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster.
The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICs for the virtual machines that it creates to subnets from the networking resource group.
Your VNet must meet the following characteristics:
- The VNet’s CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.
- The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses.
You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default.
To ensure that the subnets that you provide are suitable, the installation program confirms the following data:
- All the specified subnets exist.
- There are two private subnets, one for the control plane machines and one for the compute machines.
- The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them.
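For illustration only, the following Azure CLI sketch creates a VNet with two subnets that satisfy these requirements; the resource group name, VNet name, and address ranges are placeholder assumptions:

$ az network vnet create \
    --resource-group my_rg \
    --name my_vnet \
    --address-prefix 10.0.0.0/16 \
    --subnet-name control-plane-subnet \
    --subnet-prefix 10.0.0.0/24

$ az network vnet subnet create \
    --resource-group my_rg \
    --vnet-name my_vnet \
    --name compute-subnet \
    --address-prefixes 10.0.1.0/24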
If you destroy a cluster that uses an existing VNet, the VNet is not deleted.
5.7.2.1.1. Network security group requirements
The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports.
The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails.
| Port | Description | Control plane | Compute |
|---|---|---|---|
| 80 | Allows HTTP traffic | | x |
| 443 | Allows HTTPS traffic | | x |
| 6443 | Allows communication to the control plane machines | x | |
| 22623 | Allows internal communication to the machine config server for provisioning machines | x | |
Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment.
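As a sketch only, an inbound rule for the API port could be created with the Azure CLI as follows; the resource group, network security group name, and priority are placeholder assumptions, and you would add analogous rules for ports 80, 443, and 22623:

$ az network nsg rule create \
    --resource-group my_rg \
    --nsg-name my_nsg \
    --name allow-openshift-api \
    --priority 100 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges 6443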
5.7.2.2. Division of permissions
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules.
The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes.
5.7.2.3. Isolation between clusters
Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet.
5.7.3. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
5.7.4. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
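For reference, the gather command mentioned above is typically run against the installation directory after a failed installation, for example:

$ ./openshift-install gather bootstrap --dir <installation_directory>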
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

- 1
- Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note
If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

Note
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

If the ssh-agent process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

Note
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

- 1
- Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
5.7.5. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
ImportantThe installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
ImportantDeleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf openshift-install-linux.tar.gz

- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
5.7.6. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
- Obtain service principal permissions at the subscription level.
Procedure
Create the install-config.yaml file.

Change to the directory that contains the installation program and run the following command:

$ ./openshift-install create install-config --dir <installation_directory> 1

- 1
- For <installation_directory>, specify the directory name to store the files that the installation program creates.
Important
Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

- Select azure as the platform to target.
If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal:
- azure subscription id: The subscription ID to use for the cluster. Specify the id value in your account output.
azure tenant id: The tenant ID. Specify the
tenantIdvalue in your account output. -
azure service principal client id: The value of the
appIdparameter for the service principal. -
azure service principal client secret: The value of the
passwordparameter for the service principal.
- azure service principal client secret: The value of the password parameter for the service principal.
- Select the region to deploy the cluster to.
- Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.
Enter a descriptive name for your cluster.
Important
All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
- Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.

Important
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
5.7.6.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.
5.7.6.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | A JSON object, for example {"auths": ...}. |
5.7.6.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the networking object after installation. |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. An IPv4 network. | An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
5.7.6.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. For example: sshKey: <key1> <key2> |
5.7.6.1.4. Additional Azure configuration parameters
Additional Azure configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| compute.platform.azure.osDisk.diskSizeGB | The Azure disk size for the VM. | Integer that represents the size of the disk in GB. The default is 128. |
| compute.platform.azure.osDisk.diskType | Defines the type of disk. | standard_LRS, premium_LRS, or standardSSD_LRS. The default is premium_LRS. |
| controlPlane.platform.azure.osDisk.diskSizeGB | The Azure disk size for the VM. | Integer that represents the size of the disk in GB. The default is 1024. |
| controlPlane.platform.azure.osDisk.diskType | Defines the type of disk. | premium_LRS or standardSSD_LRS. The default is premium_LRS. |
| platform.azure.baseDomainResourceGroupName | The name of the resource group that contains the DNS zone for your base domain. | String, for example production_cluster. |
| platform.azure.resourceGroupName | The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster using the installation program deletes this resource group. | String, for example existing_resource_group. |
| platform.azure.outboundType | The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. | LoadBalancer or UserDefinedRouting. The default is LoadBalancer. |
| platform.azure.region | The name of the Azure region that hosts your cluster. | Any valid region name, such as centralus. |
| platform.azure.zone | List of availability zones to place machines in. For high availability, specify at least two zones. | List of zones, for example ["1", "2", "3"]. |
| platform.azure.networkResourceGroupName | The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName. | String. |
| platform.azure.virtualNetwork | The name of the existing VNet that you want to deploy your cluster to. | String. |
| platform.azure.controlPlaneSubnet | The name of the existing subnet in your VNet that you want to deploy your control plane machines to. | Valid CIDR, for example 10.0.0.0/16. |
| platform.azure.computeSubnet | The name of the existing subnet in your VNet that you want to deploy your compute machines to. | Valid CIDR, for example 10.0.0.0/16. |
| platform.azure.cloudName | The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. | Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud. |
You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster.
5.7.6.2. Sample customized install-config.yaml file for Azure
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2 3
  hyperthreading: Enabled 4
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
      type: Standard_D8s_v3
  replicas: 3
compute: 6 7
- hyperthreading: Enabled
  name: worker
  platform:
    azure:
      type: Standard_D2s_v3
      osDisk:
        diskSizeGB: 512 8
        diskType: Standard_LRS
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    baseDomainResourceGroupName: resource_group 11
    region: centralus 12
    resourceGroupName: existing_resource_group 13
    networkResourceGroupName: vnet_resource_group 14
    virtualNetwork: vnet 15
    controlPlaneSubnet: control_plane_subnet 16
    computeSubnet: compute_subnet 17
    outboundType: Loadbalancer
    cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}' 18
fips: false 19
sshKey: ssh-ed25519 AAAA... 20
- 1 10 12 18
- Required. The installation program prompts you for this value.
- 2 6
- If you do not provide these parameters and values, the installation program provides the default value.
- 3 7
- The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 4
- Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.

Important
If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.

- 5 8
- You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes (also known as the master nodes) is 1024 GB.
- 9
- Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
- 11
- Specify the name of the resource group that contains the DNS zone for your base domain.
- 13
- Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
- 14
- If you use an existing VNet, specify the name of the resource group that contains it.
- 15
- If you use an existing VNet, specify its name.
- 16
- If you use an existing VNet, specify the name of the subnet to host the control plane machines.
- 17
- If you use an existing VNet, specify the name of the subnet to host the compute machines.
- 19
- Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.

Important
The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.

- 20
- You can optionally provide the sshKey value that you use to access the machines in your cluster.

Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
5.7.6.3. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

Note
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...

- 1
- A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2
- A proxy URL to use for creating HTTPS connections outside the cluster.
- 3
- A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4
- If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.

Note
The installation program does not support the proxy readinessEndpoints field.

- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
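After the cluster is installed, you can optionally inspect the resulting configuration; a minimal check, assuming the oc CLI is logged in to the cluster:

$ oc get proxy/cluster -o yaml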
5.7.7. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2

- 1
- For <installation_directory>, specify the location of your customized ./install-config.yaml file.
- 2
- To view different installation details, specify warn, debug, or error instead of info.

Note
If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

Note
The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

Important
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important
You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
5.7.8. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
Unpack the archive:
$ tar xvzf <file>

Place the oc binary in a directory that is on your PATH.

To check your PATH, execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.

To check your PATH, open the command prompt and execute the following command:

C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.

To check your PATH, open a terminal and execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
5.7.9. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

- 1
- For <installation_directory>, specify the path to the directory that you stored the installation files in.

Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output
system:admin
5.7.10. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
5.7.11. Next steps
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
5.8. Installing a private cluster on Azure
In OpenShift Container Platform version 4.8, you can install a private cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.
5.8.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
5.8.2. Private clusters
You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet.
By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.
To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.
If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private.
Additionally, you must deploy a private cluster from a machine that has access to the API services for the cloud you provision to, the hosts on the network that you provision, and to the internet to obtain installation media. You can use any machine that meets these access requirements and follows your company’s guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.
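For orientation, the private variant is selected through the publish field in the install-config.yaml file; a minimal fragment, with all other required fields omitted for brevity, looks like this:

apiVersion: v1
publish: Internal  # the default is External; Internal produces a private cluster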
5.8.2.1. Private clusters in Azure
To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic.
Depending on how your network connects to the private VNet, you might need to use a DNS forwarder to resolve the cluster’s private DNS records. The cluster’s machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation.
The cluster still requires access to the internet to access the Azure APIs.
The following items are not required or created when you install a private cluster:
- A BaseDomainResourceGroup, since the cluster does not create public records
- Public IP addresses
- Public DNS records
- Public endpoints
The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.
5.8.2.1.1. Limitations
Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet.
5.8.2.2. User-defined outbound routing
In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer.
You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this.
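Concretely, user-defined routing is selected with the platform.azure.outboundType parameter described in the configuration tables; an illustrative install-config.yaml fragment, with a placeholder region, might look like this:

platform:
  azure:
    region: centralus
    outboundType: UserDefinedRouting  # requires a pre-existing VNet with outbound routing already configured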
When configuring a cluster to use user-defined routing, the installation program does not create the following resources:
- Outbound rules for access to the internet.
- Public IPs for the public load balancer.
- Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests.
You must ensure the following items are available before setting user-defined routing:
- Egress to the internet is possible to pull container images, unless using an internal registry mirror.
- The cluster can access Azure APIs.
- Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section.
There are several pre-existing networking setups that are supported for internet access using user-defined routing.
Private cluster with network address translation
You can use Azure VNET network address translation (NAT) to provide outbound internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions.
When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints.
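For illustration only (resource names are placeholders; see the linked Azure documentation for the authoritative procedure), attaching a NAT gateway to a subnet with the Azure CLI looks roughly like this:

$ az network public-ip create --resource-group my_rg --name nat-ip --sku Standard

$ az network nat gateway create \
    --resource-group my_rg \
    --name my_natgw \
    --public-ip-addresses nat-ip

$ az network vnet subnet update \
    --resource-group my_rg \
    --vnet-name my_vnet \
    --name compute-subnet \
    --nat-gateway my_natgw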
Private cluster with Azure Firewall
You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation.
When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints.
Private cluster with a proxy configuration
You can use a proxy with user-defined routing to allow egress to the internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy.
When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all Azure API requests are routed over Azure’s internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints.
Private cluster with no internet access
You can install a private cluster that restricts all access to the internet, except the Azure API. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following:
- An internal registry mirror that allows for pulling container images
- Access to Azure APIs
With these requirements available, you can use user-defined routing to create private clusters with no public endpoints.
5.8.3. About reusing a VNet for your OpenShift Container Platform cluster
In OpenShift Container Platform 4.8, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules.
By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet.
5.8.3.1. Requirements for using your VNet
When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet:
- Subnets
- Route tables
- VNets
- Network Security Groups
The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.
If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster.
The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICs for the virtual machines that it creates to subnets from the networking resource group.
Your VNet must meet the following characteristics:
- The VNet’s CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.
- The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses.
You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default.
To ensure that the subnets that you provide are suitable, the installation program confirms the following data:
- All the specified subnets exist.
- There are two private subnets, one for the control plane machines and one for the compute machines.
- The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for.
If you destroy a cluster that uses an existing VNet, the VNet is not deleted.
5.8.3.1.1. Network security group requirements
The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports.
The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails.
| Port | Description | Control plane | Compute |
|---|---|---|---|
| 80 | Allows HTTP traffic | | x |
| 443 | Allows HTTPS traffic | | x |
| 6443 | Allows communication to the control plane machines | x | |
| 22623 | Allows internal communication to the machine config server for provisioning machines | x | |
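As a hedged sketch of how such rules might be pre-created with the Azure CLI (the group name, rule names, priorities, and source ranges below are placeholders, not values mandated by this document):

$ az network nsg create --resource-group my-vnet-rg --name my-cluster-nsg

# Allow application ingress traffic to the compute machines.
$ az network nsg rule create --resource-group my-vnet-rg --nsg-name my-cluster-nsg \
    --name allow-http --priority 100 --direction Inbound --access Allow \
    --protocol Tcp --destination-port-ranges 80
$ az network nsg rule create --resource-group my-vnet-rg --nsg-name my-cluster-nsg \
    --name allow-https --priority 110 --direction Inbound --access Allow \
    --protocol Tcp --destination-port-ranges 443

# Allow API and machine config server traffic to the control plane machines.
$ az network nsg rule create --resource-group my-vnet-rg --nsg-name my-cluster-nsg \
    --name allow-api --priority 120 --direction Inbound --access Allow \
    --protocol Tcp --destination-port-ranges 6443
$ az network nsg rule create --resource-group my-vnet-rg --nsg-name my-cluster-nsg \
    --name allow-mcs --priority 130 --direction Inbound --access Allow \
    --protocol Tcp --destination-port-ranges 22623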
Because cluster components do not modify the user-provided network security groups, a pseudo-network security group is created for the Kubernetes controllers to update without impacting the rest of the environment.
5.8.3.2. Division of permissions
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules.
The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes.
5.8.3.3. Isolation between clusters
Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet.
5.8.4. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
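For reference, populating the mirror registry with the release-image content is typically done with the oc adm release mirror command. The following is only a sketch with placeholder values for the version, mirror registry host, and repository; see the restricted network installation documentation for the complete, supported procedure:

$ oc adm release mirror \
    --from=quay.io/openshift-release-dev/ocp-release:<version>-x86_64 \
    --to=<mirror_registry>/<repository> \
    --to-release-image=<mirror_registry>/<repository>:<version>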
5.8.5. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

- 1: Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

If the ssh-agent process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

- 1: Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
5.8.6. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
ImportantThe installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
ImportantDeleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf openshift-install-linux.tar.gz

- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
5.8.7. Manually creating the installation configuration file
For installations of a private OpenShift Container Platform cluster that are only accessible from an internal network and are not visible to the internet, you must manually generate your installation configuration file.
Prerequisites
- You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.
- You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Create an installation directory to store your required installation assets in:

$ mkdir <installation_directory>

Important: You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory>.

Note: You must name this configuration file install-config.yaml.

Note: For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts.

Back up the install-config.yaml file so that you can use it to install multiple clusters.

Important: The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
5.8.7.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.
5.8.7.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | A JSON object, for example '{"auths": ...}'. |
5.8.7.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the networking object after installation. |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example, an entry with cidr: 10.128.0.0/14 and hostPrefix: 23. |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IP address block. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example, 172.30.0.0/16. |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example, an entry with cidr: 10.0.0.0/16. |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. | An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
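To make the hostPrefix arithmetic concrete, consider the default values shown in the sample configuration later in this section. Each node receives a /23 block carved out of the /14 cluster network, which yields 2^(32-23) - 2 = 510 usable pod IP addresses per node, and the /14 cluster network can supply up to 2^(23-14) = 512 such node subnets:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23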
5.8.7.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (disabled). Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys, for example sshKey: <key1> <key2> <key3> |
5.8.7.1.4. Additional Azure configuration parameters
Additional Azure configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| compute.platform.azure.osDisk.diskSizeGB | The Azure disk size for the VM. | Integer that represents the size of the disk in GB. The default is 128. |
| compute.platform.azure.osDisk.diskType | Defines the type of disk. | standard_LRS, premium_LRS, or standardSSD_LRS. The default is premium_LRS. |
| controlPlane.platform.azure.osDisk.diskSizeGB | The Azure disk size for the VM. | Integer that represents the size of the disk in GB. The default is 1024. |
| controlPlane.platform.azure.osDisk.diskType | Defines the type of disk. | premium_LRS or standardSSD_LRS. The default is premium_LRS. |
| platform.azure.baseDomainResourceGroupName | The name of the resource group that contains the DNS zone for your base domain. | String, for example production_cluster. |
| platform.azure.resourceGroupName | The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster using the installation program deletes this resource group. | String, for example existing_resource_group. |
| platform.azure.outboundType | The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. | LoadBalancer or UserDefinedRouting. The default is LoadBalancer. |
| platform.azure.region | The name of the Azure region that hosts your cluster. | Any valid region name, such as centralus. |
| platform.azure.zone | List of availability zones to place machines in. For high availability, specify at least two zones. | List of zones, for example ["1", "2", "3"]. |
| platform.azure.networkResourceGroupName | The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName. | String. |
| platform.azure.virtualNetwork | The name of the existing VNet that you want to deploy your cluster to. | String. |
| platform.azure.controlPlaneSubnet | The name of the existing subnet in your VNet that you want to deploy your control plane machines to. | Valid CIDR, for example 10.0.0.0/16. |
| platform.azure.computeSubnet | The name of the existing subnet in your VNet that you want to deploy your compute machines to. | Valid CIDR, for example 10.0.0.0/16. |
| platform.azure.cloudName | The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. | Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud. |
You cannot customize Azure Availability Zones or use tags to organize your Azure resources with an Azure cluster.
5.8.7.2. Sample customized install-config.yaml file for Azure
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com
controlPlane:
hyperthreading: Enabled
name: master
platform:
azure:
osDisk:
diskSizeGB: 1024
diskType: Premium_LRS
type: Standard_D8s_v3
replicas: 3
compute:
- hyperthreading: Enabled
name: worker
platform:
azure:
type: Standard_D2s_v3
osDisk:
diskSizeGB: 512
diskType: Standard_LRS
zones:
- "1"
- "2"
- "3"
replicas: 5
metadata:
name: test-cluster
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OpenShiftSDN
serviceNetwork:
- 172.30.0.0/16
platform:
azure:
baseDomainResourceGroupName: resource_group
region: centralus
resourceGroupName: existing_resource_group
networkResourceGroupName: vnet_resource_group
virtualNetwork: vnet
controlPlaneSubnet: control_plane_subnet
computeSubnet: compute_subnet
outboundType: UserDefinedRouting
cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}'
fips: false
sshKey: ssh-ed25519 AAAA...
publish: Internal
- 1 10 12 19: Required. The installation program prompts you for this value.
- 2 6: If you do not provide these parameters and values, the installation program provides the default value.
- 3 7: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 4: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
  Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.
- 5 8: You can specify the size of the disk to use in GB. The minimum recommendation for control plane nodes (also known as the master nodes) is 1024 GB.
- 9: Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
- 11: Specify the name of the resource group that contains the DNS zone for your base domain.
- 13: Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
- 14: If you use an existing VNet, specify the name of the resource group that contains it.
- 15: If you use an existing VNet, specify its name.
- 16: If you use an existing VNet, specify the name of the subnet to host the control plane machines.
- 17: If you use an existing VNet, specify the name of the subnet to host the compute machines.
- 18: You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet.
- 20: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
  Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- 21: You can optionally provide the sshKey value that you use to access the machines in your cluster.
  Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- 22: How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External.
5.8.7.3. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
...

- 1: A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2: A proxy URL to use for creating HTTPS connections outside the cluster.
- 3: A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4: If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.

Note: The installation program does not support the proxy readinessEndpoints field.

- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
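After the cluster is running, you can inspect the resulting configuration on that Proxy object. This verification step is a suggestion rather than part of the documented procedure:

$ oc get proxy/cluster -o yaml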
5.8.8. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \
    --log-level=info

Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

Note: The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
5.8.9. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
Unpack the archive:
$ tar xvzf <file>

Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:

C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
5.8.10. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

- 1: For <installation_directory>, specify the path to the directory that you stored the installation files in.

Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output
system:admin
5.8.11. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
5.8.12. Next steps
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
5.9. Installing a cluster on Azure into a government region
In OpenShift Container Platform version 4.8, you can install a cluster on Microsoft Azure into a government region. To configure the government region, you modify parameters in the install-config.yaml file before you install the cluster.
5.9.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured an Azure account to host the cluster and determined the tested and validated government region to deploy the cluster to.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
5.9.2. Azure government regions
OpenShift Container Platform supports deploying a cluster to Microsoft Azure Government (MAG) regions. MAG is specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads on Azure. MAG is composed of government-only data center regions, all granted an Impact Level 5 Provisional Authorization.
Installing to a MAG region requires manually configuring the Azure Government dedicated cloud instance and region in the install-config.yaml file. You must also update your service principal to reference the appropriate government environment.
The Azure government region cannot be selected using the guided terminal prompts from the installation program. You must define the region manually in the install-config.yaml file. Remember to also set the dedicated cloud instance, like AzureUSGovernmentCloud, based on the region specified.
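For illustration, the platform stanza of an install-config.yaml file for a MAG deployment might look like the following. The region usgovvirginia is only an example; substitute the government region that you validated for your cluster:

platform:
  azure:
    region: usgovvirginia
    cloudName: AzureUSGovernmentCloud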
5.9.3. Private clusters
You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet.
By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet.
To deploy a private cluster, you must use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network.
If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private.
Additionally, you must deploy a private cluster from a machine that has access to the API services for the cloud you provision to, the hosts on the network that you provision, and to the internet to obtain installation media. You can use any machine that meets these access requirements and follows your company’s guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN.
5.9.3.1. Private clusters in Azure
To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic.
Depending on how your network connects to the private VNet, you might need to use a DNS forwarder to resolve the cluster’s private DNS records. The cluster’s machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation.
The cluster still requires access to the internet to access the Azure APIs.
The following items are not required or created when you install a private cluster:
- A BaseDomainResourceGroup, since the cluster does not create public records
- Public IP addresses
- Public DNS records
- Public endpoints
The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.
5.9.3.1.1. Limitations
Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet.
5.9.3.2. User-defined outbound routing
In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer.
You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this.
When configuring a cluster to use user-defined routing, the installation program does not create the following resources:
- Outbound rules for access to the internet.
- Public IPs for the public load balancer.
- Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests.
You must ensure the following items are available before setting user-defined routing:
- Egress to the internet is possible to pull container images, unless using an internal registry mirror.
- The cluster can access Azure APIs.
- Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section.
There are several pre-existing networking setups that are supported for internet access using user-defined routing.
Private cluster with network address translation
You can use Azure VNET network address translation (NAT) to provide outbound internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions.
When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints.
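As a hedged Azure CLI sketch of that setup, using the same placeholder resource names as the VNet example earlier in this document (you would repeat the subnet update for the compute subnet):

# Create a static public IP and a NAT gateway that uses it.
$ az network public-ip create --resource-group my-vnet-rg --name my-nat-ip --sku Standard
$ az network nat gateway create --resource-group my-vnet-rg --name my-nat-gateway \
    --public-ip-addresses my-nat-ip

# Associate the NAT gateway with the cluster subnets.
$ az network vnet subnet update --resource-group my-vnet-rg --vnet-name my-vnet \
    --name control-plane-subnet --nat-gateway my-nat-gateway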
Private cluster with Azure Firewall
You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation.
When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints.
Private cluster with a proxy configuration
You can use a proxy with user-defined routing to allow egress to the internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy.
When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all Azure API requests are routed over Azure’s internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints.
Private cluster with no internet access
You can install a private cluster in a network that restricts all access to the internet except for the Azure APIs. This is accomplished by mirroring the release image registry locally. Your cluster must have access to the following:
- An internal registry mirror that allows for pulling container images
- Access to Azure APIs
With these requirements available, you can use user-defined routing to create private clusters with no public endpoints.
5.9.4. About reusing a VNet for your OpenShift Container Platform cluster
In OpenShift Container Platform 4.8, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules.
By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet.
5.9.4.1. Requirements for using your VNet
When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet:
- Subnets
- Route tables
- VNets
- Network Security Groups
The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.
If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster.
The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches the NICs for the virtual machines that it creates to subnets from the networking resource group.
Your VNet must meet the following characteristics:
- The VNet’s CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines.
- The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses.
You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default.
To ensure that the subnets that you provide are suitable, the installation program confirms the following data:
- All the specified subnets exist.
- There are two private subnets, one for the control plane machines and one for the compute machines.
- The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them.
If you destroy a cluster that uses an existing VNet, the VNet is not deleted.
5.9.4.1.1. Network security group requirements
The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports.
The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails.
| Port | Description | Control plane | Compute |
|---|---|---|---|
| 80 | Allows HTTP traffic | | x |
| 443 | Allows HTTPS traffic | | x |
| 6443 | Allows communication to the control plane machines | x | |
| 22623 | Allows internal communication to the machine config server for provisioning machines | x | |
Because cluster components do not modify the user-provided network security groups, a pseudo-network security group is created for the Kubernetes controllers to update without impacting the rest of the environment.
5.9.4.2. Division of permissions
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules.
The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes.
5.9.4.3. Isolation between clusters
Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet.
5.9.5. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
5.9.6. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

- 1: Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

If the ssh-agent process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

- 1: Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
5.9.7. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
ImportantThe installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
ImportantDeleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf openshift-install-linux.tar.gz

- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
5.9.8. Manually creating the installation configuration file
When installing OpenShift Container Platform on Microsoft Azure into a government region, you must manually generate your installation configuration file.
Prerequisites
- You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery.
- You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Create an installation directory to store your required installation assets in:

$ mkdir <installation_directory>

Important: You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory>.

Note: You must name this configuration file install-config.yaml.

Note: For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts.

Back up the install-config.yaml file so that you can use it to install multiple clusters.

Important: The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
5.9.8.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.
5.9.8.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | A JSON pull secret, for example: {"auths": ...} |
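Taken together, a minimal skeleton that supplies only these required parameters might look like the following sketch; the values are placeholders borrowed from the sample file later in this section, and a working file also needs the controlPlane, compute, and networking stanzas shown there:
apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster
platform:
  azure:
    region: usgovvirginia
    baseDomainResourceGroupName: resource_group
pullSecret: '{"auths": ...}'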
5.9.8.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the networking object after installation. |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. | An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
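As a worked example of how these defaults combine: with a clusterNetwork of 10.128.0.0/14 and a hostPrefix of 23, each node receives a /23 slice, which is 2^(32 - 23) - 2 = 510 usable pod IP addresses per node. A networking stanza using only the default values described above:
networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14   # pod IPs for the whole cluster
    hostPrefix: 23        # each node gets a /23 (510 usable pod IPs)
  serviceNetwork:
  - 172.30.0.0/16         # a single block; OpenShift SDN and OVN-Kubernetes support only one
  machineNetwork:
  - cidr: 10.0.0.0/16     # must match the CIDR where the nodes' NICs reside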
5.9.8.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | azure, aws, gcp, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | azure, aws, gcp, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (disabled). Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes. | Internal or External. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. |
5.9.8.1.4. Additional Azure configuration parameters
Additional Azure configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| compute.platform.azure.osDisk.diskSizeGB | The Azure disk size for the VM. | Integer that represents the size of the disk in GB. The default is 128. |
| compute.platform.azure.osDisk.diskType | Defines the type of disk. | standard_LRS, premium_LRS, or standardSSD_LRS. The default is premium_LRS. |
| controlPlane.platform.azure.osDisk.diskSizeGB | The Azure disk size for the VM. | Integer that represents the size of the disk in GB. The default is 1024. |
| controlPlane.platform.azure.osDisk.diskType | Defines the type of disk. | premium_LRS or standardSSD_LRS. The default is premium_LRS. |
| platform.azure.baseDomainResourceGroupName | The name of the resource group that contains the DNS zone for your base domain. | String, for example production_cluster. |
| platform.azure.resourceGroupName | The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. | String, for example existing_resource_group. |
| platform.azure.outboundType | The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. | LoadBalancer or UserDefinedRouting. The default is LoadBalancer. |
| platform.azure.region | The name of the Azure region that hosts your cluster. | Any valid region name, such as centralus. |
| platform.azure.zone | List of availability zones to place machines in. For high availability, specify at least two zones. | List of zones, for example ["1", "2", "3"]. |
| platform.azure.networkResourceGroupName | The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName. | String. |
| platform.azure.virtualNetwork | The name of the existing VNet that you want to deploy your cluster to. | String. |
| platform.azure.controlPlaneSubnet | The name of the existing subnet in your VNet that you want to deploy your control plane machines to. | Valid CIDR, for example 10.0.0.0/16. |
| platform.azure.computeSubnet | The name of the existing subnet in your VNet that you want to deploy your compute machines to. | Valid CIDR, for example 10.0.0.0/16. |
| platform.azure.cloudName | The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. | Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud. |
You cannot customize Azure Availability Zones or use tags to organize your Azure resources with an Azure cluster.
5.9.8.2. Sample customized install-config.yaml file for Azure
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2 3
  hyperthreading: Enabled 4
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 1024 5
        diskType: Premium_LRS
      type: Standard_D8s_v3
  replicas: 3
compute: 6 7
- hyperthreading: Enabled
  name: worker
  platform:
    azure:
      type: Standard_D2s_v3
      osDisk:
        diskSizeGB: 512 8
        diskType: Standard_LRS
      zones: 9
      - "1"
      - "2"
      - "3"
  replicas: 5
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    baseDomainResourceGroupName: resource_group 11
    region: usgovvirginia
    resourceGroupName: existing_resource_group 12
    networkResourceGroupName: vnet_resource_group 13
    virtualNetwork: vnet 14
    controlPlaneSubnet: control_plane_subnet 15
    computeSubnet: compute_subnet 16
    outboundType: UserDefinedRouting 17
    cloudName: AzureUSGovernmentCloud 18
pullSecret: '{"auths": ...}' 19
fips: false 20
sshKey: ssh-ed25519 AAAA... 21
publish: Internal 22
- 1 10 19: Required.
- 2 6: If you do not provide these parameters and values, the installation program provides the default value.
- 3 7: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 4: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.
- 5 8: You can specify the size of the disk to use in GB. The minimum recommendation for control plane nodes (also known as the master nodes) is 1024 GB.
- 9: Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
- 11: Specify the name of the resource group that contains the DNS zone for your base domain.
- 12: Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
- 13: If you use an existing VNet, specify the name of the resource group that contains it.
- 14: If you use an existing VNet, specify its name.
- 15: If you use an existing VNet, specify the name of the subnet to host the control plane machines.
- 16: If you use an existing VNet, specify the name of the subnet to host the compute machines.
- 17: You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet.
- 18: Specify the name of the Azure cloud environment to deploy your cluster to. Set AzureUSGovernmentCloud to deploy to a Microsoft Azure Government (MAG) region. The default value is AzurePublicCloud.
- 20: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- 21: You can optionally provide the sshKey value that you use to access the machines in your cluster. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- 22: How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External.
5.9.8.3. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...
- 1: A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2: A proxy URL to use for creating HTTPS connections outside the cluster.
- 3: A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4: If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Note: The installation program does not support the proxy readinessEndpoints field.
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
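After installation, you can confirm the resulting proxy configuration by inspecting that object; for example, assuming the oc CLI is installed and you are logged in to the cluster:
$ oc get proxy/cluster -o yaml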
5.9.9. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2
- 1: For <installation_directory>, specify the location of your customized ./install-config.yaml file.
- 2: To view different installation details, specify warn, debug, or error instead of info.
Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
Note: The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
Important:
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
5.9.10. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
Unpack the archive:
$ tar xvzf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
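On any platform, a quick way to confirm that the binary works is to print its client version; for example:
$ oc version --client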
5.9.11. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
- 1: For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
5.9.12. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
5.9.13. Next steps
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
5.10. Installing a cluster on Azure using ARM templates
In OpenShift Container Platform version 4.8, you can install a cluster on Microsoft Azure by using infrastructure that you provide.
Several Azure Resource Manager (ARM) templates are provided to assist in completing these steps or to help model your own.
The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure that you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. You are also free to create the required resources through other methods; the templates are just an example.
5.10.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured an Azure account to host the cluster.
- You downloaded the Azure CLI and installed it on your computer. See Install the Azure CLI in the Azure documentation. The documentation below was last tested using version 2.2.0 of the Azure CLI. Azure CLI commands might perform differently based on the version you use.
- If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to.
Note: Be sure to also review this site list if you are configuring a proxy.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
5.10.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
5.10.3. Configuring your Azure project
Before you can install OpenShift Container Platform, you must configure an Azure project to host it.
All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.
5.10.3.1. Azure account limits
The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters.
Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores.
Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure.
The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters.
| Component | Number of components required by default | Default Azure limit | Description |
|---|---|---|---|
| vCPU | 40 | 20 per region | A default cluster requires 40 vCPUs, so you must increase the account limit. By default, each cluster creates one bootstrap machine, which is removed after installation, three control plane machines, and three compute machines. Because the bootstrap machine uses Standard_D4s_v3 machines, which use 4 vCPUs, the control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the worker machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 40 vCPUs. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. By default, the installation program distributes control plane and compute machines across all availability zones within a region. To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. |
| OS Disk | 7 | | The VM OS disk must be able to sustain a minimum throughput of 5000 IOPS / 200 MBps. This throughput can be provided by having a minimum of 1 TiB Premium SSD (P30). In Azure, disk performance is directly dependent on SSD disk sizes, so to achieve the throughput supported by a Standard_D8s_v3 VM, or other similar machine types, at least a P30 disk is required. Host caching must be set to ReadOnly for low read latency and high read IOPS and throughput. |
| VNet | 1 | 1000 per region | Each default cluster requires one Virtual Network (VNet), which contains two subnets. |
| Network interfaces | 6 | 65,536 per region | Each default cluster requires six network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. |
| Network security groups | 2 | 5000 | Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: the control plane group allows the control plane machines to be reached on port 6443 from anywhere, and the node group allows worker nodes to be reached from the internet on ports 80 and 443. |
| Network load balancers | 3 | 1000 per region | Each cluster creates the following load balancers: a default public load balancer that balances requests to ports 80 and 443 across worker machines, an internal private load balancer that balances requests to ports 6443 and 22623 across control plane machines, and an external public load balancer that balances requests to port 6443 across control plane machines. If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. |
| Public IP addresses | 3 | | Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. |
| Private IP addresses | 7 | | The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. |
| Spot VM vCPUs (optional) | 0. If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. | 20 per region | This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note: Using spot VMs for control plane nodes is not recommended. |
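One way to check how much of your regional vCPU quota is already consumed before you install is the Azure CLI usage query; a minimal example, assuming the Azure CLI is installed and you are logged in (the region is a placeholder):
$ az vm list-usage --location <region> --output table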
5.10.3.2. Configuring a public DNS zone in Azure
To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster.
Procedure
Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source.
Note: For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation.
- If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation.
Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses.
Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com.
- If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain.
You can view Azure’s DNS solution by visiting this example for creating DNS zones.
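If you prefer the CLI to the portal for this step, the following is a sketch of creating the zone and extracting its authoritative name servers with the Azure CLI; the resource group and domain names are placeholders:
$ az network dns zone create --resource-group <dns_resource_group> --name clusters.openshiftcorp.com
$ az network dns zone show --resource-group <dns_resource_group> --name clusters.openshiftcorp.com --query nameServers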
5.10.3.3. Increasing Azure account limits
To increase an account limit, file a support request on the Azure portal.
You can increase only one type of quota per support request.
Procedure
- From the Azure portal, click Help + support in the lower left corner.
Click New support request and then select the required values:
- From the Issue type list, select Service and subscription limits (quotas).
- From the Subscription list, select the subscription to modify.
- From the Quota type list, select the quota to increase. For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster.
- Click Next: Solutions.
On the Problem Details page, provide the required information for your quota increase:
- Click Provide details and provide the required details in the Quota details window.
- In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details.
- Click Next: Review + create and then click Create.
5.10.3.4. Certificate signing requests management
Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
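For example, after installation you can list pending CSRs and approve them with the oc CLI; a minimal sketch (review each request before approving it):
$ oc get csr
$ oc adm certificate approve <csr_name>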
5.10.3.5. Required Azure roles
OpenShift Container Platform needs a service principal so it can manage Microsoft Azure resources. Before you can create a service principal, your Azure account subscription must have the following roles:
- User Access Administrator
- Owner
To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation.
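To confirm that your account holds these roles before you continue, one option is to list your role assignments with the Azure CLI; the sign-in name below is a placeholder:
$ az role assignment list --assignee <username>@<tenant_domain> --output table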
5.10.3.6. Creating a service principal
Because OpenShift Container Platform and its installation program must create Microsoft Azure resources through Azure Resource Manager, you must create a service principal to represent it.
Prerequisites
- Install or update the Azure CLI.
- Install the jq package.
- Your Azure account has the required roles for the subscription that you use.
Procedure
Log in to the Azure CLI:
$ az login
Log in to Azure in the web console by using your credentials.
If your Azure account uses subscriptions, ensure that you are using the right subscription.
View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster:
$ az account list --refresh
Example output
[
  {
    "cloudName": "AzureCloud",
    "id": "9bab1460-96d5-40b3-a78e-17b15e978a80",
    "isDefault": true,
    "name": "Subscription Name",
    "state": "Enabled",
    "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee",
    "user": {
      "name": "you@example.com",
      "type": "user"
    }
  }
]
View your active account details and confirm that the tenantId value matches the subscription you want to use:
$ az account show
Example output
{
  "environmentName": "AzureCloud",
  "id": "9bab1460-96d5-40b3-a78e-17b15e978a80",
  "isDefault": true,
  "name": "Subscription Name",
  "state": "Enabled",
  "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1
  "user": {
    "name": "you@example.com",
    "type": "user"
  }
}
- 1: Ensure that the value of the tenantId parameter is the UUID of the correct subscription.
If you are not using the right subscription, change the active subscription:
$ az account set -s <id> 1
- 1: Substitute the value of the id for the subscription that you want to use for <id>.
If you changed the active subscription, display your account information again:
$ az account show
Example output
{
  "environmentName": "AzureCloud",
  "id": "33212d16-bdf6-45cb-b038-f6565b61edda",
  "isDefault": true,
  "name": "Subscription Name",
  "state": "Enabled",
  "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee",
  "user": {
    "name": "you@example.com",
    "type": "user"
  }
}
- Record the values of the tenantId and id parameters from the previous output. You need these values during OpenShift Container Platform installation.
Create the service principal for your account:
$ az ad sp create-for-rbac --role Contributor --name <service_principal> 1
- 1: Replace <service_principal> with the name to assign to the service principal.
Example output
Changing "<service_principal>" to a valid URI of "http://<service_principal>", which is the required format used for service principal names
Retrying role assignment creation: 1/36
Retrying role assignment creation: 2/36
Retrying role assignment creation: 3/36
Retrying role assignment creation: 4/36
{
  "appId": "8bd0d04d-0ac2-43a8-928d-705c598c6956",
  "displayName": "<service_principal>",
  "name": "http://<service_principal>",
  "password": "ac461d78-bf4b-4387-ad16-7e32e328aec6",
  "tenant": "6048c7e9-b2ad-488d-a54e-dc3f6be6a7ee"
}
- Record the values of the appId and password parameters from the previous output. You need these values during OpenShift Container Platform installation.
Grant additional permissions to the service principal.
- You must always add the Contributor and User Access Administrator roles to the app registration service principal so the cluster can assign credentials for its components.
- To operate the Cloud Credential Operator (CCO) in mint mode, the app registration service principal also requires the Azure Active Directory Graph/Application.ReadWrite.OwnedBy API permission.
- To operate the CCO in passthrough mode, the app registration service principal does not require additional API permissions.
For more information about CCO modes, see "About the Cloud Credential Operator" in the "Managing cloud provider credentials" section of the Authentication and authorization guide.
Note: If you limit the service principal scope of the OpenShift Container Platform installation program to an already existing Azure resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying a cluster using the installation program deletes this resource group.
To assign the User Access Administrator role, run the following command:
$ az role assignment create --role "User Access Administrator" \
    --assignee-object-id $(az ad sp list --filter "appId eq '<appId>'" \
    | jq '.[0].id' -r) 1
- 1: Replace <appId> with the appId parameter value for your service principal.
To assign the Azure Active Directory Graph permission, run the following command:
$ az ad app permission add --id <appId> \ 1
    --api 00000002-0000-0000-c000-000000000000 \
    --api-permissions 824c81eb-e3f8-4ee6-8f6d-de7f50d565b7=Role
- 1: Replace <appId> with the appId parameter value for your service principal.
Example output
Invoking "az ad app permission grant --id 46d33abc-b8a3-46d8-8c84-f0fd58177435 --api 00000002-0000-0000-c000-000000000000" is needed to make the change effective
For more information about the specific permissions that you grant with this command, see the GUID Table for Windows Azure Active Directory Permissions.
Approve the permissions request. If your account does not have the Azure Active Directory tenant administrator role, follow the guidelines for your organization to request that the tenant administrator approve your permissions request.
$ az ad app permission grant --id <appId> \ 1
    --api 00000002-0000-0000-c000-000000000000
- 1: Replace <appId> with the appId parameter value for your service principal.
5.10.3.7. Supported Azure regions
The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription.
Supported Azure public regions
- australiacentral (Australia Central)
- australiaeast (Australia East)
- australiasoutheast (Australia South East)
- brazilsouth (Brazil South)
- canadacentral (Canada Central)
- canadaeast (Canada East)
- centralindia (Central India)
- centralus (Central US)
- eastasia (East Asia)
- eastus (East US)
- eastus2 (East US 2)
- francecentral (France Central)
- germanywestcentral (Germany West Central)
- japaneast (Japan East)
- japanwest (Japan West)
- koreacentral (Korea Central)
- koreasouth (Korea South)
- northcentralus (North Central US)
- northeurope (North Europe)
- norwayeast (Norway East)
- southafricanorth (South Africa North)
- southcentralus (South Central US)
- southeastasia (Southeast Asia)
- southindia (South India)
- switzerlandnorth (Switzerland North)
- uaenorth (UAE North)
- uksouth (UK South)
- ukwest (UK West)
- westcentralus (West Central US)
- westeurope (West Europe)
- westindia (West India)
- westus (West US)
- westus2 (West US 2)
Supported Azure Government regions
Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6:
- usgovtexas (US Gov Texas)
- usgovvirginia (US Gov Virginia)
You can reference all available MAG regions in the Azure documentation. Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested.
5.10.4. Selecting an Azure Marketplace image
If you are deploying an OpenShift Container Platform cluster using the Azure Marketplace offering, you must first obtain the Azure Marketplace image. The installation program uses this image to deploy worker nodes. When obtaining your image, consider the following:
- While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher.
- The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you are going to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image.
Prerequisites
- You have installed the Azure CLI client (az).
- Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client.
Procedure
Display all of the available OpenShift Container Platform images by running one of the following commands:
North America:
$ az vm image list --all --offer rh-ocp-worker --publisher redhat -o table
Example output
Offer          Publisher  Sku                 Urn                                                      Version
-------------  ---------  ------------------  -------------------------------------------------------  --------------
rh-ocp-worker  RedHat     rh-ocp-worker       RedHat:rh-ocp-worker:rh-ocp-worker:4.8.2021122100        4.8.2021122100
rh-ocp-worker  RedHat     rh-ocp-worker-gen1  RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100   4.8.2021122100
EMEA:
$ az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table
Example output
Offer          Publisher       Sku                 Urn                                                              Version
-------------  --------------  ------------------  ---------------------------------------------------------------  --------------
rh-ocp-worker  redhat-limited  rh-ocp-worker       redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100        4.8.2021122100
rh-ocp-worker  redhat-limited  rh-ocp-worker-gen1  redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100   4.8.2021122100
Note: Regardless of the version of OpenShift Container Platform that you install, the correct version of the Azure Marketplace image to use is 4.8.x. If required, your VMs are automatically upgraded as part of the installation process.
Inspect the image for your offer by running one of the following commands:
North America:
$ az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
EMEA:
$ az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
Review the terms of the offer by running one of the following commands:
North America:
$ az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
EMEA:
$ az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
Accept the terms of the offering by running one of the following commands:
North America:
$ az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
EMEA:
$ az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
- Record the image details of your offer and use them to update the 06_workers.json Azure Resource Manager (ARM) template. Update the storageProfile.imageReference field by deleting the id parameter and adding the offer, publisher, sku, and version parameters by using the values from your offer, as shown in the sketch after this list. You can find a sample template in the "Creating additional worker machines in Azure" section.
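A sketch of what the updated storageProfile.imageReference stanza in 06_workers.json might look like, using the North America values from the example output above; treat the exact version string as a placeholder for the one you recorded:
"storageProfile": {
  "imageReference": {
    "offer": "rh-ocp-worker",
    "publisher": "redhat",
    "sku": "rh-ocp-worker",
    "version": "4.8.2021122100"
  }
}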
5.10.5. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.
Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf openshift-install-linux.tar.gz
- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
5.10.6. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
- 1: Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
- 1: Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program.
5.10.7. Creating the installation files for Azure
To install OpenShift Container Platform on Microsoft Azure using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation.
5.10.7.1. Optional: Creating a separate /var partition
It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.
OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example:
-
/var/lib/containers: Holds container-related content that can grow as more images and containers are added to a system. -
/var/lib/etcd: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. -
/var: Holds data that you might want to keep separate for purposes such as auditing.
Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.
Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.
If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section.
Procedure
Create a directory to hold the OpenShift Container Platform installation files:
$ mkdir $HOME/clusterconfig
Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted:
$ openshift-install create manifests --dir $HOME/clusterconfig
Example output
? SSH Public Key ...
INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials"
INFO Consuming Install Config from target directory
INFO Manifests created in: $HOME/clusterconfig/manifests and $HOME/clusterconfig/openshift
Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory:
$ ls $HOME/clusterconfig/openshift/
Example output
99_kubeadmin-password-secret.yaml
99_openshift-cluster-api_master-machines-0.yaml
99_openshift-cluster-api_master-machines-1.yaml
99_openshift-cluster-api_master-machines-2.yaml
...
Create a Butane config that configures the additional partition. For example, name the file $HOME/clusterconfig/98-var-partition.bu, change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition:
variant: openshift
version: 4.9.0
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-partition
storage:
  disks:
  - device: /dev/<device_name> 1
    partitions:
    - label: var
      start_mib: <partition_start_offset> 2
      size_mib: <partition_size> 3
  filesystems:
  - device: /dev/disk/by-partlabel/var
    path: /var
    format: xfs
    mount_options: [defaults, prjquota] 4
    with_mount_unit: true
- 1: The storage device name of the disk that you want to partition.
- 2: When adding a data partition to the boot disk, a minimum value of 25000 MiB (mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.
- 3: The size of the data partition in mebibytes.
- 4: The prjquota mount option must be enabled for filesystems used for container storage.
Note: When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name.
Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command:
$ butane $HOME/clusterconfig/98-var-partition.bu -o $HOME/clusterconfig/openshift/98-var-partition.yaml
Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories:
$ openshift-install create ignition-configs --dir $HOME/clusterconfig
$ ls $HOME/clusterconfig/
auth  bootstrap.ign  master.ign  metadata.json  worker.ign
Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.
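After the cluster is running, you can spot-check that the separate partition exists on a worker node; a hedged example using oc debug (the node name is a placeholder):
$ oc debug node/<worker_node> -- chroot /host lsblk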
5.10.7.2. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
- Obtain service principal permissions at the subscription level.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> 1
- 1: For <installation_directory>, specify the directory name to store the files that the installation program creates.
Important: Specify an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
NoteFor production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your
ssh-agentprocess uses.- Select azure as the platform to target.
If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal:
-
azure subscription id: The subscription ID to use for the cluster. Specify the
idvalue in your account output. -
azure tenant id: The tenant ID. Specify the
tenantIdvalue in your account output. -
azure service principal client id: The value of the
appIdparameter for the service principal. -
azure service principal client secret: The value of the
passwordparameter for the service principal.
-
azure subscription id: The subscription ID to use for the cluster. Specify the
- Select the region to deploy the cluster to.
- Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.
Enter a descriptive name for your cluster.
Important: All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
Optional: If you do not want the cluster to provision compute machines, empty the compute pool by editing the resulting install-config.yaml file to set replicas to 0 for the compute pool:

compute:
- hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 0 1

- 1
- Set to 0.
- Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.

Important: The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
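For example, a minimal backup step before you continue, assuming your installation directory is ./clusterconfig and a ./backups directory that you created (both hypothetical paths):

$ cp ./clusterconfig/install-config.yaml ./backups/install-config-$(date +%Y%m%d).yaml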
5.10.7.3. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

Note: The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...

- 1
- A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2
- A proxy URL to use for creating HTTPS connections outside the cluster.
- 3
- A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4
- If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
Note: The installation program does not support the proxy readinessEndpoints field.
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster and uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it has a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
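After the cluster is running, you can inspect the resulting Proxy object to confirm that your settings were applied. A verification sketch:

$ oc get proxy/cluster -o yaml

The spec section should contain the values from your install-config.yaml file, and status.noProxy should include the automatically populated networks described above.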
5.10.7.4. Exporting common variables for ARM templates
You must export a common set of variables that are used with the provided Azure Resource Manager (ARM) templates to assist in completing a user-provisioned infrastructure installation on Microsoft Azure.
Specific ARM templates can also require additional exported variables, which are detailed in their related procedures.
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Export common variables found in the install-config.yaml to be used by the provided ARM templates:

$ export CLUSTER_NAME=<cluster_name> 1
$ export AZURE_REGION=<azure_region> 2
$ export SSH_KEY=<ssh_key> 3
$ export BASE_DOMAIN=<base_domain> 4
$ export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5

- 1
- The value of the .metadata.name attribute from the install-config.yaml file.
- 2
- The region to deploy the cluster into, for example centralus. This is the value of the .platform.azure.region attribute from the install-config.yaml file.
- 3
- The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it contains spaces. This is the value of the .sshKey attribute from the install-config.yaml file.
- 4
- The base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. This is the value of the .baseDomain attribute from the install-config.yaml file.
- 5
- The resource group where the public DNS zone exists. This is the value of the .platform.azure.baseDomainResourceGroupName attribute from the install-config.yaml file.
For example:
$ export CLUSTER_NAME=test-cluster
$ export AZURE_REGION=centralus
$ export SSH_KEY="ssh-rsa xxx/xxx/xxx= user@email.com"
$ export BASE_DOMAIN=example.com
$ export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster

Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

- 1
- For <installation_directory>, specify the path to the directory that you stored the installation files in.
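Before continuing, you might want to confirm that every required variable is set. A small sanity-check sketch, assuming a bash shell (it uses bash indirect expansion):

$ for v in CLUSTER_NAME AZURE_REGION SSH_KEY BASE_DOMAIN BASE_DOMAIN_RESOURCE_GROUP; do echo "${v}=${!v:-<unset>}"; done

Any variable that prints <unset> must be exported before you run the ARM template deployments.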
5.10.7.5. Creating the Kubernetes manifest and Ignition config files
Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines.
The installation program transforms the installation configuration file into Kubernetes manifests. The manifests are wrapped into the Ignition config files, which are later used to configure the cluster machines.
- The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Prerequisites
- You obtained the OpenShift Container Platform installation program.
- You created the install-config.yaml installation configuration file.
Procedure
Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster:
$ ./openshift-install create manifests --dir <installation_directory> 1

- 1
- For <installation_directory>, specify the installation directory that contains the install-config.yaml file you created.
Remove the Kubernetes manifest files that define the control plane machines:
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml

By removing these files, you prevent the cluster from automatically generating control plane machines.
Remove the Kubernetes manifest files that define the worker machines:
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml

Because you create and manage the worker machines yourself, you do not need to initialize these machines.
Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:
- Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.
- Locate the mastersSchedulable parameter and ensure that it is set to false.
- Save and exit the file.
Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: null
  name: cluster
spec:
  baseDomain: example.openshift.com
  privateZone:
    id: mycluster-100419-private-zone
  publicZone:
    id: example.openshift.com
status: {}

If you do so, you must add ingress DNS records manually in a later step.
When configuring Azure on user-provisioned infrastructure, you must export some common variables defined in the manifest files to use later in the Azure Resource Manager (ARM) templates:
Export the infrastructure ID by using the following command:
$ export INFRA_ID=<infra_id> 1

- 1
- The OpenShift Container Platform cluster has been assigned an identifier (INFRA_ID) in the form of <cluster_name>-<random_string>. This will be used as the base name for most resources created using the provided ARM templates. This is the value of the .status.infrastructureName attribute from the manifests/cluster-infrastructure-02-config.yml file.
Export the resource group by using the following command:
$ export RESOURCE_GROUP=<resource_group> 1

- 1
- All resources created in this Azure deployment exist as part of a resource group. The resource group name is also based on the INFRA_ID, in the form of <cluster_name>-<random_string>-rg. This is the value of the .status.platformStatus.azure.resourceGroupName attribute from the manifests/cluster-infrastructure-02-config.yml file.
To create the Ignition configuration files, run the following command from the directory that contains the installation program:
$ ./openshift-install create ignition-configs --dir <installation_directory> 1

- 1
- For <installation_directory>, specify the same installation directory.
Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory:

.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
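As an alternative to reading the values from the manifest files, the infrastructure ID is also recorded in the metadata.json file that the installation program writes to the installation directory. A sketch, assuming jq is installed and that the metadata.json format (with its infraID field) has not changed:

$ export INFRA_ID=$(jq -r .infraID <installation_directory>/metadata.json)
$ export RESOURCE_GROUP="${INFRA_ID}-rg"

The ${INFRA_ID}-rg value matches the default resource group name form described above.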
5.10.8. Creating the Azure resource group and identity
You must create a Microsoft Azure resource group and an identity for that resource group. These are both used during the installation of your OpenShift Container Platform cluster on Azure.
Prerequisites
- Configure an Azure account.
- Generate the Ignition config files for your cluster.
Procedure
Create the resource group in a supported Azure region:
$ az group create --name ${RESOURCE_GROUP} --location ${AZURE_REGION}

Create an Azure identity for the resource group:

$ az identity create -g ${RESOURCE_GROUP} -n ${INFRA_ID}-identity

This is used to grant the required access to Operators in your cluster. For example, this allows the Ingress Operator to create a public IP and its load balancer. You must assign the Azure identity to a role.
Grant the Contributor role to the Azure identity:
Export the following variables required by the Azure role assignment:
$ export PRINCIPAL_ID=`az identity show -g ${RESOURCE_GROUP} -n ${INFRA_ID}-identity --query principalId --out tsv`
$ export RESOURCE_GROUP_ID=`az group show -g ${RESOURCE_GROUP} --query id --out tsv`

Assign the Contributor role to the identity:
$ az role assignment create --assignee "${PRINCIPAL_ID}" --role 'Contributor' --scope "${RESOURCE_GROUP_ID}"
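You can optionally confirm the role assignment before proceeding. A verification sketch using the az CLI:

$ az role assignment list --assignee "${PRINCIPAL_ID}" --scope "${RESOURCE_GROUP_ID}" --query "[].roleDefinitionName" -o tsv

The output should include Contributor.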
5.10.9. Uploading the RHCOS cluster image and bootstrap Ignition config file
The Azure client does not support deployments based on files existing locally; therefore, you must copy and store the RHCOS virtual hard disk (VHD) cluster image and bootstrap Ignition config file in a storage container so they are accessible during deployment.
Prerequisites
- Configure an Azure account.
- Generate the Ignition config files for your cluster.
Procedure
Create an Azure storage account to store the VHD cluster image:
$ az storage account create -g ${RESOURCE_GROUP} --location ${AZURE_REGION} --name ${CLUSTER_NAME}sa --kind Storage --sku Standard_LRS

Warning: The Azure storage account name must be between 3 and 24 characters in length and use numbers and lower-case letters only. If your CLUSTER_NAME variable does not follow these restrictions, you must manually define the Azure storage account name. For more information on Azure storage account name restrictions, see Resolve errors for storage account names in the Azure documentation.

Export the storage account key as an environment variable:

$ export ACCOUNT_KEY=`az storage account keys list -g ${RESOURCE_GROUP} --account-name ${CLUSTER_NAME}sa --query "[0].value" -o tsv`

Choose the RHCOS version to use and export the URL of its VHD to an environment variable:

$ export VHD_URL=`curl -s https://raw.githubusercontent.com/openshift/installer/release-4.8/data/data/rhcos.json | jq -r .azure.url`

Important: The RHCOS images might not change with every release of OpenShift Container Platform. You must specify an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available.

Copy the chosen VHD to a blob:

$ az storage container create --name vhd --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY}
$ az storage blob copy start --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} --destination-blob "rhcos.vhd" --destination-container vhd --source-uri "${VHD_URL}"

To track the progress of the VHD copy task, run this script:

status="unknown"
while [ "$status" != "success" ]
do
  status=`az storage blob show --container-name vhd --name "rhcos.vhd" --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} -o tsv --query properties.copy.status`
  echo $status
done

Create a blob storage container and upload the generated bootstrap.ign file:

$ az storage container create --name files --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} --public-access blob
$ az storage blob upload --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} -c "files" -f "<installation_directory>/bootstrap.ign" -n "bootstrap.ign"
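To confirm that both blobs are in place before you continue, you can list them. A verification sketch:

$ az storage blob list --container-name vhd --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} --query "[].name" -o tsv
$ az storage blob list --container-name files --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} --query "[].name" -o tsv

The first command should list rhcos.vhd and the second should list bootstrap.ign.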
5.10.10. Example for creating DNS zones
DNS records are required for clusters that use user-provisioned infrastructure. You should choose the DNS strategy that fits your scenario.
For this example, Azure’s DNS solution is used, so you will create a new public DNS zone for external (internet) visibility and a private DNS zone for internal cluster resolution.
The public DNS zone is not required to exist in the same resource group as the cluster deployment and might already exist in your organization for the desired base domain. If that is the case, you can skip creating the public DNS zone; be sure the installation config you generated earlier reflects that scenario.
Prerequisites
- Configure an Azure account.
- Generate the Ignition config files for your cluster.
Procedure
Create the new public DNS zone in the resource group exported in the BASE_DOMAIN_RESOURCE_GROUP environment variable:

$ az network dns zone create -g ${BASE_DOMAIN_RESOURCE_GROUP} -n ${CLUSTER_NAME}.${BASE_DOMAIN}

You can skip this step if you are using a public DNS zone that already exists.
Create the private DNS zone in the same resource group as the rest of this deployment:
$ az network private-dns zone create -g ${RESOURCE_GROUP} -n ${CLUSTER_NAME}.${BASE_DOMAIN}
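You can verify that both zones exist with the az CLI. A verification sketch:

$ az network dns zone show -g ${BASE_DOMAIN_RESOURCE_GROUP} -n ${CLUSTER_NAME}.${BASE_DOMAIN} --query nameServers
$ az network private-dns zone show -g ${RESOURCE_GROUP} -n ${CLUSTER_NAME}.${BASE_DOMAIN} --query name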
You can learn more about configuring a public DNS zone in Azure in the section on that topic earlier in this document.
5.10.11. Creating a VNet in Azure
You must create a virtual network (VNet) in Microsoft Azure for your OpenShift Container Platform cluster to use. You can customize the VNet to meet your requirements. One way to create the VNet is to modify the provided Azure Resource Manager (ARM) template.
If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- Configure an Azure account.
- Generate the Ignition config files for your cluster.
Procedure
- Copy the template from the ARM template for the VNet section of this topic and save it as 01_vnet.json in your cluster’s installation directory. This template describes the VNet that your cluster requires.
- Create the deployment by using the az CLI:

$ az deployment group create -g ${RESOURCE_GROUP} \
  --template-file "<installation_directory>/01_vnet.json" \
  --parameters baseName="${INFRA_ID}" 1

- 1
- The base name to be used in resource names; this is usually the cluster’s infrastructure ID.
Link the VNet template to the private DNS zone:
$ az network private-dns link vnet create -g ${RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n ${INFRA_ID}-network-link -v "${INFRA_ID}-vnet" -e false
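To confirm that the deployment created the expected network objects, you can query the VNet and its subnets. A verification sketch:

$ az network vnet show -g ${RESOURCE_GROUP} -n ${INFRA_ID}-vnet --query "subnets[].name" -o tsv

The output should list the ${INFRA_ID}-master-subnet and ${INFRA_ID}-worker-subnet subnets defined by the template.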
5.10.11.1. ARM template for the VNet
You can use the following Azure Resource Manager (ARM) template to deploy the VNet that you need for your OpenShift Container Platform cluster:
Example 5.1. 01_vnet.json ARM template
{
"$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"baseName" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
}
}
},
"variables" : {
"location" : "[resourceGroup().location]",
"virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
"addressPrefix" : "10.0.0.0/16",
"masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]",
"masterSubnetPrefix" : "10.0.0.0/24",
"nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]",
"nodeSubnetPrefix" : "10.0.1.0/24",
"clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]"
},
"resources" : [
{
"apiVersion" : "2018-12-01",
"type" : "Microsoft.Network/virtualNetworks",
"name" : "[variables('virtualNetworkName')]",
"location" : "[variables('location')]",
"dependsOn" : [
"[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]"
],
"properties" : {
"addressSpace" : {
"addressPrefixes" : [
"[variables('addressPrefix')]"
]
},
"subnets" : [
{
"name" : "[variables('masterSubnetName')]",
"properties" : {
"addressPrefix" : "[variables('masterSubnetPrefix')]",
"serviceEndpoints": [],
"networkSecurityGroup" : {
"id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]"
}
}
},
{
"name" : "[variables('nodeSubnetName')]",
"properties" : {
"addressPrefix" : "[variables('nodeSubnetPrefix')]",
"serviceEndpoints": [],
"networkSecurityGroup" : {
"id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]"
}
}
}
]
}
},
{
"type" : "Microsoft.Network/networkSecurityGroups",
"name" : "[variables('clusterNsgName')]",
"apiVersion" : "2018-10-01",
"location" : "[variables('location')]",
"properties" : {
"securityRules" : [
{
"name" : "apiserver_in",
"properties" : {
"protocol" : "Tcp",
"sourcePortRange" : "*",
"destinationPortRange" : "6443",
"sourceAddressPrefix" : "*",
"destinationAddressPrefix" : "*",
"access" : "Allow",
"priority" : 101,
"direction" : "Inbound"
}
}
]
}
}
]
}
5.10.12. Deploying the RHCOS cluster image for the Azure infrastructure
You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Microsoft Azure for your OpenShift Container Platform nodes.
Prerequisites
- Configure an Azure account.
- Generate the Ignition config files for your cluster.
- Store the RHCOS virtual hard disk (VHD) cluster image in an Azure storage container.
- Store the bootstrap Ignition config file in an Azure storage container.
Procedure
- Copy the template from the ARM template for image storage section of this topic and save it as 02_storage.json in your cluster’s installation directory. This template describes the image storage that your cluster requires.
- Export the RHCOS VHD blob URL as a variable:

$ export VHD_BLOB_URL=`az storage blob url --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -o tsv`

- Deploy the cluster image:

$ az deployment group create -g ${RESOURCE_GROUP} \
  --template-file "<installation_directory>/02_storage.json" \
  --parameters vhdBlobURL="${VHD_BLOB_URL}" \ 1
  --parameters baseName="${INFRA_ID}" 2

- 1
- The blob URL of the RHCOS VHD to use to create master and worker machines.
- 2
- The base name to be used in resource names; this is usually the cluster’s infrastructure ID.
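To confirm that the image resource was created from the uploaded VHD, you can query it. A verification sketch:

$ az image show -g ${RESOURCE_GROUP} -n ${INFRA_ID}-image --query "storageProfile.osDisk.blobUri" -o tsv

The output should match the value of ${VHD_BLOB_URL}.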
5.10.12.1. ARM template for image storage
You can use the following Azure Resource Manager (ARM) template to deploy the stored Red Hat Enterprise Linux CoreOS (RHCOS) image that you need for your OpenShift Container Platform cluster:
Example 5.2. 02_storage.json ARM template
{
"$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"baseName" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
}
},
"vhdBlobURL" : {
"type" : "string",
"metadata" : {
"description" : "URL pointing to the blob where the VHD to be used to create master and worker machines is located"
}
}
},
"variables" : {
"location" : "[resourceGroup().location]",
"imageName" : "[concat(parameters('baseName'), '-image')]"
},
"resources" : [
{
"apiVersion" : "2018-06-01",
"type": "Microsoft.Compute/images",
"name": "[variables('imageName')]",
"location" : "[variables('location')]",
"properties": {
"storageProfile": {
"osDisk": {
"osType": "Linux",
"osState": "Generalized",
"blobUri": "[parameters('vhdBlobURL')]",
"storageAccountType": "Standard_LRS"
}
}
}
}
]
}
5.10.13. Networking requirements for user-provisioned infrastructure
All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files.
5.10.13.1. Setting the cluster node hostnames through DHCP
On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node.
Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation.
5.10.13.2. Network connectivity requirements
You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster.
This section provides details about the ports that are required.
In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat.
Ports used for all-machine to all-machine communications:

| Protocol | Port | Description |
|---|---|---|
| ICMP | N/A | Network reachability tests |
| TCP | 1936 | Metrics |
| | 9000-9999 | Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099 |
| | 10250-10259 | The default ports that Kubernetes reserves |
| | 10256 | openshift-sdn |
| UDP | 4789 | VXLAN and Geneve |
| | 6081 | VXLAN and Geneve |
| | 9000-9999 | Host level services, including the node exporter on ports 9100-9101 |
| | 500 | IPsec IKE packets |
| | 4500 | IPsec NAT-T packets |
| TCP/UDP | 30000-32767 | Kubernetes node port |
| ESP | N/A | IPsec Encapsulating Security Payload (ESP) |

Ports used for all-machine to control plane communications:

| Protocol | Port | Description |
|---|---|---|
| TCP | 6443 | Kubernetes API |

Ports used for control plane machine to control plane machine communications:

| Protocol | Port | Description |
|---|---|---|
| TCP | 2379-2380 | etcd server and peer ports |
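Once machines are provisioned, you can spot-check reachability of individual ports between hosts. A sketch using nc (netcat), where <node_ip> is a placeholder for the address of the machine you are testing:

$ nc -zv <node_ip> 6443
$ nc -zv <node_ip> 10250

The -z flag only scans for a listening service without sending data, and -v prints the result.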
5.10.14. Creating networking and load balancing components in Azure
You must configure networking and load balancing in Microsoft Azure for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Azure Resource Manager (ARM) template.
If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- Configure an Azure account.
- Generate the Ignition config files for your cluster.
- Create and configure a VNet and associated subnets in Azure.
Procedure
- Copy the template from the ARM template for the network and load balancers section of this topic and save it as 03_infra.json in your cluster’s installation directory. This template describes the networking and load balancing objects that your cluster requires.
- Create the deployment by using the az CLI:

$ az deployment group create -g ${RESOURCE_GROUP} \
  --template-file "<installation_directory>/03_infra.json" \
  --parameters privateDNSZoneName="${CLUSTER_NAME}.${BASE_DOMAIN}" \ 1
  --parameters baseName="${INFRA_ID}" 2

- 1
- The name of the private DNS zone.
- 2
- The base name to be used in resource names; this is usually the cluster’s infrastructure ID.

- Create an api DNS record in the public zone for the API public load balancer. The ${BASE_DOMAIN_RESOURCE_GROUP} variable must point to the resource group where the public DNS zone exists.

Export the following variable:

$ export PUBLIC_IP=`az network public-ip list -g ${RESOURCE_GROUP} --query "[?name=='${INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv`

Create the DNS record in a new public zone:

$ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n api -a ${PUBLIC_IP} --ttl 60

If you are adding the cluster to an existing public zone, you can create the DNS record in it instead:

$ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${BASE_DOMAIN} -n api.${CLUSTER_NAME} -a ${PUBLIC_IP} --ttl 60
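You can verify that the api record resolves to the public load balancer address. A verification sketch using dig:

$ dig +short api.${CLUSTER_NAME}.${BASE_DOMAIN}

The output should match the value of ${PUBLIC_IP}. Newly created records can take a short time to propagate.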
5.10.14.1. ARM template for the network and load balancers
You can use the following Azure Resource Manager (ARM) template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster:
Example 5.3. 03_infra.json ARM template
{
"$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"baseName" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
}
},
"privateDNSZoneName" : {
"type" : "string",
"metadata" : {
"description" : "Name of the private DNS zone"
}
}
},
"variables" : {
"location" : "[resourceGroup().location]",
"virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
"virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]",
"masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]",
"masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]",
"masterPublicIpAddressName" : "[concat(parameters('baseName'), '-master-pip')]",
"masterPublicIpAddressID" : "[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]",
"masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]",
"masterLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]",
"internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]",
"internalLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]",
"skuName": "Standard"
},
"resources" : [
{
"apiVersion" : "2018-12-01",
"type" : "Microsoft.Network/publicIPAddresses",
"name" : "[variables('masterPublicIpAddressName')]",
"location" : "[variables('location')]",
"sku": {
"name": "[variables('skuName')]"
},
"properties" : {
"publicIPAllocationMethod" : "Static",
"dnsSettings" : {
"domainNameLabel" : "[variables('masterPublicIpAddressName')]"
}
}
},
{
"apiVersion" : "2018-12-01",
"type" : "Microsoft.Network/loadBalancers",
"name" : "[variables('masterLoadBalancerName')]",
"location" : "[variables('location')]",
"sku": {
"name": "[variables('skuName')]"
},
"dependsOn" : [
"[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]"
],
"properties" : {
"frontendIPConfigurations" : [
{
"name" : "public-lb-ip",
"properties" : {
"publicIPAddress" : {
"id" : "[variables('masterPublicIpAddressID')]"
}
}
}
],
"backendAddressPools" : [
{
"name" : "public-lb-backend"
}
],
"loadBalancingRules" : [
{
"name" : "api-internal",
"properties" : {
"frontendIPConfiguration" : {
"id" :"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip')]"
},
"backendAddressPool" : {
"id" : "[concat(variables('masterLoadBalancerID'), '/backendAddressPools/public-lb-backend')]"
},
"protocol" : "Tcp",
"loadDistribution" : "Default",
"idleTimeoutInMinutes" : 30,
"frontendPort" : 6443,
"backendPort" : 6443,
"probe" : {
"id" : "[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]"
}
}
}
],
"probes" : [
{
"name" : "api-internal-probe",
"properties" : {
"protocol" : "Https",
"port" : 6443,
"requestPath": "/readyz",
"intervalInSeconds" : 10,
"numberOfProbes" : 3
}
}
]
}
},
{
"apiVersion" : "2018-12-01",
"type" : "Microsoft.Network/loadBalancers",
"name" : "[variables('internalLoadBalancerName')]",
"location" : "[variables('location')]",
"sku": {
"name": "[variables('skuName')]"
},
"properties" : {
"frontendIPConfigurations" : [
{
"name" : "internal-lb-ip",
"properties" : {
"privateIPAllocationMethod" : "Dynamic",
"subnet" : {
"id" : "[variables('masterSubnetRef')]"
},
"privateIPAddressVersion" : "IPv4"
}
}
],
"backendAddressPools" : [
{
"name" : "internal-lb-backend"
}
],
"loadBalancingRules" : [
{
"name" : "api-internal",
"properties" : {
"frontendIPConfiguration" : {
"id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]"
},
"frontendPort" : 6443,
"backendPort" : 6443,
"enableFloatingIP" : false,
"idleTimeoutInMinutes" : 30,
"protocol" : "Tcp",
"enableTcpReset" : false,
"loadDistribution" : "Default",
"backendAddressPool" : {
"id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]"
},
"probe" : {
"id" : "[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]"
}
}
},
{
"name" : "sint",
"properties" : {
"frontendIPConfiguration" : {
"id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]"
},
"frontendPort" : 22623,
"backendPort" : 22623,
"enableFloatingIP" : false,
"idleTimeoutInMinutes" : 30,
"protocol" : "Tcp",
"enableTcpReset" : false,
"loadDistribution" : "Default",
"backendAddressPool" : {
"id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]"
},
"probe" : {
"id" : "[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]"
}
}
}
],
"probes" : [
{
"name" : "api-internal-probe",
"properties" : {
"protocol" : "Https",
"port" : 6443,
"requestPath": "/readyz",
"intervalInSeconds" : 10,
"numberOfProbes" : 3
}
},
{
"name" : "sint-probe",
"properties" : {
"protocol" : "Https",
"port" : 22623,
"requestPath": "/healthz",
"intervalInSeconds" : 10,
"numberOfProbes" : 3
}
}
]
}
},
{
"apiVersion": "2018-09-01",
"type": "Microsoft.Network/privateDnsZones/A",
"name": "[concat(parameters('privateDNSZoneName'), '/api')]",
"location" : "[variables('location')]",
"dependsOn" : [
"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]"
],
"properties": {
"ttl": 60,
"aRecords": [
{
"ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]"
}
]
}
},
{
"apiVersion": "2018-09-01",
"type": "Microsoft.Network/privateDnsZones/A",
"name": "[concat(parameters('privateDNSZoneName'), '/api-int')]",
"location" : "[variables('location')]",
"dependsOn" : [
"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]"
],
"properties": {
"ttl": 60,
"aRecords": [
{
"ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]"
}
]
}
}
]
}
5.10.15. Creating the bootstrap machine in Azure
You must create the bootstrap machine in Microsoft Azure to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Azure Resource Manager (ARM) template.
If you do not use the provided ARM template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- Configure an Azure account.
- Generate the Ignition config files for your cluster.
- Create and configure a VNet and associated subnets in Azure.
- Create and configure networking and load balancers in Azure.
- Create control plane and compute roles.
Procedure
- Copy the template from the ARM template for the bootstrap machine section of this topic and save it as 04_bootstrap.json in your cluster’s installation directory. This template describes the bootstrap machine that your cluster requires.
- Export the following variables required by the bootstrap machine deployment:

$ export BOOTSTRAP_URL=`az storage blob url --account-name ${CLUSTER_NAME}sa --account-key ${ACCOUNT_KEY} -c "files" -n "bootstrap.ign" -o tsv`
$ export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url ${BOOTSTRAP_URL} '{ignition:{version:$v,config:{replace:{source:$url}}}}' | base64 | tr -d '\n'`

- Create the deployment by using the az CLI:

$ az deployment group create -g ${RESOURCE_GROUP} \
  --template-file "<installation_directory>/04_bootstrap.json" \
  --parameters bootstrapIgnition="${BOOTSTRAP_IGNITION}" \ 1
  --parameters sshKeyData="${SSH_KEY}" \ 2
  --parameters baseName="${INFRA_ID}" 3

- 1
- The bootstrap Ignition content.
- 2
- The SSH RSA public key file as a string.
- 3
- The base name to be used in resource names; this is usually the cluster’s infrastructure ID.
5.10.15.1. ARM template for the bootstrap machine
You can use the following Azure Resource Manager (ARM) template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster:
Example 5.4. 04_bootstrap.json ARM template
{
"$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"baseName" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
}
},
"bootstrapIgnition" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Bootstrap ignition content for the bootstrap cluster"
}
},
"sshKeyData" : {
"type" : "securestring",
"metadata" : {
"description" : "SSH RSA public key file as a string."
}
},
"bootstrapVMSize" : {
"type" : "string",
"defaultValue" : "Standard_D4s_v3",
"allowedValues" : [
"Standard_A2",
"Standard_A3",
"Standard_A4",
"Standard_A5",
"Standard_A6",
"Standard_A7",
"Standard_A8",
"Standard_A9",
"Standard_A10",
"Standard_A11",
"Standard_D2",
"Standard_D3",
"Standard_D4",
"Standard_D11",
"Standard_D12",
"Standard_D13",
"Standard_D14",
"Standard_D2_v2",
"Standard_D3_v2",
"Standard_D4_v2",
"Standard_D5_v2",
"Standard_D8_v3",
"Standard_D11_v2",
"Standard_D12_v2",
"Standard_D13_v2",
"Standard_D14_v2",
"Standard_E2_v3",
"Standard_E4_v3",
"Standard_E8_v3",
"Standard_E16_v3",
"Standard_E32_v3",
"Standard_E64_v3",
"Standard_E2s_v3",
"Standard_E4s_v3",
"Standard_E8s_v3",
"Standard_E16s_v3",
"Standard_E32s_v3",
"Standard_E64s_v3",
"Standard_G1",
"Standard_G2",
"Standard_G3",
"Standard_G4",
"Standard_G5",
"Standard_DS2",
"Standard_DS3",
"Standard_DS4",
"Standard_DS11",
"Standard_DS12",
"Standard_DS13",
"Standard_DS14",
"Standard_DS2_v2",
"Standard_DS3_v2",
"Standard_DS4_v2",
"Standard_DS5_v2",
"Standard_DS11_v2",
"Standard_DS12_v2",
"Standard_DS13_v2",
"Standard_DS14_v2",
"Standard_GS1",
"Standard_GS2",
"Standard_GS3",
"Standard_GS4",
"Standard_GS5",
"Standard_D2s_v3",
"Standard_D4s_v3",
"Standard_D8s_v3"
],
"metadata" : {
"description" : "The size of the Bootstrap Virtual Machine"
}
}
},
"variables" : {
"location" : "[resourceGroup().location]",
"virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
"virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]",
"masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]",
"masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]",
"masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]",
"internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]",
"sshKeyPath" : "/home/core/.ssh/authorized_keys",
"identityName" : "[concat(parameters('baseName'), '-identity')]",
"vmName" : "[concat(parameters('baseName'), '-bootstrap')]",
"nicName" : "[concat(variables('vmName'), '-nic')]",
"imageName" : "[concat(parameters('baseName'), '-image')]",
"clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]",
"sshPublicIpAddressName" : "[concat(variables('vmName'), '-ssh-pip')]"
},
"resources" : [
{
"apiVersion" : "2018-12-01",
"type" : "Microsoft.Network/publicIPAddresses",
"name" : "[variables('sshPublicIpAddressName')]",
"location" : "[variables('location')]",
"sku": {
"name": "Standard"
},
"properties" : {
"publicIPAllocationMethod" : "Static",
"dnsSettings" : {
"domainNameLabel" : "[variables('sshPublicIpAddressName')]"
}
}
},
{
"apiVersion" : "2018-06-01",
"type" : "Microsoft.Network/networkInterfaces",
"name" : "[variables('nicName')]",
"location" : "[variables('location')]",
"dependsOn" : [
"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]"
],
"properties" : {
"ipConfigurations" : [
{
"name" : "pipConfig",
"properties" : {
"privateIPAllocationMethod" : "Dynamic",
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]"
},
"subnet" : {
"id" : "[variables('masterSubnetRef')]"
},
"loadBalancerBackendAddressPools" : [
{
"id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]"
},
{
"id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]"
}
]
}
}
]
}
},
{
"apiVersion" : "2018-06-01",
"type" : "Microsoft.Compute/virtualMachines",
"name" : "[variables('vmName')]",
"location" : "[variables('location')]",
"identity" : {
"type" : "userAssigned",
"userAssignedIdentities" : {
"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {}
}
},
"dependsOn" : [
"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
],
"properties" : {
"hardwareProfile" : {
"vmSize" : "[parameters('bootstrapVMSize')]"
},
"osProfile" : {
"computerName" : "[variables('vmName')]",
"adminUsername" : "core",
"customData" : "[parameters('bootstrapIgnition')]",
"linuxConfiguration" : {
"disablePasswordAuthentication" : true,
"ssh" : {
"publicKeys" : [
{
"path" : "[variables('sshKeyPath')]",
"keyData" : "[parameters('sshKeyData')]"
}
]
}
}
},
"storageProfile" : {
"imageReference": {
"id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]"
},
"osDisk" : {
"name": "[concat(variables('vmName'),'_OSDisk')]",
"osType" : "Linux",
"createOption" : "FromImage",
"managedDisk": {
"storageAccountType": "Premium_LRS"
},
"diskSizeGB" : 100
}
},
"networkProfile" : {
"networkInterfaces" : [
{
"id" : "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
}
]
}
}
},
{
"apiVersion" : "2018-06-01",
"type": "Microsoft.Network/networkSecurityGroups/securityRules",
"name" : "[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]",
"location" : "[variables('location')]",
"dependsOn" : [
"[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]"
],
"properties": {
"protocol" : "Tcp",
"sourcePortRange" : "*",
"destinationPortRange" : "22",
"sourceAddressPrefix" : "*",
"destinationAddressPrefix" : "*",
"access" : "Allow",
"priority" : 100,
"direction" : "Inbound"
}
}
]
}
5.10.16. Creating the control plane machines in Azure
You must create the control plane machines in Microsoft Azure for your cluster to use. One way to create these machines is to modify the provided Azure Resource Manager (ARM) template.
If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- Configure an Azure account.
- Generate the Ignition config files for your cluster.
- Create and configure a VNet and associated subnets in Azure.
- Create and configure networking and load balancers in Azure.
- Create control plane and compute roles.
- Create the bootstrap machine.
Procedure
- Copy the template from the ARM template for control plane machines section of this topic and save it as 05_masters.json in your cluster’s installation directory. This template describes the control plane machines that your cluster requires.
- Export the following variable needed by the control plane machine deployment:

$ export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\n'`

- Create the deployment by using the az CLI:

$ az deployment group create -g ${RESOURCE_GROUP} \
  --template-file "<installation_directory>/05_masters.json" \
  --parameters masterIgnition="${MASTER_IGNITION}" \ 1
  --parameters sshKeyData="${SSH_KEY}" \ 2
  --parameters privateDNSZoneName="${CLUSTER_NAME}.${BASE_DOMAIN}" \ 3
  --parameters baseName="${INFRA_ID}" 4

- 1
- The Ignition content for the control plane nodes (also known as the master nodes).
- 2
- The SSH RSA public key file as a string.
- 3
- The name of the private DNS zone to which the control plane nodes are attached.
- 4
- The base name to be used in resource names; this is usually the cluster’s infrastructure ID.
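After the deployment completes, you can list the virtual machines in the resource group to confirm that the bootstrap and control plane machines exist. A verification sketch:

$ az vm list -g ${RESOURCE_GROUP} --query "[].name" -o tsv

The output should include ${INFRA_ID}-bootstrap and, by default, the machines ${INFRA_ID}-master-0 through ${INFRA_ID}-master-2.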
5.10.16.1. ARM template for control plane machines
You can use the following Azure Resource Manager (ARM) template to deploy the control plane machines that you need for your OpenShift Container Platform cluster:
Example 5.5. 05_masters.json ARM template
{
"$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"baseName" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
}
},
"masterIgnition" : {
"type" : "string",
"metadata" : {
"description" : "Ignition content for the master nodes"
}
},
"numberOfMasters" : {
"type" : "int",
"defaultValue" : 3,
"minValue" : 2,
"maxValue" : 30,
"metadata" : {
"description" : "Number of OpenShift masters to deploy"
}
},
"sshKeyData" : {
"type" : "securestring",
"metadata" : {
"description" : "SSH RSA public key file as a string"
}
},
"privateDNSZoneName" : {
"type" : "string",
"metadata" : {
"description" : "Name of the private DNS zone the master nodes are going to be attached to"
}
},
"masterVMSize" : {
"type" : "string",
"defaultValue" : "Standard_D8s_v3",
"allowedValues" : [
"Standard_A2",
"Standard_A3",
"Standard_A4",
"Standard_A5",
"Standard_A6",
"Standard_A7",
"Standard_A8",
"Standard_A9",
"Standard_A10",
"Standard_A11",
"Standard_D2",
"Standard_D3",
"Standard_D4",
"Standard_D11",
"Standard_D12",
"Standard_D13",
"Standard_D14",
"Standard_D2_v2",
"Standard_D3_v2",
"Standard_D4_v2",
"Standard_D5_v2",
"Standard_D8_v3",
"Standard_D11_v2",
"Standard_D12_v2",
"Standard_D13_v2",
"Standard_D14_v2",
"Standard_E2_v3",
"Standard_E4_v3",
"Standard_E8_v3",
"Standard_E16_v3",
"Standard_E32_v3",
"Standard_E64_v3",
"Standard_E2s_v3",
"Standard_E4s_v3",
"Standard_E8s_v3",
"Standard_E16s_v3",
"Standard_E32s_v3",
"Standard_E64s_v3",
"Standard_G1",
"Standard_G2",
"Standard_G3",
"Standard_G4",
"Standard_G5",
"Standard_DS2",
"Standard_DS3",
"Standard_DS4",
"Standard_DS11",
"Standard_DS12",
"Standard_DS13",
"Standard_DS14",
"Standard_DS2_v2",
"Standard_DS3_v2",
"Standard_DS4_v2",
"Standard_DS5_v2",
"Standard_DS11_v2",
"Standard_DS12_v2",
"Standard_DS13_v2",
"Standard_DS14_v2",
"Standard_GS1",
"Standard_GS2",
"Standard_GS3",
"Standard_GS4",
"Standard_GS5",
"Standard_D2s_v3",
"Standard_D4s_v3",
"Standard_D8s_v3"
],
"metadata" : {
"description" : "The size of the Master Virtual Machines"
}
},
"diskSizeGB" : {
"type" : "int",
"defaultValue" : 1024,
"metadata" : {
"description" : "Size of the Master VM OS disk, in GB"
}
}
},
"variables" : {
"location" : "[resourceGroup().location]",
"virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
"virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]",
"masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]",
"masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]",
"masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]",
"internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]",
"sshKeyPath" : "/home/core/.ssh/authorized_keys",
"identityName" : "[concat(parameters('baseName'), '-identity')]",
"imageName" : "[concat(parameters('baseName'), '-image')]",
"copy" : [
{
"name" : "vmNames",
"count" : "[parameters('numberOfMasters')]",
"input" : "[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]"
}
]
},
"resources" : [
{
"apiVersion" : "2018-06-01",
"type" : "Microsoft.Network/networkInterfaces",
"copy" : {
"name" : "nicCopy",
"count" : "[length(variables('vmNames'))]"
},
"name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]",
"location" : "[variables('location')]",
"properties" : {
"ipConfigurations" : [
{
"name" : "pipConfig",
"properties" : {
"privateIPAllocationMethod" : "Dynamic",
"subnet" : {
"id" : "[variables('masterSubnetRef')]"
},
"loadBalancerBackendAddressPools" : [
{
"id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]"
},
{
"id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]"
}
]
}
}
]
}
},
{
"apiVersion": "2018-09-01",
"type": "Microsoft.Network/privateDnsZones/SRV",
"name": "[concat(parameters('privateDNSZoneName'), '/_etcd-server-ssl._tcp')]",
"location" : "[variables('location')]",
"properties": {
"ttl": 60,
"copy": [{
"name": "srvRecords",
"count": "[length(variables('vmNames'))]",
"input": {
"priority": 0,
"weight" : 10,
"port" : 2380,
"target" : "[concat('etcd-', copyIndex('srvRecords'), '.', parameters('privateDNSZoneName'))]"
}
}]
}
},
{
"apiVersion": "2018-09-01",
"type": "Microsoft.Network/privateDnsZones/A",
"copy" : {
"name" : "dnsCopy",
"count" : "[length(variables('vmNames'))]"
},
"name": "[concat(parameters('privateDNSZoneName'), '/etcd-', copyIndex())]",
"location" : "[variables('location')]",
"dependsOn" : [
"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]"
],
"properties": {
"ttl": 60,
"aRecords": [
{
"ipv4Address": "[reference(concat(variables('vmNames')[copyIndex()], '-nic')).ipConfigurations[0].properties.privateIPAddress]"
}
]
}
},
{
"apiVersion" : "2018-06-01",
"type" : "Microsoft.Compute/virtualMachines",
"copy" : {
"name" : "vmCopy",
"count" : "[length(variables('vmNames'))]"
},
"name" : "[variables('vmNames')[copyIndex()]]",
"location" : "[variables('location')]",
"identity" : {
"type" : "userAssigned",
"userAssignedIdentities" : {
"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {}
}
},
"dependsOn" : [
"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]",
"[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'), '/A/etcd-', copyIndex())]",
"[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'), '/SRV/_etcd-server-ssl._tcp')]"
],
"properties" : {
"hardwareProfile" : {
"vmSize" : "[parameters('masterVMSize')]"
},
"osProfile" : {
"computerName" : "[variables('vmNames')[copyIndex()]]",
"adminUsername" : "core",
"customData" : "[parameters('masterIgnition')]",
"linuxConfiguration" : {
"disablePasswordAuthentication" : true,
"ssh" : {
"publicKeys" : [
{
"path" : "[variables('sshKeyPath')]",
"keyData" : "[parameters('sshKeyData')]"
}
]
}
}
},
"storageProfile" : {
"imageReference": {
"id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]"
},
"osDisk" : {
"name": "[concat(variables('vmNames')[copyIndex()], '_OSDisk')]",
"osType" : "Linux",
"createOption" : "FromImage",
"caching": "ReadOnly",
"writeAcceleratorEnabled": false,
"managedDisk": {
"storageAccountType": "Premium_LRS"
},
"diskSizeGB" : "[parameters('diskSizeGB')]"
}
},
"networkProfile" : {
"networkInterfaces" : [
{
"id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]",
"properties": {
"primary": false
}
}
]
}
}
}
]
}
5.10.17. Wait for bootstrap completion and remove bootstrap resources in Azure
After you create all of the required infrastructure in Microsoft Azure, wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program.
Prerequisites
- Configure an Azure account.
- Generate the Ignition config files for your cluster.
- Create and configure a VNet and associated subnets in Azure.
- Create and configure networking and load balancers in Azure.
- Create control plane and compute roles.
- Create the bootstrap machine.
- Create the control plane machines.
Procedure
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1
    --log-level info 2

- 1
- For <installation_directory>, specify the path to the directory that you stored the installation files in.
- 2
- To view different installation details, specify warn, debug, or error instead of info.

If the command exits without a FATAL warning, your production control plane has initialized.

Delete the bootstrap resources:

$ az network nsg rule delete -g ${RESOURCE_GROUP} --nsg-name ${INFRA_ID}-nsg --name bootstrap_ssh_in
$ az vm stop -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap
$ az vm deallocate -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap
$ az vm delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap --yes
$ az disk delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap_OSDisk --no-wait --yes
$ az network nic delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap-nic --no-wait
$ az storage blob delete --account-key ${ACCOUNT_KEY} --account-name ${CLUSTER_NAME}sa --container-name files --name bootstrap.ign
$ az network public-ip delete -g ${RESOURCE_GROUP} --name ${INFRA_ID}-bootstrap-ssh-pip
If you do not delete the bootstrap server, installation might not succeed due to API traffic being routed to the bootstrap server.
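With the kubeadmin credentials exported earlier in the KUBECONFIG variable, you can confirm that the control plane nodes registered with the cluster, for example:

$ oc get nodes

The control plane nodes should report a Ready status once initialization finishes.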
5.10.18. Creating additional worker machines in Azure
You can create worker machines in Microsoft Azure for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform.
In this example, you manually launch one instance by using the Azure Resource Manager (ARM) template. Additional instances can be launched by including additional resources of type 06_workers.json in the file.
If you do not use the provided ARM template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
Prerequisites
- Configure an Azure account.
- Generate the Ignition config files for your cluster.
- Create and configure a VNet and associated subnets in Azure.
- Create and configure networking and load balancers in Azure.
- Create control plane and compute roles.
- Create the bootstrap machine.
- Create the control plane machines.
Procedure
- Copy the template from the ARM template for worker machines section of this topic and save it as 06_workers.json in your cluster’s installation directory. This template describes the worker machines that your cluster requires.
- Export the following variable needed by the worker machine deployment:

$ export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\n'`

- Create the deployment by using the az CLI:

$ az deployment group create -g ${RESOURCE_GROUP} \
  --template-file "<installation_directory>/06_workers.json" \
  --parameters workerIgnition="${WORKER_IGNITION}" \ 1
  --parameters sshKeyData="${SSH_KEY}" \ 2
  --parameters baseName="${INFRA_ID}" 3

- 1
- The Ignition content for the worker nodes.
- 2
- The SSH RSA public key file as a string.
- 3
- The base name to be used in resource names; this is usually the cluster’s infrastructure ID.
5.10.18.1. ARM template for worker machines
You can use the following Azure Resource Manager (ARM) template to deploy the worker machines that you need for your OpenShift Container Platform cluster:
Example 5.6. 06_workers.json ARM template
{
"$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"parameters" : {
"baseName" : {
"type" : "string",
"minLength" : 1,
"metadata" : {
"description" : "Base name to be used in resource names (usually the cluster's Infra ID)"
}
},
"workerIgnition" : {
"type" : "string",
"metadata" : {
"description" : "Ignition content for the worker nodes"
}
},
"numberOfNodes" : {
"type" : "int",
"defaultValue" : 3,
"minValue" : 2,
"maxValue" : 30,
"metadata" : {
"description" : "Number of OpenShift compute nodes to deploy"
}
},
"sshKeyData" : {
"type" : "securestring",
"metadata" : {
"description" : "SSH RSA public key file as a string"
}
},
"nodeVMSize" : {
"type" : "string",
"defaultValue" : "Standard_D4s_v3",
"allowedValues" : [
"Standard_A2",
"Standard_A3",
"Standard_A4",
"Standard_A5",
"Standard_A6",
"Standard_A7",
"Standard_A8",
"Standard_A9",
"Standard_A10",
"Standard_A11",
"Standard_D2",
"Standard_D3",
"Standard_D4",
"Standard_D11",
"Standard_D12",
"Standard_D13",
"Standard_D14",
"Standard_D2_v2",
"Standard_D3_v2",
"Standard_D4_v2",
"Standard_D5_v2",
"Standard_D8_v3",
"Standard_D11_v2",
"Standard_D12_v2",
"Standard_D13_v2",
"Standard_D14_v2",
"Standard_E2_v3",
"Standard_E4_v3",
"Standard_E8_v3",
"Standard_E16_v3",
"Standard_E32_v3",
"Standard_E64_v3",
"Standard_E2s_v3",
"Standard_E4s_v3",
"Standard_E8s_v3",
"Standard_E16s_v3",
"Standard_E32s_v3",
"Standard_E64s_v3",
"Standard_G1",
"Standard_G2",
"Standard_G3",
"Standard_G4",
"Standard_G5",
"Standard_DS2",
"Standard_DS3",
"Standard_DS4",
"Standard_DS11",
"Standard_DS12",
"Standard_DS13",
"Standard_DS14",
"Standard_DS2_v2",
"Standard_DS3_v2",
"Standard_DS4_v2",
"Standard_DS5_v2",
"Standard_DS11_v2",
"Standard_DS12_v2",
"Standard_DS13_v2",
"Standard_DS14_v2",
"Standard_GS1",
"Standard_GS2",
"Standard_GS3",
"Standard_GS4",
"Standard_GS5",
"Standard_D2s_v3",
"Standard_D4s_v3",
"Standard_D8s_v3"
],
"metadata" : {
"description" : "The size of the each Node Virtual Machine"
}
}
},
"variables" : {
"location" : "[resourceGroup().location]",
"virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]",
"virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]",
"nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]",
"nodeSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]",
"infraLoadBalancerName" : "[parameters('baseName')]",
"sshKeyPath" : "/home/capi/.ssh/authorized_keys",
"identityName" : "[concat(parameters('baseName'), '-identity')]",
"imageName" : "[concat(parameters('baseName'), '-image')]",
"copy" : [
{
"name" : "vmNames",
"count" : "[parameters('numberOfNodes')]",
"input" : "[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]"
}
]
},
"resources" : [
{
"apiVersion" : "2019-05-01",
"name" : "[concat('node', copyIndex())]",
"type" : "Microsoft.Resources/deployments",
"copy" : {
"name" : "nodeCopy",
"count" : "[length(variables('vmNames'))]"
},
"properties" : {
"mode" : "Incremental",
"template" : {
"$schema" : "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion" : "1.0.0.0",
"resources" : [
{
"apiVersion" : "2018-06-01",
"type" : "Microsoft.Network/networkInterfaces",
"name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]",
"location" : "[variables('location')]",
"properties" : {
"ipConfigurations" : [
{
"name" : "pipConfig",
"properties" : {
"privateIPAllocationMethod" : "Dynamic",
"subnet" : {
"id" : "[variables('nodeSubnetRef')]"
}
}
}
]
}
},
{
"apiVersion" : "2018-06-01",
"type" : "Microsoft.Compute/virtualMachines",
"name" : "[variables('vmNames')[copyIndex()]]",
"location" : "[variables('location')]",
"tags" : {
"kubernetes.io-cluster-ffranzupi": "owned"
},
"identity" : {
"type" : "userAssigned",
"userAssignedIdentities" : {
"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {}
}
},
"dependsOn" : [
"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]"
],
"properties" : {
"hardwareProfile" : {
"vmSize" : "[parameters('nodeVMSize')]"
},
"osProfile" : {
"computerName" : "[variables('vmNames')[copyIndex()]]",
"adminUsername" : "capi",
"customData" : "[parameters('workerIgnition')]",
"linuxConfiguration" : {
"disablePasswordAuthentication" : true,
"ssh" : {
"publicKeys" : [
{
"path" : "[variables('sshKeyPath')]",
"keyData" : "[parameters('sshKeyData')]"
}
]
}
}
},
"storageProfile" : {
"imageReference": {
"id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]"
},
"osDisk" : {
"name": "[concat(variables('vmNames')[copyIndex()],'_OSDisk')]",
"osType" : "Linux",
"createOption" : "FromImage",
"managedDisk": {
"storageAccountType": "Premium_LRS"
},
"diskSizeGB": 128
}
},
"networkProfile" : {
"networkInterfaces" : [
{
"id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]",
"properties": {
"primary": true
}
}
]
}
}
}
]
}
}
}
]
}
5.10.19. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
Unpack the archive:
$ tar xvzf <file>

Place the oc binary in a directory that is on your PATH.

To check your PATH, execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.

To check your PATH, open the command prompt and execute the following command:

C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.

To check your PATH, open a terminal and execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
5.10.20. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

For <installation_directory>, specify the path to the directory that you stored the installation files in.

Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output
system:admin
5.10.21. Approving the certificate signing requests for your machines
When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
Prerequisites
- You added machines to your cluster.
Procedure
Confirm that the cluster recognizes the machines:
$ oc get nodes

Example output

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   63m   v1.21.0
master-1   Ready    master   63m   v1.21.0
master-2   Ready    master   64m   v1.21.0

The output lists all of the machines that you created.
Note
The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

$ oc get csr

Example output

NAME        AGE   REQUESTOR                                                                   CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
...

In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:

Note
Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.

Note
For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name>

<csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

Note
Some Operators might not become available until some CSRs are approved.
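If you need the automated approval method that the earlier note calls for, the simplest illustration is a polling loop built from the same two commands. This sketch approves every pending CSR indiscriminately; a production approver must also verify the requestor and the node identity before approving:

$ while true; do
    # Approve any CSR that has no status yet, then wait before polling again
    oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
      | xargs --no-run-if-empty oc adm certificate approve
    sleep 60
  done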
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

$ oc get csr

Example output

NAME        AGE     REQUESTOR                                                 CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal    Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal    Pending
...

If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name>

<csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

$ oc get nodes

Example output

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.21.0
master-1   Ready    master   73m   v1.21.0
master-2   Ready    master   74m   v1.21.0
worker-0   Ready    worker   11m   v1.21.0
worker-1   Ready    worker   11m   v1.21.0

Note
It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
Additional information
- For more information on CSRs, see Certificate Signing Requests.
5.10.22. Adding the Ingress DNS records
If you removed the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements.
Prerequisites
- You deployed an OpenShift Container Platform cluster on Microsoft Azure by using infrastructure that you provisioned.
- Install the OpenShift CLI (oc).
- Install the jq package.
- Install or update the Azure CLI.
Procedure
Confirm the Ingress router has created a load balancer and populated the EXTERNAL-IP field:

$ oc -n openshift-ingress get service router-default

Example output

NAME             TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
router-default   LoadBalancer   172.30.20.10   35.130.120.110   80:32288/TCP,443:31215/TCP   20

Export the Ingress router IP as a variable:

$ export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`

Add a *.apps record to the public DNS zone.

If you are adding this cluster to a new public zone, run:

$ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps -a ${PUBLIC_IP_ROUTER} --ttl 300

If you are adding this cluster to an already existing public zone, run:

$ az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} -z ${BASE_DOMAIN} -n *.apps.${CLUSTER_NAME} -a ${PUBLIC_IP_ROUTER} --ttl 300

Add a *.apps record to the private DNS zone:

Create a *.apps record by using the following command:

$ az network private-dns record-set a create -g ${RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps --ttl 300

Add the *.apps record to the private DNS zone by using the following command:

$ az network private-dns record-set a add-record -g ${RESOURCE_GROUP} -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n *.apps -a ${PUBLIC_IP_ROUTER}
If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster’s current routes:
$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes
Example output
oauth-openshift.apps.cluster.basedomain.com
console-openshift-console.apps.cluster.basedomain.com
downloads-openshift-console.apps.cluster.basedomain.com
alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com
grafana-openshift-monitoring.apps.cluster.basedomain.com
prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com
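If you take the per-route approach, a small loop can create one public record for each host returned above. This is only a sketch: it reuses the PUBLIC_IP_ROUTER, BASE_DOMAIN_RESOURCE_GROUP, CLUSTER_NAME, and BASE_DOMAIN variables from the earlier steps and assumes every route host ends in .${CLUSTER_NAME}.${BASE_DOMAIN}:

$ for host in $(oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes); do
    # Strip the cluster domain suffix to get the record name, for example oauth-openshift.apps
    name="${host%.${CLUSTER_NAME}.${BASE_DOMAIN}}"
    az network dns record-set a add-record -g ${BASE_DOMAIN_RESOURCE_GROUP} \
      -z ${CLUSTER_NAME}.${BASE_DOMAIN} -n "${name}" -a ${PUBLIC_IP_ROUTER} --ttl 300
  done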
5.10.23. Completing an Azure installation on user-provisioned infrastructure
After you start the OpenShift Container Platform installation on Microsoft Azure user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready.
Prerequisites
- Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned Azure infrastructure.
- Install the oc CLI and log in.
Procedure
Complete the cluster installation:
$ ./openshift-install --dir <installation_directory> wait-for install-complete

Example output

INFO Waiting up to 30m0s for the cluster to initialize...

For <installation_directory>, specify the path to the directory that you stored the installation files in.

Important
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
5.10.24. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
5.11. Uninstalling a cluster on Azure
You can remove a cluster that you deployed to Microsoft Azure.
5.11.1. Removing a cluster that uses installer-provisioned infrastructure
You can remove a cluster that uses installer-provisioned infrastructure from your cloud.
After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access.
Prerequisites
- Have a copy of the installation program that you used to deploy the cluster.
- Have the files that the installation program generated when you created your cluster.
Procedure
From the directory that contains the installation program on the computer that you used to install the cluster, run the following command:
$ ./openshift-install destroy cluster \
    --dir <installation_directory> --log-level info

Note
You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster.

- Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
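After the destroy command finishes, you can optionally confirm that nothing is left behind. A sketch that assumes your cluster resources lived in the resource group named by the RESOURCE_GROUP variable; with installer-provisioned infrastructure the group itself is normally deleted, so a "not found" error here is the expected result:

$ az resource list -g ${RESOURCE_GROUP} -o table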
Chapter 6. Installing on GCP
6.1. Preparing to install on GCP
6.1.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
6.1.2. Requirements for installing OpenShift Container Platform on GCP
Before installing OpenShift Container Platform on Google Cloud Platform (GCP), you must create a service account and configure a GCP project. See Configuring a GCP project for details about creating a project, enabling API services, configuring DNS, GCP account limits, and supported GCP regions.
If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, see Manually creating IAM for GCP for other options.
6.1.3. Choosing a method to install OpenShift Container Platform on GCP
You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself.
See Installation process for more information about installer-provisioned and user-provisioned installation processes.
6.1.3.1. Installing a cluster on installer-provisioned infrastructure
You can install a cluster on GCP infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods:
- Installing a cluster quickly on GCP: You can install OpenShift Container Platform on GCP infrastructure that is provisioned by the OpenShift Container Platform installation program. You can install a cluster quickly by using the default configuration options.
- Installing a customized cluster on GCP: You can install a customized cluster on GCP infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation.
- Installing a cluster on GCP with network customizations: You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements.
- Installing a cluster on GCP in a restricted network: You can install OpenShift Container Platform on GCP on installer-provisioned infrastructure by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. While you can install OpenShift Container Platform by using the mirrored content, your cluster still requires internet access to use the GCP APIs.
- Installing a cluster into an existing Virtual Private Cloud: You can install OpenShift Container Platform on an existing GCP Virtual Private Cloud (VPC). You can use this installation method if you have constraints set by the guidelines of your company, such as limits on creating new accounts or infrastructure.
- Installing a private cluster on an existing VPC: You can install a private cluster on an existing GCP VPC. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet.
6.1.3.2. Installing a cluster on user-provisioned infrastructure
You can install a cluster on GCP infrastructure that you provision, by using one of the following methods:
- Installing a cluster on GCP with user-provisioned infrastructure: You can install OpenShift Container Platform on GCP infrastructure that you provide. You can use the provided Deployment Manager templates to assist with the installation.
- Installing a cluster with shared VPC on user-provisioned infrastructure in GCP: You can use the provided Deployment Manager templates to create GCP resources in a shared VPC infrastructure.
- Installing a cluster on GCP in a restricted network with user-provisioned infrastructure: You can install OpenShift Container Platform on GCP in a restricted network with user-provisioned infrastructure. By creating an internal mirror of the installation release content, you can install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content.
6.1.4. Next steps
6.2. Configuring a GCP project
Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it.
6.2.1. Creating a GCP project
To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster.
Procedure
Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation.
Important
Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing.
6.2.2. Enabling API services in GCP
Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation.
Prerequisites
- You created a project to host your cluster.
Procedure
Enable the following required API services in the project that hosts your cluster. See Enabling services in the GCP documentation.
Table 6.1. Required API services

| API service | Console service name |
|---|---|
| Compute Engine API | compute.googleapis.com |
| Google Cloud APIs | cloudapis.googleapis.com |
| Cloud Resource Manager API | cloudresourcemanager.googleapis.com |
| Google DNS API | dns.googleapis.com |
| IAM Service Account Credentials API | iamcredentials.googleapis.com |
| Identity and Access Management (IAM) API | iam.googleapis.com |
| Service Management API | servicemanagement.googleapis.com |
| Service Usage API | serviceusage.googleapis.com |
| Google Cloud Storage JSON API | storage-api.googleapis.com |
| Cloud Storage | storage-component.googleapis.com |
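If you prefer the CLI to the console, the same services can be enabled in a single command. A sketch that assumes the gcloud CLI is installed and your cluster project is the active configuration:

$ gcloud services enable \
    compute.googleapis.com \
    cloudapis.googleapis.com \
    cloudresourcemanager.googleapis.com \
    dns.googleapis.com \
    iamcredentials.googleapis.com \
    iam.googleapis.com \
    servicemanagement.googleapis.com \
    serviceusage.googleapis.com \
    storage-api.googleapis.com \
    storage-component.googleapis.com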
6.2.3. Configuring DNS for GCP
To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the same project that you host the OpenShift Container Platform cluster. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster.
Procedure
Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source.
Note
If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains.
Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation.
Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com.

Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation.
You typically have four name servers.
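If the zone was created with the gcloud CLI, the name servers can also be read back from it; <zone_name> is a placeholder for the name you gave the hosted zone:

$ gcloud dns managed-zones describe <zone_name> --format="value(nameServers)"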
- Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers.
- If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation.
- If you use a subdomain, follow your company’s procedures to add its delegation records to the parent domain. This process might include a request to your company’s IT department or the division that controls the root domain and DNS services for your company.
6.2.4. GCP account limits
The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default quotas do not affect your ability to install a default OpenShift Container Platform cluster.
A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys.
| Service | Component | Location | Total resources required | Resources removed after bootstrap |
|---|---|---|---|---|
| Service account | IAM | Global | 5 | 0 |
| Firewall rules | Compute | Global | 11 | 1 |
| Forwarding rules | Compute | Global | 2 | 0 |
| In-use global IP addresses | Compute | Global | 4 | 1 |
| Health checks | Compute | Global | 3 | 0 |
| Images | Compute | Global | 1 | 0 |
| Networks | Compute | Global | 2 | 0 |
| Static IP addresses | Compute | Region | 4 | 1 |
| Routers | Compute | Global | 1 | 0 |
| Routes | Compute | Global | 2 | 0 |
| Subnetworks | Compute | Global | 2 | 0 |
| Target pools | Compute | Global | 3 | 0 |
| CPUs | Compute | Region | 28 | 4 |
| Persistent disk SSD (GB) | Compute | Region | 896 | 128 |
If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region.
Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient.
If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit:
- asia-east2
- asia-northeast2
- asia-south1
- australia-southeast1
- europe-north1
- europe-west2
- europe-west3
- europe-west6
- northamerica-northeast1
- southamerica-east1
- us-west2
You can increase resource quotas from the GCP console, but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster.
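Before you install, you can check a region's current usage against its quotas from the CLI. A sketch with us-central1 standing in for your target region:

$ gcloud compute regions describe us-central1 \
    --flatten="quotas[]" \
    --format="table(quotas.metric, quotas.usage, quotas.limit)"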
6.2.5. Creating a service account in GCP
OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one.
Prerequisites
- You created a project to host your cluster.
Procedure
- Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation.
Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources.

Note
While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable.
Create the service account key in JSON format. See Creating service account keys in the GCP documentation.
The service account key is required to create a cluster.
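The console steps above also have gcloud CLI equivalents. A minimal sketch, in which the account name openshift-installer and the ${PROJECT_ID} variable are placeholder assumptions, and which takes the simpler Owner-role path that the preceding note describes:

$ gcloud iam service-accounts create openshift-installer \
    --display-name="OpenShift installer" --project=${PROJECT_ID}

$ gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member="serviceAccount:openshift-installer@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="roles/owner"

$ gcloud iam service-accounts keys create osServiceAccount.json \
    --iam-account="openshift-installer@${PROJECT_ID}.iam.gserviceaccount.com"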
6.2.5.1. Required GCP permissions
When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. To deploy an OpenShift Container Platform cluster, the service account requires the following permissions. If you deploy your cluster into an existing VPC, the service account does not require certain networking permissions, which are noted in the following lists:
Required roles for the installation program
- Compute Admin
- Security Admin
- Service Account Admin
- Service Account User
- Storage Admin
Required roles for creating network resources during installation
- DNS Administrator
Optional roles
For the cluster to create new limited credentials for its Operators, add the following role:
- Service Account Key Admin
The roles are applied to the service accounts that the control plane and compute machines use:
| Account | Roles |
|---|---|
| Control Plane | roles/compute.instanceAdmin |
| | roles/compute.networkAdmin |
| | roles/compute.securityAdmin |
| | roles/storage.admin |
| | roles/iam.serviceAccountUser |
| Compute | roles/compute.viewer |
| | roles/storage.admin |
6.2.6. Supported GCP regions
You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions:
- asia-east1 (Changhua County, Taiwan)
- asia-east2 (Hong Kong)
- asia-northeast1 (Tokyo, Japan)
- asia-northeast2 (Osaka, Japan)
- asia-northeast3 (Seoul, South Korea)
- asia-south1 (Mumbai, India)
- asia-southeast1 (Jurong West, Singapore)
- asia-southeast2 (Jakarta, Indonesia)
- australia-southeast1 (Sydney, Australia)
- europe-central2 (Warsaw, Poland)
- europe-north1 (Hamina, Finland)
- europe-west1 (St. Ghislain, Belgium)
- europe-west2 (London, England, UK)
- europe-west3 (Frankfurt, Germany)
- europe-west4 (Eemshaven, Netherlands)
- europe-west6 (Zürich, Switzerland)
- northamerica-northeast1 (Montréal, Québec, Canada)
- southamerica-east1 (São Paulo, Brazil)
- us-central1 (Council Bluffs, Iowa, USA)
- us-east1 (Moncks Corner, South Carolina, USA)
- us-east4 (Ashburn, Northern Virginia, USA)
- us-west1 (The Dalles, Oregon, USA)
- us-west2 (Los Angeles, California, USA)
- us-west3 (Salt Lake City, Utah, USA)
- us-west4 (Las Vegas, Nevada, USA)
6.2.7. Next steps
- Install an OpenShift Container Platform cluster on GCP. You can install a customized cluster or quickly install a cluster with default options.
6.3. Manually creating IAM for GCP
In environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace, you can put the Cloud Credential Operator (CCO) into manual mode before you install the cluster.
6.3.1. Alternatives to storing administrator-level secrets in the kube-system project
The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file.
If you prefer not to store an administrator-level credential secret in the cluster kube-system project, you can choose one of the following options when installing OpenShift Container Platform:
Manage cloud credentials manually:

You can set the credentialsMode parameter for the CCO to Manual to manage cloud credentials manually. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them.

Remove the administrator-level credential secret after installing OpenShift Container Platform with mint mode:

If you are using the CCO with the credentialsMode parameter set to Mint, you can remove or rotate the administrator-level credential after installing OpenShift Container Platform. Mint mode is the default configuration for the CCO. This option requires the presence of the administrator-level credential during an installation. The administrator-level credential is used during the installation to mint other credentials with some permissions granted. The original credential secret is not stored in the cluster permanently.
Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked.
For a detailed description of all available CCO credential modes and their supported platforms, see About the Cloud Credential Operator.
6.3.2. Manually create IAM
The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace.
Procedure
Change to the directory that contains the installation program and create the install-config.yaml file:

$ openshift-install create install-config --dir <installation_directory>

where <installation_directory> is the directory in which the installation program creates files.

Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual.

Example install-config.yaml configuration file

apiVersion: v1
baseDomain: cluster1.example.com
credentialsMode: Manual
compute:
- architecture: amd64
  hyperthreading: Enabled
...

The credentialsMode: Manual line is added to set the credentialsMode parameter to Manual.
To generate the manifests, run the following command from the directory that contains the installation program:

$ openshift-install create manifests --dir <installation_directory>

From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use:

$ openshift-install version

Example output

release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64

Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on:

$ oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=gcp

This command creates a YAML file for each CredentialsRequest object.

Sample CredentialsRequest object

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: openshift-image-registry-gcs
  namespace: openshift-cloud-credential-operator
spec:
  secretRef:
    name: installer-cloud-credentials
    namespace: openshift-image-registry
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: GCPProviderSpec
    predefinedRoles:
    - roles/storage.admin
    - roles/iam.serviceAccountUser
    skipServiceCheck: true

Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. The format for the secret data varies for each cloud provider; a sketch for GCP follows.
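For GCP, the secret data carries the service account key JSON. A minimal sketch of a secret that satisfies the sample CredentialsRequest above, where the base64 value is a placeholder for your encoded service account key file:

apiVersion: v1
kind: Secret
metadata:
  name: installer-cloud-credentials
  namespace: openshift-image-registry
data:
  service_account.json: <base64_encoded_gcp_service_account_file>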
From the directory that contains the installation program, proceed with your cluster creation:

$ openshift-install create cluster --dir <installation_directory>

Important
Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. For details, see the "Upgrading clusters with manually maintained credentials" section of the installation content for your cloud provider.
6.3.3. Upgrading clusters with manually maintained credentials
The Cloud Credential Operator (CCO) Upgradeable status for a cluster with manually maintained credentials is False by default.
- For minor releases, for example, from 4.7 to 4.8, this status prevents you from upgrading until you have addressed any updated permissions and annotated the CloudCredential resource to indicate that the permissions are updated as needed for the next version. This annotation changes the Upgradeable status to True.
- For z-stream releases, for example, from 4.8.9 to 4.8.10, no permissions are added or changed, so the upgrade is not blocked.
Before upgrading a cluster with manually maintained credentials, you must create any new credentials for the release image that you are upgrading to. Additionally, you must review the required permissions for existing credentials and accommodate any new permissions requirements in the new release for those components.
Procedure
Extract and examine the CredentialsRequest custom resource for the new release.

The "Manually creating IAM" section of the installation content for your cloud provider explains how to obtain and use the credentials required for your cloud.
Update the manually maintained credentials on your cluster:
- Create new secrets for any CredentialsRequest custom resources that are added by the new release image.
- If the CredentialsRequest custom resources for any existing credentials that are stored in secrets have changed their permissions requirements, update the permissions as required.
When all of the secrets are correct for the new release, indicate that the cluster is ready to upgrade:
- Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role.
- Edit the CloudCredential resource to add an upgradeable-to annotation within the metadata field:

$ oc edit cloudcredential cluster

Text to add

...
metadata:
  annotations:
    cloudcredential.openshift.io/upgradeable-to: <version_number>
...

Where <version_number> is the version you are upgrading to, in the format x.y.z. For example, 4.8.2 for OpenShift Container Platform 4.8.2.

It might take several minutes after adding the annotation for the upgradeable status to change.
Verify that the CCO is upgradeable:
- In the Administrator perspective of the web console, navigate to Administration → Cluster Settings.
- To view the CCO status details, click cloud-credential in the Cluster Operators list.
- If the Upgradeable status in the Conditions section is False, verify that the upgradeable-to annotation is free of typographical errors.
When the Upgradeable status in the Conditions section is True, you can begin the OpenShift Container Platform upgrade.
6.3.4. Mint mode
Mint mode is the default Cloud Credential Operator (CCO) credentials mode for OpenShift Container Platform on platforms that support it. In this mode, the CCO uses the provided administrator-level cloud credential to run the cluster. Mint mode is supported for AWS and GCP.
In mint mode, the admin credential is stored in the kube-system namespace and then used by the CCO to process the CredentialsRequest objects in the cluster and create users for each with specific permissions.
The benefits of mint mode include:
- Each cluster component has only the permissions it requires
- Automatic, ongoing reconciliation for cloud credentials, including additional credentials or permissions that might be required for upgrades
One drawback is that mint mode requires admin credential storage in a cluster kube-system secret.
6.3.5. Mint mode with removal or rotation of the administrator-level credential
Currently, this mode is only supported on AWS and GCP.
In this mode, a user installs OpenShift Container Platform with an administrator-level credential just like the normal mint mode. However, this process removes the administrator-level credential secret from the cluster post-installation.
The administrator can have the Cloud Credential Operator make its own request for a read-only credential that allows it to verify whether all CredentialsRequest objects have their required permissions; thus, the administrator-level credential is not required unless something needs to be changed. After the associated credential is removed, it can be deleted or deactivated on the underlying cloud, if desired.
Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked.
The administrator-level credential is not stored in the cluster permanently.
Following these steps still requires the administrator-level credential in the cluster for brief periods of time. It also requires manually re-instating the secret with administrator-level credentials for each upgrade.
6.3.6. Next steps
Install an OpenShift Container Platform cluster:
- Installing a cluster quickly on GCP with default options on installer-provisioned infrastructure
- Install a cluster with cloud customizations on installer-provisioned infrastructure
- Install a cluster with network customizations on installer-provisioned infrastructure
6.4. Installing a cluster quickly on GCP
In OpenShift Container Platform version 4.8, you can install a cluster on Google Cloud Platform (GCP) that uses the default configuration options.
6.4.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured a GCP project to host the cluster.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
6.4.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
6.4.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
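Once the cluster is running, node access with this key then looks like the following, where the node address is a placeholder:

$ ssh -i <path>/<file_name> core@<node_address>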
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>

Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

Note
If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

Note
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

If the ssh-agent process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

Note
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name>

Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519
Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the full path to your service account private key file:

$ export GOOGLE_APPLICATION_CREDENTIALS="<your_service_account_file>"

Verify that the credentials were applied:

$ gcloud auth list
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
6.4.4. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
Important
The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

Important
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf openshift-install-linux.tar.gz

- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
6.4.5. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations:
- The GOOGLE_CREDENTIALS, GOOGLE_CLOUD_KEYFILE_JSON, or GCLOUD_KEYFILE_JSON environment variables
- The ~/.gcp/osServiceAccount.json file
- The gcloud cli default credentials
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \
    --log-level=info

Important
Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Provide values at the prompts:
Optional: Select an SSH key to use to access your cluster machines.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

- Select gcp as the platform to target.
- If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file.
- Select the project ID to provision the cluster in. The default value is specified by the service account that you configured.
- Select the region to deploy the cluster to.
- Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster.
- Enter a descriptive name for your cluster. If you provide a name that is longer than 6 characters, only the first 6 characters will be used in the infrastructure ID that is generated from the cluster name.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
Note
If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

Note
The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

Important
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important
You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Optional: You can reduce the number of permissions for the service account that you used to install the cluster, as shown in the sketch after this list.

- If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role.
- If you included the Service Account Key Admin role, you can remove it.
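A minimal CLI sketch of that role swap; the ${PROJECT_ID} variable and the service account address are placeholders for your own values:

$ gcloud projects remove-iam-policy-binding ${PROJECT_ID} \
    --member="serviceAccount:<service_account_email>" \
    --role="roles/owner"

$ gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member="serviceAccount:<service_account_email>" \
    --role="roles/viewer"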
6.4.6. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
Unpack the archive:
$ tar xvzf <file>

Place the oc binary in a directory that is on your PATH.

To check your PATH, execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
6.4.7. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
- 1: For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
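As a further check, you can list the cluster nodes; the exact output depends on your cluster:
$ oc get nodes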
6.4.8. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
6.4.9. Next steps
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
6.5. Installing a cluster on GCP with customizations
In OpenShift Container Platform version 4.8, you can install a customized cluster on infrastructure that the installation program provisions on Google Cloud Platform (GCP). To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.
6.5.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured a GCP project to host the cluster.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
6.5.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
6.5.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
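For reference, once the cluster is up, a login to a node looks like the following sketch; <node_address> is a placeholder for a node IP address or hostname:
$ ssh -i <path>/<file_name> core@<node_address>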
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
- 1: Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note
If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
- 1: Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the full path to your service account private key file.
$ export GOOGLE_APPLICATION_CREDENTIALS="<your_service_account_file>"
Verify that the credentials were applied.
$ gcloud auth list
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
6.5.4. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
Important
The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.
Important
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf openshift-install-linux.tar.gz
- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
6.5.5. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP).
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
- Obtain service principal permissions at the subscription level.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> 1
- 1: For <installation_directory>, specify the directory name to store the files that the installation program creates.
Important
Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- Select gcp as the platform to target.
- If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file.
- Select the project ID to provision the cluster in. The default value is specified by the service account that you configured.
- Select the region to deploy the cluster to.
- Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
- Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
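For example, a simple copy is sufficient; the backup file name here is an arbitrary choice:
$ cp install-config.yaml install-config.yaml.backup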
6.5.5.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.
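For example, in this hypothetical snippet the misspelled controlplane key would be silently ignored, so the default controlPlane settings would be applied instead:
controlplane:   # typo: unknown field, silently ignored by openshift-install
  replicas: 5
controlPlane:   # correct field name
  replicas: 3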
6.5.5.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, gcp, openstack, ovirt, vsphere. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | A JSON pull secret, such as {"auths": ...}. |
6.5.5.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the networking object after installation. |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16. | An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
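As a worked example, the following networking stanza (matching the sample file later in this section) assigns each node a /23 subnet from the 10.128.0.0/14 cluster network, which yields 2^(32 - 23) - 2 = 510 pod IP addresses per node:
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23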
6.5.5.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. For example: sshKey: <key1> <key2> <key3> |
6.5.5.1.4. Additional Google Cloud Platform (GCP) configuration parameters
Additional GCP configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| platform.gcp.network | The name of the existing VPC that you want to deploy your cluster to. | String. |
| platform.gcp.region | The name of the GCP region that hosts your cluster. | Any valid region name, such as us-central1. |
| platform.gcp.type | The GCP machine type. | The GCP machine type. |
| platform.gcp.zones | The availability zones where the installation program creates machines for the specified MachinePool. | A list of valid GCP availability zones, such as us-central1-a, in a YAML sequence. |
| platform.gcp.controlPlaneSubnet | The name of the existing subnet in your VPC that you want to deploy your control plane machines to. | The subnet name. |
| platform.gcp.computeSubnet | The name of the existing subnet in your VPC that you want to deploy your compute machines to. | The subnet name. |
| platform.gcp.licenses | A list of license URLs that must be applied to the compute images. | Any license available with the license API, such as the license to enable nested virtualization. You cannot use this parameter with a mechanism that generates pre-built images. Using a license URL forces the installer to copy the source image before use. |
| platform.gcp.osDisk.diskSizeGB | The size of the disk in gigabytes (GB). | Any size between 16 GB and 65536 GB. |
| platform.gcp.osDisk.diskType | The type of disk. | Either the default pd-ssd or the pd-standard disk type. |
| controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.name | The name of the customer managed encryption key to be used for control plane machine disk encryption. | The encryption key name. |
| controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing | For control plane machines, the name of the KMS key ring to which the KMS key belongs. | The KMS key ring name. |
| controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.location | For control plane machines, the GCP location in which the key ring exists. For more information on KMS locations, see Google's documentation on Cloud KMS locations. | The GCP location for the key ring. |
| controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.projectID | For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. | The GCP project ID. |
| compute.platform.gcp.osDisk.encryptionKey.kmsKey.name | The name of the customer managed encryption key to be used for compute machine disk encryption. | The encryption key name. |
| compute.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing | For compute machines, the name of the KMS key ring to which the KMS key belongs. | The KMS key ring name. |
| compute.platform.gcp.osDisk.encryptionKey.kmsKey.location | For compute machines, the GCP location in which the key ring exists. For more information on KMS locations, see Google's documentation on Cloud KMS locations. | The GCP location for the key ring. |
| compute.platform.gcp.osDisk.encryptionKey.kmsKey.projectID | For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. | The GCP project ID. |
6.5.5.2. Sample customized install-config.yaml file for GCP
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com
controlPlane:
hyperthreading: Enabled
name: master
platform:
gcp:
type: n2-standard-4
zones:
- us-central1-a
- us-central1-c
osDisk:
diskType: pd-ssd
diskSizeGB: 1024
encryptionKey:
kmsKey:
name: worker-key
keyRing: test-machine-keys
location: global
projectID: project-id
replicas: 3
compute:
- hyperthreading: Enabled
name: worker
platform:
gcp:
type: n2-standard-4
zones:
- us-central1-a
- us-central1-c
osDisk:
diskType: pd-standard
diskSizeGB: 128
encryptionKey:
kmsKey:
name: worker-key
keyRing: test-machine-keys
location: global
projectID: project-id
replicas: 3
metadata:
name: test-cluster
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OpenShiftSDN
serviceNetwork:
- 172.30.0.0/16
platform:
gcp:
projectID: openshift-production
region: us-central1
pullSecret: '{"auths": ...}'
fips: false
sshKey: ssh-ed25519 AAAA...
- 1 10 11 12 13: Required. The installation program prompts you for this value.
- 2 6: If you do not provide these parameters and values, the installation program provides the default value.
- 3 7: The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 4 8: Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8, for your machines if you disable simultaneous multithreading.
- 5 9: Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information on granting the correct permissions for your service account, see "Machine management" → "Creating machine sets" → "Creating a machine set on GCP".
- 14: Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- 15: You can optionally provide the sshKey value that you use to access the machines in your cluster. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
6.5.5.3. Using custom machine types
Using a custom machine type to install an OpenShift Container Platform cluster is supported.
Consider the following when using a custom machine type:
- Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines.
The name of the custom machine type must adhere to the following syntax:
custom-<number_of_cpus>-<amount_of_memory_in_mb>
For example, custom-6-20480 specifies a machine type with 6 vCPUs and 20480 MB of memory.
As part of the installation process, you specify the custom machine type in the install-config.yaml file.
Sample install-config.yaml file with a custom machine type
compute:
- architecture: amd64
hyperthreading: Enabled
name: worker
platform:
gcp:
type: custom-6-20480
replicas: 2
controlPlane:
architecture: amd64
hyperthreading: Enabled
name: master
platform:
gcp:
type: custom-6-20480
replicas: 3
6.5.5.4. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
- You have an existing install-config.yaml file.
- You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...
- 1: A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
- 2: A proxy URL to use for creating HTTPS connections outside the cluster.
- 3: A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
- 4: If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
Note
The installation program does not support the proxy readinessEndpoints field.
- Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Only the Proxy object named cluster is supported, and no additional proxies can be created.
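After the cluster is running, you can inspect the resulting Proxy object to confirm the settings were applied; a sketch:
$ oc get proxy/cluster -o yaml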
6.5.6. Using a GCP Marketplace image
If you want to deploy an OpenShift Container Platform cluster using a GCP Marketplace image, you must create the manifests and edit the compute machine set definitions to specify the GCP Marketplace image.
Prerequisites
- You have the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Generate the installation manifests by running the following command:
$ openshift-install create manifests --dir <installation_dir>
Locate the following files:
- <installation_dir>/openshift/99_openshift-cluster-api_worker-machineset-0.yaml
- <installation_dir>/openshift/99_openshift-cluster-api_worker-machineset-1.yaml
- <installation_dir>/openshift/99_openshift-cluster-api_worker-machineset-2.yaml
In each file, edit the .spec.template.spec.providerSpec.value.disks[0].image property to reference the offer to use:
- OpenShift Container Platform: projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145
- OpenShift Platform Plus: projects/redhat-marketplace-public/global/images/redhat-coreos-opp-48-x86-64-202206140145
- OpenShift Kubernetes Engine: projects/redhat-marketplace-public/global/images/redhat-coreos-oke-48-x86-64-202206140145
Example compute machine set with the GCP Marketplace image
deletionProtection: false
disks:
- autoDelete: true
boot: true
image: projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145
labels: null
sizeGb: 128
type: pd-ssd
kind: GCPMachineProviderSpec
machineType: n2-standard-4
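One way to make the same edit in all three compute machine set files is a shell loop; this sketch assumes GNU sed and the OpenShift Container Platform offer:
$ for f in <installation_dir>/openshift/99_openshift-cluster-api_worker-machineset-{0,1,2}.yaml; do
    sed -i 's|image: .*|image: projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145|' "$f"
  done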
6.5.7. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations:
- The GOOGLE_CREDENTIALS, GOOGLE_CLOUD_KEYFILE_JSON, or GCLOUD_KEYFILE_JSON environment variables
- The ~/.gcp/osServiceAccount.json file
- The gcloud CLI default credentials
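For example, a sketch that clears each of these locations; run only the parts that apply to your system:
$ unset GOOGLE_CREDENTIALS GOOGLE_CLOUD_KEYFILE_JSON GCLOUD_KEYFILE_JSON
$ rm -f ~/.gcp/osServiceAccount.json
$ gcloud auth revoke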
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2
- 1: For <installation_directory>, specify the location of your customized ./install-config.yaml file.
- 2: To view different installation details, specify warn, debug, or error instead of info.
Note
If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
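While the deployment runs, you can follow detailed progress in the installation log; a sketch:
$ tail -f <installation_directory>/.openshift_install.log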
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
Note
The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
Important
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important
You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Optional: You can reduce the number of permissions for the service account that you used to install the cluster.
- If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role.
- If you included the Service Account Key Admin role, you can remove it.
6.5.8. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
Unpack the archive:
$ tar xvzf <file>
Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:
$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
6.5.9. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
- 1: For <installation_directory>, specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
6.5.10. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
6.5.11. Next steps
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
6.6. Installing a cluster on GCP with network customizations
In OpenShift Container Platform version 4.8, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Google Cloud Platform (GCP). By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.
You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.
6.6.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured a GCP project to host the cluster.
- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
6.6.2. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
6.6.3. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
- 1: Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note
If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
- 1: Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519.
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the full path to your service account private key file.
$ export GOOGLE_APPLICATION_CREDENTIALS="<your_service_account_file>"
Verify that the credentials were applied.
$ gcloud auth list
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
6.6.4. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on a local computer.
Prerequisites
- You have a computer that runs Linux or macOS, with 500 MB of local disk space.
Procedure
- Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files.
Important
The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.
Important
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar xvf openshift-install-linux.tar.gz
- Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
6.6.5. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP).
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
- Obtain service principal permissions at the subscription level.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory> 1
- 1: For <installation_directory>, specify the directory name to store the files that the installation program creates.
Important
Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- Select gcp as the platform to target.
- If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file.
- Select the project ID to provision the cluster in. The default value is specified by the service account that you configured.
- Select the region to deploy the cluster to.
- Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
- Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
6.6.5.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.
6.6.5.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installation program may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, gcp, openstack, ovirt, vsphere. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | A JSON pull secret, such as {"auths": ...}. |
6.6.5.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the networking object after installation. |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23, then each node is assigned a /23 subnet out of the given cidr, which allows for 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16. | An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
6.6.5.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. For example: sshKey: <key1> <key2> <key3> |
6.6.5.1.4. Additional Google Cloud Platform (GCP) configuration parameters
Additional GCP configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| platform.gcp.network | The name of the existing VPC that you want to deploy your cluster to. | String. |
| platform.gcp.region | The name of the GCP region that hosts your cluster. | Any valid region name, such as us-central1. |
| platform.gcp.type | The GCP machine type. | The GCP machine type. |
| platform.gcp.zones | The availability zones where the installation program creates machines for the specified MachinePool. | A list of valid GCP availability zones, such as us-central1-a, in a YAML sequence. |
| platform.gcp.controlPlaneSubnet | The name of the existing subnet in your VPC that you want to deploy your control plane machines to. | The subnet name. |
| platform.gcp.computeSubnet | The name of the existing subnet in your VPC that you want to deploy your compute machines to. | The subnet name. |
| platform.gcp.licenses | A list of license URLs that must be applied to the compute images. | Any license available with the license API, such as the license to enable nested virtualization. You cannot use this parameter with a mechanism that generates pre-built images. Using a license URL forces the installer to copy the source image before use. |
| platform.gcp.osDisk.diskSizeGB | The size of the disk in gigabytes (GB). | Any size between 16 GB and 65536 GB. |
| platform.gcp.osDisk.diskType | The type of disk. | Either the default pd-ssd or the pd-standard disk type. |
| controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.name | The name of the customer managed encryption key to be used for control plane machine disk encryption. | The encryption key name. |
| controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing | For control plane machines, the name of the KMS key ring to which the KMS key belongs. | The KMS key ring name. |
| controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.location | For control plane machines, the GCP location in which the key ring exists. For more information on KMS locations, see Google's documentation on Cloud KMS locations. | The GCP location for the key ring. |
| controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.projectID | For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. | The GCP project ID. |
| compute.platform.gcp.osDisk.encryptionKey.kmsKey.name | The name of the customer managed encryption key to be used for compute machine disk encryption. | The encryption key name. |
| compute.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing | For compute machines, the name of the KMS key ring to which the KMS key belongs. | The KMS key ring name. |
| compute.platform.gcp.osDisk.encryptionKey.kmsKey.location | For compute machines, the GCP location in which the key ring exists. For more information on KMS locations, see Google's documentation on Cloud KMS locations. | The GCP location for the key ring. |
| compute.platform.gcp.osDisk.encryptionKey.kmsKey.projectID | For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. | The GCP project ID. |
6.6.5.2. Sample customized install-config.yaml file for GCP
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com
controlPlane:
  hyperthreading: Enabled
  name: master
  platform:
    gcp:
      type: n2-standard-4
      zones:
      - us-central1-a
      - us-central1-c
      osDisk:
        diskType: pd-ssd
        diskSizeGB: 1024
        encryptionKey:
          kmsKey:
            name: worker-key
            keyRing: test-machine-keys
            location: global
            projectID: project-id
  replicas: 3
compute:
- hyperthreading: Enabled
  name: worker
  platform:
    gcp:
      type: n2-standard-4
      zones:
      - us-central1-a
      - us-central1-c
      osDisk:
        diskType: pd-standard
        diskSizeGB: 128
        encryptionKey:
          kmsKey:
            name: worker-key
            keyRing: test-machine-keys
            location: global
            projectID: project-id
  replicas: 3
metadata:
  name: test-cluster
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  gcp:
    projectID: openshift-production
    region: us-central1
pullSecret: '{"auths": ...}'
fips: false
sshKey: ssh-ed25519 AAAA...
- 1 10 12 13 14
- Required. The installation program prompts you for this value.
- 2 6 11
- If you do not provide these parameters and values, the installation program provides the default value.
- 3 7
- The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 4 8
- Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8, for your machines if you disable simultaneous multithreading.
- 5 9
- Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information on granting the correct permissions for your service account, see "Machine management" → "Creating machine sets" → "Creating a machine set on GCP".
- 15
- Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- 16
- You can optionally provide the sshKey value that you use to access the machines in your cluster. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
6.6.7. Network configuration phases
There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration.
- Phase 1
  You can customize the following network-related fields in the install-config.yaml file before you create the manifest files:
  - networking.networkType
  - networking.clusterNetwork
  - networking.serviceNetwork
  - networking.machineNetwork

  For more information on these fields, refer to Installation configuration parameters.

  Note
  Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in.
- Phase 2
  After creating the manifest files by running openshift-install create manifests, you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration.
You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2.
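For reference, the phase 1 fields map onto a stanza like the following sketch of an install-config.yaml excerpt; the CIDR values shown are the documented defaults used elsewhere in this document and are illustrative only:

networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16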
6.6.8. Specifying advanced network configuration
You can use advanced network configuration for your cluster network provider to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster.
Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported.
Prerequisites
- You have created the install-config.yaml file and completed any modifications to it.
Procedure
Change to the directory that contains the installation program and create the manifests:

$ ./openshift-install create manifests --dir <installation_directory>

- 1
- <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster.

Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:

Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples:

Specify a different VXLAN port for the OpenShift SDN network provider

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    openshiftSDNConfig:
      vxlanPort: 4800

Enable IPsec for the OVN-Kubernetes network provider

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      ipsecConfig: {}

- Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files.
6.6.9. Cluster Network Operator configuration
The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster. The CR specifies the fields for the Network API in the operator.openshift.io API group.
The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed:
clusterNetwork- IP address pools from which pod IP addresses are allocated.
serviceNetwork- IP address pool for services.
defaultNetwork.type- Cluster network provider, such as OpenShift SDN or OVN-Kubernetes.
You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.
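As an orientation aid, a minimal cluster CR that sets the defaultNetwork object might look like the following sketch; the address blocks are the defaults used elsewhere in this document and are shown for illustration only:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OpenShiftSDN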
6.6.9.1. Cluster Network Operator configuration object
The fields for the Cluster Network Operator (CNO) are described in the following table:
| Field | Type | Description |
|---|---|---|
| metadata.name | string | The name of the CNO object. This name is always cluster. |
| spec.clusterNetwork | array | A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23. You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. |
| spec.serviceNetwork | array | A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: serviceNetwork: - 172.30.0.0/16. You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. |
| spec.defaultNetwork | object | Configures the Container Network Interface (CNI) cluster network provider for the cluster network. |
| spec.kubeProxyConfig | object | The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. |
defaultNetwork object configuration
The values for the defaultNetwork object are defined in the following table:
| Field | Type | Description |
|---|---|---|
| type | string | Either OpenShiftSDN or OVNKubernetes. The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note: OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. |
| openshiftSDNConfig | object | This object is only valid for the OpenShift SDN cluster network provider. |
| ovnKubernetesConfig | object | This object is only valid for the OVN-Kubernetes cluster network provider. |
Configuration for the OpenShift SDN CNI cluster network provider
The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider.
| Field | Type | Description |
|---|---|---|
| mode | string | Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy. The values Multitenant and Subnet are available for backwards compatibility with earlier versions of OpenShift Container Platform but are not recommended. This value cannot be changed after cluster installation. |
| mtu | integer | The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. This value cannot be changed after cluster installation. |
| vxlanPort | integer | The port to use for all VXLAN packets. The default value is 4789. This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999. |
Example OpenShift SDN configuration
defaultNetwork:
type: OpenShiftSDN
openshiftSDNConfig:
mode: NetworkPolicy
mtu: 1450
vxlanPort: 4789
Configuration for the OVN-Kubernetes CNI cluster network provider
The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider.
| Field | Type | Description |
|---|---|---|
| mtu | integer | The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. This value cannot be changed after cluster installation. |
| genevePort | integer | The port to use for all Geneve packets. The default value is 6081. This value cannot be changed after cluster installation. |
| ipsecConfig | object | Specify an empty object to enable IPsec encryption. This value cannot be changed after cluster installation. |
| policyAuditConfig | object | Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. |
| Field | Type | Description |
|---|---|---|
| rateLimit | integer | The maximum number of messages to generate every second per node. The default value is 20 messages per second. |
| maxFileSize | integer | The maximum size for the audit log in bytes. The default value is 50000000, or 50 MB. |
| destination | string | One of the following additional audit log targets: libc (the libc syslog() function of the journald process on the host), udp:<host>:<port> (a syslog server), unix:<file> (a Unix Domain Socket file), or null (the default, which does not send the audit logs to any additional target). |
| syslogFacility | string | The syslog facility, such as kern, as defined by RFC5424. The default value is local0. |
Example OVN-Kubernetes configuration
defaultNetwork:
type: OVNKubernetes
ovnKubernetesConfig:
mtu: 1400
genevePort: 6081
ipsecConfig: {}
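Building on the example above, the following sketch also spells out the policyAuditConfig fields with their documented defaults; treat the values as illustrations rather than recommendations:

defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    policyAuditConfig:
      rateLimit: 20
      maxFileSize: 50000000
      destination: "null"
      syslogFacility: local0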
kubeProxyConfig object configuration
The values for the kubeProxyConfig object are defined in the following table:
| Field | Type | Description |
|---|---|---|
| iptablesSyncPeriod | string | The refresh period for iptables rules. The default value is 30s. Valid suffixes include s, m, and h and are described in the Go time package documentation. Note: Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. |
| proxyArguments.iptables-min-sync-period | array | The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s, m, and h and are described in the Go time package documentation. The default value is: proxyArguments: iptables-min-sync-period: - 0s |
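For illustration, a kubeProxyConfig stanza that states the documented defaults might look like the following sketch; you rarely need to set these fields explicitly:

kubeProxyConfig:
  iptablesSyncPeriod: 30s
  proxyArguments:
    iptables-min-sync-period:
    - 0s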
6.6.10. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
- Configure an account with the cloud platform that hosts your cluster.
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \
    --log-level=info

- 1
- For <installation_directory>, specify the location of your customized ./install-config.yaml file.
- 2
- To view different installation details, specify warn, debug, or error instead of info.

Note
If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

Note
The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

Important
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
- It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Important
You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
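If you need the kubeadmin credentials again later, they remain available under the installation directory. For example:

$ cat <installation_directory>/auth/kubeadmin-password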
6.6.11. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.8. Download and install the new version of oc.
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Linux Client entry and save the file.
Unpack the archive:

$ tar xvzf <file>

Place the oc binary in a directory that is on your PATH.
To check your PATH, execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 Windows Client entry and save the file.
- Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:

C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
- Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
- Select the appropriate version in the Version drop-down menu.
- Click Download Now next to the OpenShift v4.8 MacOSX Client entry and save the file.
- Unpack and unzip the archive.
Move the oc binary to a directory on your PATH.
To check your PATH, open a terminal and execute the following command:

$ echo $PATH
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
6.6.12. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
- You deployed an OpenShift Container Platform cluster.
- You installed the oc CLI.
Procedure
Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

- 1
- For <installation_directory>, specify the path to the directory that you stored the installation files in.

Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin
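Optionally, you can run another simple query, such as listing the cluster nodes, to confirm that the exported configuration reaches the API server:

$ oc get nodes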
6.6.13. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.8, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
6.6.14. Next steps
- Customize your cluster.
- If necessary, you can opt out of remote health reporting.
6.7. Installing a cluster on GCP in a restricted network
In OpenShift Container Platform 4.8, you can install a cluster on Google Cloud Platform (GCP) in a restricted network by creating an internal mirror of the installation release content on an existing Google Virtual Private Cloud (VPC).
You can install an OpenShift Container Platform cluster by using mirrored installation release content, but your cluster will require internet access to use the GCP APIs.
6.7.1. Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform.

Important
Because the installation media is on the mirror host, you can use that computer to complete all installation steps.
You have an existing VPC in GCP. While installing a cluster in a restricted network that uses installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements:
- Contains the mirror registry
- Has firewall rules or a peering connection to access the mirror registry hosted elsewhere
- If you use a firewall, you configured it to allow the sites that your cluster requires access to. While you might need to grant access to more sites, you must grant access to *.googleapis.com and accounts.google.com.
- If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
6.7.2. About installations in restricted networks
In OpenShift Container Platform 4.8, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.
If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere.
To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions.
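As a sketch of that mirroring step, the oc adm release mirror command can populate the mirror registry; the registry host, repository, and release tag below are placeholders that you must replace with your own values:

$ oc adm release mirror -a <pull_secret>.json \
    --from=quay.io/openshift-release-dev/ocp-release:<release_tag> \
    --to=<mirror_host_name>:5000/<repo_name>/release \
    --to-release-image=<mirror_host_name>:5000/<repo_name>/release:<release_tag>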
6.7.2.1. Additional limits
Clusters in restricted networks have the following additional limitations and restrictions:
- The ClusterVersion status includes an Unable to retrieve available updates error.
- By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.
6.7.3. Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.8, you require access to the internet to obtain the images that are necessary to install your cluster.
You must have internet access to:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
6.7.4. Generating a key pair for cluster node SSH access
During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Do not skip this procedure in production environments, where disaster recovery and debugging is required.
You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>

- 1
- Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

Note
If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

Note
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

If the ssh-agent process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

Note
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name>

- 1
- Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the full path to your service account private key file.

$ export GOOGLE_APPLICATION_CREDENTIALS="<your_service_account_file>"

Verify that the credentials were applied.

$ gcloud auth list
Next steps
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
6.7.5. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP).
Prerequisites
- Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host.
- Have the imageContentSources values that were generated during mirror registry creation.
- Obtain service principal permissions at the subscription level.
Procedure
Create the install-config.yaml file.

Change to the directory that contains the installation program and run the following command:

$ ./openshift-install create install-config --dir <installation_directory>

- 1
- For <installation_directory>, specify the directory name to store the files that the installation program creates.
Important
Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file.
- Select the project ID to provision the cluster in. The default value is specified by the service account that you configured.
- Select the region to deploy the cluster to.
- Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster.
- Enter a descriptive name for your cluster.
- Paste the pull secret from the Red Hat OpenShift Cluster Manager.
Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network.

Update the pullSecret value to contain the authentication information for your registry:

pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "you@example.com"}}}'

For <mirror_host_name>, specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials>, specify the base64-encoded user name and password for your mirror registry.

Add the additionalTrustBundle parameter and value.

additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----

The value must be the contents of the certificate file that you used for your mirror registry, which can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry.

Define the network and subnets for the VPC to install the cluster in under the parent platform.gcp field:

network: <existing_vpc>
controlPlaneSubnet: <control_plane_subnet>
computeSubnet: <compute_subnet>

For platform.gcp.network, specify the name for the existing Google VPC. For platform.gcp.controlPlaneSubnet and platform.gcp.computeSubnet, specify the existing subnets to deploy the control plane machines and compute machines, respectively.

Add the image content resources, which look like this excerpt:

imageContentSources:
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: quay.example.com/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: registry.example.com/ocp/release

To complete these values, use the imageContentSources that you recorded during mirror registry creation.
- Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section.
- Back up the install-config.yaml file so that you can use it to install multiple clusters.

Important
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
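For example, a plain copy preserves the file before the installer consumes it; the backup file name is arbitrary:

$ cp install-config.yaml install-config.yaml.backup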
6.7.5.1. Installation configuration parameters
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster’s platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform.
After installation, you cannot modify these parameters in the install-config.yaml file.
The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct.
6.7.5.1.1. Required configuration parameters
Required installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| apiVersion | The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions. | String |
| baseDomain | The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. | A fully-qualified domain or subdomain name, such as example.com. |
| metadata | Kubernetes resource ObjectMeta, from which only the name parameter is consumed. | Object |
| metadata.name | The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}. | String of lowercase letters, hyphens (-), and periods (.), such as dev. |
| platform | The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. | Object |
| pullSecret | Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. | A JSON pull secret, for example: {"auths": ...} |
6.7.5.1.2. Network configuration parameters
You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults.
Only IPv4 addresses are supported.
| Parameter | Description | Values |
|---|---|---|
| networking | The configuration for the cluster network. | Object. Note: You cannot modify parameters specified by the networking object after installation. |
| networking.networkType | The cluster network provider Container Network Interface (CNI) plugin to install. | Either OpenShiftSDN or OVNKubernetes. The default value is OpenShiftSDN. |
| networking.clusterNetwork | The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 |
| networking.clusterNetwork.cidr | Required if you use networking.clusterNetwork. An IP address block. An IPv4 network. | An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32. |
| networking.clusterNetwork.hostPrefix | The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr. A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. | A subnet prefix. The default value is 23. |
| networking.serviceNetwork | The IP address block for services. The default value is 172.30.0.0/16. The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. | An array with an IP address block in CIDR format. For example: serviceNetwork: - 172.30.0.0/16 |
| networking.machineNetwork | The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. | An array of objects. For example: machineNetwork: - cidr: 10.0.0.0/16 |
| networking.machineNetwork.cidr | Required if you use networking.machineNetwork. An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. | An IP network block in CIDR notation. For example, 10.0.0.0/16. Note: Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. |
6.7.5.1.3. Optional configuration parameters
Optional installation configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| additionalTrustBundle | A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. | String |
| compute | The configuration for the machines that comprise the compute nodes. | Array of MachinePool objects. |
| compute.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| compute.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| compute.name | Required if you use compute. The name of the machine pool. | worker |
| compute.platform | Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| compute.replicas | The number of compute machines, which are also known as worker machines, to provision. | A positive integer greater than or equal to 2. The default value is 3. |
| controlPlane | The configuration for the machines that comprise the control plane. | Array of MachinePool objects. |
| controlPlane.architecture | Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). | String |
| controlPlane.hyperthreading | Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. | Enabled or Disabled |
| controlPlane.name | Required if you use controlPlane. The name of the machine pool. | master |
| controlPlane.platform | Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. | aws, azure, gcp, openstack, ovirt, vsphere, or {} |
| controlPlane.replicas | The number of control plane machines to provision. | The only supported value is 3, which is the default value. |
| credentialsMode | The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. | Mint, Passthrough, Manual, or an empty string (""). |
| fips | Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note: If you are using Azure File storage, you cannot enable FIPS mode. | false or true |
| imageContentSources | Sources and repositories for the release-image content. | Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table. |
| imageContentSources.source | Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications. | String |
| imageContentSources.mirrors | Specify one or more repositories that may also contain the same images. | Array of strings |
| publish | How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. | Internal or External. The default value is External. |
| sshKey | The SSH key or keys to authenticate access to your cluster machines. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. | One or more keys. For example: sshKey: <key1> <key2> <key3> |
6.7.5.1.4. Additional Google Cloud Platform (GCP) configuration parameters
Additional GCP configuration parameters are described in the following table:
| Parameter | Description | Values |
|---|---|---|
| platform.gcp.network | The name of the existing VPC that you want to deploy your cluster to. | String. |
| platform.gcp.region | The name of the GCP region that hosts your cluster. | Any valid region name, such as us-central1. |
| platform.gcp.type | The GCP machine type. | The GCP machine type. |
| platform.gcp.zones | The availability zones where the installation program creates machines for the specified MachinePool. | A list of valid GCP availability zones, such as us-central1-a, in a YAML sequence. |
| platform.gcp.controlPlaneSubnet | The name of the existing subnet in your VPC that you want to deploy your control plane machines to. | The subnet name. |
| platform.gcp.computeSubnet | The name of the existing subnet in your VPC that you want to deploy your compute machines to. | The subnet name. |
| platform.gcp.licenses | A list of license URLs that must be applied to the compute images. Important: The licenses parameter is a deprecated field and nested virtualization is enabled by default. It is not recommended to use this field. | Any license available with the license API, such as the license to enable nested virtualization. You cannot use this parameter with a mechanism that generates pre-built images. Using a license URL forces the installer to copy the source image before use. |
| platform.gcp.osDisk.diskSizeGB | The size of the disk in gigabytes (GB). | Any size between 16 GB and 65536 GB. |
| platform.gcp.osDisk.diskType | The type of disk. | Either the default pd-ssd or the pd-standard disk type. The control plane nodes must be the pd-ssd disk type. The worker nodes can use either type. |
| controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.name | The name of the customer managed encryption key to be used for control plane machine disk encryption. | The encryption key name. |
| controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing | For control plane machines, the name of the KMS key ring to which the KMS key belongs. | The KMS key ring name. |
| controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.location | For control plane machines, the GCP location in which the key ring exists. For more information on KMS locations, see Google's documentation on Cloud KMS locations. | The GCP location for the key ring. |
| controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.projectID | For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. | The GCP project ID. |
| compute.platform.gcp.osDisk.encryptionKey.kmsKey.name | The name of the customer managed encryption key to be used for compute machine disk encryption. | The encryption key name. |
| compute.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing | For compute machines, the name of the KMS key ring to which the KMS key belongs. | The KMS key ring name. |
| compute.platform.gcp.osDisk.encryptionKey.kmsKey.location | For compute machines, the GCP location in which the key ring exists. For more information on KMS locations, see Google's documentation on Cloud KMS locations. | The GCP location for the key ring. |
| compute.platform.gcp.osDisk.encryptionKey.kmsKey.projectID | For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. | The GCP project ID. |
6.7.5.2. Sample customized install-config.yaml file for GCP
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com
controlPlane:
  hyperthreading: Enabled
  name: master
  platform:
    gcp:
      type: n2-standard-4
      zones:
      - us-central1-a
      - us-central1-c
      osDisk:
        diskType: pd-ssd
        diskSizeGB: 1024
        encryptionKey:
          kmsKey:
            name: worker-key
            keyRing: test-machine-keys
            location: global
            projectID: project-id
  replicas: 3
compute:
- hyperthreading: Enabled
  name: worker
  platform:
    gcp:
      type: n2-standard-4
      zones:
      - us-central1-a
      - us-central1-c
      osDisk:
        diskType: pd-standard
        diskSizeGB: 128
        encryptionKey:
          kmsKey:
            name: worker-key
            keyRing: test-machine-keys
            location: global
            projectID: project-id
  replicas: 3
metadata:
  name: test-cluster
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  gcp:
    projectID: openshift-production
    region: us-central1
    network: existing_vpc
    controlPlaneSubnet: control_plane_subnet
    computeSubnet: compute_subnet
pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "you@example.com"}}}'
fips: false
sshKey: ssh-ed25519 AAAA...
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <local_registry>/<local_repository_name>/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
- 1 10 11 12
- Required. The installation program prompts you for this value.
- 2 6
- If you do not provide these parameters and values, the installation program provides the default value.
- 3 7
- The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- 4 8
- Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8, for your machines if you disable simultaneous multithreading.
- 5 9
- Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information on granting the correct permissions for your service account, see "Machine management" → "Creating machine sets" → "Creating a machine set on GCP".
- 13
- Specify the name of an existing VPC.
- 14
- Specify the name of the existing subnet to deploy the control plane machines to. The subnet must belong to the VPC that you specified.
- 15
- Specify the name of the existing subnet to deploy the compute machines to. The subnet must belong to the VPC that you specified.
- 16
- For <local_registry>, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000. For <credentials>, specify the base64-encoded user name and password for your mirror registry.
- 17
- Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important: The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture.
- 18
- You can optionally provide the sshKey value that you use to access the machines in your cluster. Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- 19
- Provide the contents of the certificate file that you used for your mirror registry.
- 20
- Provide the imageContentSources section from the output of the command to mirror the repository.
6.7.5.3. Create an Ingress Controller with global access on GCP
You can create an Ingress Controller that has global access to a Google Cloud Platform (GCP) cluster. Global access is only available to Ingress Controllers using internal load balancers.
Prerequisites
- You created the install-config.yaml file and completed any modifications to it.
Procedure
Create an Ingress Controller with global access on a new GCP cluster.
Change to the directory that contains the installation program and create a manifest file:
$ ./openshift-install create manifests --dir <installation_directory>

- 1
- For <installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.

After creating the file, several network configuration files are in the manifests/ directory, as shown:

$ ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml

Example output

cluster-ingress-default-ingresscontroller.yaml

Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want:

Sample clientAccess configuration to Global

spec:
  endpointPublishingStrategy:
    loadBalancer:
      providerParameters:
        gcp:
          clientAccess: Global
        type: GCP
      scope: Internal
    type: LoadBalancerService

- 1
- Set gcp.clientAccess to Global.
- 2
- Global access is only available to Ingress Controllers using internal load balancers, so the scope must be Internal.
6.7.5.4. Using custom machine types
Using a custom machine type to install an OpenShift Container Platform cluster is supported.
Consider the following when using a custom machine type:
- Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines.
The name of the custom machine type must adhere to the following syntax:

custom-<number_of_cpus>-<amount_of_memory_in_mb>

For example, custom-6-20480.
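To illustrate the arithmetic, custom-6-20480 specifies 6 vCPUs and 20480 MB, that is, 20 GB, of memory. A quick shell check of the name for a hypothetical 6-vCPU, 20 GB machine:

$ echo "custom-6-$((20 * 1024))"
custom-6-20480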
As part of the installation process, you specify the custom machine type in the install-config.yaml file.
Sample install-config.yaml file with a custom machine type
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    gcp:
      type: custom-6-20480
  replicas: 2
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform:
    gcp:
      type: custom-6-20480
  replicas: 3
6.7.5.5. Configuring the cluster-wide proxy during installation
Production env